As demand for opioid remedy skyrockets, police train for overdose treatment with Naloxone
Zachariah Hughes, Alaska Public Media
Lt. Steve Adams with the Alaska State Troopers showing off how to use Narcan at a training in Anchorage, Aug. 18, 2017 (Photo: Zachariah Hughes – Alaska Public Media)
In the ongoing effort to curb overdose deaths from opioids like heroin, police across Alaska are getting trained to use a new tool. Naloxone — a medication that rapidly reverses the effects of an opioid overdose — has long been used by emergency medics, but now it's being deployed to police departments and non-profits at the front line of the state's opioid crisis. Even with millions of new federal dollars being spent, the demand is outpacing the supply.
A roomful of police listened as Lieutenant Steve Adams showed them how to use Narcan, a brand of naloxone that comes in a small bottle you could fit inside a purse or jacket pocket. Adams demonstrated how to properly use the nasal spray if someone's overdosing.
"So the signs of an overdose include: their face is gonna be clammy to the touch, the body's gonna be limp, fingernails and lips turn blue," Adams said. "So what do you do? Turn them on their back, tilt the head back, insert the tip into their nostril — doesn't matter, left or right, it's gonna get there. Press it, make sure it's all out. Call 911, then roll them over into this rescue position."
Adams works on drug enforcement with the State Troopers, but his audience today is from a patchwork of agencies. The training is part of a push to get more doses of naloxone to people who are often the first ones to find somebody overdosing.
"As law-enforcement we'll find somebody passed out, or we'll get a call that somebody's passed out in a public restroom in a stall or passed out in a vehicle in a parking garage," Adams said.
Naloxone blocks opioid receptors in the brain, and almost immediately counteracts an overdose — although it may take several doses. There are almost no negative side effects, even if it's accidentally given to someone who isn't on opioids. The medication has been around since the 1960s and has long been used by medical professionals. But now public health and addiction recovery advocates want to make it as common as defibrillators or EpiPens: a simple medical intervention the public can use. And Adams said state troopers are in a perfect position to help give the mandatory 15-minute training to dispense Narcan in communities across Alaska.
"Our goal is to provide it to every Alaska State Trooper, and to provide it to every other agency — federal, state, and local — who would like to have it," Adams said.
One of those agencies is the police and fire department at the Ted Stevens International Airport. Sergeant Daniel Juarez said at least once a month officers will find someone who has overdosed on a nearby roadway or in a terminal bathroom before a flight.
"Well, a lot of times we're going to be there usually before paramedics, and the sooner we can administer possibly a life-saving treatment we'd like to do that," Juarez said.
The governor's disaster declaration on the opioid crisis, along with a legislative bill last year, has increased access to the medication. Non-profits like MyHouse in Wasilla distribute Narcan kits. The Alaska AIDS Assistance Association, which runs the state's largest needle exchange, has given out 300 to 400 of the kits, and routinely requests more. Even nurses and security guards from the Anchorage School District were recently trained.
Narcan isn't cheap. Over the counter at a pharmacy it costs about $150 for a two-dose pack. But the state buys that same amount for $75 with federal money from the Department of Health and Human Services through a $4.2 million five-year grant. The kits are given to individuals for free. Andy Jones helps administer the state grant. He said they originally aimed to distribute five thousand Narcan kits this year.
"Since February 15th we have distributed 6300 kits, and the pace continues to get faster and faster because more and more overdose response programs in communities are coming online," Jones said.
Gathering the data is difficult. Jones said so far they've counted at least 39 saves. But those figures depend on medics, police, or volunteers submitting a form that says they used Narcan. Jones admits there are likely many people administering the medication without generating a record.
In 2016, an estimated 88 Alaskans died from opioid overdoses.
In Anchorage, the paramedics of the Anchorage Fire Department remain one of the main bulwarks against overdoses. According to figures from AFD, since the start of the year they've used naloxone an average of 35 times a month — more than once a day. Those numbers are skewed slightly by a surge in overdoses in May, believed to have been caused by heroin laced with the powerful synthetic opioid fentanyl.
But the department can't access the federal grant, and is using its own budget to pay for the increasingly used medication. AFD Assistant Chief Scheunemann said last year the department spent just over $14,000 on naloxone. This year, it had almost hit that amount by the end of the summer.
Zachariah Hughes reports on city & state politics, arts & culture, drugs, and military affairs in Anchorage and South Central Alaska. zhughes [at] alaskapublic (dot) org | 907.550.8424 | @ZachHughesAK
package com.wizzardo.tools.cache;
import java.lang.ref.WeakReference;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
/**
 * @author wizzardo
* Date: 2/12/14
*/
public class CacheCleaner extends Thread {
public interface OnCacheAddedListener {
void onAdd(Cache cache);
}
private Set<WeakReference<Cache>> caches = Collections.newSetFromMap(new ConcurrentHashMap<WeakReference<Cache>, Boolean>());
private volatile long wakeup = -1;
private volatile boolean sleeping = false;
private static final CacheCleaner instance;
private final Queue<OnCacheAddedListener> listeners = new ConcurrentLinkedQueue<OnCacheAddedListener>();
static {
instance = new CacheCleaner();
instance.start();
}
private CacheCleaner() {
setDaemon(true);
setName(this.getClass().getName());
}
static void addCache(Cache cache) {
instance.caches.add(new WeakReference<Cache>(cache));
for (OnCacheAddedListener listener : instance.listeners) {
listener.onAdd(cache);
}
}
public static void addListener(OnCacheAddedListener listener) {
instance.listeners.add(listener);
}
public static int size() {
return instance.caches.size();
}
public static Iterable<Cache> iterable() {
return new Iterable<Cache>() {
@Override
public Iterator<Cache> iterator() {
final Iterator<WeakReference<Cache>> iterator = instance.caches.iterator();
return new Iterator<Cache>() {
Cache next;
@Override
public boolean hasNext() {
if (next != null)
return true;
while (iterator.hasNext() && next == null) {
    next = iterator.next().get();
    if (next == null || next.isDestroyed()) {
        iterator.remove();
        next = null;
    }
}
return next != null;
}
@Override
public Cache next() {
if (hasNext()) {
try {
return next;
} finally {
next = null;
}
} else {
throw new NoSuchElementException();
}
}
@Override
public void remove() {
throw new UnsupportedOperationException();
}
};
}
};
}
static void updateWakeUp(long wakeup) {
// System.out.println("updateWakeUp");
if (instance.wakeup < wakeup && instance.wakeup > 0)
return;
synchronized (instance) {
if (instance.wakeup < wakeup && instance.wakeup > 0)
return;
// System.out.println("set wakeup after " + (wakeup - System.currentTimeMillis()));
instance.wakeup = wakeup;
if (instance.sleeping) {
// System.out.println("notify");
instance.notify();
}
}
}
@Override
public void run() {
// Map.Entry<? extends Holder<?, ?>, Long> entry = null;
// Holder<?, ?> h;
while (true) {
// System.out.println();
// System.out.println("cleaning");
Long time = System.currentTimeMillis();
Iterator<WeakReference<Cache>> iterator = caches.iterator();
Long wakeup = time + (24 * 3600 * 1000);
while (iterator.hasNext()) {
Cache<?, ?> cache = iterator.next().get();
if (cache == null || cache.isDestroyed()) {
iterator.remove();
continue;
}
long l = cache.refresh(time);
if (l > 0 && l < wakeup)
wakeup = l;
}
// System.out.println("can sleep for " + (wakeup - time));
while (wakeup.compareTo(time = System.currentTimeMillis()) > 0) {
synchronized (this) {
if (this.wakeup < wakeup && this.wakeup > time)
wakeup = this.wakeup;
else
this.wakeup = wakeup;
sleeping = true;
// System.out.println("going to sleep for " + (wakeup - time));
if (wakeup - time > 0)
try {
// System.out.println("sleep for: " + (wakeup - time));
this.wait(wakeup - time);
} catch (InterruptedException ignored) {
}
// System.out.println("wake up, can sleep " + (this.wakeup - System.currentTimeMillis()) + "ms more");
sleeping = false;
}
}
}
}
}
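The iterator returned by `iterable()` quietly purges dead `WeakReference` entries as a side effect of iteration. The standalone sketch below (a hypothetical `WeakSetDemo` class, not part of this package) isolates that lazy-purge idiom so it can be run without the `Cache` class:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the lazy-purge idiom used by CacheCleaner.iterable():
// iterate a concurrent set of WeakReferences and drop entries whose
// referent has been cleared by the garbage collector.
public class WeakSetDemo {
    static final Set<WeakReference<String>> refs =
            Collections.newSetFromMap(new ConcurrentHashMap<WeakReference<String>, Boolean>());

    static List<String> alive() {
        List<String> result = new ArrayList<String>();
        Iterator<WeakReference<String>> it = refs.iterator();
        while (it.hasNext()) {
            String value = it.next().get();
            if (value == null)
                it.remove();          // referent collected: purge the stale entry
            else
                result.add(value);
        }
        return result;
    }

    public static void main(String[] args) {
        String kept = "kept";                      // strongly referenced, stays alive
        refs.add(new WeakReference<String>(kept));
        refs.add(new WeakReference<String>(null)); // stands in for a collected cache
        System.out.println(alive());               // [kept]
        System.out.println(refs.size());           // 1 -- stale entry was removed
    }
}
```

Here the `null`-referent entry simulates a `Cache` that has been garbage-collected; after one pass over `alive()` the stale entry is gone from the backing set, which is exactly how `CacheCleaner` keeps its registry from leaking.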
\section{Introduction}
It is very useful to have a straightforward framework to find the
masses and mixing angles of a generalized neutrino mass
matrix. In this work, special emphasis is given to the
diagonalization procedure of the most general $3\times3$ complex symmetric
effective neutrino mass matrix ($m_\nu$).
Starting from the most general $m_\nu$, we calculate the
three masses directly
(without any approximation) in terms of the elements of $m_\nu$.
Knowing the mass eigenvalues, we also obtain the three mixing angles
and the Dirac CP phase. Apart from the Dirac
CP phase, the total diagonalization matrix contains three unphysical
phases and two Majorana phases.
After eliminating the unphysical phases, the
Majorana phases (for a generalized $m_\nu$) are also extracted. We would like to
emphasize that those expressions
are readily applicable to
any symmetric or broken-symmetric mass matrix.
More importantly,
the diagonalization is exact and the corresponding neutrino observables are
calculated in exact form without assuming any approximate
diagonalization procedure.
To illustrate, we employ the
obtained expressions in the context of a neutrino mass matrix
derived from a broken symmetry.
\par
In the field of neutrino physics, it is now a challenging task
to build a suitable model which can accommodate
the neutrino oscillation experimental data comprising
solar\cite{Aharmim:2008kc,Aharmim:2009gd}, atmospheric\cite{Wendell:2010md}
and recent reactor neutrino \cite{t2k,reno,db,dc} experiments,
as well as the
constraint on the sum of the three neutrino masses arising from
cosmological data\cite{Ade:2013zuv,Bennett:2012fp}.
Furthermore, for Majorana-type neutrinos,
an additional constraint on the $|m_{\nu_ {ee}}|$ element of
the neutrino mass matrix
\cite{Tortola:2012te,Giuliani:2010zz,Rodejohann:2012xd}
must also be taken into account.
A popular paradigm is to invoke some symmetry or
ansatz\cite{Morisi:2012fg,King:2013eh,Smirnov:2013uba},
{\it viz.} $A_4$\cite{alta1},
$\mu\tau$ symmetry\cite{Fuku}-\cite{Adhikary:2009kz}, or the
scaling ansatz\cite{Adhikary:2012kb}-\cite{sc3}, to generate
nondegenerate mass eigenvalues\cite{Merle:2006du} and
$\theta_{23}, \theta_{12}\neq 0$ with $\theta_{13}=0$ at the leading order;
nonzero $\theta_{13}$\cite{Gluza}-\cite{Dutta:2013xla} is then generated
by further breaking of such symmetries or ansatz.
Contrary to the above idea, in the present work we explore a typical symmetry, cyclic permutation symmetry\cite{Koide:2000zi,Damanik:2007cs,Damanik:2010rv}, for which it is
possible to generate all three mixing angles nonzero at the leading order;
however, the mass eigenvalues become degenerate.
To circumvent this loophole, we break the symmetry in such a
way that the degeneracy in the mass eigenvalues is
lifted while the mixing angles remain compatible with the extant data.
\par
We consider the standard $SU(2)_L\times U(1)_Y$ model with three right handed neutrinos $N_{e_R}$, $N_{\mu_R}$,
$N_{\tau_R}$ and invoke the type-I seesaw mechanism to generate
light neutrino masses. We further impose a cyclic permutation
symmetry on both the left and right chiral neutrino fields as
\begin{eqnarray}
\nu_{e_L}\rightarrow \nu_{\mu_L} \rightarrow \nu_{\tau_L}
\rightarrow \nu_{e_L},\nonumber\\
N_{e_R}\rightarrow N_{\mu_R} \rightarrow N_{\tau_R} \rightarrow N_{e_R}.\label{cyclic}
\end{eqnarray}
Cyclic permutation symmetry is a subgroup of the $S_3$ permutation symmetry\cite{alta2} containing the three elements
$\{ P_0, P_{123},P_{132}\}$\footnote{Permutations of three objects $\{a, b, c\}$ form the $S_3$ group.
There are six elements: $P_0$, $P_{12}$, $P_{13}$, $P_{23}$, $P_{123}$, $P_{132}$. Their operations are as follows:
$P_0(a,b,c)\rightarrow(a,b,c)$, $P_{12}(a,b,c)\rightarrow(b,a,c)$, $P_{13}(a,b,c)\rightarrow(c,b,a)$, $P_{23}(a,b,c)\rightarrow(a,c,b)$,
$P_{123}(a,b,c)\rightarrow(c,a,b)$, $P_{132}(a,b,c)\rightarrow(b,c,a)$.}. One of the motivations to study the $S_3$ symmetry is to
realize the well known Tribimaximal (TBM) mixing pattern.
The paper is organized as follows:
In section \ref{gs}, we present the most general solution
of a complex $3\times 3$
symmetric mass matrix to obtain three masses, three mixing angles and the Dirac CP phase.
Expressions for the Majorana phases are given in section \ref{majo}.
Section \ref{cyclic_s} deals with a convenient parametrization and diagonalization of the
proposed cyclic symmetry invariant Majorana neutrino mass matrix.
Expression of $m_\nu$ in parametric form due to broken cyclic symmetry and
corresponding
numerical results and phenomenological
discussions on the allowed parameter ranges
are presented in Section \ref{brk_s}. Section \ref{summary} contains a
summary of the present work.
\section{General Solution}\label{gs}
In this section we calculate
exact algebraic expressions for the masses and mixing
angles of the most general complex symmetric neutrino mass
matrix ($m_\nu$), which is written in terms of its real ($a_i$) and
imaginary ($b_i$) parts as
\begin{equation}
m_\nu=\left( \begin{array}{ccc}
a_1+i b_1 & a_2+i b_2 & a_3+i b_3 \cr
a_2+i b_2 & a_4+i b_4 & a_5+i b_5 \cr
a_3+i b_3 & a_5+i b_5 & a_6+i b_6
\end{array}\right).\label{gen_m}
\end{equation}
\subsection{Mass Eigenvalues}
It is well known that any complex symmetric mass matrix can be diagonalized
by a unitary transformation as
\begin{equation}
U^\dagger m_\nu U^*={\rm diag}(m_1,~m_2,~m_3)
\end{equation}
where $U$ is a unitary matrix and the $m_i$'s
($i=1,~2,~3$) are real positive masses. However, the columns of
$U$ cannot be the eigenvectors of $m_\nu$
because
\begin{equation}
m_\nu U^*=U{\rm
diag}(m_1,~m_2,~m_3)
\end{equation}
is essentially in the form
\begin{equation}
m_\nu
\left|m_i\right>^*=m_i\left|m_i\right>
\label{neg}
\end{equation}
by considering the $|m_i \rangle$ as the columns of $U$. Since the states on the
l.h.s and r.h.s of eq.(\ref{neg}) are different, it is not possible to
utilize an equation of the type $Det(m_\nu -\lambda {\bf
I})= 0$ to obtain the masses $m_i$.
It is therefore necessary to construct a hermitian matrix
$h$ as $h=m_\nu m_\nu^\dagger$. Explicit expressions of
the elements of $h$ matrix in terms of mass matrix parameters $a_i$ and $b_i$
are provided in Appendix \ref{a1}.
The squared mass eigenvalues are obtained by direct diagonalization of
$h$ matrix as
\begin{equation}
U^\dagger h U= {\rm diag}(m_1^2,~m_2^2,~m_3^2)
\end{equation}
where the matrix $U$ is
constructed with the eigenvectors of $h$. It is now straightforward to write
down the characteristic equation as $Det(h-\lambda I)=0$
to find the eigenvalues.
This gives a cubic equation
\begin{equation}
a\lambda^3 +b \lambda^2 + c \lambda + d =0 \label{cubic}
\end{equation}
where the coefficients $a$, $b$, $c$, $d$ are expressed in terms of
the elements of
$h$ matrix and spelt out in Appendix \ref{a2}.
The nature of the roots of eq.(\ref{cubic}) depends on the sign of the discriminant $\Delta$, where
\begin{equation}
\Delta=18abcd-4b^3d+ b^2 c^2 -4 a c^3 -27 a^2 d^2.
\end{equation}
Depending on the sign of $\Delta$,
two cases arise: \\
{\bf Case I:} $\Delta\ge 0$ $\Rightarrow$ All roots are real. The roots are
distinct for $\Delta> 0$ and degenerate roots occur for $\Delta=0$.\\
{\bf Case II:} $\Delta<0$ $\Rightarrow$ One of the roots is real and the other two are complex conjugates of each other.\\\\
Since a hermitian matrix has real eigenvalues, we stick to the condition $\Delta\ge 0$.
The general expressions of the three roots of eq.(\ref{cubic}) are given by
\begin{eqnarray}
\lambda_1&=&-\frac{b}{3a} -\frac{1}{3a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d +\sqrt{-27a^2\Delta})}\nonumber\\
&&-\frac{1}{3a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d -\sqrt{-27a^2\Delta})}\label{x1}\\\nonumber\\
\lambda_2&=&-\frac{b}{3a} -\frac{1+i\sqrt{3}}{6a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d +\sqrt{-27a^2\Delta})}\nonumber\\
&&-\frac{1-i\sqrt{3}}{6a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d -\sqrt{-27a^2\Delta})}\label{x2}\\\nonumber\\
\lambda_3&=&-\frac{b}{3a} -\frac{1-i\sqrt{3}}{6a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d +\sqrt{-27a^2\Delta})}\nonumber\\
&&-\frac{1+i\sqrt{3}}{6a}\sqrt[3]{\frac{1}{2}(2 b^3 -9abc+27a^2d -\sqrt{-27a^2\Delta})}\label{x3} .
\end{eqnarray}
Subject to the condition $\Delta \ge 0$ eq.(\ref{x1}) is simplified as
\begin{equation}
\lambda_1=-\frac{b}{3a}-\frac{1}{3\sqrt[3]{2}a}(\sqrt[3]{x+iy}+\sqrt[3]{x-iy})\label{x11}
\end{equation}
where $x=2b^3-9abc+27a^2d$, $y=3\sqrt{3}a\sqrt{\Delta}$.\\
Substituting $x=r\cos 3\theta$, $y=r\sin 3\theta$ in eq.(\ref{x11}) the complex part cancels out and $\lambda_1$ is simplified to
\begin{eqnarray}
\lambda_1=-\frac{b}{3a}-\frac{2\sqrt[3]{r}}{3\sqrt[3]{2}a}\cos \theta .\label{fst}
\end{eqnarray}
Following similar substitutions in eq.(\ref{x2}) and eq.(\ref{x3}) we get the simplified roots as
\begin{eqnarray}
\lambda_2=-\frac{b}{3a}+\frac{\sqrt[3]{r}}{3\sqrt[3]{2}a}(\cos \theta -\sqrt{3}\sin \theta)\\
\lambda_3=-\frac{b}{3a}+\frac{\sqrt[3]{r}}{3\sqrt[3]{2}a}(\cos \theta +\sqrt{3}\sin \theta).
\end{eqnarray}
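As a quick consistency check (easily verified from the above expressions), the three roots satisfy the Vieta relation for eq.(\ref{cubic}),
\begin{equation}
\lambda_1+\lambda_2+\lambda_3=-\frac{b}{a},
\end{equation}
since the $\theta$-dependent terms cancel: $-2\cos \theta+(\cos \theta -\sqrt{3}\sin \theta)+(\cos \theta +\sqrt{3}\sin \theta)=0$.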
The mapping of ($\lambda_1$, $\lambda_2$, $\lambda_3$) to ($m_1^2$, $m_2^2$, $m_3^2$) is done using
the experimental data.
\subsection{Mixing Angles and Dirac CP phase}
In the previous subsection we calculated the mass eigenvalues by
directly solving the characteristic equation. In other words,
the matrix $h$ is diagonalized by a unitary matrix $U$,
known as the mixing matrix,
as
\begin{eqnarray}
U^\dagger h U &=&diag(m_1^2, m_2^2, m_3^2)=D
\end{eqnarray}
or,
\begin{equation}
h U =U D \label{uij}.
\end{equation}
Eq.(\ref{uij}) is our key equation for obtaining generalized expressions for the $U_{ij}$. Comparing the l.h.s and r.h.s of eq.(\ref{uij})
we get nine equations, which can be grouped into three sets in the following way
\begin{eqnarray}
&&(h_{11}-m_i^2)U_{1i}+h_{12}U_{2i}+h_{13}U_{3i}=0 \label{eq1}\\
&&h_{12}^\ast U_{1i}+(h_{22}-m_i^2)U_{2i}+h_{23}U_{3i}=0\\
&&h_{13}^\ast U_{1i}+h_{23}^\ast U_{2i}+(h_{33}-m_i^2)U_{3i}=0
\end{eqnarray}
where $i=1,2,3$. The unitary property of the $U$ matrix further constrains the elements as
\begin{equation}
|U_{1i}|^2+|U_{2i}|^2+|U_{3i}|^2=1. \label{uni}
\end{equation}
Thus, utilizing eq.(\ref{eq1}) to eq.(\ref{uni}), we get the row-wise elements of $U$ as
\begin{eqnarray}
&&U_{1i}=\frac{(h_{22}-m_i^2)h_{13}-h_{12}h_{23}}{N_i}\nonumber\\
&&U_{2i}=\frac{(h_{11}-m_i^2)h_{23}-h_{12}^\ast h_{13}}{N_i}\nonumber\\
&&U_{3i}=\frac{|h_{12}|^2-(h_{11}-m_i^2)(h_{22}-m_i^2)}{N_i}
\end{eqnarray}
where $N_i$ is the normalization constant given by
\begin{eqnarray}
|N_i|^2&=&|(h_{22}-m_i^2)h_{13}-h_{12}h_{23}|^2+|(h_{11}-m_i^2)h_{23}-h_{12}^\ast h_{13}|^2+\nonumber\\
&&(|h_{12}|^2-(h_{11}-m_i^2)(h_{22}-m_i^2))^2 .
\end{eqnarray}
The $U$ matrix obtained here can in general have three phases and three
mixing angles. This can be understood easily by looking at the $h$ matrix. The
$h$ matrix
has six moduli and three phases in its three off-diagonal elements. After
diagonalization we have three real positive eigenvalues and a unitary matrix
$U$ which contains the remaining six parameters (three angles and three phases).
Rotating the $h$ matrix
by a diagonal phase matrix $P$: $h'=P^\dagger h P$, we can absorb at most
two phases from two off-diagonal
elements, and the surviving phase in the remaining off-diagonal element will be the same as
the phase of the $h_{12}h_{23}h_{31}$ term\footnote{With $P={\rm
diag}(e^{i\alpha_1},~e^{i\alpha_2},~e^{i\alpha_3})$ we have
$h'_{12}=e^{i(\alpha_2-\alpha_1)}h_{12}$,
$h'_{13}=e^{i(\alpha_3-\alpha_1)}h_{13}$,
$h'_{23}=e^{i(\alpha_3-\alpha_2)}h_{23}$. $h'_{12}$, $h'_{13}$ can be made
real with the choice $\alpha_2-\alpha_1=-{\rm arg}\,h_{12}$,
$\alpha_3-\alpha_1=-{\rm arg}\,h_{13}$, which in turn fixes
$\alpha_3-\alpha_2={\rm arg}\,h_{12}-{\rm arg}\,h_{13}$ and prevents further
absorption of phases. Hence the surviving phase in $h'_{23}$ will be ${\rm
arg}\,h_{12}+{\rm arg}\,h_{23}-{\rm arg}\,h_{13}\equiv {\rm arg}\,h_{12}h_{23}h_{31}$.}.
The phase of the quantity $h_{12}h_{23}h_{31}$ is invariant under such phase rotations, i.e., the phase of
$h'_{12}h'_{23}h'_{31}$ is the same as the phase of $h_{12}h_{23}h_{31}$. Now, a unitary matrix
with three angles and a single phase in the CKM-type parametrization
(following the PDG\cite{Beringer:1900zz} convention) is
\begin{equation}
U^{\rm{CKM}}= \left( \begin{array}{ccc} c_{12} c_{13}&
s_{12} c_{13}&
s_{13} e^{-i\delta}\cr
-s_{12} c_{23}-c_{12} s_{23} s_{13} e^{i\delta}& c_{12} c_{23}-
s_{12} s_{23} s_{13} e^{i\delta}&
s_{23} c_{13}\cr
s_{12} s_{23} -c_{12} c_{23} s_{13} e^{i\delta}&
-c_{12} s_{23} -s_{12} c_{23} s_{13} e^{i\delta}&
c_{23} c_{13}\cr
\end{array}\right)
\end{equation}
with $c_{ij} = \cos\theta_{ij}$, $s_{ij} = \sin\theta_{ij}$ and $\delta$ is the Dirac CP phase.
The solutions for the $U_{ij}$ elements obtained from eq.(\ref{uij}) may contain
unwanted phases, which can only appear as overall
phase factors in the elements of the $U$ matrix.
Hence, we can directly compare their moduli
with the moduli of $U^{\rm{CKM}}_{ij}$: $|U^{\rm{CKM}}_{ij}|=|U_{ij}|$.
This gives the expressions of
three mixing angles as
\begin{eqnarray}
\tan \theta_{23}=\frac{|U_{23}|}{|U_{33}|}\\\nonumber\\
\tan \theta_{12}=\frac{|U_{12}|}{|U_{11}|}\\\nonumber\\
\sin \theta_{13}=|U_{13}|\label{last}.
\end{eqnarray}
To obtain the $\delta$ phase we utilize the phase-rotation invariant quantity
$h_{12}h_{23}h_{31}$. Obviously, the absence of a phase in
$h_{12}h_{23}h_{31}$ would render the $h$ matrix real symmetric
after a suitable phase rotation.
Therefore, ${\rm Im}(h_{12}h_{23}h_{31})$
must be proportional to $\sin\delta$:
\begin{eqnarray}
{\rm Im}(h_{12}h_{23}h_{31})=\frac{(m_2^2-m_1^2)(m_3^2-m_2^2)(m_3^2-m_1^2)\sin2\theta_{12}\sin2\theta_{23}\sin2\theta_{13}
\cos\theta_{13}\sin\delta}{8}\nonumber\\
\end{eqnarray}
which can be easily inverted to obtain the phase $\delta$. Thus,
from $h$ we are able to find the three masses, three mixing
angles and the Dirac CP phase in terms of the elements of the neutrino mass matrix. Our next goal is to find the remaining two Majorana
phases, which we explore in the next section.
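For completeness (this is a direct rearrangement of the relation above, stated here for convenience), the explicit inversion reads
\begin{equation}
\sin\delta=\frac{8\,{\rm Im}(h_{12}h_{23}h_{31})}{(m_2^2-m_1^2)(m_3^2-m_2^2)(m_3^2-m_1^2)\sin2\theta_{12}\sin2\theta_{23}\sin2\theta_{13}\cos\theta_{13}}.
\end{equation}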
\section{Majorana Phases}\label{majo}
In this section we explicitly calculate the Majorana phases, assuming the three neutrino masses, three mixing angles
and the Dirac CP
phase are already obtained in terms of the elements of the neutrino mass matrix. For a complex symmetric
$m_\nu$ matrix there are twelve independent
parameters arising from its six complex elements. These twelve parameters are counted as
(i) three masses, (ii) three mixing angles,
(iii) one Dirac CP phase, (iv) two Majorana phases and (v) three unphysical phases.
The three unphysical phases play a crucial role in the diagonalization. Now, the unitary
matrix with three angles and six phases can be parametrized as:
\begin{equation}
U_{\rm tot}= P_\phi U^{\rm PMNS}
\end{equation}
where
\begin{equation}
U^{\rm {PMNS}}= U^{\rm{CKM}}
\left( \begin{array}{ccc}e^{\frac{i\alpha_M}{2}}&0&0\cr
0&e^{\frac{i\beta_M}{2}}&0\cr
0&0&1
\end{array}\right)
\end{equation}
and
\begin{equation}
P_\phi=\left( \begin{array}{ccc} e^{i\phi_1}&0&0\cr
0&e^{i\phi_2}&0\cr
0&0&e^{i\phi_3}
\end{array}\right).
\end{equation}
$P_\phi$ is the unphysical phase matrix containing the unphysical phases $\phi_{1,2,3}$, while
the phase matrix at the extreme right of the $U^{\rm PMNS}$ matrix contains the two Majorana phases $\alpha_M$ and $\beta_M$.
Now $m_\nu$ can be diagonalized as
\begin{eqnarray}
U_{\rm tot}^\dagger m_\nu U_{\rm tot}^*={\rm diag}(m_1,~m_2,~m_3)
\end{eqnarray}
which can be inverted as
\begin{equation}
m_\nu=U_{\rm tot}{\rm diag}(m_1,~m_2,~m_3)U_{\rm tot}^T.\label{inv}
\end{equation}
Equating both sides of eq.(\ref{inv}), the elements of the $m_\nu$ matrix can be written in terms of the masses, mixing angles and phases as
\begin{eqnarray}
(m_\nu)_{11}&=&e^{2i\phi_1}(c_{12}^2c_{13}^2m_1e^{i\alpha_M}+s_{12}^2c_{13}^2m_2e^{i\beta_M}+m_3s_{13}^2e^{-2i\delta})\\
(m_\nu)_{12}&=&e^{i(\phi_1+\phi_2)}c_{13}\{-m_1e^{i\alpha_M}(c_{12}s_{12}c_{23}+c_{12}^2s_{13}s_{23}e^{i\delta})
+m_2e^{i\beta_M}(c_{12}s_{12}c_{23}-s_{12}^2s_{13}s_{23}e^{i\delta})
\nonumber\\&&+m_3s_{13}s_{23}e^{-i\delta}\}\\
(m_\nu)_{13}&=&e^{i(\phi_1+\phi_3)}c_{13}\{m_1e^{i\alpha_M}(-c_{12}^2c_{23}s_{13}e^{i\delta}+c_{12}s_{12}s_{23})
-m_2e^{i\beta_M}(c_{12}s_{12}s_{23}+s_{12}^2s_{13}c_{23}e^{i\delta})
\nonumber\\&&
+m_3s_{13}c_{23}e^{-i\delta}\}\nonumber\\
(m_\nu)_{22}&=&e^{2i\phi_2}\{m_1e^{i\alpha_M}(s_{12}c_{23}+c_{12}s_{23}s_{13}e^{i\delta})^2
+m_2e^{i\beta_M}(c_{12}c_{23}-s_{12}s_{13}s_{23}e^{i\delta})^2\nonumber\\&&+m_3c_{13}^2s_{23}^2\}\\
(m_\nu)_{23}&=&e^{i(\phi_2+\phi_3)}[m_1e^{i\alpha_M}\{c_{12}s_{12}s_{13}(c_{23}^2-s_{23}^2)e^{i\delta}+c_{12}^2c_{23}s_{23}
s_{13}^2e^{2i\delta}-s_{12}^2s_{23}c_{23}\}\nonumber\\&&
-m_2e^{i\beta_M}\{c_{12}s_{12}(c_{23}^2-s_{23}^2)s_{13}e^{i\delta}+c_{12}^2s_{23}c_{23}-
s_{12}^2s_{13}^2s_{23}c_{23}e^{2i\delta}\}\nonumber\\&&+m_3c_{13}^2c_{23}s_{23}]\\
(m_\nu)_{33}&=&e^{2i\phi_3}\{m_1e^{i\alpha_M}(c_{12}c_{23}s_{13}e^{i\delta}-s_{12}s_{23})^2
+m_2e^{i\beta_M}(s_{12}c_{23}s_{13}e^{i\delta}+c_{12}s_{23})^2\nonumber\\&&+m_3c_{13}^2c_{23}^2\}.
\end{eqnarray}
We now extract $\alpha_M$ and $\beta_M$ by
eliminating the unwanted $\phi_i$ phases.
The moduli $|(m_\nu)_{ij}|$ of all the elements are free from the $\phi_i$ phases.
Combinations such as
$\frac{[(m_\nu)_{ij}]^2}{(m_\nu)_{ii}(m_\nu)_{jj}}$ ($i\ne j$)
are also independent of those
$\phi_i$ phases. Neglecting terms of O($s_{13}^2$) and higher order, we find that, among all the $|(m_\nu)_{ij}|$, the element
$|(m_\nu)_{11}|$ has the simplest structure.
We can easily extract $\beta_M-\alpha_M$ from this element as
\begin{eqnarray}
\cos(\beta_M-\alpha_M)=\frac{|(m_\nu)_{11}|^2-c_{12}^4m_1^2-s_{12}^4m_2^2}{2c_{12}^2s_{12}^2m_1m_2}.
\end{eqnarray}
To find the individual values of the Majorana phases, we consider the quantity $\frac{[(m_\nu)_{23}]^2}{(m_\nu)_{22}(m_\nu)_{33}}$, which
simplifies on neglecting terms like $(c_{23}^2-s_{23}^2)s_{13}$, $s_{13}^2$ and their higher powers. Substituting the Majorana phase difference
$\beta_M-\alpha_M$ in the quantity $\frac{[(m_\nu)_{23}]^2}{(m_\nu)_{22}(m_\nu)_{33}}$ we can construct two different complex equations, one involving only
$\alpha_M$ and the other only $\beta_M$. It is then straightforward to find the two Majorana phases through the chain of expressions, in a
generic form, as
\begin{eqnarray}
\tan{\theta_j}=\frac{Y'_jW_j-W'_jY_j}{X_jW'_j-W_jX'_j}
\end{eqnarray}
where $j=1,2$ and $\theta_1=\alpha_M$, $\theta_2=\beta_M$ with
\begin{eqnarray}
X_1&=&A_i-\{D_r\sin(\beta_M-\alpha_M)+
D_i\cos(\beta_M-\alpha_M)+F_r\sin2(\beta_M-\alpha_M)+\nonumber\\
&& F_i\cos2(\beta_M-\alpha_M)+E_i\}\nonumber\\
X'_1&=&\{D_r\cos(\beta_M-\alpha_M)-D_i\sin(\beta_M-\alpha_M)+F_r\cos2(\beta_M-\alpha_M)-\nonumber\\
&& F_i\sin2(\beta_M-\alpha_M)+E_r\}-A_r\nonumber\\
Y_1&=&A_r+\{D_r\cos(\beta_M-\alpha_M)-D_i\sin(\beta_M-\alpha_M)+F_r\cos2(\beta_M-\alpha_M)-\nonumber\\
&& F_i\sin2(\beta_M-\alpha_M)+E_r\}\nonumber\\
Y'_1&=&A_i+\{D_r\sin(\beta_M-\alpha_M)+D_i\cos(\beta_M-\alpha_M)+F_r\sin2(\beta_M-\alpha_M)+\nonumber\\
&& F_i\cos2(\beta_M-\alpha_M)+E_i\}\nonumber\\
W_1&=&B_r+C_r\cos(\beta_M-\alpha_M)-C_i\sin(\beta_M-\alpha_M)\nonumber\\
W'_1&=&B_i+C_r\sin(\beta_M-\alpha_M)+C_i\cos(\beta_M-\alpha_M)
\end{eqnarray}
and
\begin{eqnarray}
X_2&=&A_i-\{D_i\cos(\beta_M-\alpha_M)-D_r\sin(\beta_M-\alpha_M)+E_i\cos2(\beta_M-\alpha_M)-\nonumber\\
&& E_r\sin2(\beta_M-\alpha_M)+F_i\}\nonumber\\
X'_2&=&\{D_r\cos(\beta_M-\alpha_M)+D_i\sin(\beta_M-\alpha_M)+E_r\cos2(\beta_M-\alpha_M)+\nonumber\\
&&E_i\sin2(\beta_M-\alpha_M)+F_r\}-A_r\nonumber\\
Y_2&=&A_r+\{D_r\cos(\beta_M-\alpha_M)+D_i\sin(\beta_M-\alpha_M)+E_r\cos2(\beta_M-\alpha_M)+\nonumber\\
&&E_i\sin2(\beta_M-\alpha_M)+F_r\}\nonumber\\
Y'_2&=&A_i+\{D_i\cos(\beta_M-\alpha_M)-D_r\sin(\beta_M-\alpha_M)+E_i\cos2(\beta_M-\alpha_M)-\nonumber\\
&&E_r\sin2(\beta_M-\alpha_M)+F_i\}\nonumber\\
W_2&=&C_r+B_r\cos(\beta_M-\alpha_M)+B_i\sin(\beta_M-\alpha_M)\nonumber\\
W'_2&=&C_i+B_i\cos(\beta_M-\alpha_M)-B_r\sin(\beta_M-\alpha_M)
\end{eqnarray}
where suffix $i$ and $r$ stand for imaginary and real part respectively.
The complex quantities $A$, $B$, $C$, $D$, $E$ and $F$ are defined as follows
\begin{eqnarray}
A&=&m_3^2[Z-1]\nonumber\\
B&=&m_3m_1\left[Zs_{12}^2\frac{1+t_{23}^4}{t_{23}^2}-Z\sin2\theta_{12}s_{13}e^{i\delta}\frac{t_{23}^2-1}{t_{23}}+2s_{12}^2\right]\nonumber\\
C&=&m_3m_2\left[Zc_{12}^2\frac{1+t_{23}^4}{t_{23}^2}+Z\sin2\theta_{12}s_{13}e^{i\delta}\frac{t_{23}^2-1}{t_{23}}+2c_{12}^2\right]\nonumber\\
D&=&m_1m_2\left[2Zc_{12}^2s_{12}^2+\sin2\theta_{12}\cos2\theta_{12}s_{13}e^{i\delta}\frac{t_{23}^2-1}{t_{23}}-2s_{12}^2c_{12}^2\right]\nonumber\\
E&=&m_1^2\left[Zs_{12}^4+Zs_{12}^2\sin2\theta_{12}s_{13}e^{i\delta}\frac{t_{23}^2-1}{t_{23}}-s_{12}^4\right]\nonumber\\
F&=&m_2^2\left[Zc_{12}^4-Zc_{12}^2\sin2\theta_{12}s_{13}e^{i\delta}\frac{t_{23}^2-1}{t_{23}}-c_{12}^4\right]
\end{eqnarray}
with $t_{23}=\tan\theta_{23}$ and $Z=\frac{[(m_\nu)_{23}]^2}{(m_\nu)_{22}(m_\nu)_{33}}$. Again, in the expressions
of $B$, $C$, $D$, $E$ and $F$, the terms containing $s_{13}(t_{23}^2-1)e^{i\delta}$ are proportional to $s_{13}(c_{23}^2-s_{23}^2)$.
Dropping those terms, one can further simplify the expressions of $B$, $C$, $D$, $E$ and $F$, keeping the other
dominant terms. This simplification makes the expressions of $A$ to $F$ free
from the Dirac phase, and
their complex nature is then solely due to the $Z$ parameter.
Thus, together with the masses,
we finally gather
complete information about the $U^{\rm{PMNS}}$ matrix, containing the mixing angles and physical phases, for a general three
generation Majorana
neutrino mass matrix.
\section{Cyclic Symmetry}\label{cyclic_s}
\subsection{Basic Formalism}
The most general leptonic mass term of the Lagrangian in the present model is
\begin{equation}
-\mathcal{L}_{\rm mass}=(m_{\ell})_{ll'} \overline{l_L} l'_R + m_{D_{ll'}}\overline{\nu_{lL}} N_{l'R} +
M_{R_{ll'}} \overline{N^c_{lL}} N_{l'R}
\end{equation}
where $l,~l'=e,~\mu,~\tau$.
We demand that the neutrino part of the Lagrangian is invariant under the cyclic permutation symmetry as given in eq.(\ref{cyclic}).
The symmetry invariant Dirac neutrino mass matrix $m_D$ takes the form
\begin{equation}
m_D=\left( \begin{array}{ccc}
y_1 & y_2 & y_3\\ y_3 & y_1 & y_2 \\ y_2 & y_3& y_1 \\
\end{array}\right) \label{md}
\end{equation}
where in general all the entries are complex.
Without loss of generality, we consider a basis in which the right
handed neutrino mass matrix $M_R$ and the charged lepton mass matrix $m_\ell$ are
diagonal. Further, the imposition of the cyclic symmetry dictates the texture of $M_R$ as
\begin{equation}
M_R=\left( \begin{array}{ccc}
m & 0 & 0\\0 & m& 0\\0& 0& m\\
\end{array}\right).
\label{mr}
\end{equation}
Now, within the framework of type-I seesaw mechanism the
effective neutrino mass matrix $m_\nu$,
\begin{equation}
m_\nu=-m_D M_R^{-1} m_D^{T}
\end{equation}
takes the following form with the cyclic symmetric $m_D$ (eq.(\ref{md}))
and $M_R$ (eq.(\ref{mr})):
\begin{equation}
m_\nu=-\frac{1}{m}\left( \begin{array}{ccc}
y_1^2+y_2^2+y_3^2 & y_1 y_2+y_2 y_3+y_3 y_1 &y_1 y_2+y_2 y_3+y_3 y_1 \\
y_1 y_2+y_2 y_3+y_3 y_1 & y_1^2+y_2^2+y_3^2 & y_1 y_2+y_2 y_3+y_3 y_1\\
y_1 y_2+y_2 y_3+y_3 y_1 &y_1 y_2+y_2 y_3+y_3 y_1 & y_1^2+y_2^2+y_3^2 \\
\end{array}\right).
\label{effm}
\end{equation}
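The structure above can be verified numerically. The following Python snippet is an illustrative check of mine (with arbitrary complex Yukawa values, not values from the paper): it multiplies out $-m_D M_R^{-1} m_D^{T}$ and confirms the equal-diagonal/equal-off-diagonal pattern of eq.(\ref{effm}).

```python
# Check that -m_D M_R^{-1} m_D^T has equal diagonal entries -(y1^2+y2^2+y3^2)/m
# and equal off-diagonal entries -(y1*y2 + y2*y3 + y3*y1)/m.

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

y1, y2, y3 = 0.3 + 0.1j, 0.7 - 0.2j, 1.1 + 0.4j     # arbitrary complex Yukawas
m = 2.0                                              # right-handed mass scale

mD = [[y1, y2, y3],
      [y3, y1, y2],
      [y2, y3, y1]]                                  # cyclic-symmetric m_D
MR_inv = [[1 / m, 0, 0], [0, 1 / m, 0], [0, 0, 1 / m]]

m_nu = [[-z for z in row] for row in matmul(matmul(mD, MR_inv), transpose(mD))]

diag = -(y1**2 + y2**2 + y3**2) / m                  # expected diagonal entry
offd = -(y1 * y2 + y2 * y3 + y3 * y1) / m            # expected off-diagonal entry
for i in range(3):
    for j in range(3):
        assert abs(m_nu[i][j] - (diag if i == j else offd)) < 1e-12
```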
\subsection{Parametrization and Diagonalization}\label{param}
With a suitable choice of parametrization the effective neutrino mass matrix
given in eq.(\ref{effm}) can be rewritten as
\begin{equation}
m_\nu=m_0 \left( \begin{array}{ccc}
1+p^2e^{2i\alpha}+q^2e^{2i\beta} & pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)}&
pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)}\\
pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)} & 1+p^2e^{2i\alpha}+q^2e^{2i\beta} & pe^{i\alpha}
+qe^{i\beta}+pqe^{i(\alpha+\beta)}\\
pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)} & pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)} &
1+p^2e^{2i\alpha}+q^2e^{2i\beta}\\
\end{array}\right)
\end{equation}
where we have parametrized the elements ($y_1$, $y_2$, $y_3$) of $m_\nu$ in terms of $p$, $q$ and two phases $\alpha$,
$\beta$ as
\begin{eqnarray}
m_0=-\frac{y_3^2}{m},
\quad pe^{i\alpha}=\frac{y_1}{y_3},
\quad qe^{i\beta}=\frac{y_2}{y_3}.\label{parametrization}
\end{eqnarray}
Denoting
\begin{eqnarray}
&&P=1+p^2e^{2i\alpha}+q^2e^{2i\beta}\nonumber\\
&&Q=pe^{i\alpha}+qe^{i\beta}+pqe^{i(\alpha+\beta)} \label{p1}
\end{eqnarray}
$m_\nu$ is written in a convenient form as
\begin{equation}
m_\nu=m_0\left( \begin{array}{ccc}
P & Q & Q \\Q & P & Q \\ Q & Q & P\\
\end{array}\right).
\end{equation}
We construct the matrix $h(=m_\nu m_\nu^\dagger)$ to calculate the mixing angles and mass eigenvalues. The expression of $h$
is obtained as
\begin{equation}
h=m_\nu m_\nu^\dagger=m_0^2\left( \begin{array}{ccc}
A & B & B\\B & A & B \\B & B & A\\
\end{array}\right)\label{h}
\end{equation}
where
\begin{eqnarray}
&&A=|P|^2+2|Q|^2\nonumber\\
&&B=|Q|^2+P Q^\ast+P^\ast Q .
\end{eqnarray}
Diagonalizing the matrix $h$ given in eq.(\ref{h})
through $U^\dagger h U={\rm diag}(m_1^2,~m_2^2,~m_3^2)$ we get the mass squared eigenvalues as
\begin{eqnarray}
&&m_1^2=m_0^2(A-B)\nonumber\\
&&m_2^2=m_0^2(A+2B)\nonumber\\
&&m_3^2=m_0^2(A-B).
\end{eqnarray}
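The eigenvalue pattern can be checked directly: for a matrix of the form of eq.(\ref{h}), $(1,1,1)^T$ is an eigenvector with eigenvalue $A+2B$, while any vector in the orthogonal plane carries the doubly degenerate eigenvalue $A-B$. A short Python check of mine, with arbitrary values standing in for $A$ and $B$:

```python
# Eigensystem of [[A,B,B],[B,A,B],[B,B,A]]: A+2B on (1,1,1); degenerate A-B
# on the orthogonal plane spanned by e.g. (1,-1,0) and (1,0,-1).
A, B = 2.5, 0.4
h = [[A, B, B], [B, A, B], [B, B, A]]

def apply_mat(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for v, lam in [((1, 1, 1), A + 2 * B),
               ((1, -1, 0), A - B),
               ((1, 0, -1), A - B)]:
    hv = apply_mat(h, v)
    assert all(abs(hv[i] - lam * v[i]) < 1e-12 for i in range(3))
```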
However, there is a problem of unique determination
of the diagonalization matrix $U$ due to the degeneracy
in the eigenvalues ($m_1^2=m_3^2\ne m_2^2$).
Any vector in the plane orthogonal to the unique eigenvector of
eigenvalue $m_2^2$ can serve as an eigenvector
of $m_1^2$ or $m_3^2$. One can choose two mutually
orthogonal eigenvectors in that plane
for the eigenvalues $m_1^2$ and $m_3^2$ and, in effect,
construct the $U$ matrix from these three eigenvectors.
However, the choice of eigenvectors for $m_1^2$ and $m_3^2$ in the
degenerate plane is arbitrary: any other two orthogonal
combinations of these eigenvectors
are equally good for constructing
the $U$ matrix for the same eigenvalues.
The diagonalization matrix therefore cannot be unique, and hence the derived
mixing angles are not unique either.
One observation here is that the eigenvector of
$m^2_2$, $(1/{\sqrt 3},~1/{\sqrt 3},~1/{\sqrt 3})$, coincides with
the second column of the TBM mixing matrix. Due to the degeneracy
$m_1^2=m_3^2$, one possible choice of the diagonalization
matrix is the well known TBM mixing matrix.
However, it is also possible to make all three mixing angles nonzero
through a proper combination of the eigenvectors corresponding to the degenerate eigenvalues.
Furthermore, in order to accommodate the solar and atmospheric neutrino
mass squared differences it is necessary to break the
symmetry and thereby remove the degeneracy between the mass eigenvalues.
\section{Breaking of Cyclic Symmetry}\label{brk_s}
In this scheme, we break the cyclic symmetry in the right chiral neutrino sector only.
Retaining the flavour diagonal texture of $M_R$, we introduce only two
symmetry breaking parameters $\epsilon_1$ and $\epsilon_2$ in any two diagonal
entries. (It is sufficient to incorporate
two symmetry breaking parameters
to make all the eigenvalues of $M_R$ different.)
This can be done in three ways as\\
(i)$M_R=diag \left( \begin{array}{ccc} m, & m+\epsilon_1, & m+\epsilon_2 \end{array}\right) $,\\
(ii)$M_R=diag \left( \begin{array}{ccc} m+\epsilon_1, & m+\epsilon_2, & m \end{array}\right) $,\\
(iii)$M_R=diag \left( \begin{array}{ccc} m+\epsilon_1, & m, & m+\epsilon_2 \end{array}\right) $.
\par
It is to be noted that instead of a perturbative approach, we directly diagonalize the broken symmetric mass matrix with the help
of the results obtained in section \ref{gs}. Let us first consider case (i), where the symmetry breaking occurs in the \textquoteleft22\textquoteright
~and \textquoteleft33\textquoteright ~elements. Using the expression of $M_R$ given in (i)
and $m_D$ as given in eq.(\ref{md}), the effective neutrino mass matrix obtained through the type-I seesaw mechanism is
{\small
\begin{equation}
m_\nu=-\frac{y_3^2}{m}\left( \begin{array}{ccc}
\frac{y_1^2}{y_3^2}+\frac{y_2^2}{y_3^2}\frac{1}{(1+\epsilon_1^\prime)}+\frac{1}{(1+\epsilon_2^\prime)}
& \frac{y_1}{y_3}+\frac{y_1}{y_3}\frac{y_2}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_2}{y_3}\frac{1}{(1+\epsilon_2^\prime)}
& \frac{y_1}{y_3}\frac{y_2}{y_3}+\frac{y_2}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_1}{y_3}\frac{1}{(1+\epsilon_2^\prime)} \\
\frac{y_1}{y_3}+\frac{y_1}{y_3}\frac{y_2}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_2}{y_3}\frac{1}{(1+\epsilon_2^\prime)} &
1+\frac{y_1^2}{y_3^2}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_2^2}{y_3^2}\frac{1}{(1+\epsilon_2^\prime)} &
\frac{y_2}{y_3}+\frac{y_1}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_1}{y_3}\frac{y_2}{y_3}\frac{1}{(1+\epsilon_2^\prime)}\\
\frac{y_1}{y_3}\frac{y_2}{y_3}+\frac{y_2}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_1}{y_3}\frac{1}{(1+\epsilon_2^\prime)}
& \frac{y_2}{y_3}+\frac{y_1}{y_3}\frac{1}{(1+\epsilon_1^\prime)}+\frac{y_1}{y_3}\frac{y_2}{y_3}\frac{1}{(1+\epsilon_2^\prime)} &
\frac{y_2^2}{y_3^2}+\frac{y_1^2}{y_3^2}\frac{1}{(1+\epsilon_2^\prime)}+\frac{1}{(1+\epsilon_1^\prime)}\\
\end{array}\right)
\end{equation}}
where we have defined $\epsilon_1^\prime=\frac{\epsilon_1}{m}, \epsilon_2^\prime=\frac{\epsilon_2}{m}$.
We rewrite $m_\nu$ as
\begin{equation}
m_\nu=m_0\left( \begin{array}{ccc}
p^2e^{2i\alpha}+\frac{q^2e^{2i\beta}}{(1+\epsilon_1^\prime)}+\frac{1}{(1+\epsilon_2^\prime)}&
pe^{i\alpha}+\frac{qe^{i\beta}}{(1+\epsilon_2^\prime)}+\frac{pq e^{i(\alpha+\beta)}}{(1+\epsilon_1^\prime)}&
\frac{pe^{i\alpha}}{(1+\epsilon_2^\prime)}+\frac{qe^{i\beta}}{(1+\epsilon_1^\prime)}+pqe^{i(\alpha+\beta)}\\
pe^{i\alpha}+\frac{qe^{i\beta}}{(1+\epsilon_2^\prime)}+\frac{pq e^{i(\alpha+\beta)}}{(1+\epsilon_1^\prime)} &
1+\frac{p^2e^{2i\alpha}}{(1+\epsilon_1^\prime)}+\frac{q^2e^{2i\beta}}{(1+\epsilon_2^\prime)} &
\frac{pe^{i\alpha}}{(1+\epsilon_1^\prime)}+qe^{i\beta}+\frac{pqe^{i(\alpha+\beta)}}{(1+\epsilon_2^\prime)}\\
\frac{pe^{i\alpha}}{(1+\epsilon_2^\prime)}+\frac{qe^{i\beta}}{(1+\epsilon_1^\prime)}+pqe^{i(\alpha+\beta)} &
\frac{pe^{i\alpha}}{(1+\epsilon_1^\prime)}+qe^{i\beta}+\frac{pqe^{i(\alpha+\beta)}}{(1+\epsilon_2^\prime)} &
\frac{p^2e^{2i\alpha}}{(1+\epsilon_2^\prime)}+q^2e^{2i\beta}+\frac{1}{(1+\epsilon_1^\prime)}\\
\end{array}\right) \label{brk_m}
\end{equation}
where we mimic the parametrization previously
shown in eq.(\ref{parametrization}).
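As a numerical cross-check of this parametrization (again a snippet of my own with arbitrary inputs), the $(1,1)$ element of eq.(\ref{brk_m}) agrees with the direct seesaw computation for case (i):

```python
# Case (i): M_R = diag(m, m+eps1, m+eps2). The direct seesaw element
# (m_nu)_11 = -(y1^2/m + y2^2/(m+eps1) + y3^2/(m+eps2)) should equal
# m_0 * [ p^2 e^{2i alpha} + q^2 e^{2i beta}/(1+eps1') + 1/(1+eps2') ].
y1, y2, y3 = 0.3 + 0.1j, 0.7 - 0.2j, 1.1 + 0.4j    # arbitrary complex Yukawas
m, eps1, eps2 = 2.0, 0.05, -0.08                   # small breaking parameters
e1p, e2p = eps1 / m, eps2 / m                      # eps1', eps2'

direct = -(y1**2 / m + y2**2 / (m + eps1) + y3**2 / (m + eps2))

m0 = -y3**2 / m            # m_0
pa = y1 / y3               # p e^{i alpha}
qb = y2 / y3               # q e^{i beta}
param = m0 * (pa**2 + qb**2 / (1 + e1p) + 1 / (1 + e2p))

assert abs(direct - param) < 1e-12
```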
The other two cases, case (ii) and (iii) also produce the same form of $m_\nu$ given in eq.(\ref{brk_m}) with
a different set of parametrizations given by
\vskip 0.1in
\noindent
\textbullet Case (ii)
\begin{equation}
m_0=-\frac{y_1^2}{m},\quad pe^{i\alpha}=\frac{y_2}{y_1},
\quad qe^{i\beta}=\frac{y_3}{y_1}
\end{equation}
\vskip 0.1in
\noindent
\textbullet Case (iii)
\begin{equation}
m_0=-\frac{y_2^2}{m}, \quad pe^{i\alpha}=\frac{y_3}{y_2},
\quad qe^{i\beta}=\frac{y_1}{y_2}.
\end{equation}
\subsection{Numerical results and phenomenological discussions}
It is now straightforward to calculate the eigenvalues and mixing angles of the above mass matrix $m_\nu$.
The coefficients $a$, $b$, $c$ and $d$ of the general characteristic equation (eq.(\ref{cubic})) can be written in terms of
Lagrangian parameters ($p$, $q$, $\alpha$, $\beta$) through the substitution of elements of general $m_\nu$ (eq.(\ref{gen_m}))
by the corresponding elements of the broken symmetric $m_\nu$ (eq.(\ref{brk_m})). Substituting these values in eqs.(\ref{x1}),
(\ref{x2}) and (\ref{x3}), the three eigenvalues can be calculated. The mapping of ($\lambda_1$, $\lambda_2$, $\lambda_3$) to
($m_1^2$, $m_2^2$, $m_3^2$) is done by utilizing the neutrino oscillation
experimental data shown in Table \ref{t1}.
\begin{table}[!ht]
\caption{Input experimental values \cite{Tortola:2012te}}
\label{t1}
\begin{center}
\begin{tabular}{|c|c|}
\hline
{ Quantity} & { $3\sigma$ ranges/other constraint}\\
\hline
$\Delta m_{21}^2$ & $7.12<\Delta m_{21}^2/(10^{-5}~ {\rm eV}^{2})<8.20$\\
$|\Delta m_{31}^2|(N)$ & $2.31<|\Delta m_{31}^2|/(10^{-3}~ {\rm eV}^{2})<2.74$\\
$|\Delta m_{31}^2|(I)$ & $2.21<|\Delta m_{31}^2|/(10^{-3}~ {\rm eV}^{2})<2.64$\\
$\theta_{12}$ & $31.30^\circ<\theta_{12}<37.46^\circ$\\
$\theta_{23}$ & $36.86^\circ<\theta_{23}<55.55^\circ$\\
$\theta_{13}$ & $7.49^\circ<\theta_{13}<10.46^\circ$\\
$\delta$ & $0-2\pi$\\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
\par
Before proceeding to carry out the numerical analysis,
a few remarks are in order:
\vskip 0.1in
\noindent
i) Taking into account different cosmological experiments along with the recent
PLANCK satellite results \cite{Ade:2013zuv}, the upper limit on the
sum of the three neutrino masses mostly varies within the range
$\Sigma m_i(=m_1+m_2+m_3)<(0.23 - 1.11)~{\rm eV}$ \cite{Giusarma:2013pmn}. A combined analysis of
PLANCK, WMAP low-$l$ polarization, gravitational lensing and
a prior on the Hubble constant $H_0$ from
Hubble Space Telescope
data corresponds to the higher value of $\Sigma m_i$,
whereas inclusion of the SDSS DR8 result in the above combination
sharply reduces the upper
limit of $\Sigma m_i$ to the lower edge mentioned above.
However, in our setup the individual neutrino masses and the
sum of the neutrino masses are
considered as predictions of this model.
We check the viability of the predicted sum of the three neutrino masses
against the upper bound provided by the extant cosmological data.
\vskip 0.1in
\noindent
ii) Another constraint arises from $\beta\beta_{0\nu}$ decay experiments
\cite{Tortola:2012te,Giuliani:2010zz,Rodejohann:2012xd} on the matrix
element $|m_{\nu_{ee}}|(=m_{\nu_{11}})$. At present
many experiments are running or proposed; among them, the EXO-200
Collaboration \cite{Auger:2012ar}
has quoted a range for the upper limit of $|m_{\nu_{ee}}|$ as
$|m_{\nu_{ee}}|<(0.14 - 0.38)$ eV.
In the present work we do not restrict the value of $|m_{\nu_{ee}}|$;
rather, we treat it as another prediction with which the present model can be
tested in the foreseeable future.
\vskip 0.1in
\noindent
We have varied the symmetry breaking parameters $\epsilon_1^\prime$, $\epsilon_2^\prime$ in the range
$-0.1<\epsilon_1^\prime,\epsilon_2^\prime<0.1$ to keep the symmetry breaking effect small. With such values of $\epsilon_{1,2}^\prime$
and taking the neutrino experimental data \cite{Tortola:2012te,Fogli:2012ua,GonzalezGarcia:2012sz} given in Table \ref{t1} as input,
we find the admissible parameter space of the model. The allowed
region of the $p$ vs $q$ parametric plane is shown in the left panel of figure \ref{w1}, from which the allowed ranges of $p$ and $q$ can be read off
as $0.27<p<2.09$, $0.44<q<2.21$.
The two phase parameters $\alpha$ and $\beta$ are varied
in the range $-180^\circ<\alpha,\beta<180^\circ$, and the allowed parameter space in the $\alpha$ vs $\beta$ plane is shown in the right panel of
figure \ref{w1}. Two tiny disconnected patches are allowed, one being the mirror image of the other. The allowed ranges of
$\alpha$ and $\beta$ are obtained as $-161.12^\circ<\alpha<-89.35^\circ$ with $91.09^\circ<\beta<166.53^\circ$, and
$90.80^\circ<\alpha<161.02^\circ$ with $-166.35^\circ<\beta<-92.11^\circ$. Next, in the left panel of figure \ref{w2} we plot
$\Sigma m_i$ vs $|m_{\nu_{ee}}|$; the ranges obtained are
$0.076~{\rm eV}<\Sigma m_i<0.23~{\rm eV}$ and
$0.002~{\rm eV}<|m_{\nu_{ee}}|<0.069~{\rm eV}$.
The upper limit of $\Sigma m_i$ obtained from figure \ref{w2}
marginally touches the most optimistic cosmological upper
bound of $0.23$ eV, whereas the lower limit is
far below the sensitivity of near-future probes. On the other hand, both
the higher and lower values of
$|m_{\nu_{ee}}|$ are well within the upper bound of
running/proposed experiments (for example KamLAND-Zen, EXO).
In the right panel of figure \ref{w2}, the $m_1$ vs $m_{2,3}$ plot is given, and it is clear from the plot that the mass
ordering is normal ($m_1<m_2<m_3$). The ranges of the individual mass eigenvalues are obtained as
$0.0122~{\rm eV}<m_1<0.0720~{\rm eV}$, $0.0143~{\rm eV}<m_2<0.0730~{\rm eV}$, $0.0495~{\rm eV}<m_3<0.09~{\rm eV}$. Thus, the testability of the present model crucially relies upon the
determination of the neutrino mass hierarchy by future neutrino experiments. We have also plotted the variation of the
Jarlskog invariant $J_{\rm CP}$
\footnote{$J_{\rm CP}=\frac{{\rm Im}(h_{12}h_{23}h_{31})}{(m_2^2-m_1^2)(m_3^2-m_2^2)(m_3^2-m_1^2)}=\frac{\sin2\theta_{12}\sin2\theta_{23}\sin2\theta_{13}
\cos\theta_{13}\sin\delta}{8}$} with the Dirac CP phase ($\delta$) in the left panel of figure \ref{w3}, and the Majorana phases
$\alpha_M$ vs $\beta_M$ in the right panel of figure \ref{w3}. We see that $-0.044<J_{CP}<0.044$ and all values of $\delta$ lie within the
range $-90^\circ$ to $90^\circ$, whereas the Majorana phases admit almost all values in the range
$-90^\circ<\alpha_M,\beta_M<90^\circ$. Before concluding this section we would like to comment on the necessity of the
two breaking parameters $\epsilon_1^\prime$ and $\epsilon_2^\prime$: the present analysis shows that it is possible to explain
the neutrino oscillation data with either of the $\epsilon^\prime_i$ parameters set to zero.
\begin{figure}
\vspace{-.5cm}
\includegraphics[width=6.5cm,height=6cm,angle=0]{p_q_w.png}
\hspace{1cm}
\includegraphics[width=6.5cm,height=6cm,angle=0]{alph_beta_w.png}
\caption{(colour online) Plot of the allowed parameter space in $p$, $q$ (left) plane and $\alpha$, $\beta$ (right) plane satisfying input data
shown in Table \ref{t1} }
\label{w1}
\end{figure}
\begin{figure}
\vspace{-.2cm}
\includegraphics[width=6.5cm,height=6cm,angle=0]{smass_w.png}
\hspace{1cm}
\includegraphics[width=6.5cm,height=6cm,angle=0]{m1_m2_w.png}
\caption{(colour online) Plot of $\Sigma m_i$ vs $|m_{\nu_{ee}}|$ (left), $m_1$ vs $m_{2,3}$ (right)
satisfying input data shown in Table \ref{t1} }
\label{w2}
\end{figure}
\begin{figure*}
\includegraphics[width=6.5cm,height=6cm,angle=0]{jcpvsd_w.png}
\hspace{1cm}
\includegraphics[width=6.5cm,height=6cm,angle=0]{majo.png}
\caption{(colour online) Plot of $\delta$ vs $J_{CP}$ (left) and $\alpha_M$ vs $\beta_M$ (right) satisfying input data shown in Table \ref{t1} }
\label{w3}
\end{figure*}
\newpage
\section{Summary}\label{summary}
The main objective of this paper is to develop a simple methodology to obtain
exact mass eigenvalues, mixing angles, the Majorana phases
and the Dirac CP phase of a general complex symmetric Majorana neutrino mass matrix
without any approximation. The Hermitian matrix $h$ constructed from
$m_\nu$ ($h=m_\nu m_\nu^\dagger$) is solved to get the squared mass eigenvalues. The elements of the diagonalization
matrix $U$, and hence the three mixing angles and the Dirac CP phase $\delta$, are calculated by solving the set of eigenvalue
equations. Since $m_\nu$ has twelve independent parameters, the total diagonalization matrix which diagonalizes $m_\nu$ should contain
five more phase parameters apart from the Dirac CP phase (the other six parameters being the three mass squared values and the three mixing angles).
These five phase parameters comprise three unphysical phases and two Majorana phases. General expressions for the Majorana phases
are obtained by eliminating the unphysical ones.\\
We demonstrate this general and exact methodology in the context of a neutrino
mass matrix obtained from a cyclic symmetry transformation,
invoking the type-I seesaw mechanism.
The symmetry invariant structure of the effective neutrino mass matrix leads to a degeneracy in the mass eigenvalues and is thereby ruled out by the experimental
data.
The symmetry is broken in the right handed neutrino sector only,
in order to fulfill the phenomenological demands
of nonzero mass squared differences and mixing angles.
All the physical parameters
(three mixing angles, one Dirac CP phase, two Majorana phases)
of the total diagonalization matrix ($U_{tot}$) and the mass
eigenvalues of the broken symmetric mass matrix are readily expressed
in terms of the
Lagrangian parameters by utilizing the results obtained from the
general diagonalization procedure.
For completeness of the analysis, we explore the parameter space, and
it is revealed that the mass hierarchy of the neutrinos is normal; the inverted hierarchy is completely ruled out.
Plots of the allowed parameter space show that this model is
capable of producing the observables (mixing
angles, solar and atmospheric mass squared differences) within the
experimentally constrained ranges. Finally, the exact expressions obtained for the
physical parameters are directly applicable to any (symmetry invariant
or broken) neutrino mass matrix.
Q: Mailchimp API : Batch Delete subscribers
Is there any reference to a PHP wrapper that I can use to batch delete subscribers? We have around 100k+ spam subscribers in a Mailchimp list that we need to delete using batch delete.
Thanks
A: There is no official PHP wrapper for API v3, but you can use third-party wrappers such as this one from DrewM. He has provided good documentation on how to use it.
Here is an example of how you can create a batch operation to delete (not unsubscribe) each spam address in the $spamAddresses array. Of course, you'd have to populate the array first.
<?php
include('mailchimp-api-master/src/MailChimp.php');
include('mailchimp-api-master/src/Batch.php');
use \DrewM\MailChimp\MailChimp;
use \DrewM\MailChimp\Batch;
$apiKey = '********************************';
$listId = '**********';
$spamAddresses = [];
$MailChimp = new MailChimp($apiKey);
$Batch = $MailChimp->new_batch();
//Loop through array of spam addresses.
for($i = 0; $i < sizeof($spamAddresses); $i++){
$subscriberHash = $MailChimp->subscriberHash($spamAddresses[$i]);
$Batch->delete("op$i", "lists/$listId/members/$subscriberHash");
}
//Execute batch operation.
$result = $Batch->execute();
echo $result['id'];
?>
Make sure to grab the batch ID that's stored in $result['id'] if you want to check up on the status of the batch operation later, as DrewM's example shows in his documentation:
$Batch = $MailChimp->new_batch($batch_id);
$result = $Batch->check_status();
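For reference, DrewM's Batch class queues these operations and ultimately POSTs them to MailChimp's /batches endpoint. A sketch of the request body it builds is shown below; the list ID and subscriber hash are placeholders (the hash is the MD5 of the lowercased email address):

```json
{
  "operations": [
    {
      "operation_id": "op0",
      "method": "DELETE",
      "path": "lists/LIST_ID_PLACEHOLDER/members/SUBSCRIBER_HASH_PLACEHOLDER"
    }
  ]
}
```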
From https://mhealthgroup.github.io/MIMSunit/reference/iir.html :

iir function takes a multi-channel signal and applies an IIR filter to the signal.

iir(df, sr, cutoff_freq, order = 4, type = "high", filter_type = "butter")

## Arguments

df: dataframe. The input multi-channel signal. The first column is timestamps in POSIXct format. The rest of the columns are signal values.
sr: number. Sampling rate in Hz of the input signal.
cutoff_freq: number or numerical vector. The cutoff frequencies in Hz. If the IIR filter is a bandpass or bandstop filter, it will be a 2-element numerical vector specifying the low and high end cutoff frequencies c(low, high).
order: number. The order of the filter. Default is 4.
type: string. Filtering type, one of "low" for a low-pass filter, "high" for a high-pass filter, "stop" for a stop-band (band-reject) filter, or "pass" for a pass-band filter.
filter_type: string. IIR filter type, one of "butter" for butterworth filter, "chebyI" for Chebyshev Type I filter, or "ellip" for Elliptic filter.

## Value

dataframe. Filtered signal.

## Details

This function filters the input multi-channel signal by applying an IIR filter. See wiki for the explanation of the filter. The implementations of IIR filters can be found in butter, cheby1, and ellip.

For the Chebyshev Type I, Type II and Elliptic filters, the passband ripple is fixed at 0.05 dB. For the Elliptic filter, the stopband ripple is fixed at -50 dB.

## How is it used in the MIMS-unit algorithm?

This function has been used as the main filtering method in the MIMS-unit algorithm. Specifically, it uses a 0.5 - 5 Hz bandpass butterworth filter during filtering.

Other filtering functions: bandlimited_interp(), bessel(), remove_average()

## Examples

# Use sample data
df = sample_raw_accel_data

# View input
illustrate_signal(df, plot_maxed_out_line = FALSE)

# Apply filtering that uses the same setting as in MIMSunit algorithm
output = iir(df, sr=80, cutoff_freq=c(0.2, 5), type='pass')

# View output
illustrate_signal(output, plot_maxed_out_line = FALSE)
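The MIMSunit package implements `iir` in R on top of the signal package's filter-design routines. As a language-neutral illustration of the recursive structure that makes a filter "IIR" (each output sample depends on previous outputs), here is a minimal first-order high-pass sketch in Python; the coefficient formula and function are my own simplification, not the package's implementation.

```python
import math

def first_order_highpass(x, sr, cutoff_hz):
    """Minimal first-order high-pass IIR filter (illustrative sketch only).

    Recursion: y[n] = a * (y[n-1] + x[n] - x[n-1]) with a = 1/(1 + 2*pi*fc/sr).
    The feedback through y[n-1] is what makes the impulse response infinite.
    """
    a = 1.0 / (1.0 + 2.0 * math.pi * cutoff_hz / sr)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

# A high-pass filter passes the edge of a step but rejects its DC level:
step = [0.0] + [1.0] * 199
out = first_order_highpass(step, sr=80, cutoff_hz=0.5)
assert out[1] > 0.9          # the jump passes through almost unattenuated
assert abs(out[-1]) < 1e-2   # the constant level decays toward zero
```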
Nothobartsia aspera is a species of broomrape (family Orobanchaceae) first described by Felix de Silva Avellar Brotero; it received its currently accepted name from M. Bolliger and U. Molau. Nothobartsia aspera belongs to the genus Nothobartsia and the family Orobanchaceae. No subspecies are listed in the Catalogue of Life.

Sources

Orobanchaceae
aspera
\section{Cal-PIT (HPD) and Cal-HPD}
\label{app:cal_hpd}
{\underline{\texttt{Cal-PIT} (HPD)}}
\texttt{Cal-PIT} can also be used to compute highest predictive density (HPD) sets instead of prediction intervals.
The oracle (1-$\alpha$)-level HPD set is defined as
$$\text{HPD}_\alpha({\mathbf{x}})=\{y: f(y|{\mathbf{x}})\geq t_{{\mathbf{x}},\alpha}\},$$
where $ t_{{\mathbf{x}},\alpha}$ is such that
$\int_{y \in \text{HPD}_\alpha({\mathbf{x}})} f(y|{\mathbf{x}})dy=1-\alpha$.
HPDs are the smallest prediction sets that have coverage $1-\alpha$, and thus they may be more precise (smaller set size) than quantile-based intervals, while maintaining the conditional coverage at the nominal level (see Appendix D for an example with a bimodal predictive distribution).
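To make the definition concrete, the following Python sketch (my own illustration, not code from the paper) constructs a discretized $(1-\alpha)$-level HPD set by greedily accumulating the highest-density grid points until they hold probability mass $1-\alpha$; for a bimodal density the resulting set splits into two disjoint intervals, which is exactly what a single central quantile interval cannot represent.

```python
import math

def hpd_set(grid, dens, alpha):
    """Grid points of the (1-alpha) HPD set of a discretized density.

    Greedily keeps the highest-density grid points until their Riemann-sum
    probability mass reaches 1 - alpha (simple sketch; assumes a uniform grid).
    """
    dy = grid[1] - grid[0]
    order = sorted(range(len(grid)), key=lambda i: -dens[i])
    mass, keep = 0.0, []
    for i in order:
        if mass >= 1 - alpha:
            break
        keep.append(grid[i])
        mass += dens[i] * dy
    return sorted(keep)

# Bimodal density: equal mixture of N(-2, 0.5^2) and N(2, 0.5^2).
grid = [0.01 * i - 5 for i in range(1001)]
dens = [0.5 * math.exp(-(y + 2) ** 2 / 0.5) / math.sqrt(0.5 * math.pi)
        + 0.5 * math.exp(-(y - 2) ** 2 / 0.5) / math.sqrt(0.5 * math.pi)
        for y in grid]

# The 80% HPD set splits into two disjoint intervals around the two modes.
region = hpd_set(grid, dens, alpha=0.2)
gaps = [b - a for a, b in zip(region, region[1:]) if b - a > 0.02]
assert len(gaps) == 1        # one large gap -> two disjoint intervals
```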
The \texttt{Cal-PIT} estimate of
$\text{HPD}_\alpha({\mathbf{x}})$ is given by
$$C_\alpha({\mathbf{x}})=\{y: \widetilde f(y|{\mathbf{x}})\geq \widetilde t_{{\mathbf{x}},\alpha}\},$$
where $\widetilde t_{{\mathbf{x}},\alpha}$ is such that
$\int_{y \in C_\alpha({\mathbf{x}})} \widetilde f(y|{\mathbf{x}})dy=1-\alpha$
and $\widetilde f$ is the
\texttt{Cal-PIT} calibrated CDE (Algorithm \ref{alg:Cal-PIT}).
\begin{Remark} [cal-HPD]
Alternatively, one can directly use HPD values, defined as
\begin{align*}
\hat H(y; {\mathbf{x}}) := \int_{\left\{y: \hat{f}(y'|{\mathbf{x}}) \leq \hat{f}(y|{\mathbf{x}})\right\}} \hat{f}(y'|{\mathbf{x}}) dy',
\end{align*}
to recalibrate HPD prediction sets (rather than using PIT values). The idea is to
estimate the local HPD coverage at each ${\mathbf{x}}$,
$h^{\hat f}(\gamma;{\mathbf{x}}) := \mathbb{P}(\hat H(Y;{\mathbf{x}}) \leq \gamma | {\mathbf{x}}),$
by regression, analogous to estimating the PIT-CDF in \texttt{Cal-PIT}.
Let $\hat h^{\hat f}(\gamma;{\mathbf{x}})$ be such an estimate. The recalibrated $(1-\alpha)$-level HPD set at a location ${\mathbf{x}}$ is given by the $(1-\alpha^*({\mathbf{x}}))$-level HPD set of the original density $\hat f(y|{\mathbf{x}})$, where
$\alpha^*({\mathbf{x}})$ is such that $\hat h^{\hat f}(\alpha^*({\mathbf{x}});{\mathbf{x}}) = \alpha$. This framework, however, does not yield full predictive distributions. Moreover, although the approach corrects HPD sets, aiming for conditional coverage, the constructed sets will not be optimal if the initial model $\widehat f$ is misspecified.
\end{Remark}
In this work, we only report results for \texttt{Cal-PIT}(INT) and \texttt{Cal-PIT}(HPD); we do not report results for Cal-HPD.
\clearpage
\section{Algorithm for \texttt{Cal-PIT}}\label{app:algorithm}
\begin{algorithm}[!ht]
\caption{
\texttt{Cal-PIT}
}\label{alg:Cal-PIT}
\algorithmicrequire \ {\small initial CDE $\hat f(y|{\mathbf{x}})$;
calibration set $\mathcal{D}=\{({\mathbf{x}}_1,y_1),\ldots,({\mathbf{x}}_n,y_n)\}$; oversampling factor $K$;
test points $\mathcal{V}=\{{\mathbf{x}}_1,\ldots,{\mathbf{x}}_m\}$; grid $G$ of values $\gamma \in (0,1)$ for evaluating PIT-CDF after training;
nominal miscoverage level $\alpha$, flag \texttt{HPD} (true if computing HPD sets)
}\\
\algorithmicensure \ {\small calibrated CDF $\widetilde{F}(y|{\mathbf{x}})$, \texttt{Cal-PIT} interval $C({\mathbf{x}})$, calibrated CDE $\widetilde{f}(y|{\mathbf{x}})$, for all ${\mathbf{x}} \in \mathcal{V}$}\\
\begin{algorithmic}[1]
\State \codecomment{Learn PIT-CDF from augmented and upsampled calibration data $\mathcal{D'}$}
\State Set $\mathcal{D'} \gets \emptyset$
\For{$i$ in $\{1,...,n\}$}
\For{$j$ in $\{1,...,K\}$}
\State Draw $\gamma_{i,j} \sim U(0,1)$
\State Compute $W_{i,j} \gets {\mathbb I} \left({\rm PIT}(Y_i; {\mathbf{X}}_i) \leq \gamma_{i,j} \right)$
\State Let $\mathcal{D'} \gets \mathcal{D'} \cup \{\left({\mathbf{X}}_i,Y_i, W_{i,j} \right)\}$
\EndFor
\EndFor
\State Use $\mathcal{D'}$ to learn $\hat r^{\hat f}(\gamma;{\mathbf{x}}) := \hat{\mathbb{P}} \left( {\rm PIT}(Y;{\mathbf{x}}) \leq \gamma \ \middle| \ {\mathbf{x}} \right)$ via a regression of $W$ on ${\mathbf{X}}$ and $\gamma$, which is monotonic w.r.t. $\gamma$.
\\
\State \codecomment{Calibration using PIT-CDF}
\For{${\mathbf{x}} \in \mathcal{V}$ }
\State Set $\mathcal{S} \gets \emptyset$
\For{$\gamma \in G$}
\State Compute $\beta \gets \hat r^{\hat f}(\gamma;{\mathbf{x}})$
\State Let $\widetilde F^{-1}\left(\beta|{\mathbf{x}} \right) \gets \hat F^{-1}(\gamma|{\mathbf{x}})$
\State $\mathcal{S} \gets \mathcal{S} \cup \left\{ \left(\widetilde F^{-1}(\beta|{\mathbf{x}}), \ \beta \right)\right\}$
\EndFor
\State Apply interpolating (or smoothing) splines to $\mathcal{S}$ to obtain $\widetilde{F}(\cdot|{\mathbf{x}})$ and $\widetilde{F}^{-1}(\cdot|{\mathbf{x}})$
\State \codecomment{ Construct Cal-\texttt{PIT} interval with conditional coverage $1-\alpha$}
\State Compute $C({\mathbf{x}}) \gets [\widetilde F^{-1}(0.5\alpha|{\mathbf{x}}); \widetilde F^{-1}(1-0.5\alpha|{\mathbf{x}}) ]$.
\State \codecomment{ Construct recalibrated CDF and CDE}
\State Evaluate $\widetilde F(y|{\mathbf{x}})$ at the same $y$-values as the initial CDE $\widehat f(y|{\mathbf{x}})$
\State Differentiate $\widetilde F(y|{\mathbf{x}})$ to obtain recalibrated PDF $\widetilde f(y|{\mathbf{x}})$
\State Renormalize $\widetilde f(y|{\mathbf{x}})$ according to \citet[Section 2.2]{izbicki2016nonparametric}
\If{\texttt{HPD}}
\State Obtain HPD sets
$C({\mathbf{x}})=\{y: \widetilde f(y|{\mathbf{x}})\geq \widetilde t_{{\mathbf{x}},\alpha}\}$,
where $\widetilde t_{{\mathbf{x}},\alpha}$ is such that
$\int_{y \in C_\alpha({\mathbf{x}})} \widetilde f(y|{\mathbf{x}})dy=1-\alpha $
\EndIf
\EndFor
\State \textbf{return} $\widetilde{F}(y|{\mathbf{x}})$, $C({\mathbf{x}})$, $\widetilde f(y|{\mathbf{x}})$, \ for all ${\mathbf{x}} \in \mathcal{V}$
\end{algorithmic}
\end{algorithm}
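Algorithm \ref{alg:Cal-PIT} learns $\hat r^{\hat f}(\gamma;{\mathbf{x}})$ by a monotone regression of the indicator $W$ on $({\mathbf{x}},\gamma)$. The Python snippet below is a deliberately simplified sketch of the same recalibration idea in which the covariate dependence is dropped, so that $\hat r$ reduces to the empirical CDF of the PIT values (i.e., marginal rather than conditional recalibration); the Gaussian toy model and all numbers are my own, not from the paper.

```python
import bisect
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()                          # standard normal, used here as a
                                           # misspecified initial CDE N(0, 1)

# Calibration data from the true model Y ~ N(0, 2):
calib = [random.gauss(0, 2) for _ in range(20000)]
pits = sorted(nd.cdf(y) for y in calib)    # PIT(Y_i) under the initial CDE

def r_hat(gamma):
    """Empirical CDF of the PIT values: estimated true coverage level of
    the initial quantile F_hat^{-1}(gamma)."""
    return bisect.bisect_right(pits, gamma) / len(pits)

# The nominal 90% interval of the N(0,1) CDE covers far less than 90% of Y:
coverage = r_hat(0.95) - r_hat(0.05)
assert 0.55 < coverage < 0.63              # true coverage is about 59%

# Recalibration step: pick gamma* whose estimated coverage is 0.95 and map it
# back through the initial quantile function; the corrected endpoint
# approaches the true 95% quantile of N(0, 2), about 3.29.
gamma_star = pits[int(0.95 * len(pits))]
upper = nd.inv_cdf(gamma_star)
assert abs(upper - 2 * nd.inv_cdf(0.95)) < 0.15
```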
\clearpage
\section{Proofs} \label{app:proofs}
\begin{Lemma}
\label{lemma:equality_cumulative}
Let $G$ and $H$ be two cumulative distribution functions such that $G$ dominates $H,$
and let $\mu_G$ and
$\mu_H$ be their associated measures over $\mathbb{R}$.
Then, for every fixed $y \in \mathbb{R}$,
$$\mu_H\left(\{y' \in \mathbb{R}:y'\leq y\}\right)=
\mu_H\left(\{y' \in \mathbb{R}:G(y')\leq G(y)\}\right).$$
\end{Lemma}
\begin{proof}
Fix $y \in \mathbb{R}$ and let $A=\{y' \in \mathbb{R}:y'\leq y\}$
and $B=\{y' \in \mathbb{R}:G(y')\leq G(y)\}$.
Because $A \subseteq B$, then
\begin{align}
\label{eq:larger}
\mu_H(A)\leq
\mu_H(B).
\end{align}
Now, notice that
$\mu_G(B \cap A^c)=0$. From this and the assumption that $G$ dominates $H$, conclude that
$\mu_H(B \cap A^c)=0$. It follows that
\begin{align}
\label{eq:smaller}
\mu_H(B)=\mu_H(B \cap A)+\mu_H(B \cap A^c) \leq \mu_H(A)+0=\mu_H(A).
\end{align}
From Equations \ref{eq:larger} and \ref{eq:smaller}, conclude that
$\mu_H(A)=
\mu_H(B)$.
\end{proof}
\begin{Lemma} \label{lemma:relationshipFandR}
Fix $y \in \mathbb{R}$
and let $\gamma:= \widehat F(y|{\mathbf{x}})$. Then, under Assumptions \ref{assump:continuity} and \ref{assump:dominates},
$\widetilde F(y|{\mathbf{x}})=\widehat r^{\widehat f}(\gamma;{\mathbf{x}})$ and
$ F(y|{\mathbf{x}})= r^{\widehat f}(\gamma;{\mathbf{x}})$.
\end{Lemma}
\begin{proof}
Notice that $\gamma= \widehat F(y|{\mathbf{x}})$ implies that $y=\widehat F^{-1}(\gamma|{\mathbf{x}})$. It follows that, by construction,
\begin{align*}
\tilde F(y|{\mathbf{x}})=
\tilde F\left(\widehat F^{-1}(\gamma|{\mathbf{x}})|{\mathbf{x}}\right)=\widehat r^{\widehat f}(\gamma;{\mathbf{x}}).
\end{align*}
Moreover,
\begin{align*}
F(y|{\mathbf{x}})&={\mathbb P}(Y \leq y|{\mathbf{x}})=
{\mathbb P}\left(\widehat F(Y|{\mathbf{x}}) \leq \widehat F(y|{\mathbf{x}})|{\mathbf{x}}\right)\notag & \text{Assumption \ref{assump:dominates} and Lemma \ref{lemma:equality_cumulative}}\\
&={\mathbb P}\left(\text{PIT}(Y;{\mathbf{x}}) \leq \widehat F(y|{\mathbf{x}})|{\mathbf{x}}\right)={\mathbb P}\left(\text{PIT}(Y;{\mathbf{x}}) \leq \gamma|{\mathbf{x}}\right)\notag &\\
&=r^{\widehat f}(\gamma;{\mathbf{x}}), &
\end{align*}
which concludes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:rate}]
Consider the change of variables $\gamma=\widehat F(y|{\mathbf{x}})$, so that $d \gamma=\widehat f(y|{\mathbf{x}})dy$.
Lemma \ref{lemma:relationshipFandR} implies that $\widetilde F(y|{\mathbf{x}})=\widehat r^{\widehat f}(\gamma;{\mathbf{x}})$ and
$ F(y|{\mathbf{x}})= r^{\widehat f}(\gamma;{\mathbf{x}})$. It follows from that and Assumption \ref{assump:bounded} that
\begin{align*}
\int \int \left(\widetilde F(y|{\mathbf{x}})-F(y|{\mathbf{x}}) \right)^2 dP(y,{\mathbf{x}})&\leq K \int \int \left(\widetilde F(y|{\mathbf{x}})-F(y|{\mathbf{x}}) \right)^2 \widehat f(y|{\mathbf{x}})\, dy\, dP({\mathbf{x}}) \\
&=K \int \int \left(\widehat r^{\widehat f}(\gamma;{\mathbf{x}})-r^{\widehat f}(\gamma;{\mathbf{x}}) \right)^2 d\gamma dP({\mathbf{x}}).
\end{align*}
The conclusion follows from Assumption \ref{assump:convergence_rate}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:consistency}]
From Lemma \ref{lemma:relationshipFandR},
\begin{align*}
\sup_{{\mathbf{x}} \in \mathcal{X},y \in \mathbb{R}} &| \tilde F(y|{\mathbf{x}})- F(y|{\mathbf{x}})| \\&=\sup_{{\mathbf{x}} \in \mathcal{X},\gamma \in [0,1]} | \widehat r^{\widehat f}(\gamma;{\mathbf{x}})- r^{\widehat f}(\gamma;{\mathbf{x}})| \xrightarrow[n \longrightarrow\infty]{\enskip \text{a.s.} \enskip} 0,
\end{align*}
where the last step follows from Assumption \ref{assump:uniform_consistency}.
It then follows from Assumption \ref{assump:continuity} that
$$\sup_{{\mathbf{x}} \in \mathcal{X},\gamma \in [0,1]} | \tilde F^{-1}(\gamma|{\mathbf{x}})- F^{-1}(\gamma|{\mathbf{x}})| \xrightarrow[n \longrightarrow\infty]{\enskip \text{a.s.} \enskip} 0,$$
and, in particular,
$$\sup_{{\mathbf{x}} \in \mathcal{X},\gamma \in \{\alpha/2,\,1-\alpha/2\}} | \tilde F^{-1}(\gamma|{\mathbf{x}})- F^{-1}(\gamma|{\mathbf{x}})| \xrightarrow[n \longrightarrow\infty]{\enskip \text{a.s.} \enskip} 0,$$
from which the conclusion of the theorem follows.
\end{proof}
\subsection{Theory for \texttt{Cal-PIT} HPD sets}
\label{sec:hpds}
For every ${\mathbf{x}} \in \mathcal{X}$,
let the \texttt{Cal-PIT} HPD set be
$C_\alpha({\mathbf{x}})=\{y: \widetilde f(y|{\mathbf{x}})\geq \widetilde t_{{\mathbf{x}},\alpha}\}$,
where $\widetilde t_{{\mathbf{x}},\alpha}$ is such that
$\int_{y \in C_\alpha({\mathbf{x}})} \widetilde f(y|{\mathbf{x}})dy=1-\alpha $. Similarly, let the true HPD set be
$\text{HPD}_\alpha({\mathbf{x}})=\{y: f(y|{\mathbf{x}})\geq t_{{\mathbf{x}},\alpha}\}$,
where $ t_{{\mathbf{x}},\alpha}$ is such that
$\int_{y \in \text{HPD}_\alpha({\mathbf{x}})} f(y|{\mathbf{x}})dy=1-\alpha $.
The next theorem shows that if the probabilistic classifier is well estimated, then \texttt{Cal-PIT} HPD sets are exactly equivalent to oracle HPD sets.
\begin{thm}[Fisher consistency of \texttt{Cal-PIT} HPD sets]
\label{thm:Fisherconsistency}
Fix ${\mathbf{x}} \in \mathcal{X}$. If $\widehat r(\gamma;{\mathbf{x}})= r(\gamma;{\mathbf{x}})$ for every $\gamma \in [0,1]$, then
$ C_\alpha({\mathbf{x}})=
\text{HPD}_\alpha({\mathbf{x}})$
and ${\mathbb P}(Y \in C_\alpha({\mathbf{X}})|{\mathbf{x}})=1-\alpha$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{thm:Fisherconsistency}] Fix $y \in \mathbb{R}$ and
let $\gamma=\widehat F(y|{\mathbf{x}})$, so that $y=\widehat F^{-1}(\gamma|{\mathbf{x}})$. It follows that
\begin{align*}
\tilde F(y|{\mathbf{x}})&=
\tilde F\left(\widehat F^{-1}(\gamma|{\mathbf{x}})|{\mathbf{x}}\right)=\widehat r(\gamma;{\mathbf{x}})=
r(\gamma;{\mathbf{x}})\\
&={\mathbb P} \left(\widehat F(Y|{\mathbf{x}}) \leq \widehat F(y|{\mathbf{x}}) \,\middle|\, {\mathbf{x}} \right)=
{\mathbb P} \left(Y \leq y \,\middle|\, {\mathbf{x}} \right)\\
&=F(y|{\mathbf{x}}),
\end{align*}
and therefore $\tilde f(y|{\mathbf{x}})=f(y|{\mathbf{x}})$ for almost every $y \in \mathbb{R}$. It follows that $C_\alpha({\mathbf{x}})=\text{HPD}_\alpha({\mathbf{x}})$.
The claim about conditional coverage follows from the definition of the HPD.
\end{proof}
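For intuition, the HPD-set definition above can be discretized directly: evaluate the density on a fine grid and accumulate probability mass from the highest-density points downward until $1-\alpha$ is reached. The following numpy sketch (illustrative, not the paper's implementation) does this for a bimodal density, for which the HPD region is a pair of intervals rather than a single interval.

```python
import numpy as np

def hpd_set(y_grid, density, alpha):
    """Boolean mask over an equally spaced y_grid marking the (1 - alpha)
    HPD region {y : f(y|x) >= t_alpha}, found by accumulating probability
    mass from the highest-density grid points downward."""
    dy = y_grid[1] - y_grid[0]
    order = np.argsort(density)[::-1]           # highest density first
    mass = np.cumsum(density[order]) * dy       # accumulated probability
    k = np.searchsorted(mass, 1.0 - alpha) + 1  # smallest set reaching 1 - alpha
    mask = np.zeros(density.shape, dtype=bool)
    mask[order[:k]] = True
    return mask

# Bimodal example density: an equal mixture of N(-3, 1) and N(3, 1).
y = np.linspace(-8.0, 8.0, 4001)
f = 0.5 * np.exp(-0.5 * (y + 3) ** 2) / np.sqrt(2 * np.pi) \
  + 0.5 * np.exp(-0.5 * (y - 3) ** 2) / np.sqrt(2 * np.pi)
mask = hpd_set(y, f, alpha=0.1)
coverage = float(f[mask].sum() * (y[1] - y[0]))   # close to 0.9 by construction
```

For this mixture the returned mask consists of two disjoint intervals, one around each mode, which is exactly the behavior that distinguishes HPD sets from central intervals.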
\clearpage
\section{ Data Sets for Examples 1 and 2}
\label{app:toy_examples}
\subsection*{Example 1} The data for Example 1 (Section~\ref{sec:example_iid}) consist of two groups with different spreads:
\begin{align*}
\hat\beta \sim N(0,1)^3, \hat\gamma \sim N(0,1)^3,\ \ \ \
\beta = \frac{\hat\beta}{\|\hat\beta\|_2}, \gamma = \frac{\hat\gamma}{\|\hat\gamma\|_2},\ \ \ \
\epsilon_1 \sim N(0,1), \epsilon_2 \sim N(0,1),\\
X_{0} \sim \text{Bern}(0.2), X_{1,2} \overset{\textrm{i.i.d.}}{\sim} \textrm{Unif}[-5,5]^2,\ \ \ \
Y = \begin{cases} 0.3 \beta^T X \epsilon_{1}, & X_{0}=0, X_{1}<0 \\
0.3 \gamma^T X \epsilon_{1} + 3 \epsilon_{2}, & X_{0}=1, X_{1}<0 \\
X_{1} + 0.3 \beta^T X \epsilon_{1}, & X_{0}=0, X_{1}>0 \\ -X_{1} + 0.3 \gamma^T X \epsilon_{1} + 3 \epsilon_{2}, & X_{0}=1, X_{1}>0 \end{cases}
\end{align*}
The center panel of Fig.~\ref{fig:ex_1_data} shows a slice of the data, plotting response $Y$ against variable $X_1$. Observe that the true conditional density is bimodal for $X_1 > 0$, so the most efficient prediction sets in this feature subspace would not be single intervals, but rather pairs of intervals. To illustrate this, we perform a modified version of this experiment, where we sample $X_1 \overset{\textrm{i.i.d.}}{\sim} \textrm{Unif}[0,5]$, and do not train on $X_0$ so that $f(y|{\mathbf{x}})$ is bimodal. Fig.~\ref{fig:cond-cov-twocov} shows that both \texttt{Cal-PIT (INT)} and \texttt{Cal-PIT (HPD)} achieve approximate conditional coverage, while Fig.~\ref{fig:set-sizes-hpd-twocov} shows that \texttt{Cal-PIT (HPD)} yields smaller prediction sets than \texttt{Cal-PIT (INT)}. Because HPD sets can capture the bimodality in the data while intervals cannot, this is a case where \texttt{Cal-PIT (HPD)} has better conditional efficiency.
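For concreteness, the generative process above can be sketched in a few lines of numpy; this is an illustrative re-implementation, not the exact code used for the experiments.

```python
import numpy as np

def sample_example1(n, rng):
    """Illustrative re-implementation of the Example 1 generative process.

    beta and gamma are fixed random unit vectors in R^3; X_0 ~ Bern(0.2) is
    the group indicator and X_1, X_2 ~ Unif[-5, 5]."""
    beta = rng.normal(size=3)
    beta /= np.linalg.norm(beta)
    gam = rng.normal(size=3)
    gam /= np.linalg.norm(gam)
    x0 = rng.binomial(1, 0.2, size=n).astype(float)
    x12 = rng.uniform(-5.0, 5.0, size=(n, 2))
    X = np.column_stack([x0, x12])
    e1, e2 = rng.normal(size=n), rng.normal(size=n)
    bx, gx, x1 = X @ beta, X @ gam, X[:, 1]
    # Four branches of the piecewise definition of Y.
    Y = np.where(
        x0 == 0,
        np.where(x1 < 0, 0.3 * bx * e1, x1 + 0.3 * bx * e1),
        np.where(x1 < 0, 0.3 * gx * e1 + 3.0 * e2, -x1 + 0.3 * gx * e1 + 3.0 * e2),
    )
    return X, Y

X, Y = sample_example1(5000, np.random.default_rng(0))
```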
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./figures/ex_1_data.pdf}
\caption{\small Visualization of one random instance of the data used for Example 1 (Section~\ref{sec:example_iid}). There are three covariates ($X_0, X_1, X_2$), and a target variable $Y$. The analytic form of the true data distribution is defined in Section~\ref{app:toy_examples}. The data set consists of two groups with different spreads. The minority group has a larger spread. The first covariate is categorical and indicates which group the data point belongs to. $Y$ splits into two branches for $X_{1}>0$; that is, the true CDE is bimodal in this region.
\label{fig:ex_1_data}}
\end{figure}
\begin{figure}[h!]
\begin{minipage}[ht]{0.5\linewidth}
\baselineskip=13pt
\caption{\small Proportion of test points with correct conditional coverage. With 5000 training and calibration points each, both \texttt{Cal-PIT (INT)} and \texttt{Cal-PIT (HPD)} achieve approximate conditional coverage.}
\label{fig:cond-cov-twocov}
\end{minipage}
\begin{minipage}[ht]{0.47\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/ex1_app_coverage_pct_n10000.pdf}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\begin{minipage}[ht]{0.47\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/set_sizes_hpd_twocovariates.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.5\linewidth}
\baselineskip=13pt
\caption{\small Average prediction set sizes for test points (a measure of conditional efficiency), shown along with the ideal ``Oracle Band''. \texttt{Cal-PIT (HPD)} captures the fork in the data and hence yields smaller prediction sets than \texttt{Cal-PIT (INT)}, which produces intervals.}
\label{fig:set-sizes-hpd-twocov}
\end{minipage}
\end{figure}
\subsection*{Example 2}
For Example 2 (Section~\ref{sec:example_misspecified}), the training distribution is Gaussian,
\begin{align*}
Y_0|X \sim \mathcal{N}(\mu=X, \sigma=2),
\end{align*}
while the two target distributions have skew and kurtosis:
\begin{align*}
Y_1 | X \sim \text{sinh-arcsinh}(\mu=X, \sigma=2-|X|, \gamma=X, \tau=1), \\
Y_2 | X \sim \text{sinh-arcsinh}(\mu=X, \sigma=2, \gamma=0, \tau=1-X/4).
\end{align*}
The family of sinh-arcsinh normal distributions \citep{jones2009sinh,jones2019sinhnormal} has been suggested before by \citet{barnes2021adding} as a flexible parametric model that supports estimation of the type of heteroscedastic, asymmetric uncertainties often observed in climate data.
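A sinh-arcsinh normal variate can be simulated by transforming a standard Gaussian. The sketch below uses one common parametrization of the family; conventions differ between references, and the covariate range ($X \sim \mathrm{Unif}[-2,2]$, chosen here so that $\sigma = 2-|X|$ stays nonnegative) is an assumption for illustration, not necessarily the paper's exact setup.

```python
import numpy as np

def sinh_arcsinh(mu, sigma, gamma, tau, rng):
    """Sample from a sinh-arcsinh normal distribution (Jones & Pewsey 2009)
    under one common parametrization:
        Y = mu + sigma * sinh((arcsinh(Z) + gamma) / tau),  Z ~ N(0, 1),
    where gamma controls skewness and tau controls tail weight.
    gamma = 0 and tau = 1 recover N(mu, sigma^2)."""
    z = rng.normal(size=np.shape(mu))
    return mu + sigma * np.sinh((np.arcsinh(z) + gamma) / tau)

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, size=100_000)
y0 = sinh_arcsinh(mu=x, sigma=2.0, gamma=0.0, tau=1.0, rng=rng)            # Gaussian training dist
y1 = sinh_arcsinh(mu=x, sigma=2.0 - np.abs(x), gamma=x, tau=1.0, rng=rng)  # skewed target
y2 = sinh_arcsinh(mu=x, sigma=2.0, gamma=0.0, tau=1.0 - x / 4.0, rng=rng)  # heavy/light tails
```

With $\gamma=0$ and $\tau=1$ the transform is the identity on $Z$, so `y0` reduces to the Gaussian training distribution $\mathcal{N}(X, 2^2)$.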
\clearpage
\section{Photometric Redshift CDEs}\label{app:photo-z}
As described in Section~\ref{sec:photo-z}, due to the noisy and limited information about redshift contained in galaxy images, galaxies with similar imaging data may have different redshifts and vice versa. We want this property to be captured in photo-$z$ PDs, requiring them to be multimodal. As we do not know the ``ground truth'' CDEs, we generally have to rely on indirect methods to assess coverage. Here we provide a rudimentary but direct demonstration that the CDEs we predict are indeed meaningful. We compare the CDEs of the five galaxies shown in Fig.~\ref{fig:photo-z-local} with the distribution of true redshifts of other galaxies with similar imaging data. We identify those counterparts by searching for other galaxies in the training set whose colors and magnitudes (rescaled by subtracting the mean and dividing by the standard deviation for each feature) lie within a Euclidean distance of 0.5 units of our selected galaxies. Fig.~\ref{fig:photo-z-local-hist} shows their redshift distribution as an inverse-distance weighted histogram, along with the predicted CDEs. We observe that the histograms are bimodal when our inferred CDEs are bimodal and unimodal when the inferred distribution is unimodal, matching expectations.
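The neighbor search just described can be sketched as follows; the feature array is a random stand-in for the rescaled colors and magnitudes, and all names are illustrative.

```python
import numpy as np

def similar_objects(features, query, radius=0.5):
    """Find training-set objects whose standardized features lie within a
    Euclidean distance `radius` of a query object, returning their indices
    and inverse-distance weights (used for the weighted histogram)."""
    mu, sd = features.mean(axis=0), features.std(axis=0)
    z = (features - mu) / sd                 # rescale each feature
    zq = (query - mu) / sd
    d = np.linalg.norm(z - zq, axis=1)       # Euclidean distance in feature space
    idx = np.flatnonzero((d < radius) & (d > 0))  # exclude the query itself
    return idx, 1.0 / d[idx]

rng = np.random.default_rng(2)
feats = rng.normal(size=(1000, 6))           # stand-in for colors + magnitudes
idx, weights = similar_objects(feats, feats[0])
```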
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figures/photo_z_P-P_local_with_hist.pdf}
\caption{Comparison of photo-$z$ CDEs for the galaxies shown in Fig.~\ref{fig:photo-z-local} with the distribution of true redshifts of other galaxies having similar imaging properties. We observe that the histograms show bimodal distributions only when our inferred CDEs are bimodal.}
\label{fig:photo-z-local-hist}
\end{figure}
\begin{figure}[h!]
\begin{minipage}[ht]{0.5\linewidth}
\baselineskip=13pt
\caption{\small Distribution of the Cram\'er-von Mises (CvM) statistic (i.e., mean squared difference) between the local PIT CDF of each galaxy in the test set and the CDF of a Uniform distribution. As the ``ground truth'' CDEs are unknown, we assess conditional coverage by training regression models to predict the local PIT CDFs on the calibration and validation sets. We observe a significant decrease in the CvM statistic across the test set, with the average value decreasing by $\sim 4.5\times$. The CDE loss~\citep{izbicki2017converting}, another independent measure of conditional coverage, decreases from $-0.84$ to $-10.71$ after recalibration.}
\label{fig:photo-z-metric-comparison}
\end{minipage}
\begin{minipage}[ht]{0.47\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/photo_z_metric_comparison.pdf}
\end{center}
\end{minipage}
\end{figure}
\clearpage
\section{Training a regression model to learn $\hat r^{\hat f}(\gamma;{\mathbf{x}})$} \label{app:hyperparameters}
The success of \texttt{Cal-PIT} depends entirely on learning an accurate representation of $\hat r^{\hat f}(\gamma;{\mathbf{x}})$. In principle, one can choose any regression algorithm and pair it with Algorithm~\ref{alg:Cal-PIT}. We use monotonic neural networks from \citet{Wehenkel2019UMNN} as our regression method, as this architecture gives reasonably good results for all of our experiments. The network is constrained to be monotonic w.r.t.\ the coverage level ($\alpha$) and uses identical sets of fully connected sub-networks to learn the monotonic and unconstrained dependencies separately, with the two results merged in the final layer of the network. Neural networks are known to struggle with categorical inputs; in that case, tree-based regression methods or an additional embedding step might produce better results.
For synthetic Example 1 and the photometric redshift demonstration, we use a network architecture with 3 hidden layers of 512 nodes each; for synthetic Example 2, we use 3 hidden layers of 128 nodes each (see Section~\ref{app:example_TCs} for details on Example 3). We use the ReLU activation function \cite{glorot2011relu} for all hidden layers and the AdamW optimizer~\cite{AdamW2019} with an initial learning rate of 0.001 and the weight decay parameter set to 0.01. We follow a multiplicative learning rate decay schedule given by the rule $\mathrm{learning\ rate\ (epoch)} = \mathrm{initial\ learning\ rate}\times0.95^{\mathrm{epoch}}$. Following Assumption~\ref{assump:mse}, we minimize the mean squared error to train the models. The data used to train the model are split 90:10, where 90\% of the data are used to optimize the loss function and 10\% are used to compute a validation mean squared error every epoch on a fixed grid of $\alpha$. To prevent over-fitting, we stop training once the validation loss has not decreased for 10 epochs and keep the model with the best validation loss. We use a batch size of 2048 throughout and oversample our training data by a factor $K=50$.
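To make the training-set construction concrete, the sketch below builds the oversampled regression pairs from PIT values of a deliberately overconfident Gaussian CDE (a toy stand-in, not the paper's pipeline); the monotonic network that is then fit to these pairs by mean squared error is omitted.

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF, vectorized via the error function.
norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

rng = np.random.default_rng(3)
n, K = 2000, 50
x = rng.uniform(-1.0, 1.0, size=n)
y = rng.normal(loc=x, scale=1.0)             # truth: Y | x ~ N(x, 1)

# A deliberately overconfident initial CDE: sigma-hat = 0.5 instead of 1,
# so the PIT values pile up near 0 and 1.
pit = norm_cdf((y - x) / 0.5)

# Oversampling: pair every (x_i, PIT_i) with K random coverage levels gamma
# and regress the indicator 1{PIT_i <= gamma} on (x_i, gamma). Minimizing
# mean squared error makes the fit approximate r(gamma; x) = P(PIT <= gamma | x).
gamma = rng.uniform(size=(n, K))
targets = (pit[:, None] <= gamma).astype(float)
X_train = np.column_stack([np.repeat(x, K), gamma.ravel()])
z_train = targets.ravel()
```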
We used PyTorch~\cite{Pytorch2019} to create and train our neural network models, on a single Nvidia A100 GPU. If a hyperparameter value is not explicitly mentioned in the text, we used the default value set in PyTorch. Training times for all our experiments range from a few minutes to about an hour at most.
\section{Details on Example 3}
\label{app:example_TCs}
\subsection{Tropical Cyclone Data}
We use TC intensity and location data from NOAA's HURDAT2 best track database \citep{landsea2013TCs}, and GOES longwave infrared imagery from NOAA's MERGIR database \citep{janowiak2020NOAA}. HURDAT2 best tracks are provided at 6-hour time resolution, while the GOES IR imagery is available at 30-minute, 4-km resolution over both the North Atlantic (NAL) and Eastern North Pacific (ENP) basins from 2000--2020. Every thirty minutes during the lifetime of a storm, we record a $\sim$800~km~$\times$~800~km ``stamp'' of IR imagery surrounding the TC location, showing cloud-top temperatures for the storm. Figure~\ref{fig:GOES} (left) shows two such stamps.
\begin{figure}[htb]
\begin{minipage}[ht]{0.65\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/GOES_v3.png}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.34\linewidth}
\baselineskip=13pt
\caption{
{\small \textit{Left:} The raw data is a sequence of TC-centered cloud-top temperature images from GOES. \textit{Center:} We convert each GOES image into a radial profile. \textit{Right:} The 24-hour sequence of consecutive radial profiles, sampled every 30 minutes, defines a structural trajectory or Hovm\"{o}ller diagram. These trajectories serve as high-dimensional inputs for predicting TC intensity. Figure from \citep{mcneely2022TCs}.}
}\label{fig:GOES}
\end{minipage}
\end{figure}
\\\\
The radial profile, defined as $T(r) = \frac{1}{2\pi} \int_0^{2\pi} T_b(r, \theta)d\theta$, captures the structure of cloud-top temperatures $T_b$ as a function of radius $r$ from the TC center and serves as an easily interpretable description of the depth and location of convection near the TC core \citep{mcneely2020goes, sanabia2014TCs}. The radial profiles are computed at 5-km resolution from 0--400~km ($d = 80$; Figure \ref{fig:GOES}, center). Finally, at each time $t$ we stack the preceding 24 hours (48 profiles) into a structural trajectory, $\S_{<t}$, an image consisting of the most recent 48 rows of the data. We visualize these
summaries over time with Hovm\"{o}ller diagrams (\cite{hovmoller1949}; see Figure \ref{fig:GOES}, right).
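In discretized form, the angular average $T(r)$ amounts to binning stamp pixels by distance from the TC center and averaging within each 5-km annulus. The sketch below runs on a synthetic stamp; the grid size and temperature values are illustrative assumptions.

```python
import numpy as np

def radial_profile(stamp, pixel_km=4.0, r_max=400.0, dr=5.0):
    """Angular average T(r) of a TC-centered cloud-top temperature stamp:
    bin pixels by distance from the center and average within each 5-km
    annulus (a discretized sketch of T(r) = (1/2pi) * int T_b(r, theta) dtheta)."""
    ny, nx = stamp.shape
    yy, xx = np.indices((ny, nx))
    r = pixel_km * np.hypot(yy - (ny - 1) / 2, xx - (nx - 1) / 2)
    edges = np.arange(0.0, r_max + dr, dr)
    which = np.digitize(r.ravel(), edges) - 1      # annulus index per pixel
    prof = np.full(len(edges) - 1, np.nan)
    flat = stamp.ravel()
    for k in range(len(edges) - 1):
        sel = which == k
        if sel.any():
            prof[k] = flat[sel].mean()
    return prof                                     # d = 80 bins from 0-400 km

rng = np.random.default_rng(4)
stamp = rng.normal(230.0, 10.0, size=(201, 201))   # synthetic ~800 km x 800 km stamp (K)
prof = radial_profile(stamp)
```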
\\\\
Figure \ref{fig:true_rad} shows an example sequence of observed radial profiles every 30 minutes for a real TC, along with observed wind speed $Y$.
We interpolate $Y$, which is available every 6 hours, to a 30 minute resolution.
Our goal is to create a synthetic example which has a similar dependency structure as actual TCs.
\begin{figure}[htb]
\begin{minipage}[ht]{0.5\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/true_rad_ex1_Teddy.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.48\linewidth}
\caption{
\small \textit{Left:} Observed radial profiles ${\mathbf{X}}_t$ over time for Hurricane Teddy 2020. These are recorded every 30 mins. \textit{Right:} Observed wind speed values $Y_t$, recorded every 6 hours but interpolated on the same 30 min grid.
}\label{fig:true_rad}
\end{minipage}
\end{figure}
\begin{figure}[htb]
\begin{minipage}[ht]{0.5\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/recon_rad_ex1_Teddy.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.48\linewidth}
\caption{
\small \textit{Left:} PCA-reconstructed radial profiles over time for Hurricane Teddy 2020 (cf.\ Figure~\ref{fig:true_rad}). We obtain a decent reconstruction using only the first 3 PCs. \textit{Right:} Observed wind speed values for the TC, recorded every 6 hours but interpolated on the same 30 min grid.
}\label{fig:pca_rad}
\end{minipage}
\end{figure}
\begin{figure}[h!]
\begin{minipage}[ht]{0.35\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/PCA_components.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.55\linewidth}
\baselineskip=13pt
\caption{\small Top 3 PCA components, or empirical orthogonal functions (EOFs), for TC radial profiles.}
\label{fig:pca_components}
\end{minipage}
\end{figure}
\\\\
\subsection{Synthetic Model for High-Dimensional Sequence Data}
Using the radial profiles from all TC data, we perform a principal component analysis (PCA). Figure \ref{fig:pca_components} shows the first 3 principal components, or empirical orthogonal functions (EOFs). Figure \ref{fig:pca_rad} shows the reconstruction of the TC from Figure \ref{fig:true_rad} using just these 3 EOFs. To create the synthetic data in Example 3, we use a similar reconstruction scheme:
Let $\Delta PC_t := PC_t - PC_{t-30m}$ be the 30-minute change in a PC coefficient at time $t$ for observed data. We fit a vector autoregression (VAR) model to $(\Delta PC1_t, \Delta PC2_t, \Delta PC3_t)$ to capture the dependence of each component on its own lags as well as the lags of the other components. The model chosen by the BIC criterion has order 3, for a lag of 90 minutes.
With the fitted VAR model, we can jointly simulate synthetic time series data for $PC1, PC2, PC3$. A TC structural trajectory is constructed by multiplying simulated time series of PCA coefficients with their corresponding eigenvectors (Figure~\ref{fig:pca_components}).
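The simulation step can be sketched as follows. The VAR(3) coefficient matrices here are made-up stable stand-ins, not the fitted values from the paper, and the noise scale is an assumption.

```python
import numpy as np

def simulate_var(coefs, n_steps, rng, noise_scale=1.0):
    """Simulate a VAR(p) process for the 3 PC-coefficient increments:
    dPC_t = A_1 dPC_{t-1} + ... + A_p dPC_{t-p} + eps_t."""
    p = len(coefs)
    dim = coefs[0].shape[0]
    x = np.zeros((n_steps + p, dim))        # zero-padded warm-up history
    for t in range(p, n_steps + p):
        x[t] = sum(A @ x[t - 1 - i] for i, A in enumerate(coefs))
        x[t] += rng.normal(scale=noise_scale, size=dim)
    return x[p:]

rng = np.random.default_rng(5)
# Hypothetical stable VAR(3) coefficients (order 3, i.e. a 90-min lag, as chosen by BIC).
coefs = [0.3 * np.eye(3), 0.1 * np.eye(3), -0.05 * np.eye(3)]
d_pc = simulate_var(coefs, n_steps=48 * 10, rng=rng)   # ten days of 30-min steps
pc = np.cumsum(d_pc, axis=0)                            # integrate increments back to PC series
```

Multiplying `pc` by the three eigenvectors of Figure~\ref{fig:pca_components} then yields a synthetic structural trajectory.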
\subsection{Synthetic Model for Intensities}
To model the time evolution of intensities $Y$, we fit a time series regression of intensity change on its past values together with PC coefficients for present and past TC structure.
Let $Z := \textrm{logit}(Y / 200)$ so that simulated values of intensities $Y$ are reasonable, i.e. fall between 0 and 200. We then define $\Delta Z_t = Z_t - Z_{t-6h}$. Finally, we fit the following linear regression model for $\Delta Z$:
\begin{align}
\Delta Z_t = &\beta_0 + \beta_1 Z_{t-6h} + \beta_2 \Delta Z_{t-6h} + \beta_3 PC1_t + \beta_4 PC2_t + \beta_5 PC3_t + \beta_6 PC1_{t-6h} \nonumber\\
&+ \beta_7 PC2_{t-6h} + \beta_8 PC3_{t-6h} + \beta_9 PC1_{t-12h} + \beta_{10} PC2_{t-12h} + \beta_{11} PC3_{t-18h} \nonumber\\
&+ \beta_{12} PC2_{t-24h} + \epsilon_t
\label{eq:TC_model}
\end{align}
where $\epsilon_t$ is Gaussian noise with mean 0 and standard deviation set to the root mean squared error between the real and predicted radial profiles in the training set. Note that $\Delta Z_t$ has dependencies on its own lagged values as well as lagged values of $PC_t$.
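The role of the logit transform is to keep every simulated intensity inside $(0, 200)$ kt no matter how the regression terms behave. The sketch below uses made-up coefficients and only a few of the regression terms from the model above, purely to illustrate the bounded update.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(6)

# Illustrative coefficients only; the paper fits these by linear regression.
b0, b1, b2, sigma_eps = 0.0, -0.05, 0.3, 0.05

def step_intensity(y_prev, dz_prev, pc_now, rng):
    """One 6-hour update of TC intensity on the logit scale, so that the
    simulated wind speed always stays in (0, 200) kt."""
    z_prev = logit(y_prev / 200.0)
    dz = b0 + b1 * z_prev + b2 * dz_prev + 0.01 * pc_now.sum() \
        + rng.normal(scale=sigma_eps)
    return 200.0 * inv_logit(z_prev + dz), dz

y, dz = 50.0, 0.0                         # start at 50 kt
traj = [y]
for t in range(40):                       # ten days of 6-hour steps
    y, dz = step_intensity(y, dz, rng.normal(size=3), rng)
    traj.append(y)
traj = np.array(traj)
```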
\\\\
Figure \ref{fig:TC_data} in Section~\ref{sec:example_TCs} shows an example TC with simulated radial profiles that update every 30 minutes, with accompanying simulated wind speed $Y$ every 30 minutes.
\\\\
As a sanity check, we check that the marginal distributions of the simulated and real wind speed values ($Y$) look similar, as shown in Figure \ref{fig:TC_marginal_intensity}.
\begin{figure}[h!]
\begin{minipage}[ht]{0.6\linewidth}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/generated_vs_observed_TC_intensity_20220405.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.39\linewidth}
\baselineskip=13pt
\caption{\small \textit{Left:} Marginal distribution of generated wind speed values $Y$, based on the model in Equation~\ref{eq:TC_model}. \textit{Right:} Marginal distribution of observed wind speed values.}
\label{fig:TC_marginal_intensity}
\end{minipage}
\end{figure}
\\\\
\subsection{Re-calibration of Convolutional MDN Results of Intensity Distribution}
With our trained VAR model, we generate a very long time series for $PC1, PC2, PC3$, with the initial point chosen as a value of the $PC$'s randomly selected from the training set of storms. The time series is then divided into 24-hour-long chunks, and the structural trajectory and intensities are reconstructed. We create 8000 such instances for our training set, 8000 more for our calibration set, and 4000 instances for our test set. We discard a 24-hour-long window between consecutive chunks of the time series to ensure that each instance has no memory of the previous ones.
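The chunking-with-gaps step can be sketched as below (48 steps of a 30-minute series per 24-hour chunk, with a 48-step gap discarded between chunks; the series here is a trivial stand-in).

```python
import numpy as np

def chunk_with_gaps(series, chunk_len=48, gap_len=48):
    """Split a long 30-min-resolution series into 24-hour chunks (48 steps),
    discarding a 24-hour gap between consecutive chunks so that each chunk
    carries no memory of the previous one."""
    chunks = []
    start = 0
    while start + chunk_len <= len(series):
        chunks.append(series[start:start + chunk_len])
        start += chunk_len + gap_len          # skip the rejected window
    return np.array(chunks)

series = np.arange(1000.0)                    # stand-in for a simulated PC series
chunks = chunk_with_gaps(series)
```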
We fit a unimodal Gaussian neural density model to estimate the conditional density $f(y|{\mathbf{s}})$ of TC intensities given past radial profiles. Specifically, we fit a convolutional mixture density network (ConvMDN, \cite{disanto2018cmdn}) with a single Gaussian component, two convolutional layers, and two fully connected layers, which gives an initial estimate of $f(y|{\mathbf{s}})$.
We then use a convolutional neural network \cite{LeCun1989CNN,FUKUSHIMA1982CNN} with two convolutional layers followed by 5 fully connected layers, which takes the structural trajectory images and the coverage level ($\alpha$) as inputs. The network output is restricted to be monotonic w.r.t.\ $\alpha$~\cite{Wehenkel2019UMNN}. For both models we use ReLU activations \citep{glorot2011relu} for intermediate layers and train using the Adam optimizer \citep{kingma2014adam} with learning rate $10^{-3}$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. We use the same multiplicative learning rate decay schedule mentioned in Section~\ref{app:hyperparameters}.
\subsection{Additional Example 3 Results}
The ConvMDN struggles in this example because the conditional distribution of $Y|\S$ is sometimes skewed towards larger intensities; this phenomenon can partly be observed in Figure~\ref{fig:TC_boxplots}, where we show the distribution of $Y_t$ at fixed values of $t$ for some example simulated TCs. \texttt{Cal-PIT} is able to adjust for the model misspecification (similar to Example 2), resulting in narrower prediction bands which are still conditionally valid. Figure \ref{fig:TC_pred_sets} shows a few more examples of prediction sets for simulated TCs before and after calibration.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/TC_boxplots.pdf}
\caption{
{\small Boxplots of the distribution of $Y_t$ at fixed values of $t$, for simulated TCs. The distributions show skewness, which may explain why the uncalibrated ConvMDN does not fit perfectly. Moreover, the calibrated prediction sets appear to track the observed trajectories (black curves) more closely than the ConvMDN.
}
}
\label{fig:TC_boxplots}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.98\textwidth]{figures/TC_prediction_sets_full.pdf}
\caption{
{\small Prediction sets for simulated TCs, before and after calibration. True trajectories are solid black,
and prediction sets at test points are in blue.
}
}
\label{fig:TC_pred_sets}
\end{figure*}
\subsubsection*{References}}
\usepackage{booktabs}
\usepackage{tikz}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{amsthm}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{adjustbox}
\newtheorem{prop}{Proposition}
\newtheorem{thm}{Theorem}
\newtheorem{Assumption}{Assumption}
\newtheorem{Lemma}{Lemma}
\newtheorem{Corollary}{Corollary}
\newtheorem{Definition}{Definition}
\newtheorem{Remark}{Remark}
\newtheorem{Example}{Example}
\newcommand{\swap}[3][-]{#3#1#2}
\def{\mathbb I}{{\mathbb I}}
\def{\mathbb P}{{\mathbb P}}
\def\mathbb{P}{\mathbb{P}}
\def{\mathcal L}{{\mathcal L}}
\def{\mathbf{X}}{{\mathbf{X}}}
\def{\mathbf{x}}{{\mathbf{x}}}
\def{\mathbf{s}}{{\mathbf{s}}}
\def{\mathbf{z}}{{\mathbf{z}}}
\def{\mathbf{y}}{{\mathbf{y}}}
\def{\mathbf{Y}}{{\mathbf{Y}}}
\def{\mathbf{Z}}{{\mathbf{Z}}}
\def{\mathbf{z}}{{\mathbf{z}}}
\def{\mathbb E}{{\mathbb E}}
\def{\mathbb V}{{\mathbb V}}
\def{\mathbb H}{{\mathbb H}}
\def{\mathcal D}{{\mathcal D}}
\def{\mathcal T}{{\mathcal T}}
\def{\rm PIT}{{\rm PIT}}
\def{\rm HPD}{{\rm HPD}}
\def{\rm INT}{{\rm INT}}
\let\hat\widehat
\renewcommand{\S}{\mathbf{S}}
\newcommand{\series}[1]{\{#1\}_{t\ge0}}
\newcommand{\codecomment}[1]{\textbf{\color{black}// #1}}
\definecolor{awesome}{rgb}{1.0, 0.13, 0.32}
\definecolor{safetyorange}{rgb}{1.0, 0.4, 0.0}
\definecolor{vermilion}{rgb}{0.89, 0.26, 0.2}
\newcommand{\add}[1]{{\color{vermilion} #1}}
\definecolor{aqua}{rgb}{0.0, 0.9, 0.9}
\newcommand{\remove}[1]{\textbf{\color{aqua}#1}}
\newcommand{\ann}[1]{\textbf{\color{cyan}[Ann: #1]}}
\newcommand{\trey}[1]{\textbf{\color{purple}[Trey: #1]}}
\newcommand{\annlee}[1]{\textbf{\color{magenta}[Ann: #1]}}
\newcommand{\rafael}[1]{\textbf{\color{blue}[Rafael: #1]}}
\newcommand{\david}[1]{\textbf{\color{teal}[David: #1]}}
\newcommand{\dey}[1]{\textbf{\color{red}[Biprateep: #1]}}
\title{Calibrated Predictive
Distributions via Diagnostics for Conditional Coverage}
\author{
Biprateep Dey \thanks{equal contribution}\\
Dept. of Physics and Astronomy and PITT-PACC\\
University of Pittsburgh\\
Pittsburgh, PA 15260\\
\texttt{biprateep@pitt.edu} \\
\And
David Zhao\footnotemark[1]\\
Department of Statistics and Data Science\\
Carnegie Mellon University\\
Pittsburgh, PA 15213\\
\texttt{davidzhao@stat.cmu.edu} \\
\And
Jeffrey A. Newman \\
Dept. of Physics and Astronomy and PITT-PACC\\
University of Pittsburgh\\
Pittsburgh, PA 15260\\
\texttt{janewman@pitt.edu} \\
\And
Brett H. Andrews \\
Dept. of Physics and Astronomy and PITT-PACC\\
University of Pittsburgh\\
Pittsburgh, PA 15260\\
\texttt{andrewsb@pitt.edu} \\
\And
Rafael Izbicki \\
Department of Statistics\\
Federal University of S\~{a}o Carlos (UFSCar)\\
S\~{a}o Carlos, Brazil\\
\texttt{rafaelizbicki@gmail.com} \\
\And
Ann B. Lee \\
Department of Statistics and Data Science\\
Carnegie Mellon University\\
Pittsburgh, PA 15213\\
\texttt{annlee@stat.cmu.edu} \\
}
\begin{document}
\maketitle
\begin{abstract}
Uncertainty quantification is crucial for assessing the predictive ability of AI algorithms. A large body of work (including normalizing flows and Bayesian neural networks) has been devoted to describing the entire predictive distribution (PD) of a target variable Y given input features ${\mathbf{X}}$. However, off-the-shelf PDs are usually far from being conditionally calibrated; i.e., the probability of occurrence of an event given input ${\mathbf{X}}$ can be significantly different from the predicted probability. Most current research on predictive inference (such as conformal prediction) concerns constructing prediction sets that not only provide correct uncertainties on average over the entire population (that is, averaging over ${\mathbf{X}}$), but are also approximately conditionally calibrated with accurate uncertainties for individual instances. It is often believed that the problem of obtaining and assessing entire conditionally calibrated PDs is too challenging to approach. In this work, we show that recalibration as well as validation are indeed attainable goals in practice. Our proposed method relies on the idea of regressing probability integral transform (PIT) scores against ${\mathbf{X}}$. This regression gives full diagnostics of conditional coverage across the entire feature space and can be used to recalibrate misspecified PDs. We benchmark our corrected prediction bands against oracle bands and state-of-the-art predictive inference algorithms for synthetic data, including settings with distributional shift and dependent high-dimensional sequence data. Finally, we demonstrate an application to the physical sciences in which we assess and produce calibrated PDs for measurements of galaxy distances using imaging data (i.e., photometric redshifts).
\end{abstract}
\section{Introduction}
\label{sec:intro}
The term Uncertainty Quantification (UQ) is often used for all approaches that go beyond point estimation of a variable of interest to assess the predictive accuracy of models \cite{Berger2019UQReview,Abdar21UQReview}. In science applications, UQ is sometimes more important than point prediction; see, e.g., \citet{Gneiting2014Review,snowmassUQ}. In engineering and finance, UQ can also be essential for decision making, as when optimizing supply chains for actual demand \cite{farmer2017UQReview,gottlich2020UQReview}. In this work, we consider the problem of assessing the uncertainty about a continuous response or ``target'' variable $Y \in \mathbb{R}$ given input features or covariates ${\mathbf{X}} \in \mathcal{X}$.
UQ approaches that yield prediction regions for $Y$ include:
\emph{quantile regression} \citep{koenker1978QR,koenker2001QR}, which estimates conditional quantile functions ${F}^{-1}(\alpha | {\mathbf{x}})$ of $Y$ at specified levels $\alpha \in (0,1)$,
and
{\em conformal prediction} \citep{Vovk2005, Lei2014, Barber2020}, which provides a distribution-free approach to constructing prediction regions based upon remapping a measure of conformity between observed and fitted values of $Y$ to quantiles.
Prediction bands are useful in quantifying uncertainties, but we are now witnessing a transformation across scientific disciplines from point forecasts to the entire predictive distribution (PD) of $Y$ given ${\mathbf{x}}$; see, e.g., \citet{gneiting2008probabilistic} for probabilistic forecasting in weather predictions, \citet{timmermann2000PDFinance} for financial risk management, \citet{alkema2007HIVPD} for epidemiological projections and \citet{Mandelbaum2008PhotozPDF, Malz2022Photoz} for the importance of PDs for astrophysical studies.
Common approaches to obtaining the entire predictive distribution include:
\emph{conditional density estimation} (CDE), which directly estimates the conditional density functions $f(y|{\mathbf{x}})$, via, e.g., mixture density networks [MDN;\cite{Bishop1994MDN}], kernel mixture networks [KMN;\cite{Ambrogioni2017KMN}], Bayesian neural networks \cite{Goan2020BNN,neal2012BNNs,mckay1992BNNs,graves2011BNNs,blundell2015BNNs}, normalizing flows (including neural autoregressive models)\cite{Papamakarios2019NormalizinfFlow,Kobyzev2021NormalizingFlow}, Gaussian process CDEs\cite{Dutordoir2018GP-CDE}, or simpler nonparametric CDE methods \cite{izbicki2016nonparametric,izbicki2017converting,Dalmasso2020FlexcodePhotoz};
\emph{implicit CDE} methods that encode the PD implicitly (e.g., conditional generative adversarial networks or cGANs; \cite{Mirza2014cGAN}); and \emph{quantile regression}
methods that estimate all quantiles simultaneously\cite{Chung2021Quantile,Fasiolo2021Quantile,Tagasovska2019Quantile,Lucrezia2018Quantile,Liu2011Quantile}.
Though there are many ways one can describe PDs, the models are only useful in practice if they are approximately {\em individually or conditionally calibrated}, meaning that the estimated conditional distribution function (CDF) $\widehat{F}(y|{\mathbf{x}}) \approx F(y|{\mathbf{x}})$ for all $y \in \mathbb{R}$ at every ${\mathbf{x}} \in \mathcal{X}$. In words, the predicted conditional probability of an event happening given input ${\mathbf{x}}$ should match its observed probability. Instance-wise uncertainties are crucial in practical applications. For example, weather forecasts may predict the probability of rainfall given the current state of environmental predictors. Similarly, medical research may estimate the efficacy of a drug for individuals of specific demographics after taking a given dose. Achieving instance-wise uncertainties can be important for algorithmic fairness due to the need not to over- or under-predict risk for certain groups of individuals \citep{kleinberg2016inherent,zhao2020individual}.
Prediction bands can be derived from PDs; by construction, individually calibrated PDs lead to conditionally valid predictions. Indeed, if $C_{\alpha}({\mathbf{X}})$ is a prediction band derived from $\widehat F$ with nominal coverage $1-\alpha$, individually calibrated prediction distributions $\widehat F$ imply
\begin{equation} \label{eq:cond_validity}
\mathbb{P}(Y\in C_\alpha({\mathbf{X}}) |{\mathbf{X}} = {\mathbf{x}}) = 1 -\alpha\,,\ \forall {\mathbf{x}} \in \mathcal{X}.
\end{equation}
However, off-the-shelf PDs are usually far from being calibrated:
CDEs are typically fitted by minimizing a loss function (e.g, the KL divergence or integral probability metrics \citep{papamakarios2019normalizing,dalmasso2020conditional}) that does not directly depend upon calibration.
An additional obstacle to achieving individual calibration is that
most metrics that assess calibration of PDs, such as the probabilistic integral transform (PIT; \citep{gan1990pit}), only assess average or marginal calibration over the entire distribution of ${\mathbf{X}} \in \mathcal{X}$.
Average calibration is often referred to simply as ``calibration'' \cite{naeini2015ECE,Guo2017Calibration}, but it is a well-known problem that one can achieve marginally calibrated distributions,
$\mathbb{E}_{{\mathbf{X}} \sim F_X} \left[\widehat{F}\left(y|{\mathbf{X}} \right)\right] = \mathbb{E}_{{\mathbf{X}} \sim F_X} \left[F\left(y|{\mathbf{X}} \right)\right]$,
with estimates that completely ignore the input ${\mathbf{x}}$. For instance, the PIT statistic may be uniformly distributed even if $\widehat f(y|{\mathbf{x}})=f(y)$ \cite{Schmidt2020Photo-z}. More generally, miscalibrations in different regions of the feature space can cancel out to produce seemingly perfect aggregate results \cite{zhao2021diagnostics,Jitkrittum2020LocalCalibration,Luo2021LocalCallibration}.
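This pitfall is easy to reproduce numerically: if $\widehat F$ is taken to be the true \emph{marginal} CDF of $Y$, its PIT values are exactly uniform marginally even though the estimate ignores ${\mathbf{x}}$ entirely. A minimal Python sketch of our own (not from the cited works):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)
Y = X + rng.normal(size=n)              # true model: Y | X ~ N(X, 1)

# A "predictive distribution" that ignores X entirely: the marginal N(0, 2).
pit = stats.norm.cdf(Y, loc=0.0, scale=np.sqrt(2.0))

print(pit.mean())                        # ~0.5: marginally calibrated
# But conditionally on X near 2, the PIT values pile up toward 1:
print(pit[np.abs(X - 2) < 0.1].mean())   # far above 0.5
```

The marginal PIT histogram passes any uniformity check, yet at any fixed slice of ${\mathbf{x}}$ the conditional PIT distribution is badly skewed.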
\textbf{Objectives and Our Approach:} We propose a non-parametric and easily interpretable framework for constructing and assessing entire predictive distributions (rather than just prediction sets) that reliably quantifies individual uncertainties; that is, it provides individual, or conditional, calibration.
Our approach builds on the key observation that an estimate $\widehat F$ is
conditionally calibrated if and only if its probability integral transform (PIT) value
${\rm PIT}(Y;{\mathbf{X}}):=\hat F(Y|{\mathbf{X}})$ is uniformly distributed \emph{conditionally on ${\mathbf{x}}$}. Thus,
if a model is well-calibrated, $r^{\hat f}(\gamma;{\mathbf{x}}) := \mathbb{P} \left( {\rm PIT}(Y;{\mathbf{X}}) \leq \gamma \ \middle| \ {\mathbf{x}} \right)$ is close to $\gamma$ for all ${\mathbf{x}}$'s.
We achieve this by learning the function $r$ via monotonic neural networks. Since
${\rm PIT}(y;{\mathbf{x}})<\gamma \iff y \in (-\infty,\widehat F^{-1}(\gamma|{\mathbf{x}}) )$,
the $L^2$ loss function used for training directly encourages conditional calibration. Our procedure is also amortized, in the sense that we train on ${\mathbf{x}}$ and $\gamma$ jointly, after which the function $r$ can be evaluated for any ${\mathbf{x}}$ and $\gamma$. By evaluating how far $r^{\hat f}(\gamma;{\mathbf{x}})$ is from $\gamma$, one can assess at which locations in feature space $\widehat F$ is well estimated. Moreover,
the learnt function $\widehat r^{\hat f}(\gamma;{\mathbf{x}})$ itself
suggests \emph{how} $\widehat F$ can be adjusted; that is, we provide the practitioner with both interpretable diagnostics and a means of correcting discrepancies.
\textbf{Relation to Other Work} \\
{\em Goodness-of-fit tests and calibration:} Goodness-of-fit of PDs to observed data can be assessed by two-sample tests \citep{stute2002condtest, moreira2003condtest,jitkrittum2020cde}. Such tests are useful for deciding whether a PD needs to be improved, but do not provide any means to correct discrepancies. One way to recalibrate PDs to fit data on average is to instead assess how the marginal distribution of the PIT values differs from a uniform distribution \citep{cook2006validating,freeman2017photoz,talts2018validating,disanto2018cmdn} and apply corrections to bring them into agreement \citep{bordoloi2010photoz}; by construction, such recalibration schemes only improve marginal calibration.
In this work, we instead build on \citet{zhao2021diagnostics}, which proposes a version of PIT that is determined across the entire input feature space.
{\em Quantile regression:} Quantile regression intervals converge to the oracle $C^{*}_{\alpha}({\mathbf{X}})=\left[F^{-1}(0.5\alpha|{\mathbf{X}}), F^{-1}(1-0.5\alpha|{\mathbf{X}})\right]$ \citep{koenker1978QR,taylor1999quantile}. Even though $C^{*}_{\alpha}({\mathbf{X}})$ satisfies Equation~\ref{eq:cond_validity}, the standard pinball loss can yield highly miscalibrated UQ models for finite data sets
\citep{chung2021beyondpinball,feldman2021orthogQR}. New loss functions have been proposed to address this issue \citep{chung2021beyondpinball,feldman2021orthogQR}. Our method directly targets conditional calibration post-training, instead of trading off desirable properties of uncertainty estimates (e.g., average calibration and sharpness) during training itself.
{\em Conformal inference:}
Conformal prediction methods have the appealing property of yielding prediction sets with finite-sample marginal validity, $\mathbb{P}(Y\in C({\mathbf{X}})) \geq 1 -\alpha$, so long as the data are exchangeable \citep{Vovk2005, lei2018distribution}. However, there is no guarantee that Equation~\ref{eq:cond_validity} is satisfied, even approximately.
More recent efforts have addressed approximate conditional validity \citep{romano2019conformalizedQR,izbicki2020Dist-Split,chernozhukov2021distributional,izbicki2022}
by designing conformal scores whose distribution is approximately homogeneous across $\mathcal{X}$.
Unfortunately, it is difficult to check whether these methods provide good conditional coverage in practice. Conformal prediction bands are also not conditionally valid, even asymptotically, if the initial model is misspecified. Additionally, unlike conformal inference, our methods provide estimates of the full predictive distribution.
\textbf{Contribution and Novelty}
We present a unified framework for diagnostics and recalibration of {\em entire predictive distributions} $F(y|{\mathbf{x}})$; see Section \ref{sec:photo-z} for an application to a challenging physics problem that yields multimodal distributions. Our method directly targets conditional coverage and provides interpretable diagnostics; it can also be used to derive prediction sets.
Though estimating entire distributions nonparametrically is difficult,\footnote{It is sufficient to train a monotonic neural network \citep{Wehenkel2019UMNN} to learn the regression function in Equation~\ref{eq:r_alpha}.}
our performance is on par with state-of-the-art predictive inference algorithms for constructing prediction sets;
see Section \ref{sec:example_iid} for a comparison with conformal quantile regression \citep{romano2019conformalizedQR}, reg-split \citep{lei2018distribution}, and dist-split \citep{izbicki2020Dist-Split} (which instead ``conformalizes'' PIT scores). Our method can handle {\em model mis-specifications}; see Section \ref{sec:example_misspecified} for an example of diagnostics and recalibration in a setting with distributional shift. Finally, because we only require identically distributed (rather than exchangeable) data, our framework can be applied to stationary time series and other settings with {\em dependent} but identically distributed (DID) data (cf. Section \ref{sec:example_TCs}).
\section{Methodology}
\label{sec:methods}
{\bf Notation and Objectives.} Suppose that $\hat{f}(y|{\mathbf{x}})$ is a conditional density estimator (CDE) of a continuous random variable $Y \in \mathcal{Y} \subseteq \mathbb{R}$ given a random vector ${\mathbf{X}} \in \mathcal{X} \subseteq \mathbb{R}^d$. Let $\mathcal{D} = \{({\mathbf{X}}_1, Y_1), \ldots, ({\mathbf{X}}_n, Y_n)\}$ denote an
i.i.d. sample from $F_{{\mathbf{X}},Y}$, the joint distribution of $({\mathbf{X}}, Y)$. Our goal is to use $\mathcal{D}$ to recalibrate our CDE so as to achieve correct conditional coverage.
We refer to $\mathcal{D}$ as ``calibration data'', which are independent of the ``train data'' used to construct $\hat{f}(y|{\mathbf{x}})$.
\subsubsection*{Local Diagnostics via PIT}
Our calibration framework uses diagnostics developed by \citet{zhao2021diagnostics} for assessing conditional density models. For fixed ${\mathbf{x}} \in \mathcal{X}$ and $y \in \mathcal{Y}$, the local probability integral transform (PIT) of $y$ at ${\mathbf{x}}$ is given by
\begin{align}
\label{eq:pit}
{\rm PIT}(y;{\mathbf{x}}) := \int_{-\infty}^y \hat f(y'|{\mathbf{x}}) dy' = \hat F(y|{\mathbf{x}}),
\end{align}
where $\hat F$ is the cumulative distribution function (CDF) associated with $\hat f$. The diagnostics require the estimation of the CDF of the PIT values, which we refer to as the PIT-CDF:
\begin{Definition}[PIT-CDF]
For every ${\mathbf{x}} \in \mathcal{X}$ and $\gamma \in (0,1)$, the CDF of the local PIT is given by
\begin{align}
\label{eq:r_alpha}
r^{\hat f}(\gamma;{\mathbf{x}}) := \mathbb{P} \left( {\rm PIT}(Y;{\mathbf{x}}) \leq \gamma \ \middle| \ {\mathbf{x}} \right). \end{align}
\end{Definition}
We learn $r^{\hat f}(\gamma;{\mathbf{x}})$ using regression: in this paper, we first augment the calibration data ${\mathcal D}$ by drawing $\gamma_{i,1}, \ldots, \gamma_{i,K} \sim U(0,1)$ for each data point ($i=1,\ldots,n$), then regress the random variable
\begin{align}
W_{i,j}:= {\mathbb I}({\rm PIT}(Y_i; {\mathbf{X}}_i) \leq \gamma_{i,j})
\end{align}
on both ${\mathbf{X}}_i$ and $\gamma_{i,j}$
using the augmented calibration sample $\mathcal{D}'=\{({\mathbf{X}}_i,\gamma_{i,j},W_{i,j})\}_{i,j}$, for $i=1,\ldots,n$ and $j=1,\ldots,K$. As $r^{\hat f}(\gamma;{\mathbf{x}})$ is a non-decreasing function of $\gamma$, we use monotonic neural networks \citep{Wehenkel2019UMNN} as our regression algorithm, though any other suitable regression method may be used.
The PIT-CDF values $r^{\hat f}(\gamma;{\mathbf{x}})$ characterize the local consistency of $\widehat{f}$, defined as follows:
\begin{Definition}[Local consistency]A density estimate $\widehat{f}(\cdot|{\mathbf{x}})$ is locally consistent at a fixed ${\mathbf{x}}$ if, and only if, $\hat F(\cdot|{\mathbf{x}})=F(\cdot|{\mathbf{x}})$.
\end{Definition}
Indeed, for fixed ${\mathbf{x}}$, $\hat f(\cdot|{\mathbf{x}})$ is locally consistent if and only if $r^{\hat f}(\gamma;{\mathbf{x}})=\gamma$ for every $\gamma \in (0,1)$ \citep[Corollary 1]{zhao2021diagnostics}. Hence, by plotting
an estimate of $r^{\hat f}(\gamma;{\mathbf{x}})$
versus $\gamma$, in what we call Amortized Local P-P plots (ALPs), we can assess how close $\hat{f}$ is to $f$ across the entire feature space. We can also describe the type of deviations that occur; see Figure~\ref{fig:ex2_panels} for some examples.
\subsubsection*{Cal-PIT}
\texttt{Cal-PIT} uses the estimated regression function
$\hat r^{\hat f}(\gamma;{\mathbf{x}}) := \hat{\mathbb{P}} \left( {\rm PIT}(Y;{\mathbf{x}}) \leq \gamma \ \middle| \ {\mathbf{x}} \right)$
to correct the original CDE $\hat f$, so that the recalibrated CDE $\widetilde f$ is approximately locally consistent across the feature space. The procedure is as follows:
Consider a fixed evaluation point ${\mathbf{x}}$ and $\gamma \in G$, where $G$ is a fine grid over $(0,1)$. Let $\beta := \hat{r}^{\hat f}(\gamma;{\mathbf{x}})$. If the regression is perfectly estimated (that is, $\hat r^{\hat f}=r^{\hat f}$), then,
as long as both $F$ and $\widehat F$ are continuous and $\widehat F$ dominates $F$ (see Assumptions \ref{assump:continuity} and \ref{assump:dominates} in Section \ref{sec:theory} for details),
\begin{equation} \label{eq:cov_reg}
\beta = r^{\hat f}(\gamma;{\mathbf{x}}) =\mathbb{P}\left(Y \leq \widehat{F}^{-1}(\gamma \mid {\mathbf{x}}) \ \middle| \ {\mathbf{x}} \right).
\end{equation}
That is, the probability of observing the response variable $Y$ below the predicted $\gamma$-quantile at ${\mathbf{x}}$ is equal to $\beta$.
However, local consistency at ${\mathbf{x}}$ requires this probability to be equal to $\gamma$.
The above result suggests that we ``adjust'' the values of $\widehat F$ and define a new conditional cumulative distribution function $\widetilde F$, where
\begin{align}
\label{eq:correction}
\widetilde F^{-1}\left(\beta|{\mathbf{x}} \right):=\hat F^{-1}(\gamma|{\mathbf{x}}).
\end{align}
By Equations~\ref{eq:cov_reg}-\ref{eq:correction}, the new CDE $\widetilde{f}$ will then satisfy the local consistency condition:
\begin{align*}
r^{\widetilde f}(\gamma;{\mathbf{x}}) := {\mathbb P}\left( Y \leq \widetilde F^{-1}\left(\gamma|{\mathbf{x}} \right) \ \middle| \ {\mathbf{x}} \right) = \gamma.
\end{align*}
Finally, for each ${\mathbf{x}}$ of interest, we use splines
to interpolate between $\gamma$-values on the grid $G$, so that $\widetilde{F}^{-1}( \cdot | {\mathbf{x}})$ and $\widetilde{F} (\cdot | {\mathbf{x}})$ can be evaluated for all $\beta \in (0,1)$.
The \texttt{Cal-PIT} prediction interval at ${\mathbf{x}}$, defined as
$$C_\alpha({\mathbf{x}}) :=\left[\widetilde F^{-1}(0.5\alpha|{\mathbf{x}}), \ \widetilde F^{-1}(1-0.5\alpha|{\mathbf{x}})\right],$$ approximately achieves $1-\alpha$ conditional coverage.
Algorithm~1 (in Appendix~\ref{app:algorithm}) details the \texttt{Cal-PIT} procedure for constructing either prediction intervals or re-calibrated CDEs from $\widetilde{F}^{-1}$. In Appendix~\ref{app:cal_hpd}, we also propose a related approach for computing highest predictive density (HPD) regions instead of predictive intervals. HPD regions can produce more informative and considerably smaller prediction sets than intervals for multimodal and skewed densities.
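The quantile adjustment of Equation~\ref{eq:correction}, together with the interpolation step, can be sketched as follows (function names are ours, and we use piecewise-linear rather than spline interpolation for brevity):

```python
import numpy as np

def recalibrate_quantiles(r_hat, F_hat_inv, x, grid=None):
    """Return F_tilde^{-1}(. | x), defined via
    F_tilde^{-1}(beta | x) := F_hat^{-1}(gamma | x), beta = r_hat(gamma; x)."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)         # fine grid G over (0, 1)
    beta = np.array([r_hat(g, x) for g in grid])    # estimated PIT-CDF values
    q = np.array([F_hat_inv(g, x) for g in grid])   # initial quantiles
    # Enforce (weak) monotonicity of beta before interpolating.
    beta = np.maximum.accumulate(beta)
    return lambda b: np.interp(b, beta, q)

def cal_pit_interval(r_hat, F_hat_inv, x, alpha=0.1):
    """Cal-PIT interval [F_tilde^{-1}(alpha/2), F_tilde^{-1}(1 - alpha/2)]."""
    F_tilde_inv = recalibrate_quantiles(r_hat, F_hat_inv, x)
    return F_tilde_inv(0.5 * alpha), F_tilde_inv(1 - 0.5 * alpha)
```

If the initial model is already locally consistent, $r_{\hat f}$ is the identity and the adjustment leaves $\widehat F$ unchanged.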
\begin{Remark}
If the initial model is good, then $r$ is easy to estimate; for instance, $\widehat f=f$ implies the identity function $r^{\hat f}(\gamma;{\mathbf{x}})=\gamma$. However, $\widehat f$ needs to place mass wherever the true conditional distribution does (cf. Assumption~\ref{assump:dominates}). Depending on the application, an estimate of the marginal distribution $f(y)$, or an initial fit with an MDN and a wide Gaussian (see Example \ref{sec:example_TCs}), could both be viable options.
\end{Remark}
\section{Theoretical Properties}
\label{sec:theory}
Next, we provide convergence rates for the recalibrated CDF estimator $\widetilde F$, and show
that \texttt{Cal-PIT} intervals achieve asymptotic conditional validity even if the initial CDE $\hat{f}$ is not consistent.
The following results are conditional on $\hat{f}$; all uncertainty refers to the calibration sample.
We assume that the true distribution of $Y|{\mathbf{x}}$ and its initial estimate are continuous, and that
$\widehat F$ places its mass on a region which is at least as large as that of $F$:
\begin{Assumption}[Continuity of the cumulative distribution functions]
\label{assump:continuity}
For every ${\mathbf{x}} \in \mathcal{X}$, $\widehat F(\cdot |{\mathbf{x}})$ and $F(\cdot |{\mathbf{x}})$ are continuous and strictly increasing functions.
\end{Assumption}
\begin{Assumption}[$\widehat F$ dominates $F$]
\label{assump:dominates}
For every ${\mathbf{x}} \in \mathcal{X}$, $\widehat F(\cdot |{\mathbf{x}})$ dominates $F(\cdot|{\mathbf{x}})$.
\end{Assumption}
To provide convergence rates
for the recalibrated CDF, we assume that $F(\cdot|{\mathbf{x}})$ cannot place too much mass in regions where the initial estimate
$\widehat F(\cdot|{\mathbf{x}})$
places little mass:
\begin{Assumption}[Bounded density]\label{assump:bounded}
There exists $K>0$ such that, for every ${\mathbf{x}} \in \mathcal{X}$, the Radon-Nikodym derivative of $F(\cdot | {\mathbf{x}})$ with respect to $\widehat F(\cdot | {\mathbf{x}})$ is bounded above by $K$.
\end{Assumption}
Finally, we assume that the regression method converges at a rate $O(n^{-\kappa})$:
\begin{Assumption}[Convergence rate of the regression method]\label{assump:convergence_rate} \label{assump:mse}
The regression method used to estimate $r^{\widehat f}$ is such that its convergence rate is given by
$${\mathbb E} \left[ \int \int \left(\widehat r^{\widehat f}(\gamma;{\mathbf{x}})-r^{\widehat f}(\gamma;{\mathbf{x}}) \right)^2 d\gamma dP({\mathbf{x}}) \right]=O\left(\frac{1}{n^\kappa}\right)$$
for some $\kappa>0$.
\end{Assumption}
Many methods satisfy Assumption \ref{assump:convergence_rate} for some value
$\kappa$, which is typically related to the dimension of $\mathcal{X}$ and the smoothness of the true regression $r$ (see for instance \citealt{gyorfi2002distribution}).
Under these assumptions, we can derive the rate of convergence for $\widetilde F$:
\begin{thm}
\label{thm:rate}
Under Assumptions \ref{assump:continuity}, \ref{assump:dominates}, \ref{assump:bounded} and \ref{assump:convergence_rate},
$$ {\mathbb E} \left[ \int \int \left(\widetilde F(y|{\mathbf{x}})-F(y|{\mathbf{x}}) \right)^2 dP(y,{\mathbf{x}})\right]=O\left(\frac{1}{n^\kappa}\right).$$
\end{thm}
Next, we show that
with a uniformly consistent regression estimator $\hat r^{\hat f}(\gamma;{\mathbf{x}})$ (see \cite{bierens1983uniform,hardle1984uniform,liero1989strong,girard2014uniform} for some examples), \texttt{Cal-PIT} intervals achieve asymptotic conditional validity, even if the initial CDE $\hat{f}(y|{\mathbf{x}})$ is not consistent.
\begin{Assumption}[Uniform consistency of the regression estimator]
\label{assump:uniform_consistency}
The regression estimator is such that
$$\sup_{{\mathbf{x}} \in \mathcal{X},\gamma \in [0,1]} | \widehat r^{\widehat f}(\gamma;{\mathbf{x}})- r^{\widehat f}(\gamma;{\mathbf{x}})| \xrightarrow[n \longrightarrow\infty]{\enskip \text{a.s.} \enskip} 0,$$
where the convergence is with respect to the calibration set $\mathcal{D}$ only; $\widehat f$ is fixed.
\end{Assumption}
\begin{thm}[Consistency and conditional coverage of \texttt{Cal-PIT} intervals]
\label{thm:consistency}
Let $C^*_\alpha({\mathbf{x}})=\left[F^{-1}(0.5\alpha|{\mathbf{x}}),
F^{-1}(1-0.5\alpha|{\mathbf{x}})\right]$ be the oracle prediction band,
and let $C^n_\alpha({\mathbf{x}})$ denote the \texttt{Cal-PIT} interval.
Under Assumptions \ref{assump:continuity}, \ref{assump:dominates} and \ref{assump:uniform_consistency},
$$\lambda\left(C_\alpha^n({\mathbf{X}}) \Delta C_\alpha^*({\mathbf{X}})\right) \xrightarrow[n \longrightarrow\infty]{\enskip \text{a.s.} \enskip} 0,$$
where $\lambda$ is the Lebesgue measure in $\mathbb{R}$ and $ \Delta$ is the symmetric difference between two sets. It follows that
$C_\alpha^n({\mathbf{X}})$ has asymptotic conditional coverage of
$1-\alpha$ \citep{lei2018distribution}.
\end{thm}
\section{Synthetic Examples}
\label{sec:toy_examples}
\subsection{Example 1: IID Data. No Model Misspecification.}
\label{sec:example_iid}
Our first example, motivated by the simulated two-group example of \citet{feldman2021orthogQR}, shows that conditional coverage is difficult to obtain even in a simple i.i.d. setting when the data consists of two groups with different spreads. The feature space has three dimensions ($X_{0/1/2}$) and the target variable ($Y$) is one dimensional. $X_{0}$ is a categorical variable and indicates which group the data point belongs to; $Y$ splits into two branches for $X_{1}> 0$, so that the true CDE is bimodal in this regime (see App.~D for generating distributions and visualization of the data).
We build $90\%$ prediction sets using quantile regression (QR) with a pinball loss \citep{koenker1978QR}, conformalized quantile regression (\texttt{CQR}; \citealp{romano2019conformalizedQR}), and \texttt{Reg-split} \citep{lei2018distribution}, all trained with XGBoost~\citep{Chen2016XGBoost}; \texttt{Dist-split} \citep{izbicki2020Dist-Split}; and \texttt{Cal-PIT} with an initial CDE trained using FlexCode with an XGBoost regressor \citep{izbicki2017converting,Dalmasso2020FlexcodePhotoz} and a monotonic neural network~\citep{Wehenkel2019UMNN} for learning $\widehat r^{\widehat f}(\gamma;{\mathbf{x}})$. We split the data into equal training and calibration sets of combined sizes $n=2000$, $5000$, and $10000$ (with twice as much training data for QR), and measure conditional coverage at a fixed set of 1000 uniformly sampled test points in ${\mathbf{X}}$ for which the true CDE is known. Figure \ref{fig:ex1_coverage} compares the conditional coverage of each method. Test points whose coverage lies within 2 standard deviations (SD) of $1-\alpha=0.9$, based on 100 random realizations, are classified as having ``correct'' coverage. All methods converge towards more accurate conditional coverage as $n$ grows, but only \texttt{Cal-PIT} consistently attains nominal $(1-\alpha)=90\%$ coverage across the feature space for $n>5000$.
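When the true conditional CDF is known, as in this simulation, the conditional coverage of any interval-producing method can be computed exactly rather than by Monte Carlo; a minimal sketch (function names are ours):

```python
import numpy as np

def conditional_coverage(F_true, interval_fn, X_test):
    """Exact P(Y in C(x) | X = x) for intervals C(x) = [lo, hi],
    given the true conditional CDF F_true(y, x)."""
    cov = []
    for x in X_test:
        lo, hi = interval_fn(x)
        cov.append(F_true(hi, x) - F_true(lo, x))
    return np.asarray(cov)
```

One then flags test points whose coverage deviates from $1-\alpha$ beyond the Monte Carlo tolerance.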
We also compare each method's average prediction set size over all test points to the theoretically expected average size (the ``Oracle Band'' size). Fig.~\ref{fig:ex1_boxplots} shows that \texttt{Cal-PIT} intervals are at least as tight as those of the other methods while achieving better conditional coverage.
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\textwidth]{figures/ex1_coverage_pct_n2000.pdf}
\includegraphics[width=0.32\textwidth]{figures/ex1_coverage_pct_n5000.pdf}
\includegraphics[width=0.32\textwidth]{figures/ex1_coverage_pct_n10000.pdf}
\caption{\small Proportion of test points with correct conditional coverage for different methods. Data of total size $n$ are split equally into train and calibration sets (except for QR which uses all data for training). While conformal methods improve upon QR, \texttt{Cal-PIT} leads to better conditional coverage, even for smaller sample sizes.}
\label{fig:ex1_coverage}
\end{figure}
\begin{figure*}[h!]
\includegraphics[width=1\textwidth, right]{figures/new_ex1_nested_boxplots_INT_2SD.pdf}
\caption{\small Average prediction set sizes for test points for different methods along with the ideal ``Oracle Band''. Box plots show the size distribution for multiple trials of the experiment. \texttt{Cal-PIT} achieves prediction sets that are at least as tight as those by other methods, while simultaneously providing more accurate coverage.}
\label{fig:ex1_boxplots}
\end{figure*}
\subsection{Example 2: Mis-specified Models. Diagnostics and Recalibration}
\label{sec:example_misspecified}
The next example demonstrates that our method can effectively diagnose and correct model mis-specifications, yielding prediction sets that still achieve conditional coverage. We explore a problem with a single predictor $X$ in two different settings: one in which the true target distribution $f$ is skewed, and a second in which $f$ is kurtotic. In both cases, the initial estimate of the distribution, $\widehat f$ (used as the input to \texttt{Cal-PIT}), is a Gaussian centered at the true conditional mean with a standard deviation of 2 units. We split the data evenly between training and calibration sets, each with 10,000 data points. We measure coverage for various values of $X$ on an independent test set (see Appendix~\ref{app:toy_examples} for details of data generation).
Using the regression function $\hat{r}^{\hat f}(\gamma;{\mathbf{x}})$ for the local PIT distribution, learned on the calibration set with a monotonic neural network \cite{Wehenkel2019UMNN}, we construct ``amortized local P-P plots'' (ALPs) that show \emph{how} the estimated conditional density $\hat f(y|x)$ deviates from the true density in each setting (center panel of Fig.~\ref{fig:ex2_panels}). Our method pinpoints the nature of the discrepancy in the estimated distribution and directly corrects for deviations in conditional coverage.
\begin{figure}[h!]
\hspace*{-0.0cm}
\centering
\includegraphics[width=1\textwidth]{figures/example2_panels.png}
\caption{\small \textit{Left}: Initial and target distributions for Example \ref{sec:example_misspecified}. The
initial fit is Gaussian, but the target distributions are skewed and kurtotic, making the model mis-specified. Conditional densities for each distribution are shown at slices of $X$. \textit{Center}: Diagnostic local P-P plots. \texttt{Cal-PIT} identifies that, relative to the training density, the skewed observed data are biased at $X=-1$/$X=1$ but well estimated at $X=0$, and that the observed data for the kurtotic target are well estimated at $X=0$ but under- or over-dispersed at $X=-1$/$X=1$. These insights allow \texttt{Cal-PIT} to correct the initial model. \textit{Right}: Conditional coverage obtained via different calibration methods on target data; nominal coverage level $1-\alpha=0.9$. \texttt{Cal-PIT} is the only method to achieve conditional validity for all inputs $X$.
}\label{fig:ex2_panels}
\end{figure}
The right panel of Fig.~\ref{fig:ex2_panels} shows that the predictive distributions achieve nominal conditional coverage after recalibration using \texttt{Cal-PIT}. In contrast, \texttt{reg-split}, \texttt{CQR} and \texttt{Dist-split} fail to achieve conditional coverage, even though they are calibrated using data from the true data-generating process.
\subsection{Example 3: Dependent High-Dimensional Sequence Data}
\label{sec:example_TCs}
Next we illustrate \texttt{Cal-PIT} calibration of entire PDs of $Y_t|\S_{<t}$ for high-dimensional sequence data $\{(\S_{<t}, Y_t)\}$, which are based on satellite images of tropical cyclones (TCs). The target variable $Y_t$ represents TC intensity at time $t$, and the predictor $\S_{<t}$ is an entire sequence of one-dimensional functions summarizing the spatio-temporal evolution of TC convective structure leading up to time $t$.
We simulate from a model fit to observed data so that we can compute exact conditional coverage; the details are in Appendix~\ref{app:example_TCs}. The original data capture TC convective structure, as observed every 30 minutes by Geostationary Operational Environmental Satellite (GOES) infrared imagery \citep{janowiak2020NOAA}
of storms from the North Atlantic and Eastern North Pacific basins between 2000 and 2020. In addition, we have TC intensities from NHC's HURDAT2 best track database (6-hour synoptic times are interpolated to a 30-minute resolution before fitting a vector-autoregressive model; we then simulate a series of scalar TC intensities $Y_t$ via a time series regression of $Y_t$ on its own most recent values and on $\S_{<t}$).
Figure~\ref{fig:TC_data} shows an example of data from a simulated storm.
On the left, we have a so-called Hovm{\"o}ller diagram of the evolution of TC convective structure $\{{\mathbf{X}}_t\}_{t \geq 0}$, with each row representing the radial profile ${\mathbf{X}}_t \in \mathbb{R}^{120}$ of cloud-top temperatures as a function of radial distance from the TC center; time evolves top-down, in hours.
On the right, we have $\{Y_t\}_{t \geq 0}$, the simulated TC ``intensities'' at corresponding times $t$.
Each predictor sequence
$\S_{<t}:=({\mathbf{X}}_{t-48}, {\mathbf{X}}_{t-47}, \ldots,{\mathbf{X}}_{t})$ comprises the 24-hour history of convective structure (49 radial profiles at 30-minute resolution). We simulate 800 ``storms'' from a fitted TC length distribution. Data $\{(\S_{<t}, Y_t)\}$ from the same storm may overlap, while data from different storms are independent.
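Assembling the overlapping predictor sequences from a single storm's profiles amounts to a sliding window over time; a sketch under the stated 30-minute sampling (the function name is ours):

```python
import numpy as np

def build_sequences(profiles, intensities, window=49):
    """Pair each target Y_t with S_{<t} = (X_{t-48}, ..., X_t): the `window`
    most recent radial profiles (49 profiles = 24 h at 30-min resolution)."""
    T = len(profiles)
    S = np.stack([profiles[t - window + 1 : t + 1]
                  for t in range(window - 1, T)])
    return S, intensities[window - 1:]
```

Consecutive windows share 48 of their 49 profiles, which is why data from the same storm are highly dependent.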
\begin{figure}[htb]
\begin{minipage}[ht]{0.5\linewidth}
\begin{center}
\includegraphics[width=0.8\textwidth]{figures/sim_TC_rad.pdf}
\end{center}
\end{minipage}
\begin{minipage}[ht]{0.48\linewidth}
\caption{
{\small Simulated radial profiles and intensities for an example TC.
\textit{Left:} Hovm{\"o}ller diagram of the evolution of TC convective structure $\{{\mathbf{X}}_t\}_{t \geq 0}$; each row represents the radial profile ${\mathbf{X}}_t$ of cloud-top temperatures as a function of radial distance from the TC center at time $t$. Our predictors are 24-hour overlapping sequences $\{\S_{<t}\}_{t \geq 0}$, with data from the same ``storm'' being highly dependent.
\textit{Right:}
The target response, shown as a time series $\{Y_t\}_{t \geq 0}$ of simulated TC intensities.}} \label{fig:TC_data}
\end{minipage}
\end{figure}
Our goal is to construct prediction sets for $Y_t|\S_{<t}$, and illustrate how \texttt{Cal-PIT} improves upon an initial MDN fit.
Training, calibration, and testing were performed on {\em different} simulated ``storms''. First, we fit an initial CDE (ConvMDN; \citealp{disanto2018cmdn}), which estimates $f(y|{\mathbf{s}})$ as a unimodal Gaussian, using a train set with 8000 points $\{(\S_{<t},Y_t)\}$ (see Appendix~\ref{app:example_TCs} for details). Next, we apply \texttt{Cal-PIT} to learn $\hat r^{\hat f}(\gamma;{\mathbf{s}})$ using 8000 calibration points. Note, however, that data within the same storm are {\em highly} dependent; hence, the effective train and calibration sample sizes are much smaller than the nominal values. Finally, we evaluate the conditional coverage of the initial CDE and \texttt{Cal-PIT} on 4000 test points; see Fig.~\ref{fig:TC_cond_cov_paper}.
\texttt{Cal-PIT} recalibration improves upon the initial ConvMDN fit:
Fig.~\ref{fig:TC_cond_cov_paper} (left) shows prediction sets for $Y_t|\S_{<t}$ for a sample simulated TC, before and after calibration.
The calibrated prediction sets track the behavior of the observed trajectory more closely, as shown in Appendix~\ref{app:example_TCs}.
Moreover, the right panel shows \texttt{Cal-PIT} achieves better conditional coverage, even though the effective sample size is small because of dependencies between radial profiles in the same storm.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth, valign=t]{figures/TC_prediction_sets.pdf}
\includegraphics[width=0.5\textwidth, valign=t]{figures/TC_coverage.pdf}
\caption{
{\small \textit{Left:} Simulated TC example with dependent high-dimensional sequence data. Prediction sets for TC intensities, before and after calibration (blue bars), together with the actual trajectory of intensities
$\{Y_t\}_{t}$ (solid black lines). \texttt{Cal-PIT} sets track the behavior of the trajectories more closely. \textit{Right:} Conditional coverage of both methods across sequences ${\mathbf{s}}$. The initial ConvMDN fit with a single Gaussian component over-covers in certain regions of the feature space because the true PD is skewed toward larger intensities (see Appendix~\ref{app:example_TCs}); \texttt{Cal-PIT} partly corrects for the over-coverage and returns more precise prediction sets.
}
}
\label{fig:TC_cond_cov_paper}
\end{figure}
\section{Application to Photometric Redshift CDEs}\label{sec:photo-z}
We consider an application of our method to obtain CDEs for photometric redshifts of galaxies, which are crucial for many studies in astrophysics and cosmology. Redshift is an observable measure of the distance to a galaxy (and hence of time in the past);
distance estimates are critical for converting apparent brightness measurements into intrinsic properties such as mass, as well as for comparing the evolution of galaxies across different epochs of the Universe. Redshifts are also crucial for many probes of cosmology that are sensitive to the accelerating expansion of the Universe. However, obtaining direct redshift measurements for a large number of objects is prohibitively resource-intensive. Therefore, redshift estimates often must be derived from easier-to-obtain imaging data, resulting in measurements called photometric redshifts, or photo-$z$'s.
Images contain limited
information about redshifts. Consequently, galaxies at the same redshift can have very different image properties, and galaxies at different redshifts can have similar image properties. CDEs are commonly used to represent photo-$z$ estimates and associated uncertainties; these often are multi-modal and do not conform to any of the standard probability distributions \citep{Benitez2000BPz,Mandelbaum2008PhotozPDF,Malz2022Photoz}. Machine learning-based methods are widely used to predict photo-$z$-distributions when adequate training data are available (e.g., \citep{Beck2016Photoz,Zhou2021DESIPhotoz,Dalmasso2020FlexcodePhotoz,Almosallam2016GPz,Razim2021ANNPhotoz,Dey2021Photoz}), though they do not guarantee accurate conditional coverage.
In this work we use the simulated data from \citet{schmidt2020photoz}, which has previously been used to compare photo-$z$ CDE prediction methods. The features used to train the models are apparent magnitudes and colors: measures of an object's total light in an image, and ratios of that light as measured in two different wavelength bands.
We use the ``training set'' from~\citet{schmidt2020photoz}, with about 44,000 instances, as our calibration set; we then split the remaining data into two sets:
a validation set (twice as large as the calibration set) and a larger test set comprising roughly 250,000 instances. We start with the marginal distribution of redshifts as our initial CDE for all instances. \citet{Schmidt2020Photo-z} demonstrated that such a CDE can perform well on many commonly used metrics that check for marginal coverage, although it provides no information about individual instances.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figures/photo_z_P-P_local.pdf}
\caption{\small{\textit{Top: } Diagnostic local P-P plot for 5 galaxies before and after Cal-PIT is applied. \textit{Bottom:} CDEs for the corresponding galaxies before and after calibration along with their true redshift. Recalibration using \texttt{Cal-PIT} can recover multimodalities while ensuring good conditional coverage.}}
\label{fig:photo-z-local}
\end{figure}
We learn the local distribution of PIT values by training $r^{\widehat f}$ on the calibration set and use it to recalibrate the CDEs in our validation and test sets using the methods described in Section~\ref{sec:methods}. To assess the quality of our recalibrated CDEs, we train another regression model using the validation set and its recalibrated CDEs. We infer the local CDF of PIT for every instance in the test set before and after recalibration using the two trained models. Fig.~\ref{fig:photo-z-local}~(top) shows the diagnostic local P-P plot for five galaxies in the test set. The local PIT CDFs for these instances closely follow the identity line (i.e., the CDF of a uniform distribution), indicating good conditional coverage. Fig.~\ref{fig:photo-z-local}~(bottom) also shows that multimodal CDEs can be recovered (as is typical for photo-$z$'s, cf. Appendix~\ref{app:photo-z}), even when the input CDE before calibration is unimodal. The Cram\'er-von Mises statistic between the local PIT CDF of each galaxy in the test set and the uniform distribution is a measure of the quality of conditional coverage~\cite{Schmidt2020Photo-z}; it decreases significantly across the entire test set when comparing the two fits, with a mean decrease of $\sim4.5\times$ (Appendix~\ref{app:photo-z}). We also see a large improvement in the CDE Loss \citep{izbicki2017converting}, which provides an independent metric of conditional coverage, with a decrease from $-0.84$ to $-10.71$ after recalibration. For comparison, the photo-$z$ algorithms considered in \citet{Schmidt2020Photo-z} yielded CDE losses ranging from $-1.66$ at worst to $-10.60$ at best. \texttt{Cal-PIT} might yield even better results if applied to one of the algorithms that initially performed better.
\section{Discussion}
\label{sec:discussion}
\texttt{Cal-PIT} can assess whether a PD estimate $\widehat F(\cdot|{\mathbf{x}})$ is well-calibrated for all inputs ${\mathbf{x}}$, as well as correct for discrepancies. In order for \texttt{Cal-PIT} corrections to give good results, the initial estimate $\widehat F(\cdot|{\mathbf{x}})$ needs to place its mass on a region at least as large as that of $F(\cdot|{\mathbf{x}})$,\footnote{If this is not the case, the problem can be mitigated in practice by artificially widening $\widehat F$, for instance via convolution with a Gaussian kernel.} but the initial fit can otherwise be poor. Good results also require enough calibration data to learn the regression function (Eq.~\ref{eq:r_alpha}); empirically, moderate data sizes suffice given a suitable NN architecture and careful training. \texttt{Cal-PIT} does not require exchangeable data, only stationary processes; hence it can be applied to (stationary) probabilistic time series forecasting. Individually calibrated PDs automatically return conditionally calibrated prediction sets. However, \texttt{Cal-PIT} works under the assumption that $Y$ is continuous, and does not apply to classification tasks (unlike calibration schemes in, e.g., \cite{Kull2019Conditional,Wald2017Conditional}).
Finally, \texttt{Cal-PIT} can potentially be extended to multivariate output vectors ${\mathbf{Y}}$ by the decomposition $f({\mathbf{y}}|{\mathbf{x}})=\prod_{i} f(y_i|{\mathbf{x}},{\mathbf{y}}_{<i})$; thus performing \texttt{Cal-PIT} corrections on auto-regressive components of the conditional distribution. This is a particularly promising direction for Deep Pixel-CNN and Pixel-RNN models \citep{van2016conditional,van2016pixel} (work in progress).
\begin{ack}
The authors would like to thank Trey McNeely for helpful discussions and for preparing the tropical cyclone data that were used to fit the Example 3 model. This work is supported in part by NSF DMS-2053804, NSF PHY-2020295, and the C3.ai Digital Transformation Institute. BD, BHA and JAN acknowledge the support of the National Science Foundation under Grant No. AST-2009251. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. RI is grateful for the financial support of CNPq (309607/2020-5) and FAPESP (2019/11321-9). This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
\end{ack}
\clearpage
package com.android.server.am;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import org.xmlpull.v1.XmlSerializer;
import com.android.internal.util.FastXmlSerializer;
import android.app.ActivityManager;
import android.app.AppGlobals;
import android.content.pm.ApplicationInfo;
import android.content.pm.IPackageManager;
import android.content.res.CompatibilityInfo;
import android.os.Handler;
import android.os.Message;
import android.os.RemoteException;
import android.util.AtomicFile;
import android.util.Slog;
import android.util.Xml;
public final class CompatModePackages {
private final String TAG = ActivityManagerService.TAG;
private final boolean DEBUG_CONFIGURATION = ActivityManagerService.DEBUG_CONFIGURATION;
private final ActivityManagerService mService;
private final AtomicFile mFile;
// Compatibility state: no longer ask user to select the mode.
public static final int COMPAT_FLAG_DONT_ASK = 1<<0;
// Compatibility state: compatibility mode is enabled.
public static final int COMPAT_FLAG_ENABLED = 1<<1;
private final HashMap<String, Integer> mPackages = new HashMap<String, Integer>();
private static final int MSG_WRITE = ActivityManagerService.FIRST_COMPAT_MODE_MSG;
private final Handler mHandler = new Handler() {
@Override public void handleMessage(Message msg) {
switch (msg.what) {
case MSG_WRITE:
saveCompatModes();
break;
default:
super.handleMessage(msg);
break;
}
}
};
public CompatModePackages(ActivityManagerService service, File systemDir) {
mService = service;
mFile = new AtomicFile(new File(systemDir, "packages-compat.xml"));
FileInputStream fis = null;
try {
fis = mFile.openRead();
XmlPullParser parser = Xml.newPullParser();
parser.setInput(fis, null);
int eventType = parser.getEventType();
while (eventType != XmlPullParser.START_TAG) {
eventType = parser.next();
}
String tagName = parser.getName();
if ("compat-packages".equals(tagName)) {
eventType = parser.next();
do {
if (eventType == XmlPullParser.START_TAG) {
tagName = parser.getName();
if (parser.getDepth() == 2) {
if ("pkg".equals(tagName)) {
String pkg = parser.getAttributeValue(null, "name");
if (pkg != null) {
String mode = parser.getAttributeValue(null, "mode");
int modeInt = 0;
if (mode != null) {
try {
modeInt = Integer.parseInt(mode);
} catch (NumberFormatException e) {
}
}
mPackages.put(pkg, modeInt);
}
}
}
}
eventType = parser.next();
} while (eventType != XmlPullParser.END_DOCUMENT);
}
} catch (XmlPullParserException e) {
Slog.w(TAG, "Error reading compat-packages", e);
} catch (java.io.IOException e) {
if (fis != null) Slog.w(TAG, "Error reading compat-packages", e);
} finally {
if (fis != null) {
try {
fis.close();
} catch (java.io.IOException e1) {
}
}
}
}
public HashMap<String, Integer> getPackages() {
return mPackages;
}
private int getPackageFlags(String packageName) {
Integer flags = mPackages.get(packageName);
return flags != null ? flags : 0;
}
public void handlePackageAddedLocked(String packageName, boolean updated) {
ApplicationInfo ai = null;
try {
ai = AppGlobals.getPackageManager().getApplicationInfo(packageName, 0, 0);
} catch (RemoteException e) {
}
if (ai == null) {
return;
}
CompatibilityInfo ci = compatibilityInfoForPackageLocked(ai);
final boolean mayCompat = !ci.alwaysSupportsScreen()
&& !ci.neverSupportsScreen();
if (updated) {
// Update -- if the app no longer can run in compat mode, clear
// any current settings for it.
if (!mayCompat && mPackages.containsKey(packageName)) {
mPackages.remove(packageName);
mHandler.removeMessages(MSG_WRITE);
Message msg = mHandler.obtainMessage(MSG_WRITE);
mHandler.sendMessageDelayed(msg, 10000);
}
}
}
public CompatibilityInfo compatibilityInfoForPackageLocked(ApplicationInfo ai) {
CompatibilityInfo ci = new CompatibilityInfo(ai, mService.mConfiguration.screenLayout,
mService.mConfiguration.smallestScreenWidthDp,
(getPackageFlags(ai.packageName)&COMPAT_FLAG_ENABLED) != 0);
//Slog.i(TAG, "*********** COMPAT FOR PKG " + ai.packageName + ": " + ci);
return ci;
}
public int computeCompatModeLocked(ApplicationInfo ai) {
boolean enabled = (getPackageFlags(ai.packageName)&COMPAT_FLAG_ENABLED) != 0;
CompatibilityInfo info = new CompatibilityInfo(ai,
mService.mConfiguration.screenLayout,
mService.mConfiguration.smallestScreenWidthDp, enabled);
if (info.alwaysSupportsScreen()) {
return ActivityManager.COMPAT_MODE_NEVER;
}
if (info.neverSupportsScreen()) {
return ActivityManager.COMPAT_MODE_ALWAYS;
}
return enabled ? ActivityManager.COMPAT_MODE_ENABLED
: ActivityManager.COMPAT_MODE_DISABLED;
}
public boolean getFrontActivityAskCompatModeLocked() {
ActivityRecord r = mService.getFocusedStack().topRunningActivityLocked(null);
if (r == null) {
return false;
}
return getPackageAskCompatModeLocked(r.packageName);
}
public boolean getPackageAskCompatModeLocked(String packageName) {
return (getPackageFlags(packageName)&COMPAT_FLAG_DONT_ASK) == 0;
}
public void setFrontActivityAskCompatModeLocked(boolean ask) {
ActivityRecord r = mService.getFocusedStack().topRunningActivityLocked(null);
if (r != null) {
setPackageAskCompatModeLocked(r.packageName, ask);
}
}
public void setPackageAskCompatModeLocked(String packageName, boolean ask) {
int curFlags = getPackageFlags(packageName);
int newFlags = ask ? (curFlags&~COMPAT_FLAG_DONT_ASK) : (curFlags|COMPAT_FLAG_DONT_ASK);
if (curFlags != newFlags) {
if (newFlags != 0) {
mPackages.put(packageName, newFlags);
} else {
mPackages.remove(packageName);
}
mHandler.removeMessages(MSG_WRITE);
Message msg = mHandler.obtainMessage(MSG_WRITE);
mHandler.sendMessageDelayed(msg, 10000);
}
}
public int getFrontActivityScreenCompatModeLocked() {
ActivityRecord r = mService.getFocusedStack().topRunningActivityLocked(null);
if (r == null) {
return ActivityManager.COMPAT_MODE_UNKNOWN;
}
return computeCompatModeLocked(r.info.applicationInfo);
}
public void setFrontActivityScreenCompatModeLocked(int mode) {
ActivityRecord r = mService.getFocusedStack().topRunningActivityLocked(null);
if (r == null) {
Slog.w(TAG, "setFrontActivityScreenCompatMode failed: no top activity");
return;
}
setPackageScreenCompatModeLocked(r.info.applicationInfo, mode);
}
public int getPackageScreenCompatModeLocked(String packageName) {
ApplicationInfo ai = null;
try {
ai = AppGlobals.getPackageManager().getApplicationInfo(packageName, 0, 0);
} catch (RemoteException e) {
}
if (ai == null) {
return ActivityManager.COMPAT_MODE_UNKNOWN;
}
return computeCompatModeLocked(ai);
}
public void setPackageScreenCompatModeLocked(String packageName, int mode) {
ApplicationInfo ai = null;
try {
ai = AppGlobals.getPackageManager().getApplicationInfo(packageName, 0, 0);
} catch (RemoteException e) {
}
if (ai == null) {
Slog.w(TAG, "setPackageScreenCompatMode failed: unknown package " + packageName);
return;
}
setPackageScreenCompatModeLocked(ai, mode);
}
private void setPackageScreenCompatModeLocked(ApplicationInfo ai, int mode) {
final String packageName = ai.packageName;
int curFlags = getPackageFlags(packageName);
boolean enable;
switch (mode) {
case ActivityManager.COMPAT_MODE_DISABLED:
enable = false;
break;
case ActivityManager.COMPAT_MODE_ENABLED:
enable = true;
break;
case ActivityManager.COMPAT_MODE_TOGGLE:
enable = (curFlags&COMPAT_FLAG_ENABLED) == 0;
break;
default:
Slog.w(TAG, "Unknown screen compat mode req #" + mode + "; ignoring");
return;
}
int newFlags = curFlags;
if (enable) {
newFlags |= COMPAT_FLAG_ENABLED;
} else {
newFlags &= ~COMPAT_FLAG_ENABLED;
}
CompatibilityInfo ci = compatibilityInfoForPackageLocked(ai);
if (ci.alwaysSupportsScreen()) {
Slog.w(TAG, "Ignoring compat mode change of " + packageName
+ "; compatibility never needed");
newFlags = 0;
}
if (ci.neverSupportsScreen()) {
Slog.w(TAG, "Ignoring compat mode change of " + packageName
+ "; compatibility always needed");
newFlags = 0;
}
if (newFlags != curFlags) {
if (newFlags != 0) {
mPackages.put(packageName, newFlags);
} else {
mPackages.remove(packageName);
}
// Need to get compatibility info in new state.
ci = compatibilityInfoForPackageLocked(ai);
mHandler.removeMessages(MSG_WRITE);
Message msg = mHandler.obtainMessage(MSG_WRITE);
mHandler.sendMessageDelayed(msg, 10000);
final ActivityStack stack = mService.getFocusedStack();
ActivityRecord starting = stack.restartPackage(packageName);
// Tell all processes that loaded this package about the change.
for (int i=mService.mLruProcesses.size()-1; i>=0; i--) {
ProcessRecord app = mService.mLruProcesses.get(i);
if (!app.pkgList.containsKey(packageName)) {
continue;
}
try {
if (app.thread != null) {
if (DEBUG_CONFIGURATION) Slog.v(TAG, "Sending to proc "
+ app.processName + " new compat " + ci);
app.thread.updatePackageCompatibilityInfo(packageName, ci);
}
} catch (Exception e) {
}
}
if (starting != null) {
stack.ensureActivityConfigurationLocked(starting, 0);
// And we need to make sure at this point that all other activities
// are made visible with the correct configuration.
stack.ensureActivitiesVisibleLocked(starting, 0);
}
}
}
void saveCompatModes() {
HashMap<String, Integer> pkgs;
synchronized (mService) {
pkgs = new HashMap<String, Integer>(mPackages);
}
FileOutputStream fos = null;
try {
fos = mFile.startWrite();
XmlSerializer out = new FastXmlSerializer();
out.setOutput(fos, "utf-8");
out.startDocument(null, true);
out.setFeature("http://xmlpull.org/v1/doc/features.html#indent-output", true);
out.startTag(null, "compat-packages");
final IPackageManager pm = AppGlobals.getPackageManager();
final int screenLayout = mService.mConfiguration.screenLayout;
final int smallestScreenWidthDp = mService.mConfiguration.smallestScreenWidthDp;
final Iterator<Map.Entry<String, Integer>> it = pkgs.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, Integer> entry = it.next();
String pkg = entry.getKey();
int mode = entry.getValue();
if (mode == 0) {
continue;
}
ApplicationInfo ai = null;
try {
ai = pm.getApplicationInfo(pkg, 0, 0);
} catch (RemoteException e) {
}
if (ai == null) {
continue;
}
CompatibilityInfo info = new CompatibilityInfo(ai, screenLayout,
smallestScreenWidthDp, false);
if (info.alwaysSupportsScreen()) {
continue;
}
if (info.neverSupportsScreen()) {
continue;
}
out.startTag(null, "pkg");
out.attribute(null, "name", pkg);
out.attribute(null, "mode", Integer.toString(mode));
out.endTag(null, "pkg");
}
out.endTag(null, "compat-packages");
out.endDocument();
mFile.finishWrite(fos);
} catch (java.io.IOException e1) {
Slog.w(TAG, "Error writing compat packages", e1);
if (fos != null) {
mFile.failWrite(fos);
}
}
}
}
WHS is fine. People are the problem
It's been a tough few weeks for the integrity of sport. There I was minding my own business, smelling the roses and gazing at stars, only for my world to be rocked by the first of many scandals…
It was a case of Fisherman's Blues for two men caught cheating – hook, line and sinker – after heavy objects fell from their catch at the weigh-in of an Ohio fishing tournament. Now I'd heard of scaling fish but nothing like this; the pair caught tipping the scale in their favour in an attempt to take home the $30,000 first prize cheque.
I'm not naïve. I realise corruption in sport is nothing new. Be it dodgy judges ruining boxing, juicing cyclists stripped of titles or drugged up horses jumping out of their hooves, the spotlight of the law has shone on many sports, but fishing?
Safe to say I was left reeling, wondering if there was any sport still sacred out there at all, when good lord of the dance didn't more drama surface, this time in a sphere much closer to home.
Yes, apparently the jig is up for a dozen Irish dance teachers whose screenshots asking others to fix competitions, or offering to do so, were leaked to Coimisiun Le Rinci Gaelacha. It's reported that a dance teacher and a competition judge were even exchanging sexual favours in return for higher scores. And here I was thinking the hornpipe was a type of dance…
Unfortunately it seems that as long as people draw breath, rules will be manipulated and legality stretched, which means the sport of golf, self-policed with its rules regularly upheld by the integrity of its participants, has long been ripe for the picking, but particularly since the introduction of the new World Handicap System.
It's a pity, because although I realise it's not a popular opinion, I believe WHS provides a more accurate representation of one's playing ability. I love the casual rounds element, allowing me to get competitive with my handicap outside of traditional competition times that often don't suit. And I enjoy tracking my progress on the Golf Ireland app, watching the ebbs and flows of the graph illustrating my topsy-turvy game.
I like the new system because my goal is to play my best every time I tee up and reduce my handicap as low as possible. But would you believe – and I hope you're sitting down for this one – not everyone plays the game this way?
Yes, the anecdotal evidence of people duping the system is now plain as day; digital records depicting players – in great detail – returning casual round scores in the hundreds and bloating their handicaps only to turn up at the club's biggest events and put the competition to the sword… the lengths some people will go for a McGuirks voucher, eh?
Only, I shouldn't joke. There are names of individuals being written into the annals of history as major prize winners at their respective clubs who shouldn't be there, and as things stand, there is neither the recourse, nor the will, to stop them.
It's got so bad that these people aren't even trying to hide their manipulation because they know, even though it's staring honest players in the face, these things are very hard to prove. And with the threat of defamation lawsuits dangling, few will call it out for what it blatantly is. Cheating.
So, what can we do? Well, a player's handicap can be reviewed by the Handicap Committee, but does the need for human intervention defeat the purpose of a system? And even if you spot a handicap discrepancy, good luck implementing that change because golfers will go to even greater distances off the course to protect their handicaps.
What's even more frustrating is that according to data collected by Europe's largest network of golfers on HowDidiDo, WHS has largely levelled the playing field. Even in the face of these "exceptional scores", average stableford scores in competition have actually shifted from significantly favouring players in lower categories to being almost equal. Furthermore, there's just under two points between the average totals scored by players in all categories, down from an almost 13 point difference pre-WHS.
You see, the system is working, for the most part, but these rotten examples of what should be unachievable scores are a cancer capable of corroding golf to its core. Honest players will be turned away thinking they don't stand a chance. Members will stop joining clubs, income will be hit and the entire ecosystem will suffer. Which is not only a shame but a total embarrassment because if a minority of golfers could just be honest with themselves and others, then there's no reason why WHS in Ireland can't prove to be the success it is in most countries around the world.
It's a people problem, not a WHS problem. Hate the player, not the game.
3 responses to "WHS is fine. People are the problem"
Harry Smyth
Article generally accurate but in a way also pointless as it offers no solution to the difficulty that it identifies. Writer points out WHS is fairer and better than what went before despite the fact that it makes it easier for cheats to succeed. Without a solution to that problem it can neither be considered fairer nor better.
Gerard Maher
Your article reflects the feelings of most honest golfers in the country.
I believe the only fair way to protect the integrity of the game is for Golf Ireland to carry out an annual review of all club handicaps.
Handicap secretaries are at a disadvantage as they will be ridiculed by those cheats they seek to cut for obvious manipulation of the WHS system.
Golf Ireland have all the data on players in Ireland at their fingertips. A 10 handicapper in Limerick with over 20 rounds of golf will have a similar playing record with a 10 handicapper in Cork or Dublin. It is very obvious comparing WHS records which of the players is manipulating the system to win prizes or interclub pennants.
Golf Ireland need to take on this role and write to clubs cutting the culprits based on the evidence.
This would also make the national championships fairer with similar playing ability golfers competing rather than a 15 handicapper playing Jimmy Bruen and winning matches against an honest 6 handicapper.
I know that Golf Ireland will not take on this role but if they are serious about the integrity of our game they need to support the amateur volunteer handicap secretaries across the country.
The only other way to protect honest golfers is to limit the WHS handicap increase to 1 shot per annum.
I have got to agree with Gérard here. The concept of casual golf may appeal to some but is being heavily abused by others. Golf Ireland need to address this by showing how handicap builders can be identified and dealt with within a club without fear of litigation/costs.
package com.google.gerrit.pgm.init;
import static com.google.gerrit.pgm.init.api.InitUtil.username;
import com.google.common.primitives.Ints;
import com.google.gerrit.pgm.init.api.InitUtil;
import com.google.gerrit.pgm.init.api.Section;
public class HANAInitializer implements DatabaseConfigInitializer {
@Override
public void initConfig(Section databaseSection) {
final String defInstanceNumber = "00";
databaseSection.string("Server hostname", "hostname", "localhost");
databaseSection.string("Instance number", "instance", defInstanceNumber,
false);
String instance = databaseSection.get("instance");
Integer instanceNumber = Ints.tryParse(instance);
if (instanceNumber == null || instanceNumber < 0 || instanceNumber > 99) {
instanceIsInvalid();
}
databaseSection.string("Database username", "username", username());
databaseSection.password("username", "password");
}
private void instanceIsInvalid() {
throw InitUtil.die("database.instance must be in the range of 00 to 99");
}
}
Q: Proving Recurrence Relation By Forward Substitution
(https://math.stackexchange.com/questions/1763539/proving-recurrence-relation-by-forward-substitution)

I'm having trouble understanding the inductive proof of the following recurrence relation by forward substitution. I get that we're plugging the value for our induction step into the relation, but I don't get how $n^{2.585}$ is ultimately derived.

Using the relationship T(n) = 6T(n/2) for n > 1:

T(2) = 6T(2/2) = 6T(1) = 6
T(4) = 6T(4/2) = 6T(2) = 36
T(8) = 6T(8/2) = 6T(4) = 216

We find lg 2 = 1, lg 4 = 2, lg 8 = 3, so the relationship is T(n) = $6^{\lg n}$.

Induction base: for n = 1 we have the initial condition T(1) = $6^{\lg 1}$ = $6^0$ = 1.

Induction hypothesis: assume, for arbitrary n > 1 with n a power of 2, that T(n) = $6^{\lg n}$.

Induction step: show the hypothesis is also true for the next step 2n (each step doubles n), i.e. T(2n) = $6^{\lg(2n)}$. Replacing n with 2n in the recurrence T(n) = 6T(n/2) and using the hypothesis T(n) = $6^{\lg n}$, we get:

T(2n) = 6T((2n)/2) = 6T(n) = $6 \cdot 6^{\lg n}$ = $6^{1+\lg n}$ = $6^{\lg 2 + \lg n}$ = $6^{\lg(2n)}$. QED

So T(n) = $6^{\lg n}$ = $n^{\lg 6}$ = $n^{2.585}$, and the solution to the recurrence turns out to be $O(n^{2.585})$.

Comment: The logs are of base 2.

A: $$\large 6^{\lg n}=\left(2^{\lg 6}\right)^{\lg n}=2^{(\lg 6)(\lg n)}=2^{\lg \left(n^{\lg 6}\right)}=n^{\lg 6}$$

Comment: For more symmetry you could either replace the penultimate term $2^{\lg \left(n^{\lg 6}\right)}$ by $\left(2^{\lg n}\right)^{\lg 6}$, or replace the second term $\left(2^{\lg 6}\right)^{\lg n}$ by $2^{\lg \left(6^{\lg n}\right)}$.
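The closed form can also be sanity-checked numerically. The following self-contained Python sketch (an addition, not part of the original thread) evaluates the recurrence directly and compares it with $n^{\lg 6}$:

```python
import math

def T(n):
    """Recurrence T(n) = 6*T(n/2) with T(1) = 1, for n a power of 2."""
    if n == 1:
        return 1
    return 6 * T(n // 2)

for n in [1, 2, 4, 8, 16]:
    closed = n ** math.log2(6)  # n^(lg 6), approximately n^2.585
    print(n, T(n), round(closed, 6))
```

For every power of two, the recurrence value and the closed form agree up to floating-point rounding.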
Student Involvement oversees campus culture through different student-led organizations and clubs. Want to get involved on campus? You're in the right place.
Student Involvement is housed in the Office of Student Involvement which is primarily run by student leaders in a professional setting. Designed to promote social responsibility and spiritual growth, Student Involvement offers a wide range of clubs and services, from ministry teams to academic clubs to intramural sports to performance arts opportunities. We will help you explore your interests and find the best fit.
SAB is a group of event planners that facilitates activities to bring the campus together. Beyond simply providing fun, these activities assist students in building and deepening relationships. SAB members meet once a week with other officers in the Student Involvement Offices.
Popular events include: Homecoming Banquet/Variety Show, Cereal Bar, The Joust (a 4-day campus competition), Lake Day, Taste of Warnona, and bingo, trivia and movie nights.
GIP makes intra-collegiate sports available to all students.
Sports include: flag football, volleyball, basketball and soccer.
GIP events include: rocking climbing, ski trips, futsal and dodgeball tournaments.
Club sport teams offer students a structured opportunity to compete with other college sport teams.
Sports include: shooting club and ultimate frisbee.
Grace College's campus hosts a disc golf course that is available for students and the community to enjoy.
SERVE is a student-led ministry program that give students opportunities to explore their dreams, talents and passions to serve others. Students can join an existing team, event and/or be equipped to make their vision of serving a reality.
SERVE teams include: Heartline Pregnancy Center, food pantries, mentors for youth, bingo with seniors and more.
Senate consists of the student body president and class representatives who promote campus unity, address relevant issues and influence change to improve the student experience at Grace.
The Sounding Board is a professional bi-weekly newspaper that encourages equity in student in expression.
Roots Magazine is a bi-yearly yearbook in magazine form that captures and preserves the essence of the year.
CDI seeks to celebrate different cultures and traditions by hosting monthly events and by serving as a hub to connect students of color, as well as other minority demographics, to one another. Grace knows not all of its students come from the same background or have the same experiences. CDI is a place for students to make connections with other students who share in their experiences.
The Red Zone is Grace's student cheer block who oversees theme nights at sporting events and tailgates.
Back in Five is Grace's improv comedy club designed to develop, hone and showcase the skills of improvisational comedy. Students write, rehearse and perform both short and long form sketches for the college and community.
Cinema Conversations hosts regular film viewings on campus and invite students to have a discussion afterwards centered around how the films' themes impact faith.
Contact Director of Student Involvement Kearstin Criswell.
Q: Why Is A Squared Standard Normal Variable A Chi Square Variable
(https://stats.stackexchange.com/questions/481171/why-is-a-squared-standard-normal-variable-a-chi-square-variable)

If for any $i \in \lbrace1,2,...,n\rbrace$ we have $Z_i \sim N(0,1)$, and all $Z_1, Z_2, ..., Z_n$ are independent of each other, why is it that $Z_i^2 \sim \chi^2_1$ and $\sum_i Z_i^2 \sim \chi^2_n$, when the pdf for a chi-square distribution with $k$ degrees of freedom is $f(x) = \frac{x^{k/2-1}e^{-x/2}}{2^{k/2}\,\Gamma(k/2)}$ for $x > 0$?

A: "why is it..."

Because one of the first persons (Karl Pearson?) to calculate the density function of $Z_i^2$ and $\sum_{i=1}^n Z_i^2$ chose to name these random variables $\chi^2$ random variables with $1$ and $n$ degrees of freedom respectively. If he had chosen some other name, say $\Phi^2$ random variables, you would have been asking why $Z_i^2$ and $\sum_{i=1}^n Z_i^2$ are called $\Phi^2$ random variables.

Comment: I don't understand how a (sum of) squared standard normal variables has a pdf like the one shown above (with the gamma function).

Comment: The proof for the case of 1 normal random variable is here: statlect.com/probability-distributions/chi-square-distribution. As the author says, the general case for $n$ normal random variables is straightforward :).
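The $k = 1$ case can be checked numerically. The short Python sketch below (an addition, not part of the original thread) compares the $\chi^2_1$ pdf with the density of $Z^2$ obtained from the change-of-variables formula $f(x) = \varphi(\sqrt{x})/\sqrt{x}$, where $\varphi$ is the standard normal pdf:

```python
import math

def chi2_pdf(x, k):
    """Chi-square density with k degrees of freedom."""
    return x ** (k / 2 - 1) * math.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))

def squared_normal_pdf(x):
    """Density of Z^2 for Z ~ N(0,1): f(x) = phi(sqrt(x)) / sqrt(x),
    derived by differentiating P(Z^2 <= x) = 2*Phi(sqrt(x)) - 1."""
    phi = math.exp(-x / 2) / math.sqrt(2 * math.pi)
    return phi / math.sqrt(x)

for x in [0.5, 1.0, 2.0, 5.0]:
    print(x, chi2_pdf(x, 1), squared_normal_pdf(x))
```

The two functions agree at every point, because $\Gamma(1/2) = \sqrt{\pi}$ makes the normalizing constants identical.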
Q: segmentAlongLine() returning line with a gap in the middle

I'm trying to use segmentAlongLine() to return a portion of a polyline based on percentage in my feature class. It works for most of the lines in the feature class, but in some cases the polyline returned contains a gap in the middle.
For example, here is the original polyline.
When I create a new line using polyline.segmentAlongLine(0, 0.5, True), I get the following line as an output:
Has anyone else run into a similar issue or know what would be causing the gap? It happens for any value of end_measure greater than 0.27 (27%). I've also tried switching the start_measure and end_measure values but end up with the same issue.
I'm using Python 2.7 with ArcMap 10.7.1.
EDIT: one detail I failed to mention is that the polyline is the result of a Dissolve with the unsplit_lines parameter set to "DISSOLVE_LINES".
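Not a full answer, but a way to narrow it down: a Dissolve with unsplit_lines can produce multipart geometries, and measuring along a multipart polyline is a common source of odd results. You can sanity-check what the cut *should* look like by walking the vertex list yourself. Below is a pure-Python sketch of that check (no arcpy; `cut_along` and the coordinates in the test are made up for illustration):

```python
import math

def _dist(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def cut_along(vertices, start_frac, end_frac):
    """Return the sub-polyline between two fractional measures (0..1)
    of a single-part polyline given as a list of (x, y) vertices."""
    seg_lengths = [_dist(a, b) for a, b in zip(vertices, vertices[1:])]
    total = sum(seg_lengths)
    lo, hi = start_frac * total, end_frac * total

    def point_at(target):
        # Interpolate the point at a given distance along the line.
        run = 0.0
        for (a, b), length in zip(zip(vertices, vertices[1:]), seg_lengths):
            if run + length >= target:
                t = (target - run) / length if length else 0.0
                return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            run += length
        return vertices[-1]

    out = [point_at(lo)]
    run = 0.0
    for v, length in zip(vertices[1:], seg_lengths):
        run += length
        if lo < run < hi:          # keep interior vertices between the cuts
            out.append(v)
    out.append(point_at(hi))
    return out
```

If this hand-rolled cut is continuous but segmentAlongLine() still shows a gap, that points to the multipart geometry; running Multipart To Singlepart (or redoing the Dissolve without unsplit lines) before measuring would be the next thing to try.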
\section{Introduction}
\label{sec:introduction}
With the LHC experiments entering the second long phase of data collection after
the upgrade period, we expect that the Standard Model (SM) of particle physics
will be probed in exquisite detail while searching for hints of phenomena beyond
our current knowledge. A major role in this endeavor is played by parton-shower
Monte Carlo programs, which make it possible to predict the full final-state kinematics on an
event-by-event basis.
In this talk, we will briefly describe the evolution and status of combining
fixed-order calculations with parton shower (PS) resummation, followed by
comments on which state-of-the-art merging schemes lend themselves to further
improvements. We will then discuss how next-to-next-to-leading order (NNLO)
accurate predictions can be included into event generators. Finally, we present
results in the \protect\scalebox{0.9}{UN$^2$LOPS}\xspace scheme~\cite{Hoeche:2014aia,Hoche:2014dla} as
implemented in \protect\scalebox{0.9}{SHERPA}\xspace~\cite{Gleisberg:2008ta}.
\section{The story so far}
\label{sec:story_so_far}
Finding ways to combine accurate fixed-order calculations with parton showers
has been a major topic in event generator development since the turn
of the century. A decisive boost came from methods for merging multiple
inclusive tree-level calculations by making them exclusive using
Sudakov form factors derived from the parton shower~\cite{Catani:2001cc,Lavesson:2008ah,Lonnblad:2012ix}.
Another breakthrough was the development of algorithms for matching parton showers to
NLO QCD calculations~\cite{Frixione:2002ik}.
All these methods have ambiguities and uncertainties. A particularly striking
example of differences between NLO+PS matched results was presented in~\cite{Alioli:2008tz}:
The prediction for the Higgs-boson transverse momentum distribution shown in this publication
varies greatly with the matching scheme. Differences in the schemes are formally
beyond the required NLO+PS accuracy. Their numerical size reveals, however,
that more accurate and less variable calculations of the Higgs-boson + jet process
must be included to make experimentally relevant predictions.
This can be achieved using methods for combining a sequence of multi-parton
fixed-order calculations, often referred to as ``multi-jet merging''.
Merging methods exist for tree-level~\cite{Catani:2001cc}
and NLO calculations~\cite{Lavesson:2008ah,Lonnblad:2012ix}.
They provide state-of-the-art predictions for LHC Run-II.
A comparison of NLO merging schemes in~\cite{Butterworth:2014efa}
has shown good agreement between different approaches.
More importantly, the agreement between theory and experiment is improved,
and theoretical uncertainties may be reduced.
\section{Moving towards NNLO accuracy}
\label{sec:towards_nnlo_matching}
NLO multi-jet merging techniques have additional features compared to
LO merging. For example, those real-emission corrections to
$X+n$-jet production which lead to $n+1$ well-separated jets above the
merging scale need to be removed, since such configurations are
already included by merging with the $n+1$-jet calculation.
In addition, the approximate virtual corrections included in the PS
must, at $\mathcal{O}(\alpha_s^{n+1})$, be replaced by the full NLO result.
A more subtle issue arises from additionally demanding the stability of
inclusive jet cross sections~\cite{Lonnblad:2012ix,Hamilton:2012rf}:
In merged calculations, the emission probability is given by exact
fixed-order matrix elements. In contrast, the resummed virtual corrections
derive from the Sudakov factor of the parton shower. Upon integration
over the radiative phase space, the two do not cancel, leading to a
``unitarity violation''.
This discrepancy can be removed using unitary merging techniques~\cite{Lonnblad:2012ix}.
One of them is the so-called \protect\scalebox{0.9}{UNLOPS}\xspace method. It allows, in a process-independent way,
to add the precise difference between fixed-order real-emission matrix elements and
their parton-shower approximations to the merged result. This is called the
"subtract what you add" philosophy. In the \protect\scalebox{0.9}{UNLOPS}\xspace scheme, it is possible to
combine arbitrarily many NLO calculations, and include tree-level results
when NLO calculations are not available. \protect\scalebox{0.9}{UNLOPS}\xspace retains the merging scale as a
\emph{technical} parameter, since low merging scales -- while desirable in order to use
higher-order calculations over most of the phase space --
lead to inefficient event generation.
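For orientation, recall the generic form of the parton-shower no-emission probability
between two evolution scales, which underlies all Sudakov form factors used in the
merging schemes above (a schematic expression only -- parton labels, phase-space
boundaries and the precise splitting kernels $K_j$ differ between shower
implementations):
\begin{equation*}
  \Delta(t_0,t)\;=\;\exp\biggl\{-\sum_j\int_{t_0}^{t}\frac{\mathrm{d}\bar{t}}{\bar{t}}
  \int\mathrm{d}z\,\frac{\alpha_s}{2\pi}\,K_j(\bar{t},z)\biggr\}\,.
\end{equation*}
Unitary merging enforces the cancellation between real emission and this no-emission
probability explicitly, by subtracting from the $n$-jet contribution exactly the
fixed-order expression that is added in the $(n{+}1)$-jet contribution.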
\section{Combining NNLO calculations with parton showers}
\label{sec:nnlo_matching}
Although NLO merging yields accurate predictions for many multi-jet observables,
it is desirable for some reactions to move beyond NLO accuracy. Such processes
include reactions with large higher-order corrections, e.g.\ Higgs-boson
production in gluon fusion, standard candles like Drell-Yan lepton pair
production, and other phenomenologically important processes.
NNLO accurate matching to the parton shower has been achieved first in the MINLO approach~\cite{Hamilton:2013fea}.
The MINLO method~\cite{Hamilton:2012rf} is based on matching the hard process plus one-jet NLO calculation
to the parton shower, and on supplementing it with Sudakov form factors that account for the resummed virtual
and unresolved higher-order corrections between the hard scale and the resolution scale of the jet.
In its current implementation it uses analytic Sudakov factors derived for $q_T$ resummation, which limits
its applicability to hard processes with no light QCD jets in the final state.
The genuine NNLO corrections are included through pre-tabulated phase-space dependent
K-factors, which leads to fast event generation but makes the extension to
processes with more complicated final states challenging.
Within the \protect\scalebox{0.9}{UN$^2$LOPS}\xspace approach~\cite{Hoeche:2014aia,Hoche:2014dla}, a variant of \protect\scalebox{0.9}{UNLOPS}\xspace,
NNLO corrections associated with the emission of resolvable
QCD radiation are treated as the hard process plus one additional jet at NLO. The remainder
of the phase space is filled by a calculation for the hard process at NNLO, with a corresponding
veto on any QCD activity. Both parts are separately finite, and parton shower matching is only
needed for the first. To make the result physically meaningful, the separation cut must be smaller
than the infrared cutoff of the parton shower. This requires very stable NLO matched calculations
for the one-jet process. In contrast to the MINLO method, real-emission configurations do not
receive a contribution from the NNLO K-factor.
Neither NNLOPS nor \protect\scalebox{0.9}{UN$^2$LOPS}\xspace should be considered a final answer to NNLO+PS matching,
but rather as a first step towards more general methods.
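Schematically, and suppressing all merging details and higher jet multiplicities, the
\protect\scalebox{0.9}{UN$^2$LOPS}\xspace cross section described above consists of two
separately finite pieces,
\begin{equation*}
  \sigma\;=\;\int_{q_T<q_{T,\mathrm{cut}}}\mathrm{d}\sigma_0^{\mathrm{NNLO}}
  \;+\;\int_{q_T>q_{T,\mathrm{cut}}}\mathrm{d}\sigma_1^{\mathrm{NLO+PS}}\,,
\end{equation*}
where $q_{T,\mathrm{cut}}$ denotes the separation cut: the first term is the NNLO
calculation for the hard process with a veto on resolvable QCD radiation, the second
the parton-shower matched NLO calculation for the hard process plus one jet.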
\section{NNLO+PS matched results in \protect\scalebox{0.9}{SHERPA}\xspace}
\label{sec:results}
We will now discuss some phenomenologically relevant results obtained with the \protect\scalebox{0.9}{UN$^2$LOPS}\xspace
matching as implemented in the \protect\scalebox{0.9}{SHERPA}\xspace event generator. In order to control all
aspects of the matched calculation, the full NNLO calculation using a $q_\perp$ cutoff method
has been implemented in \protect\scalebox{0.9}{SHERPA}\xspace itself. This technique is limited to processes without light jets
in the hard process, a shortcoming that can in principle be remedied by using different
techniques for performing the fixed-order NNLO calculation. The following plots, and the
\protect\scalebox{0.9}{SHERPA}\xspace plug-in containing the \protect\scalebox{0.9}{UN$^2$LOPS}\xspace implementation are publicly available~\cite{Code}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.35\textwidth]{./plots/7tev-FOWpNLO-Etae+.pdf}
\includegraphics[width=0.35\textwidth]{./plots/7tev-FOWpNNLO-Etae+.pdf}\\
\includegraphics[width=0.35\textwidth]{./plots/7tev-PSWpNLO-PTe+.pdf}
\includegraphics[width=0.35\textwidth]{./plots/7tev-PSWpNNLO-PTe+.pdf}
\caption{Charged current Drell-Yan lepton pair production, for two different PDF choices.
{\it Upper left}: Pseudorapidity of the positron at NLO and NNLO accuracy. NLO PDFs used in the NLO calculation.
{\it Upper right}: Pseudorapidity of the positron. NNLO PDFs used in the NLO calculation.
{\it Lower left}: $p_\perp$ of the positron. NLO PDFs used in \protect\scalebox{0.9}{MC@NLO}\xspace.
{\it Lower right}: $p_\perp$ of the positron. NNLO PDFs used in \protect\scalebox{0.9}{MC@NLO}\xspace.
}
\label{fig:pdf_secret_w}
\end{figure*}
Figure \ref{fig:pdf_secret_w} highlights an
interesting feature of the NNLO corrections to neutral and charged current
Drell-Yan lepton pair production. For inclusive observables,
using an NNLO PDF for an NLO calculation reproduces the full NNLO
calculation very well, both in normalization and in shape. This is clearly
a very process-dependent statement, and it breaks down once an observable
depends not only on the Born degrees of freedom, as shown in the lower right
panel of Figure \ref{fig:pdf_secret_w}: In the phase space region
which can only be accessed by giving the lepton-pair system transverse
momentum ($p_T>40$~GeV), the NNLO result cannot be mimicked by an NLO calculation.
In this region the improvement obtained from \protect\scalebox{0.9}{UN$^2$LOPS}\xspace is apparent.
\begin{figure*}[t]
\centering
\includegraphics[width=0.35\textwidth]{./plots/individual-PTh0.pdf}
\includegraphics[width=0.35\textwidth]{./plots/factorised-PTh0.pdf}
\caption{Higgs boson $p_\perp$ spectrum in
individual matching (left) and factorized matching (right).
}
\label{fig:higgs_matching}
\end{figure*}
The \protect\scalebox{0.9}{UN$^2$LOPS}\xspace prescription has also been applied to Higgs-boson production in gluon fusion~\cite{Hoche:2014dla}.
Figure \ref{fig:higgs_matching} exemplifies the residual uncertainties of the
NNLO matched calculation in Higgs-boson production through gluon fusion. We use
two different ways to include the Wilson coefficient for the $ggh$ vertex~\cite{Hoche:2014dla}:
A factorized matching scheme which is reminiscent of the \protect\scalebox{0.9}{POWHEG}\xspace strategy, and an individual
matching scheme that somewhat mimics the \protect\scalebox{0.9}{MC@NLO}\xspace procedure.
The results are as expected: The factorized approach leads to a harder tail in
the $q_\perp$ distribution, whereas the individual matching has a softer tail
and a small enhancement for medium $q_\perp$ values. The individual matching shows
better agreement with the NNLO+NNLL result of the HqT program~\cite{Bozzi:2003jy}. The
uncertainty due to varying the parton shower starting scale becomes appreciable
for small $q_\perp$ values, and is significantly larger than the resummation
scale variation in HqT. This might be taken as indication that a more accurate
parton shower would be beneficial.
\section{Conclusions}
We have reviewed the current status of matching and merging parton
shower resummation and fixed-order calculations. Some state-of-the-art
NLO merging methods have recently been molded into NNLO matching methods.
The prerequisite for these extensions was a well-defined one-jet cross section,
which was then updated to NNLO accuracy for the inclusive process. Results
of the \protect\scalebox{0.9}{UN$^2$LOPS}\xspace scheme as implemented in \protect\scalebox{0.9}{SHERPA}\xspace have been presented. This
implementation includes new NNLO fixed-order calculations for (neutral and
charged current) Drell-Yan lepton pair and (gluon-fusion initiated) Higgs-boson
production. When applied to the Drell-Yan process, we find that the NLO results,
when computed with NNLO PDFs, reproduce the full NNLO results for inclusive observables.
For Higgs-boson production at NNLO+PS accuracy, two schemes were
presented, highlighting some residual uncertainties of the matching.
Work supported by the US Department of Energy under contract DE--AC02--76SF00515.
Austria and Prussia were the most powerful principalities of the Holy Roman Empire in the 18th and 19th centuries and had entered into a struggle for supremacy in Central Europe. Known in German as Deutscher Dualismus (German dualism), the rivalry between Austria and Prussia was characterized by major territorial conflicts and by economic, cultural, and political contention for sovereign leadership among the German-speaking peoples.

The two opponents first met in the Silesian Wars and the Seven Years' War in the mid-18th century, and the conflict culminated in the Austro-Prussian War of 1866. Relations were not always hostile, however: the two countries cooperated successfully during the Napoleonic Wars and the Second Schleswig War.

Background

The Margraviate of Brandenburg was officially declared one of the seven electorates of the Holy Roman Empire by the Golden Bull of 1356. It extended most of its territory into the eastern Neumark region, and after the War of the Jülich Succession it also gained, by the Treaty of Xanten of 1614, the Duchy of Cleves as well as the counties of Mark and Ravensberg in northwestern Germany. It finally reached beyond the imperial borders when the Hohenzollern electors became dukes of Prussia in 1618, then a fief of the Polish Crown, and the lands of Brandenburg-Prussia came to be ruled in personal union. In 1653 the Great Elector Frederick William acquired Farther Pomerania, and he achieved full sovereignty in Ducal Prussia through the Treaty of Wehlau of 1657, concluded with the Polish king John II Casimir Vasa. In 1701 Frederick William's son and successor, Frederick I, obtained the consent of Emperor Leopold I to proclaim himself "King in Prussia" at Königsberg, in view of the fact that he still held the electoral dignity of Brandenburg and that the royal title was valid only in the Prussian lands outside the Empire.

The long ascent of Austria's House of Habsburg had begun with King Rudolph's victory at the Battle on the Marchfeld in 1278 and the final attainment of the imperial crown by Emperor Frederick III in 1452. His descendants Maximilian I and Philip the Fair gained by marriage the inheritance of the dukes of Burgundy and the Spanish crown of Castile (tu felix Austria nube), and under Emperor Charles V the Habsburg realm became a great European power. In 1526 his brother Ferdinand I inherited the Lands of the Bohemian Crown as well as the Kingdom of Hungary outside the borders of the Empire, laying the foundations of the Central European Habsburg Monarchy. From the 15th to the 18th century, all Holy Roman Emperors were Austrian archdukes of the Habsburg dynasty, who also held the Bohemian and Hungarian royal dignities.

After the Protestant Reformation, the Catholic Habsburgs had to accept the Peace of Augsburg of 1555 and were unable to strengthen their imperial authority after the disastrous Thirty Years' War. Following the Peace of Westphalia of 1648, Austria had to contend with the rising power of Brandenburg-Prussia in the north, which replaced the Electorate of Saxony as the leading Protestant state. The efforts of the Great Elector and of the "Soldier King" Frederick William I had created a progressive state with a highly effective Prussian army that, sooner or later, was bound to collide with the Habsburg claims to power.

History

The rivalry is generally considered to have begun when, on the death of the Habsburg emperor Charles VI in 1740, King Frederick the Great of Prussia launched an invasion of then Austrian-controlled Silesia, starting the First Silesian War (of three Silesian Wars to come) against Maria Theresa. Frederick had broken his promise to recognize the Pragmatic Sanction of 1713 and the indivisibility of the Habsburg territories, thereby triggering the pan-European War of the Austrian Succession. He decisively defeated the Austrian troops at the Battle of Chotusitz in 1742, after which Maria Theresa, by the Treaties of Breslau and Berlin, had to cede most of Silesia to Prussia.

At that time Austria still claimed the mantle of the Empire and was the leading force among the disunited German states. By 1745 Maria Theresa had managed to recover the imperial crown from her Wittelsbach rival Charles VII by occupying his Bavarian lands but, despite her Quadruple Alliance with Great Britain, the Dutch Republic, and Saxony, she could not recover Silesia: the Second Silesian War began with Frederick's invasion of Bohemia in 1744, and after the Prussian victory at the Battle of Kesselsdorf in 1745 the Treaty of Dresden confirmed the status quo ante bellum: Frederick kept Silesia but finally recognized the accession of Maria Theresa's husband, Emperor Francis I. The terms were confirmed again by the final Peace of Aix-la-Chapelle in 1748.

Maria Theresa, still aggrieved by the loss of the most precious jewel of her crown, used the respite to implement several civil and military reforms within the Austrian lands, such as the establishment of the Theresian Military Academy in Wiener Neustadt in 1751. Her able state chancellor, Prince Wenzel Anton of Kaunitz, succeeded in the Diplomatic Revolution of 1756, allying with the Habsburgs' former nemesis, France under King Louis XV, in order to isolate Prussia. Frederick, however, had completed the "stately quadrille" by concluding the Treaty of Westminster with Great Britain. Once again he acted by preventive war, invading Saxony and opening a Third Silesian War (and the wider Seven Years' War).

The conquest of Prague failed, however, and the king also had to deal with Russian forces attacking East Prussia while Austrian troops entered Silesia. His situation worsened when the Austrian and Russian forces joined to inflict a crushing defeat on him at the Battle of Kunersdorf in 1759. Frederick, on the brink, was saved by discord among the victors in the "Miracle of the House of Brandenburg", when Empress Elizabeth of Russia died on 5 January 1762 and her successor Peter III concluded peace with Prussia. By the Treaty of Hubertusburg of 1763, Austria had, for the third time, to recognize the Prussian annexations. The upstart kingdom had prevailed against the great European powers and would play a vital role in the future "Concert of Europe".

Austria and Prussia both fought against France in the Napoleonic Wars; after their conclusion, the German states were reorganized into the 37 more consolidated separate states of the German Confederation. German nationalists began to demand a unified Germany, especially in 1848 and its revolutions. They disagreed over which nation-state would best achieve this, a question that became known as the "German question". The "Lesser Germany" (Kleindeutschland) solution favored Protestant Prussia annexing all the German states except Austria, while "Greater Germany" (Grossdeutschland) favored Catholic Austria taking control of the separate German states. The Schleswig-Holstein question also became entangled in the debate; the Second Schleswig War saw Denmark lose to the combined forces of Austria and Prussia, but the latter would later gain full control of the province after the Austro-Prussian War, by which Austria was excluded from Germany. After the Franco-Prussian War, Germany was unified under Prussia to become the German Empire in 1871, and the rivalry faded after the Congress of Berlin in 1878. Germany, led by Prussia, had become a power superior to Austria-Hungary.
Q: How to factor in breakpoints when measuring elapsed time in C++?

There are multiple methods for measuring time, e.g. clock(), time(), std::chrono, or QueryPerformanceCounter. But how do I factor in time spent in the debugger?
I'd like to know how much time I spend in total on waiting for certain functions. If I hit a breakpoint, it would invalidate the measurement, but how can I become aware of it? Is there any callback for DebugBreak/_CrtDbgBreak? I would be ok with just flagging the measurement as invalid so that I can ignore it.
Asking for Windows.
Edit: I'm asking this to measure productivity. Sometimes I get annoyed because debugging takes a long time. I have to wait 30 seconds here, 2 minutes there. It adds up. Some functions take a long time in debug builds. The question I'm trying to answer is: Does it matter? Should I do something about it? How long do I really wait for this, over the course of a month?
A: You can't. Also there is no point in timing debug code.
A: There is really no reason to do so. Code running in a debugger will be much slower and it is probably built without optimization. You should build the code with optimization and run it without a debugger to measure speed. Get the code working first using the debugger and then test the performance. Testing performance and debugging are really two different steps.
- Universal GPS parser
- Openstreetmap Map
- Create/Fork map
- Add pictures to map
- Weather report
- Mobile support
- Offline support
- Login
http://sarmap.is
Willem Frederik (Arnhem, 7 August 1613 – Leeuwarden, 31 October 1664), Count of Nassau-Dietz (1640–1654), Prince of Nassau-Dietz (1654–1664), stadtholder of Friesland (1640–1664), stadtholder of Groningen and Drenthe (1650–1664), land commander of the Teutonic Order (1641–1664), was the son of Ernst Casimir of Nassau-Dietz and Sophia Hedwig of Brunswick-Wolfenbüttel. Willem Frederik is an ancestor of the current royal house of the Netherlands.

Biography

Willem Frederik studied in Leiden and Groningen and then served in the army of Frederick Henry of Orange. In 1640 he took part in the fighting around Hulst, in which his elder brother Hendrik Casimir I of Nassau-Dietz was killed. A conflict then arose with Frederick Henry of Orange over who would succeed Hendrik Casimir as stadtholder of Friesland, Groningen, and Drenthe.

As commander-in-chief of the army, Frederick Henry accordingly refused to appoint Willem Frederik field marshal. Willem Frederik did everything he could to win Frederick Henry's trust. In the end he became stadtholder of Friesland, and Frederick Henry of Groningen and Drenthe. By that time the Eighty Years' War was nearly over.

In 1650 he led the failed assault on Amsterdam on the orders of William II. When William II died suddenly shortly afterwards, Willem Frederik became stadtholder of Groningen and Drenthe after all. His appointment as field marshal of the Republic was blocked several times by Johan de Witt and Cornelis de Graeff. In 1662 the States of Friesland placed him at the head of a small expeditionary force that was to restore order in the city of Groningen, where trouble had arisen around Johan Schulenborgh.

During a hunting party Willem Frederik tried to fire a shot with his saddle pistol, which failed to go off. While cleaning the pistol he shot himself through the chin and jaw. He died on 31 October 1664 as a result of this accident. His remains were interred in the burial vault of the Frisian Nassaus in Leeuwarden. The pistol with which he killed himself is in the Rijksmuseum Amsterdam.

After his death he was succeeded by his son Hendrik Casimir II; his widow Albertine Agnes acted as regent for her son.

Diaries

Willem Frederik also became known for his diaries, which were rediscovered later. In them he wrote candidly about emotional subjects such as illness, lust, drunkenness, remorse, and guilt. During illnesses and bouts of sickness he carefully recorded his bowel movements and vomiting.

Marriage and children

After some ten years of flattery, of paying due attention, and of making himself liked by Frederick Henry and his wife, the domineering Amalia van Solms, Willem Frederik had to watch their eldest daughter, Louise Henriette, be married off to Frederick William I, the Great Elector of Brandenburg.

On 2 May 1652 at Cleves, however, Willem Frederik married Frederick Henry's fifth daughter, his second cousin Albertine Agnes of Nassau (1634–1696). The following children were born of this marriage:

Letters

The initials of Willem Frederik and his wife Albertine Agnes can be seen in the Prinsentuin in Groningen in the form of clipped hedges.
On Monday, March 25th, 14 teams from Maharishi School competed at the State level in Destination ImagiNation at Grinnell College in Grinnell, Iowa. It was a wonderful opportunity for so many of our School families to watch the children share the fulfillment of many months of hard work. Competing with over 158 teams throughout Iowa, our students represented themselves with great confidence and dignity. They showed appreciation for each other, as well as for the other schools. Four 1st place winning teams won the right to advance to global competition at the Final Destination, which this year will be held in Knoxville, Tennessee, at the University of Tennessee from May 22-25.
The four teams that came in first place were: 7th/8th grade team,"It's Your Move"- Josh Adams, Michael Sutherland, Max and Danny Steinberg, Anjali Krystofiak, and Suzannah Schindler; 7th grade team, "On Holiday"- Ace Boothby, Newlin Wilkins, Cooper Rose, Serena Stakland, Luke Stenger, and Eric Van Arsdale; 7th/8th grade team, "Art of Improv"- Anna Sica, Ami Freeberg, Julia Ross, Sammy Goldstein, Bagambhrini Gerace, and Devon Jarvis; and Upper School team, "Dual Dilemma"- Josh Denbaum, Milo Winningham, Shane Bellmer, Aaron Hirshberg, and Sam Rozen. The 7th/8th grade "Art of Improv" team was also awarded the Creativity Award in the Middle School category. All of the teams were outstanding in their performances and oftentimes competed with groups that were two grades above them!
At the Final Destination, these teams will be sharing their presentation with hundreds of teams from all over the world. Coach Mark Headlee has offered our students the opportunity to achieve excellence!
Here are the four Iowa state champion teams. They will be competing at Global Finals at the University of Tennessee-Knoxville in May.
On Holiday - Middle Level
It's Your Move - Middle Level
Art of Improv - Middle Level
Dual DI-Lemma - Secondary Level
\section{GMEB dual subgradient algorithm}
\label{sec:alg}
\begin{algorithm*}[!ht]
\caption{Algorithm to minimize Equation~\eqref{eq:dualProb} with back-tracking line search}
\label{alg:subgrad}
\begin{spacing}{1.25}
\begin{algorithmic}[1]
\Function{GMEB}{$\big\{\mathbf{X}_i\big\}_{i=1}^M,k,a,\eta,\zeta, \beta$}
\State \textbf{input:} Data: $\big\{\mathbf{X}_i\big\}_{i=1}^M$, Rank: $k$, Step size parameter: $a$, Stopping criteria: $\eta$, Step size threshold: $\zeta$, Growth parameter: $\beta$
\State \textbf{output:} Weights: $\bm{\lambda}^*$, Minimax center: $\*U^*$
\State $t \gets 0$
\State $\bm{\lambda}^{(t)} \gets [\nicefrac{1}{M}, \ldots , \nicefrac{1}{M}]^T \in \mathbb{R}^M$ \Comment{$\bm{\lambda}^{(t)} \gets \bm{\lambda}^*(k-1)$ for warm-start}
\State $\*U^{(t)} \gets \textrm{dominant }k\textrm{ eigenvectors}\big(\sum_{i=1}^M \lambda_i^{(t)} X_i^{} X_i^T\big)$
\State $\*g^{(t)} \gets -\big[d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_1), d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_2), \ldots, d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_M) \big]^T$
\State $f_{\textrm{primal}}(\*U^{(t)}) \gets \min_{i=1,\ldots,M} \{-d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_i)\}$ \Comment{Primal cost at iteration $t$}
\State $f_{\textrm{dual}}(\bm{\lambda}^{(t)}) \gets \bm{\lambda}^{(t)T}\*g^{(t)}$ \Comment{Dual cost at iteration $t$}
\While{ $f_{\textrm{dual}}(\bm{\lambda}^{(t)}) - f_{\textrm{primal}}(\*U^{(t)}) > \eta$ \AND $\underset{i=1,\ldots,10}{\max}\{f_{\textrm{dual}}(\bm{\lambda}^{(t-i)})-f_{\textrm{dual}}(\bm{\lambda}^{(t)})\}> \eta$}
\State $t \gets t + 1$
\State $\alpha^{(t)} \gets \nicefrac{a}{\sqrt{t}}$
\State $\bm{\lambda}^{(t)} \gets \bm{\lambda}^{(t-1)} - \alpha^{(t)} \*g^{(t-1)}$, $\bm{\lambda}^{(t)} \gets \nicefrac{\bm{\lambda}^{(t)}}{\|\bm{\lambda}^{(t)}\|_1}$
\State $\*U^{(t)} \gets \textrm{dominant }k\textrm{ eigenvectors}\big(\sum_{i=1}^M \lambda_i^{(t)} \*X_i^{} \*X_i^T\big)$
\State $\*g^{(t)} \gets -\big[d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_1), d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_2), \ldots, d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_M) \big]^T$
\State $\tilde{\alpha}^{(t)} \gets \alpha^{(t)}$
\State $\tilde{\bm{\lambda}}^{(t)} \gets \bm{\lambda}^{(t)}$
\State $f_{\textrm{dual}}(\tilde{\bm{\lambda}}^{(t)}) \gets \tilde{\bm{\lambda}}^{(t)T}\*g^{(t)}$
\While{ $f_{\textrm{dual}}( \tilde{\bm{\lambda}}^{(t)} ) > f_{\textrm{dual}}( \bm{\lambda}^{(t-1)}) \AND \tilde{\alpha}^{(t)} > \zeta\alpha^{(t)}$} \Comment{Back-tracking line search}
\State $a \gets \nicefrac{a}{2}$
\State $\tilde{\alpha}^{(t)}\gets \nicefrac{a}{\sqrt{t}}$
\State $\tilde{\bm{\lambda}}^{(t)} \gets \bm{\lambda}^{(t-1)} - \tilde{\alpha}^{(t)} \*g^{(t-1)}$, $\tilde{\bm{\lambda}}^{(t)} \gets \nicefrac{\tilde{\bm{\lambda}}^{(t)}}{\|\tilde{\bm{\lambda}}^{(t)}\|_1}$
\State $\tilde{\*U}^{(t)} \gets \textrm{dominant }k\textrm{ eigenvectors}\big(\sum_{i=1}^M \tilde{\lambda}_i^{(t)} \*X_i^{} \*X_i^T\big)$
\State $\tilde{\*g}^{(t)} \gets -\big[ d_{\textrm{Gr}(k,n)}(\tilde{\*U}^{(t)},\*X_1), d_{\textrm{Gr}(k,n)}(\tilde{\*U}^{(t)},\*X_2), \ldots, d_{\textrm{Gr}(k,n)}(\tilde{\*U}^{(t)},\*X_M) \big]^T$
\State $f_{\textrm{dual}}(\tilde{\bm{\lambda}}^{(t)}) \gets \tilde{\bm{\lambda}}^{(t)T}\tilde{\*g}^{(t)}$
\If{ $f_{\textrm{dual}}( \tilde{\bm{\lambda}}^{(t)} ) \leq f_{\textrm{dual}}( \bm{\lambda}^{(t-1)})$} \Comment{Update variables if $f_\textrm{dual}$ decreases}
\State $a \gets \beta a$
\State $\bm{\lambda}^{(t)} \gets \tilde{\bm{\lambda}}^{(t)}$
\State $\*U^{(t)} \gets \tilde{\*U}^{(t)}$
\State $\*g^{(t)} \gets \tilde{\*g}^{(t)}$
\EndIf
\EndWhile
\State $f_{\textrm{primal}}(\*U^{(t)}) \gets \min_{i=1,\ldots,M} \{-d_{\textrm{Gr}(k,n)}(\*U^{(t)},\*X_i)\}$
\State $f_{\textrm{dual}}(\bm{\lambda}^{(t)}) \gets \bm{\lambda}^{(t)T}\*g^{(t)}$
\EndWhile
\Return $\bm{\lambda}^{(t)}, \ \*U^{(t)}$
\EndFunction
\end{algorithmic}
\end{spacing}
\end{algorithm*}
\section*{Acknowledgments}
The authors would like to thank Emilie Renard for the stimulating discussions that improved the ideas presented here.
\bibliographystyle{siamplain}
\section{Introduction}
\label{sec:intro}
Finding the minimum enclosing ball (MEB) of a finite collection of points in a metric space, or the $\ell_{\infty}$-center of mass, is a topic of broad interest in the mathematical community \cite{arnaudon2013approximating,badoiu2003smaller,renard2018grassmannian,kumar2003approximate,fischer2004smallest,yildirim2008two,nielsen2009approximating}. For Euclidean data, the problem has been well studied, and research has transitioned towards finding approximate solutions efficiently when computing the MEB exactly is impractical \cite{badoiu2003smaller,yildirim2008two}. A breakthrough technique of B\u{a}doiu and Clarkson~\cite{badoiu2003smaller} finds an optimal subset of the data, called a core-set, such that finding the exact MEB of the core-set is computationally tractable. They show that the radius of the MEB of this core-set is bounded by $(1+\epsilon)$ times the radius of the entire data set, where $\epsilon$ depends only on the number of points in the core-set~\cite{badoiu2003smaller}. That is, the minimum enclosing ball can be approximated to any desired accuracy by increasing the number of points in the core-set, and the number of points needed for the radius of the core-set to be at most $(1+\epsilon)$ times the true radius is $\lceil \frac{2}{\epsilon} \rceil.$ This approach represents a broader effort to make $\ell_{\infty}$-averaging tractable for large, complex data sets.
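The farthest-point update at the heart of the B\u{a}doiu--Clarkson scheme is short enough to sketch directly. The NumPy snippet below is an illustration only: it uses the simple $\lceil \nicefrac{1}{\epsilon^2} \rceil$-iteration variant of their analysis, and the function name and test data are ours, not from~\cite{badoiu2003smaller}.

```python
import numpy as np

def meb_center_approx(points, eps=0.1):
    """Approximate Euclidean MEB center by farthest-point iteration.

    In the spirit of Badoiu & Clarkson: repeatedly move the center a
    shrinking fraction of the way toward the current farthest point,
    c <- c + (p - c) / (i + 1).  The indices touched form a core-set.
    """
    c = points[0].copy()                       # arbitrary starting center
    core_set = {0}
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        dists = np.linalg.norm(points - c, axis=1)
        j = int(np.argmax(dists))              # farthest point from c
        core_set.add(j)
        c = c + (points[j] - c) / (i + 1)      # damped step toward it
    return c, sorted(core_set)

# The MEB of {+-e_i} in R^3 is the unit ball centered at the origin.
pts = np.vstack([np.eye(3), -np.eye(3)])
center, core = meb_center_approx(pts, eps=0.1)
radius = np.linalg.norm(pts - center, axis=1).max()
```

Each point selected as farthest joins the core-set; computing the exact MEB of only those points then yields the $(1+\epsilon)$ guarantee quoted above.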
The difficulty in computing the MEB of Euclidean data is due to the massive size of the data sets to be averaged; in less traditional settings, however, other difficulties arise and contribute to the complexity of this task. Many modern problems are formulated on manifolds instead of Euclidean space in situations where the manifold geometry better represents the natural structure of the data model~\cite{chellapa,rentmeesters2010efficient,marrinan2016}. Afsari provided existence and uniqueness conditions for Riemannian $\ell_p$ centers of mass~\cite{afsari2011riemannian}, and with this type of structure in mind, Arnaudon and Nielsen~\cite{arnaudon2013approximating} adapted the efficient MEB algorithm of B\u{a}doiu and Clarkson to Riemannian manifolds. For linear subspace data, a subclass of data addressed by~\cite{arnaudon2013approximating}, this work was further generalized by Renard, Gallivan, and Absil~\cite{renard2018grassmannian,renard2019minimax}. They created a technique that applies to points lying on a disjoint union of Grassmann manifolds, that is, a collection of $p_i$-dimensional subspaces of $\mathbb R^n$ where $p_i$ is not necessarily equal for all $i$. Although the data comes from a collection of manifolds, the MEB must be computed on one individual Grassmannian, and the choice of that manifold is not obvious. Determining which Grassmannian provides the best center for a collection of subspaces is one of the tasks of this manuscript, and we provide a geometrically motivated criterion for automatically selecting this manifold.
\begin{figure*}
\centering
\input{gmeb_fig01.tex}
\caption{\label{fig:common_info}One way to interpret the center of the Grassmannian minimum enclosing ball is as a basis for the common information in a collection of subspaces. If the subspaces share information in $k$ dimensions, then each subspace in the collection contains a $k$-dimensional subspace, $\*Y_i \subseteq \*X_i$ for $i=1,2,3$, that represents this similarity. If $\*U$ is the minimax center of these points on Gr$(k,n)$, then $\*U$ is the best $k$-dimensional approximation to the original subspaces $\*X_1,\*X_2,$ and $\*X_3$ in the sense that it minimizes the maximum distance to $\{\*Y_i\}_{i=1}^3.$}
\end{figure*}
With subspace data, it is natural to think of the center of the Grassmannian minimum enclosing ball (GMEB) as the common information in the data set. To see this, consider the illustration in Figure~\ref{fig:common_info}. Suppose that $\{\*X_i\}_{i=1}^3$ are linear subspaces of $\mathbb R^n$ with dim$(\*X_1) = k+2,$ dim$(\*X_2) = k+1,$ and dim$(\*X_3) = k.$ In Figure~\ref{fig:common_info}, these subspaces are indicated by the colored rectangles on the right. The subspaces can be identified with points on different Grassmannians, which are pictured by the corresponding colored points on the left-hand side of the figure. If these three spaces intersect in some $k$-dimensional subspace $\*U,$ then $\*U$ is certainly one of the best $k$-dimensional approximations of the collection. Alternatively, if the spaces do not intersect, we can look for a best $k$-dimensional approximation, $\*U,$ to these spaces by minimizing some measure of dissimilarity to the collection. One formulation is to find $\*U$ such that it minimizes the maximum dissimilarity with the elements of $\{\*X_i\}_{i=1}^3.$ Once this $\*U$ is identified, there is an implicitly defined $k$-dimensional subspace for each element in the set, $\*Y_i \subseteq \*X_i$ for $i=1,2,3,$ with the property that $\*U$ is the best $k$-dimensional approximation for $\{\*Y_i\}_{i=1}^3.$ This property can be seen in Figure~\ref{fig:common_info} where $\*U$ is the center of the minimum enclosing ball of the points $\*Y_i$ associated with each $\*X_i.$
Common subspace extraction can be found in subspace clustering~\cite{abdolali2019scalable}, domain adaptation, and subspace alignment. These tools can be used in a plethora of tasks in pattern recognition including subspace tracking~\cite{srivastava2004bayesian}, face recognition~\cite{chang2012feature,chakraborty2015recursive}, video action recognition~\cite{o2012scalable,chakraborty2015recursive}, infected patient diagnosis~\cite{ma2018self}, adaptive sorting~\cite{jurrus2016adaptive}, model reduction~\cite{franz2014interpolation}, and many more. Common subspace extraction is frequently done by finding the $\ell_2$- or $\ell_1$-center in cases where outliers are present in the data collection, but if the data are drawn from a uniform distribution whose support is a ball, the $\ell_{\infty}$-center gives the maximum likelihood estimator for the center of the support and thus may be preferred when all the subspaces are assumed to be valid~\cite{afsari2011riemannian}. Furthermore, techniques have been developed to prune outliers from data sets using the $\ell_{\infty}$-norm, with theoretical guarantees in some circumstances~\cite{sim2006removing}.
In this paper, we present a novel technique to accurately estimate the GMEB for a collection of linear subspaces of possibly differing dimension, and a geometrically inspired order-selection rule to identify the Grassmannian that best represents the shared information in the data. Choosing the ideal manifold on which to perform the $\ell_{\infty}$-averaging is inherently related to finding a common subspace of optimal rank, and thus the numerical experiments explore the relationships between different rank-adaptive subspace averaging methods.
The main contributions of the paper are summarized as follows. We propose
\begin{itemize}
\item a subgradient approach to solve the dual of the GMEB problem for subspaces of differing dimensions. A duality gap of zero certifies the solution as optimal.
\item an unsupervised order-selection rule for the dimension of the center of the GMEB.
\item a warm-start initialization for the subgradient algorithm that reduces the number of iterations needed for the subgradient algorithm to converge.
\item a hybrid method for order-selection which modifies the existing rule of~\cite{santamaria2016order} for use with the center of the GMEB.
\item a synthetic data model that allows us to measure the accuracy of an estimate for the center of the GMEB, and demonstrate the effectiveness of the proposed technique using data generated with this model.
\end{itemize}
Finally, we compare the proposed order-selection rules to existing methods for automatic order selection in subspace averaging with numerical experiments.
\section{Problem formulation: Grassmannian minimum enclosing ball}
\label{sec:p2s}
In this section we provide the mathematical background necessary to formulate the GMEB problem for subspaces of differing dimension. We define maps that associate a subset of points on a single manifold with each subspace from the collection, and we describe the point-to-set distance that measures the dissimilarity of these sets. Finally, we explicitly state the minimax optimization problem that defines this GMEB.
Denote by Gr$(k,n)$ the Grassmann manifold of $k$-dimensional subspaces in $\mathbb R^n$. If $A$ is an $n \times k$ matrix with full column rank, the column space of $A$, col$(A)$, defines a subspace that can be identified with a point $\*A \in \textrm{Gr}(k,n)$. To simplify notation we assume without loss of generality that the chosen representative for a point $\*A \in \textrm{Gr}(k,n)$ is an orthonormal basis, $A \in \mathbb R^{n \times k}$ with $A^TA = I$. Let $\textrm{O}(k)$ denote the set of $k \times k$ orthogonal matrices. If $Q_k \in \textrm{O}(k)$ then $\textrm{col}(A Q_k) = \textrm{col}(A) = \*A ,$ and we can see that a point on this Grassmannian can be represented by any real $n \times k$ matrix that spans the same subspace. For any two points, $\*A, \*B \in \textrm{Gr}(k,n),$ there exists a set of $k$ principal angles, $0 \leq \theta_1(\*A,\*B) \leq \cdots \leq \theta_k(\*A,\*B) \leq \nicefrac{\pi}{2},$ defined recursively as
\begin{equation}
\label{eq:angles}
\begin{aligned}
\theta_1(\*A,\*B) := &\underset{\*a_1 \in \*A, \*b_1 \in \*B}{\min} \cos^{-1} \left(\frac{\*a_1^T\*b_1^{}}{\|\*a_1\|_2 \|\*b_1\|_2} \right) , \textrm{ and for } i = 2, \ldots, k\\
\theta_i(\*A,\*B) := &\underset{\*a_i \in \*A, \*b_i \in \*B}{\min} \cos^{-1} \left(\frac{\*a_i^T\*b_i}{\|\*a_i\|_2 \|\*b_i\|_2} \right)\\
& \ \textrm{s.t. } \*a_j^T \*a_i^{} = 0 \textrm{ for } j<i\\
& \ \phantom{\textrm{s.t. }} \*b_j^T \*b_i^{} = 0 \textrm{ for } j<i.
\end{aligned}
\end{equation}
The vectors that form these angles, $\{\*a_1, \ldots, \*a_k\}$ and $\{\*b_1, \ldots, \*b_k\},$ are called the left and right principal vectors, respectively, and form orthogonal bases for the spaces $\*A$ and $\*B$. The principal angles and principal vectors can be computed via the singular value decomposition (SVD)~\cite{golub}. Let $A^T B = V \Sigma W^T$ be a thin SVD with the singular values sorted in nonincreasing order, so that
\begin{equation}
\label{eq:svd}
\begin{aligned}
V \in \mathbb R^{k\times k} &\textrm{ with } V^TV = I, \\
\Sigma \in \mathbb R^{k \times k} &\textrm{ with } \Sigma = \textrm{diag}(\cos(\bm{\theta}(\*A,\*B))), \textrm{ and}\\
W \in \mathbb R^{k\times k} &\textrm{ with } W^T W = I.
\end{aligned}
\end{equation}
Then $\theta_i(\*A,\*B) = \cos^{-1}(\Sigma_{ii})$ is the $i$th principal angle separating $\*A$ and $\*B$, with associated left and right principal vectors $\*a_i = A\*v_i$ and $\*b_i = B\*w_i$ for $i = 1, \ldots, k$.
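Equations~\eqref{eq:angles}--\eqref{eq:svd} translate directly into code. The sketch below is a minimal NumPy illustration (the helper name is ours) that computes the principal angles and principal vectors from the thin SVD of $A^TB$:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles and vectors between col(A) and col(B).

    A and B are orthonormal bases (n x k).  Returns theta (nondecreasing,
    in [0, pi/2]) plus the left and right principal vectors a_i, b_i.
    """
    V, sigma, Wt = np.linalg.svd(A.T @ B)   # thin SVD of A^T B
    sigma = np.clip(sigma, -1.0, 1.0)       # guard against rounding
    theta = np.arccos(sigma)                # cos(theta_i) = Sigma_ii
    return theta, A @ V, B @ Wt.T           # theta, [a_1..a_k], [b_1..b_k]

# span{e1, e2} vs span{e1, e3} in R^4: the angles are 0 and pi/2.
A = np.eye(4)[:, [0, 1]]
B = np.eye(4)[:, [0, 2]]
theta, a_vecs, b_vecs = principal_angles(A, B)
```

Because the singular values are returned in nonincreasing order, the angles come out nondecreasing, matching the convention $0 \leq \theta_1 \leq \cdots \leq \theta_k \leq \nicefrac{\pi}{2}$.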
Let $d: \textrm{Gr}(k,n) \times \textrm{Gr}(k,n) \rightarrow \mathbb R$ be a Grassmannian metric. If for all $\*A, \*B \in \textrm{Gr}(k,n)$ and for all $Q_n \in \textrm{O}(n)$ the left action of $Q_n$ on $A$ and $B$ by multiplication does not change the value of the metric, that is, $d(\*A,\*B) = d(\*{Q_nA},\*{Q_nB}),$ then $d$ is said to be orthogonally invariant. Orthogonally invariant metrics depend only on the relative position of $\*A$ and $\*B,$ so as a result of~\cite[Thm.~3]{wong1967differential}, $d$ can be written as a function of the vector of principal angles separating $\*A$ and $\*B,$ $\bm{\theta}(\*A,\*B) \in \mathbb R^{k}$. Additionally, for $\textrm{Gr}(k,n)$ with either $k\neq2$ or $n\neq2$ there is an essentially unique invariant Riemannian metric (up to scaling) which yields $d(\*A,\*B) = \| \bm{\theta}(\*A,\*B) \|_2,$ and is frequently referred to as the geodesic distance based on arc length~\cite{wong1967differential}. For an orthogonally invariant metric $d(\cdot,\cdot),$ the generalized Grassmann mean of $\left\{\*X_i\right\}_{i=1}^M \in \textrm{Gr}(k,n)$ is defined as
\begin{equation}
\label{eq:generalized_mean}
\*U^* = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin} \left(\sum_{i=1}^M d(\*U,\*X_i)^p\right)^{\nicefrac{1}{p}}.
\end{equation}
When $p=2$ the solution is the Grassmannian center of mass, or the Karcher mean~\cite{karcher}.
This manuscript is concerned with computing the generalized Grassmann mean when $p \to \infty.$ However, rather than using a Grassmannian metric, we measure dissimilarity by the squared chordal distance, $d(\*A,\*B) = \|\sin(\bm{\theta}(\*A,\*B))\|_2^2.$ A common interpretation of $\ell_{\infty}$-norm minimization is that it minimizes the maximum value. In this context we wish to solve,
\begin{equation}
\label{eq:minmax}
\*U^* = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin} \lim_{p \to \infty} \left(\sum_{i=1}^M d(\*U,\*X_i)^{p}\right)^{\nicefrac{1}{p}} = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin} \max_{i = 1, \ldots, M} d(\*U,\*X_i),
\end{equation}
for a collection of Grassmannian points, $\left\{\*X_i\right\}_{i=1}^M$. The solution, $\*U^{*},$ can be referred to as the minimax center, and is the center of the minimum enclosing ball of the collection on Gr$(k,n).$
Alternatively, let $\mathcal{D} = \left\{\*X_i \right\}_{i=1}^M$ be a finite collection of subspaces of $\mathbb R^n$ with possibly different dimensions, so that dim$(\*X_i) = p_i.$ For the set of positive integers $\mathcal{P} = \{\textrm{dim}(\*X_i) : \*X_i\in\mathcal{D} \}$ we can consider $\mathcal{D}$ as a collection of points lying on the disjoint union of Grassmann manifolds, $\*X_i \in \coprod_{p \in \mathcal{P}}{\textrm{Gr}(p,n)}.$ In this scenario Equation~\eqref{eq:minmax} is not well-defined without further formalism. To account for the difference in subspace dimensions, we adopt the convention of~\cite{ye_lim} by redefining $d(\*U,\*X_i)$ as the minimum distance between $\*U$ and a subset of points on $\textrm{Gr}(k,n),$ appropriately defined for each $\*X_i \in \mathcal{D}$. Each subspace is associated with one of two types of subset, which are defined by
\begin{equation}
\label{eq:schubs}
\begin{aligned}
\Omega_{+}(\*X_i) \doteq \left\{\*Y \in \textrm{Gr}(k,n) : \*X_i\subseteq \*Y \right\} &\textrm{ for } p_i < k, \textrm{ and} \\
\Omega_{-}(\*X_i) \doteq \left\{\*Y \in \textrm{Gr}(k,n) : \*Y \subseteq \*X_i \right\} & \textrm{ for } p_i \geq k.
\end{aligned}
\end{equation}
We use $\Omega_{*}(\*X_i)$ when referring to either type generically. For $\*X_i$ such that $p_i < k$, $\Omega_{+}(\*X_i)$ is the set of all points of Gr$(k,n)$ containing $\*X_i.$ Alternatively when $\*X_i$ is a $p_i$-plane with $p_i > k,$ $\Omega_{-}(\*X_i)$ is all $k$-dimensional subspaces contained in $\*X_i,$ and when $p_i = k$ the subset of points is just the singleton, $\*X_i$.
Finally, we overload the notation for distance so that
\begin{equation}
\label{eq:schub_dist}
d_{\textrm{Gr}(k,n)}(\*U,\*X_i) \doteq d_{\textrm{Gr}(k,n)}(\*U,\Omega_{*}(\*X_i)) = \min \{ d(\*U,\*Y_i) : \*Y_i \in \Omega_{*}(\*X_i)\}
\end{equation}
when the distance is being measured on $\textrm{Gr}(k,n)$ and the data comes from Grassmann manifolds of possibly differing dimension. This is the proposed distance of~\cite{ye_lim}, which is well-defined on any single fixed Grassmannian. Figure~\ref{fig:schubs} shows an illustration of this distance as the length of the shortest path between a point, $\*U$, and the set of points, $\Omega_{*}(\*X_i)$.
\begin{figure*}[t]
\centering
\input{gmeb_fig02.tex}
\caption{\label{fig:schubs}Illustration of the minimum point-to-set distance on $\textrm{Gr}(k,n)$ between $\*U$ and the sets $\Omega_{-}(\*X_1)$ and $\Omega_{+}(\*X_2)$ associated with points on $\textrm{Gr}(k+1,n)$ and $\textrm{Gr}(k-1,n)$, respectively. The points that realize the minimum distance are $\*Y_1 \in \Omega_{-}(\*X_1)$ and $\*Y_2 \in \Omega_{+}(\*X_2)$.}
\end{figure*}
The minimum in Equation~\eqref{eq:schub_dist} always exists because $\Omega_{*}(\*X_i)$ is a closed subset of the Grassmannian, and the points satisfying $\*Y_i = \argmin_{\*Y \in \Omega_{*}(\*X_i)} d(\*U,\*Y)$ are independent of the choice of orthogonally invariant distance measure~\cite{schwickerath2014linear}. Let $U^TX_i = V \Sigma W^T$ be a thin SVD as in Equation~\eqref{eq:svd}. One point that achieves the minimum distance is the column space of the matrix defined by
\begin{equation}
\label{eq:y_def}
Y_i \doteq
\begin{dcases}
\left[X_i\*w_1, \ldots,X_i\*w_k\right] & \textrm{for } p_i\geq k; \\[4pt]
\left[X_i\*w_1, \ldots,X_i\*w_{p_i}, U\*v_{p_i+1},\ldots, U\*v_k\right] & \textrm{otherwise.}
\end{dcases}
\end{equation}
This formalism implies that distances can be written as a function of exactly $k$ principal angles regardless of the dimension of $\*X_i$, and conveniently the definition agrees with many pseudo-metrics commonly used in the literature that measure similarity as a function of the (possibly less than $k$) principal angles between subspaces of different dimension. It should be clear, however, that this is not a metric because the distance between $\*A$ and $\*B$ will be zero if $\*A$ is a proper subspace of $\*B$, despite being non-identical.
The minimum point-to-set distance using the squared chordal distance is
\begin{equation}
\begin{aligned}
\label{eq:p2s_dist}
d_{\textrm{Gr}(k,n)}(\*U,\*X_i) &=\|\sin(\bm{\theta}(\*U,\*Y_i))\|_2^2 \\
&= \frac{1}{2}\|U^{}_k U^{T}_k - Y_i^{} Y_i^T \|_F^2 \\
& = k - \textrm{Tr}(U^{T}Y_i^{} Y_i^{T}U^{}) \\
&= \min \{k,p_i\} - \textrm{Tr}(U^TX_i^{}X_i^TU),
\end{aligned}
\end{equation}
where $\bm{\theta}(\*U,\*Y_i) \in \mathbb{R}^k$ is the vector of principal angles between $\*U$ and the point $\*Y_i \in \Omega_{*}(\*X_i)$ that attains the minimum. The final equality in Equation~\eqref{eq:p2s_dist} can be seen from the definition of $\*Y_i$ in Equation~\eqref{eq:y_def} and will be demonstrated in Equation~\eqref{eq:chordal}. Note that it is not necessary to know $\*Y_i$ in order to compute $d_{\textrm{Gr}(k,n)}(\*U,\*X_i).$ With this definition and choice of distance measure, the problem in~\eqref{eq:minmax} is well-defined when written as
\begin{equation}
\label{eq:formal_prob}
\*U^* = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin} \max_{i = 1, \ldots, M} d_{\textrm{Gr}(k,n)}(\*U,\*X_i).
\end{equation}
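Equations~\eqref{eq:y_def} and~\eqref{eq:p2s_dist} can be checked numerically. The sketch below is illustrative NumPy code (the helper names are ours): it computes the point-to-set distance without forming $\*Y_i$, and separately constructs a minimizer $\*Y_i$ following Equation~\eqref{eq:y_def}.

```python
import numpy as np

def p2s_dist(U, X):
    """Point-to-set distance of Eq. (p2s_dist) on Gr(k, n):
    min{k, p_i} - Tr(U^T X X^T U), computed without forming Y_i."""
    k, p = U.shape[1], X.shape[1]
    return min(k, p) - np.sum((U.T @ X) ** 2)

def nearest_in_set(U, X):
    """A minimizer Y_i in Omega_*(X_i), following Eq. (y_def)."""
    k, p = U.shape[1], X.shape[1]
    V, _, Wt = np.linalg.svd(U.T @ X, full_matrices=True)
    if p >= k:
        return X @ Wt.T[:, :k]                      # [X w_1, ..., X w_k]
    return np.hstack([X @ Wt.T, U @ V[:, p:]])      # pad with U v_{p+1..k}

U = np.eye(4)[:, [0, 1]]        # span{e1, e2} on Gr(2, 4)
X_sub = np.eye(4)[:, [0]]       # span{e1} is contained in U: distance 0
X_perp = np.eye(4)[:, [2]]      # span{e3} is orthogonal to U: distance 1
Y = nearest_in_set(U, X_sub)    # a 2-plane containing X_sub at distance 0
```

In the first example $\Omega_{+}(\*X_i)$ applies, and the returned $Y$ pads the basis of $X$ with a trailing principal vector of $U$, exactly as in the second case of Equation~\eqref{eq:y_def}.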
Using the notion of distance from Equation~\eqref{eq:schub_dist}, an algorithm was proposed in~\cite{renard2018grassmannian} to solve Problem~\eqref{eq:formal_prob} for a given value of $k$. Since the data is not of uniform dimension, one of our goals is to find the solution, across all possible values of $k$, that best represents the common subspace in the data. In Section~\ref{sec:ord_select} we propose an order-selection rule for comparing solutions of different dimension; however, we must first be able to find the solutions of different dimension efficiently. In general $\*U^*(k) \in \textrm{Gr}(k,n)$ is not contained in $\*U^*({k+1}) \in \textrm{Gr}(k+1,n),$ so it is not possible to construct the respective solutions iteratively via deflation. Instead, the problem needs to be solved independently for each value of $k$.
\section{Dual formulation}
\label{sec:existing}
Problem~\eqref{eq:formal_prob} is nonconvex and challenging to optimize directly. Therefore, in this section we formulate its dual function which can be solved efficiently. The dual variables also provide a primal-feasible solution, which can be tested for optimality.
Using Equation~\eqref{eq:p2s_dist}, Problem~\eqref{eq:formal_prob} can be written as one with matrix arguments that can be identified with the Grassmannian points they represent. That is,
\begin{equation}
\label{eq:matrix_eq}
\begin{aligned}
U^*
= & \ \underset{U \in \mathbb R^{n \times k}}{\argmin} \max_{i = 1, \ldots, M} \left(\min \{k,p_i\} - \textrm{Tr}(U^TX_i^{}X_i^TU) \right)\\
& \ \textrm{s.t. } U^TU = I,
\end{aligned}
\end{equation}
where $U$ is an orthonormal basis for $\*U,$ $X_i$ is an orthonormal basis for $\*X_i,$ and $p_i = \dim(\*X_i).$ The solution to \eqref{eq:formal_prob} is then the column space of the solution to \eqref{eq:matrix_eq}, $\*U^* = \textrm{col}(U^*)$. For ease of notation we will treat the dual problem as a minimization, so we reformulate the primal as,
\begin{equation}
\begin{aligned}
U^*
= & \ \underset{U \in \mathbb R^{n \times k}}{\argmax} \min_{i = 1, \ldots, M} - \left(\min \{k,p_i\} - \textrm{Tr}(U^TX_i^{}X_i^TU) \right)\\
& \ \textrm{s.t. } U^TU = I.
\end{aligned}
\end{equation}
Adding an auxiliary variable $\tau$, the quadratic cost function to be minimized is replaced by a smooth linear objective that is maximized with respect to quadratic inequality constraints,
\begin{subequations}
\begin{align}
U^*
= & \ \underset{U \in \mathbb R^{n \times k}, \ \tau \in \mathbb R}{\argmax} \tau \\
& \ \textrm{s.t. } -\tau - \min \{k,p_i\} + \textrm{Tr}(U^TX_i^{}X_i^TU) \geq 0 \textrm{ for } i = 1, \ldots, M, \label{eq:const1}\\
& \ \hphantom{s.t. } U^TU = I. \label{eq:const2}
\end{align}
\end{subequations}
Let $\bm{\lambda} = [\lambda_1, \ldots, \lambda_M]^{T}$ be a vector of Lagrange multipliers associated with the inequality constraints in~\eqref{eq:const1}. Dualizing only the inequality constraints leads to the Lagrangian%
\begin{equation}
\begin{aligned}
\mathcal{L}(U,\tau,\bm{\lambda}) &= \tau + \sum_{i=1}^M \lambda_i \left(-\tau -\min \{k,p_i\} + \textrm{Tr}(U^TX_i^{}X_i^TU) \right), \label{eq:lagrange}
\end{aligned}
\end{equation}
such that $U^TU=I$ and $\lambda_i \geq 0$ for $i = 1,\ldots, M,$ with first-order optimality conditions
\begin{subequations}
\begin{align}
\sum_{i=1}^M \lambda_i &= 1 && \left(\nabla_{\tau} \mathcal{L}(U,\tau,\bm{\lambda}) = 0 \right),\label{eq:kkt2}\\
-\tau - \min \{k,p_i\} + \textrm{Tr}(U^TX_i^{}X_i^TU) &\geq 0 \ \textrm{for } i = 1, \ldots, M && \left(\nabla_{\bm{\lambda}} \mathcal{L}(U, \tau,\bm{\lambda}) \geq 0 \right),\label{eq:kkt5}\\
\lambda_i\big(\tau + \min \{k,p_i\} - \textrm{Tr}(U^TX_i^{}X_i^TU) \big) &= 0 \ \textrm{for } i = 1, \ldots, M && (\textrm{complementarity}), \label{eq:kkt6} \\
\lambda_i &\geq 0 \ \textrm{for } i = 1, \ldots, M && (\textrm{nonnegativity}) \label{eq:kkt1}.
\end{align}
\end{subequations}
The dual of Equation~\eqref{eq:lagrange} is found by maximizing $\mathcal{L}$ over $U$ and $\tau$,
\begin{equation}
\begin{aligned}
f(\bm{\lambda}) &= \sup_{\tau}\big( \tau - \sum_{i=1}^M \lambda_i \tau \big) + \sup_{U^TU = I} \left( \sum_{i=1}^M -\lambda_i^{}\big(\min \{k,p_i\} - \textrm{Tr}(U^TX_i^{}X_i^TU) \big)\right).
\end{aligned}
\end{equation}
The maximum over $\tau$ yields $f(\bm{\lambda}) = \infty$ unless $\|\bm{\lambda}\|_1 = 1,$ in which case the first term is zero, and the dual can be written as
\begin{equation}
\label{eq:dual_sup}
\begin{aligned}
f(\bm{\lambda}) &= -\sum_{i=1}^M \lambda_i^{} \min \{k,p_i\} + \sup_{U^TU = I} \textrm{Tr}(U^T(\sum_{i=1}^M \lambda_iX_i^{}X_i^T ) U).
\end{aligned}
\end{equation}
The set of $n \times k$ matrices with orthonormal columns is closed, thus the supremum is achieved by an element of the set. For a given $\bm{\lambda} \in \mathbb R^{M},$ let $\big(\sum_{i=1}^M \lambda_i^{}X_i^{} X_i^{T}\big) V= VD$ be an orthogonal eigenvector decomposition with the eigenvalues sorted in nonincreasing order, $D_{11} \geq D_{22} \geq \cdots \geq D_{nn}.$ The matrix whose columns are the $k$ dominant eigenvectors,
\begin{equation}
\label{eq:weightedEVD}
U_{\bm{\lambda}} \doteq [\*v_1, \ldots, \*v_k],
\end{equation}
satisfies $U_{\bm{\lambda}}^TU_{\bm{\lambda}}^{} = I$ and maximizes the term $\textrm{Tr}(U^T(\sum_{i=1}^M \lambda_iX_i^{}X_i^T ) U),$ so we can write
\begin{equation}
\label{eq:dual_max}
\begin{aligned}
f(\bm{\lambda}) &= -\sum_{i=1}^M \lambda_i^{} \min \{k,p_i\} + \textrm{Tr}(U_{\bm{\lambda}}^T(\sum_{i=1}^M \lambda_iX_i^{}X_i^T ) U_{\bm{\lambda}}^{}).
\end{aligned}
\end{equation}
Finally, we wish to solve the optimization problem,
\begin{equation}
\label{eq:dualProb}
\bm{\lambda}^* = \argmin_{\bm{\lambda} \in \mathbb{R}^M} f(\bm{\lambda}) \textrm{ s.t. } \|\bm{\lambda}\|_1 = 1 \textrm{ and } \lambda_i \geq 0 \textrm{ for } i = 1 , \ldots, M,
\end{equation}
that minimizes the dual cost over all feasible weights, $\bm{\lambda}$.
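Evaluating the dual at a given $\bm{\lambda}$ therefore costs one weighted eigendecomposition. Below is a minimal NumPy sketch of Equations~\eqref{eq:weightedEVD} and~\eqref{eq:dual_max}; it is illustrative only, and the function name and toy data are ours.

```python
import numpy as np

def dual_value(lam, Xs, k):
    """f(lambda) from Eq. (dual_max), plus U_lambda from Eq. (weightedEVD).

    lam: weights on the unit simplex; Xs: orthonormal bases X_i (n x p_i).
    """
    S = sum(l * (X @ X.T) for l, X in zip(lam, Xs))
    evecs = np.linalg.eigh(S)[1]                 # ascending eigenvalues
    U = evecs[:, ::-1][:, :k]                    # k dominant eigenvectors
    f = (-sum(l * min(k, X.shape[1]) for l, X in zip(lam, Xs))
         + float(np.trace(U.T @ S @ U)))
    return f, U

# Two orthogonal lines in R^3, k = 1: f(lambda) = -1 + max(lambda_1, lambda_2).
Xs = [np.eye(3)[:, [0]], np.eye(3)[:, [1]]]
f, U = dual_value(np.array([0.7, 0.3]), Xs, k=1)
```

For this toy pair of lines the dual reduces to $f(\bm{\lambda}) = -1 + \max(\lambda_1, \lambda_2)$, so the computed value at $\bm{\lambda} = (0.7, 0.3)$ is $-0.3$ with $U_{\bm{\lambda}} = \pm\*e_1$.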
\section{Solution via subgradient}
\label{sec:solution}
The objective function of \eqref{eq:dualProb} is a nondifferentiable convex function. In this section we show how the subgradient method~\cite{shor2012minimization} can be applied to solve this dual problem. After an appropriate subgradient has been identified, the well-developed literature of subgradient algorithms provides a variety of techniques and step sizes to optimize Problem~\eqref{eq:dualProb} with associated convergence guarantees.
Recall that a vector $\*g \in \mathbb R^M$ is a subgradient of $f : \mathbb R^M \to \mathbb R$ at $\*x \in \textrm{dom } f$ if for all $\*z \in \textrm{dom } f$,
$$f(\*z)\geq f(\*x) + \*g^T(\*z-\*x).$$ In this case we denote that $\*g$ is in the subdifferential of $f$ at $\*x$ by writing $\*g \in \partial f(\*x)$. If $f$ is differentiable at $\*x$ then the gradient is the only subgradient and $\*g = \nabla f(\*x) = \partial f(\*x).$
To minimize $f$ in Problem~\eqref{eq:dualProb}, the subgradient method uses the iteration
\begin{equation}
\label{eq:subiter}
\bm{\lambda}^{(t+1)} = \Pi(\bm{\lambda}^{(t)} - \alpha^{(t)} \*g^{(t)}),
\end{equation}
where $\alpha^{(t)}$ is a step size selected to guarantee that the sequence $\{ \bm{\lambda}^{(t)}\}_{t=1}^{\infty}$ converges (in distance) to the optimum, $\bm{\lambda}^{*},$ and $\Pi:\mathbb R^M \to \{\*x : \|\*x\|_1=1, x_i \geq 0 \textrm{ for } i = 1,\ldots,M\} \subset \mathbb R^M$ projects the iterate into the unit simplex.
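The projection $\Pi$ onto the unit simplex can be computed exactly in $O(M \log M)$ time by the standard sort-and-threshold method. The sketch below shows one common implementation pattern; it is not the variant used in Algorithm~\ref{alg:subgrad}, which instead re-normalizes by the $\ell_1$ norm (sufficient there because every entry of the subgradient is nonpositive, so the shifted iterate stays nonnegative).

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : ||x||_1 = 1, x_i >= 0 for all i}.

    Sort-and-threshold: find the shift theta such that max(v - theta, 0)
    sums to one; O(M log M) for v of length M.
    """
    u = np.sort(v)[::-1]                  # entries in decreasing order
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    rho = idx[u - css / idx > 0][-1]      # support size of the projection
    theta = css[rho - 1] / rho
    return np.maximum(v - theta, 0.0)

p1 = project_simplex(np.array([2.0, 0.0]))        # -> [1, 0]
p2 = project_simplex(np.array([0.5, -0.2, 0.9]))  # -> [0.3, 0, 0.7]
```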
There is a standard trick for computing a subgradient of the dual function that can be adapted to this problem from nonlinear optimization texts such as~\cite{bertsekas1997nonlinear}. Write the Lagrangian as $\mathcal{L}(U,\tau,\bm{\lambda}) = q(U,\tau) + \bm{\lambda}^T\*g(U,\tau),$ where $q(U,\tau)$ is the primal objective function and $\*g(U,\tau) \in \mathbb R^M$ is the vector of constraint values. Given the dual variable, $\bm{\lambda}^{(t)} \in \mathbb R^M,$ at iteration $t,$ let $(U_{\bm{\lambda}^{(t)}},\tau_{\bm{\lambda}^{(t)}})$ be the primal variable that maximizes the Lagrangian. Then $\*g^{(t)} = \*g(U_{\bm{\lambda}^{(t)}},\tau_{\bm{\lambda}^{(t)}})$ is a subgradient of the dual function, $f,$ at $\bm{\lambda}^{(t)}.$
In our case $U_{\bm{\lambda}^{(t)}}$ is defined by Equation~\eqref{eq:weightedEVD} and the $i$th element of the constraint vector is $g_i(U_{\bm{\lambda}^{(t)}},\tau_{\bm{\lambda}^{(t)}}) = -\tau_{\bm{\lambda}^{(t)}} - \min \{k,p_i\} + \textrm{Tr}(U_{\bm{\lambda}^{(t)}}^TX_i^{}X_i^TU_{\bm{\lambda}^{(t)}}^{}).$ However, the constant vector $[-\tau_{\bm{\lambda}^{(t)}}, \ldots, -\tau_{\bm{\lambda}^{(t)}}]^T \in\mathbb R^M$ does not affect the direction after projection onto the unit simplex, so a subgradient of $f(\bm{\lambda}^{(t)})$ is
\begin{equation}
\label{eq:generic_subgrad}
\*g^{(t)} =
\begin{pmatrix}
- \min \{k,p_1\} + \textrm{Tr}(U_{\bm{\lambda}^{(t)}}^TX_1^{}X_1^TU_{\bm{\lambda}^{(t)}}^{}) \\
\vdots \\
- \min \{k,p_M\} + \textrm{Tr}(U_{\bm{\lambda}^{(t)}}^TX_M^{}X_M^TU_{\bm{\lambda}^{(t)}}^{})
\end{pmatrix}.
\end{equation}
We can check that $\*g^{(t)}$ is a subgradient of $f$ as follows. For any $\tilde{\bm{\lambda}} \in \mathbb R^M$ such that $\| \tilde{\bm{\lambda}}\|_1 = 1$ and $\tilde{\lambda}_i \geq 0 $ for $i = 1, \ldots, M$ we have
\begin{equation}
\begin{aligned}
f(\bm{\lambda}^{(t)}) + \*g^{(t)T}(\tilde{\bm{\lambda}} - \bm{\lambda}^{(t)}) &= f(\bm{\lambda}^{(t)}) + \*g^{(t)T}\tilde{\bm{\lambda}}-\*g^{(t)T}\bm{\lambda}^{(t)} \\
&= f(\bm{\lambda}^{(t)}) + \*g^{(t)T}\tilde{\bm{\lambda}} - f(\bm{\lambda}^{(t)}) \\
&= -\sum_{i=1}^M \tilde{\lambda_i^{}} \min \{k,p_i\} + \textrm{Tr}(U_{\bm{\lambda}^{(t)}}^T(\sum_{i=1}^M \tilde{\lambda_i^{}} X_i^{}X_i^T ) U_{\bm{\lambda}^{(t)}}^{})\\
&\leq -\sum_{i=1}^M \tilde{\lambda_i^{}} \min \{k,p_i\} + \max_{U^TU = I} \textrm{Tr}(U^T(\sum_{i=1}^M \tilde{\lambda_i^{}}X_i^{}X_i^T ) U)\\
&= f(\tilde{\bm{\lambda}}),
\end{aligned}
\end{equation}
and thus $\*g^{(t)} \in \partial f(\bm{\lambda}^{(t)}).$
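This inequality is easy to stress-test numerically. The snippet below is an illustrative self-check with randomly generated subspaces (not part of the experiments reported later): it evaluates $f$ and $\*g$ at random simplex points and records the worst-case violation of the subgradient inequality.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_and_g(lam, Xs, k):
    """f(lambda) of Eq. (dual_max) and g of Eq. (generic_subgrad).

    Note f(lambda) = lambda^T g, since Tr(U^T S U) splits over the sum.
    """
    S = sum(l * (X @ X.T) for l, X in zip(lam, Xs))
    U = np.linalg.eigh(S)[1][:, ::-1][:, :k]      # dominant eigenvectors
    g = np.array([np.sum((U.T @ X) ** 2) - min(k, X.shape[1]) for X in Xs])
    return float(lam @ g), g

# Random subspaces of R^6 with differing dimensions p_i = 2, 3, 4.
Xs = [np.linalg.qr(rng.standard_normal((6, p)))[0] for p in (2, 3, 4)]
lam0 = rng.dirichlet(np.ones(3))
f0, g0 = f_and_g(lam0, Xs, k=2)

# Check f(lam) >= f(lam0) + g0^T (lam - lam0) at random simplex points.
worst = min(
    f_and_g(lam, Xs, 2)[0] - f0 - g0 @ (lam - lam0)
    for lam in rng.dirichlet(np.ones(3), size=200)
)
```

Since $f$ is a pointwise supremum of functions linear in $\bm{\lambda}$, and $\*g^{(t)}$ is the slope of the linear function active at $\bm{\lambda}^{(t)}$, the recorded slack is nonnegative up to rounding.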
\subsection{Convergence}
\label{subsec:conv}
The subgradient $\*g^{(t)}$ can be used to update $\bm{\lambda}^{(t)}$ via the iteration in~\eqref{eq:subiter}. The subgradient method is not a descent method, so the value of the objective function at step $t+1$ may be larger than it was at step $t$. Thus we keep track of the dual variable with the lowest cost at each iteration and denote it
\begin{equation}
\bm{\lambda}^{(t+1)}_{\textrm{best}} =
\begin{dcases}
\bm{\lambda}^{(t)}_{\textrm{best}} & f(\bm{\lambda}^{(t+1)}) > f(\bm{\lambda}_{\textrm{best}}^{(t)}); \\[4pt]
\bm{\lambda}^{(t+1)} & \textrm{otherwise.}
\end{dcases}
\end{equation}
Given an upper bound on the norm of the subgradients, $\|\*g^{(t)}\|_2 \leq G < \infty$ for all $t,$ classical theory makes different guarantees on the convergence of the sequence of iterates, $\{\bm{\lambda}^{(t)}\}_{t=1}^{\infty},$ and thus on the sequence of objective function values, $\{f(\bm{\lambda}^{(t)}_{\textrm{best}} )\}_{t=1}^{\infty},$ depending on the choice of step size, $\alpha^{(t)}.$ For example, with step sizes independent of iteration like $\alpha^{(t)} = a$ or $\alpha^{(t)} = \nicefrac{a}{\|\*g^{(t)}\|_2}$ for some $a>0$, the subgradient algorithm will converge respectively to within $\nicefrac{G^2 a }{2}$ or $\nicefrac{G a }{2}$ of the optimal value~\cite{bertsekas1997nonlinear}. Alternatively, if the step size converges to zero and the sequence is nonsummable or square-summable, that is, $\lim_{t \to \infty} \alpha^{(t)} = 0$ and
\begin{equation}
\label{eq:decreasingStep}
\sum_{t=1}^{\infty} \alpha^{(t)} = \infty \quad \textrm{or} \quad \sum_{t=1}^{\infty} (\alpha^{(t)})^2 < \infty,
\end{equation}
the subgradient method converges to an optimal objective value, $\lim_{t \to \infty} f(\bm{\lambda}^{(t)}_{\textrm{best}}) = f(\bm{\lambda}^*).$ These conditions are satisfied by step sizes such as $\alpha^{(t)} = \nicefrac{a}{\sqrt{t}}$ for $a>0,$ or $\alpha^{(t)} = \nicefrac{a}{(b+t)}$ where $a > 0$ and $b \geq 0.$ Proofs of these results can be found in standard literature on convex optimization for nonsmooth problems such as \cite{bertsekas1997nonlinear,shor2012minimization, hiriart2013convex}.
Although the theory requires $\alpha^{(t)}$ to satisfy the constraints in \eqref{eq:decreasingStep} for convergence, such small step sizes lead to very slow convergence. In practice we can find an approximate solution quickly by stepping in the direction of a subgradient while requiring the dual objective to decrease at each iteration. Algorithm~\ref{alg:subgrad} (in Appendix~\ref{sec:alg}) solves Problem~\eqref{eq:dualProb} by performing a back-tracking line search in the direction of $\*g^{(t)} \in \partial f(\bm{\lambda}^{(t)})$ to ensure that the dual objective decreases at each step; however, this method is not guaranteed to converge because $\*g^{(t)}$ is not necessarily a descent direction. The practical implementation of Algorithm~\ref{alg:subgrad} is a hybrid of a back-tracking line search and a nonsummable diminishing step size, and for a fixed dimension $k$ it identifies a stationary point of the dual problem while providing a feasible solution to the primal problem. It is not intended to be a state-of-the-art subgradient algorithm, but rather one example of an implementation that is faster than the standard $\nicefrac{a}{(b+t)}$ square-summable step size. Alternatively, a well-established quasi-Newton method like the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm~\cite{curtis2017bfgs} can be used to solve Equation~\eqref{eq:dualProb}, but empirically its convergence rate is comparable to that of the algorithm presented here for this problem.
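For reference, a bare-bones version of the projected-subgradient iteration, with the nonsummable diminishing step $\alpha^{(t)} = \nicefrac{a}{\sqrt{t}}$, the $\ell_1$ re-normalization, and best-iterate tracking, but without any line search, can be written as follows. This is an illustrative sketch, not Algorithm~\ref{alg:subgrad} itself, and the toy example is ours.

```python
import numpy as np

def gmeb_dual_subgradient(Xs, k, a=0.1, iters=300):
    """Minimize the dual f(lambda) of Eq. (dualProb) by subgradient steps.

    Uses alpha_t = a / sqrt(t) (nonsummable diminishing) and tracks the
    best iterate, since the subgradient method is not a descent method.
    The l1 re-normalization stays on the simplex because every entry of
    the subgradient is nonpositive (Tr(U^T X X^T U) <= min{k, p_i}).
    """
    M = len(Xs)
    lam = np.full(M, 1.0 / M)
    lam_best, f_best = lam.copy(), np.inf
    for t in range(1, iters + 1):
        S = sum(l * (X @ X.T) for l, X in zip(lam, Xs))
        U = np.linalg.eigh(S)[1][:, ::-1][:, :k]   # dominant eigenvectors
        g = np.array([np.sum((U.T @ X) ** 2) - min(k, X.shape[1]) for X in Xs])
        f = float(lam @ g)                         # f(lambda) = lambda^T g
        if f < f_best:
            f_best, lam_best = f, lam.copy()
        lam = lam - (a / np.sqrt(t)) * g           # subgradient step
        lam = lam / lam.sum()                      # l1 re-normalization
    return lam_best, f_best

# Two orthogonal lines in R^3: the dual optimum is f = -1/2 at (1/2, 1/2),
# matching the primal minimax value sin^2(pi/4) = 1/2 for the bisecting line.
Xs = [np.eye(3)[:, [0]], np.eye(3)[:, [1]]]
lam_star, f_star = gmeb_dual_subgradient(Xs, k=1)
```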
\subsection{Optimality}
\label{subsec:opt}
In addition to theoretical convergence guarantees, the optimality of a solution to the dual subgradient approach can be verified in some cases. Let $\bm{\lambda}^{*}$ be a solution to Problem~\eqref{eq:dualProb}. There exists a matrix $U_{\bm{\lambda}^{*}}$ whose columns are the $k$ dominant eigenvectors of $\sum_{i=1}^M \lambda_i^{*}X_i^{} X_i^{T}$, analogous to Equation~\eqref{eq:weightedEVD}. Then $U_{\bm{\lambda}^{*}}$ satisfies $U_{\bm{\lambda}^{*}}^T U_{\bm{\lambda}^{*}} = I$ and is thus a feasible solution to the primal problem in \eqref{eq:matrix_eq}. If the primal and dual objective functions are equal, strong duality holds and implies that $\bm{\lambda}^{*}$ and $\*U^{*} = \textrm{col}(U_{\bm{\lambda}^{*}})$ are globally optimal dual and primal variables, respectively. Empirically, this occurs for collections of data that satisfy an implicit assumption of minimax optimization: that the data collection is free of outliers. Even when strong duality does not hold, the duality gap gives a bound on the maximum possible improvement for a solution.
This verification of optimality is standard for problems where the primal and dual costs are both computable, but existing techniques for finding the GMEB do not offer this feature. For instance, using a primal method like \cite{renard2018grassmannian} does not directly provide a solution to the dual problem, and thus the duality gap is unknown. Section~\ref{subsec:accuracy} contains numerical experiments that demonstrate the accuracy of the proposed subgradient method.
\section{Proposed order selection rule}
\label{sec:ord_select}
Given a dimension, $k$, and a finite collection of subspaces, $\mathcal{D} = \left\{\*X_i \in \textrm{Gr}(p_i,n) \right\}_{i=1}^M,$ there exist subspaces,
\begin{equation}
\*{U}^{*}(k) = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin} \ \underset{i = 1, \ldots, M}{\max} d_{\textrm{Gr}(k,n)}(\*U,\*X_i),
\label{eq:minmax_cost}
\end{equation}
for $k = 1, \ldots, \max_i\{\textrm{dim}(\*X_i)\}$. The argument $k$ is now included in the notation for the GMEB center to emphasize that the subspace depends on the parameter $k$, and may differ significantly depending on the value of this parameter. Section~\ref{sec:solution} described a method to compute $\*U^*(k)$ from the associated dual variable, $\bm{\lambda}^{*}(k) \in \mathbb R^M.$ However, because $\mathcal{D}$ contains subspaces of differing dimension, it is unclear on which Grassmannian the minimum enclosing ball should be computed. Thus, given the set $\mathcal{D}$, in this section we would like to determine the optimal choice for $k,$ in addition to the associated center $\*U^*(k)$. Please note a change in notation: the costs associated with a particular order, $k$, are more intuitive when the primal is formulated as a minimization problem and the dual is a maximization. Therefore, as shown in Equation~\eqref{eq:minmax_cost}, the primal minimization formulation is used for the remainder of the manuscript. The prior formulation was only used for ease of notation in the subgradient method.
All orthogonally invariant distances on Gr$(k,n)$ can be written as a function of the $k$ principal angles between a pair of points. It should be clear from the definition in Equation~\eqref{eq:angles} that each angle is bounded above by $\nicefrac{\pi}{2},$ and thus that the squared chordal distance is bounded above by $k$. Scaling the primal objective function by $\nicefrac{1}{k}$ normalizes the cost associated with $\*U^{*}(k)$ so that the value of
\begin{equation}
\label{eq:scaled_obj}
c_{\textrm{obj}}(k):=
\begin{dcases}
0 & k=0; \\
\max_{i = 1, \ldots, M} \frac{d_{\textrm{Gr}(k,n)}(\*U^{*}(k),\*X_i)}{k} & k = 1, \ldots, \max_i\{\textrm{dim}(\*X_i)\},
\end{dcases}
\end{equation}
gives a fair comparison across different values of $k.$ The normalized objective function achieves its maximum value, $c_{\textrm{obj}}(k)=1,$ when there exists an $i$ such that $\*X_i \perp \*U^{*}(k).$ That is, $\*U^{*}(k)$ contains no information about at least one of the points in $\mathcal{D}$. At the other extreme, the minimum occurs when $k=0$, and when the point of each $\Omega_{*}(\*X_i)$ closest to the center coincides with the center. That is, $c_{\textrm{obj}}(k)=0$ when $\*Y^{*}_i(k) = \*U^{*}(k)$ for all $i,$ where $\*Y_i^{*}(k) = \underset{\*Y_i \in \Omega_{*}(\*X_i)}{\argmin} d_{\textrm{Gr}(k,n)}(\*U^{*}(k),\*Y_i).$
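To make the normalized objective concrete, the following Python sketch (the names and the toy data are illustrative assumptions) computes the point-to-set squared chordal distance via the identity $d_{\textrm{Gr}(k,n)}(\*U,\*X) = \min(k,p) - \|U^T X\|_F^2$, which follows from the principal-angle definition, and evaluates $c_{\textrm{obj}}$ at its two extremes:

```python
import numpy as np

def dist_sq(U, X):
    # Squared point-to-set chordal distance between col(U) on Gr(k, n)
    # and col(X) on Gr(p, n): the sum of sin^2 of the principal angles,
    # which equals min(k, p) - ||U^T X||_F^2 for orthonormal bases.
    m = min(U.shape[1], X.shape[1])
    return m - np.linalg.norm(U.T @ X, 'fro') ** 2

def c_obj(U, data):
    # Normalized objective: the maximum distance scaled by 1/k, in [0, 1].
    return max(dist_sq(U, X) for X in data) / U.shape[1]

I5 = np.eye(5)
U = I5[:, :2]                               # span(e1, e2) in R^5
same, orthogonal = I5[:, :2], I5[:, 2:4]    # the two extreme cases
```

With these definitions, `c_obj(U, [same])` is $0$ and `c_obj(U, [orthogonal])` is $1$, matching the extremes described above.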
Simply minimizing $c_{\textrm{obj}}(k)$ with respect to $k$ is not sufficient to identify the ideal dimension of $\*U^{*}(k)$ because on average $c_{\textrm{obj}}(k) \leq c_{\textrm{obj}}(k+1)$ irrespective of the relationship between the data points, and of course $c_{\textrm{obj}}(0) = 0$ by definition. However, the dimension of the ideal center should represent all the common information without over-fitting, and should also indicate when no significant relationship exists between the data. Thus we propose a penalty term based on the dimensions of the data not represented by $\*U^{*}(k)$ that balances the information lost by making $k$ too small with the lack of specificity that comes from setting $k$ too large.
Let $\*U^{*\perp}(k)$ denote the orthogonal complement of $\*U^{*}(k)$ and $\tilde{p}_j \doteq \min\{n-k,\textrm{dim}(\*X_j)\}$ for $j=1,\ldots,M.$ The expression
\begin{equation}
\label{eq:proposed_penalty}
c_{\textrm{pen}}(k):=
\begin{dcases}
1 & k=0; \\
\underset{j=1,\ldots,M}{\min} \left(1 -\frac{ d_{\textrm{Gr}(\tilde{p}_j,n)}(\*U^{*\perp}(k),\*X_j)}{\tilde{p}_j}\right) & k = 1, \ldots, \max_j\{\textrm{dim}(\*X_j)\},
\end{dcases}
\end{equation}
represents the minimum similarity between any point in $\mathcal{D}$ and the dimensions not contained in the center of the GMEB. A high minimum similarity between points in $\mathcal{D}$ and $\*U^{*\perp}(k)$ implies that too much information is being left out of the central subspace, $\*U^{*}(k)$. The penalty term takes a value of $c_{\textrm{pen}}(k) = 1$ when dim$(\*U^{*\perp}(k) \cap \*X_j) = \tilde{p}_j$ for all $j$ and $c_{\textrm{pen}}(k) = 0$ when there exists a $j$ for which $\*X_j \perp \*U^{*\perp}(k).$ The sum of the terms in \eqref{eq:scaled_obj} and \eqref{eq:proposed_penalty} leads to the proposed order selection rule,
\begin{equation}
\label{eq:order_rule}
k^{*} = \argmin_{k= 0,\ldots, \max_i\{\textrm{dim}(\*X_i)\}} c_{\textrm{obj}}(k) + c_{\textrm{pen}}(k).
\end{equation}
The two terms in \eqref{eq:order_rule} are computed independently so the GMEB center is not affected by the penalty term. The value of $k^{*}$ that minimizes the sum of these two terms corresponds to the number of subspace dimensions needed to represent the common information present in $\mathcal{D}$ without over-fitting. Numerical experiments in Section~\ref{subsec:order_selection} demonstrate the efficacy of the order selection rule on simulated data with ground truth.
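A hedged Python sketch of the order selection rule follows. The GMEB centers are assumed to be precomputed by a solver; for the toy data set of three identical planes used here they are known in closed form, and the rule should select $k^*=2$:

```python
import numpy as np

def dist_sq(U, X):
    # Squared point-to-set chordal distance: min(k, p) - ||U^T X||_F^2.
    m = min(U.shape[1], X.shape[1])
    return m - np.linalg.norm(U.T @ X, 'fro') ** 2

def select_order(centers, data, n):
    # centers: {k: U*(k)}, assumed precomputed by a GMEB solver.
    costs = {0: 1.0}                         # c_obj(0) = 0, c_pen(0) = 1
    for k, U in centers.items():
        # Orthonormal basis of the complement from the full SVD of U.
        U_perp = np.linalg.svd(U, full_matrices=True)[0][:, k:]
        c_obj = max(dist_sq(U, X) for X in data) / k
        c_pen = min(1 - dist_sq(U_perp, X) / min(n - k, X.shape[1])
                    for X in data)
        costs[k] = c_obj + c_pen
    return min(costs, key=costs.get), costs

I5 = np.eye(5)
data = [I5[:, :2]] * 3                   # three identical planes span(e1, e2)
centers = {1: I5[:, :1], 2: I5[:, :2]}   # GMEB centers, known for this data
k_star, costs = select_order(centers, data, 5)
```

For this data the costs are $1$, $0.5$, and $0$ for $k=0,1,2$, so the rule correctly chooses the full common subspace.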
\subsection{Primal solutions are not nested in general for increasing values of $k$}
\label{subsec:not_nested}
Naively, the order selection rule in Equation~\eqref{eq:order_rule} can be applied by computing the costs $c_{\textrm{obj}}(k)$ and $c_{\textrm{pen}}(k)$ independently for $k=0,\ldots, \max_i\{\textrm{dim}(\*X_i)\}$ as follows,
\begin{enumerate}
\item Compute $\bm{\lambda}^{*}(k)$ using the subgradient method described in Section~\ref{sec:solution}.
\item Find the associated primal variable, $\*U^{*}(k),$ as the $k$-dimensional eigenspace of the weighted sum $\sum_{i=1}^M \lambda_i^{*}(k) X_i^{} X_i^T.$
\item Compute the orthogonal complement, $\*U^{*\perp}(k) = \textrm{col}\left(I - U^{*}(k) U^{*T}(k)\right).$
\end{enumerate}
Then $k^{*}$ is selected as the value of $k$ associated with the minimum cost, $c_{\textrm{obj}}(k) + c_{\textrm{pen}}(k)$. If $\bm{\lambda}^{*}(k) = \bm{\lambda}^{*}(k+1)$ for some $k<\max_i\{\textrm{dim}(\*X_i)\}$ then the solution on Gr$(k+1,n)$ can be constructed in a greedy fashion as the direct sum of the solution on Gr$(k,n)$ and the $(k+1)$st eigenvector of $\sum_{i=1}^M \lambda_i^{*}(k) X_i^{} X_i^T.$ Unfortunately, the dual variables are not generally equal for increasing values of $k$, so a greedy approach is not appropriate.
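Steps 2 and 3 can be sketched in a few lines of Python (an illustration; uniform weights stand in here for the output of Step 1):

```python
import numpy as np

def primal_from_dual(lmbda, data, k):
    # Step 2: U*(k) is the dominant k-dimensional eigenspace of the
    # weighted sum; Step 3: the complement is the remaining eigenspace.
    S = sum(l * X @ X.T for l, X in zip(lmbda, data))
    V = np.linalg.eigh(S)[1][:, ::-1]    # eigenvectors, descending order
    return V[:, :k], V[:, k:]

I5 = np.eye(5)
data = [I5[:, :2]] * 3                   # three copies of span(e1, e2)
U, U_perp = primal_from_dual([1/3, 1/3, 1/3], data, 2)
```

Here the weighted sum is the projector onto $\textrm{span}(e_1,e_2)$, so the returned $U$ spans that plane and $U_\perp$ its complement.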
Observe that the central subspaces are not nested for increasing dimensions in the following illustrative example. Let
\begin{equation}
\begin{aligned}
\label{eq:example_pts}
X_1 =
\begin{bmatrix}
\frac{\sqrt{2}}{\sqrt{3}}& 0 \\
\frac{1}{\sqrt{6}}& 0 \\
\frac{1}{\sqrt{6}} & 0 \\
0 & \frac{\sqrt{7}}{\sqrt{8}} \\
0 & \frac{1}{\sqrt{8}}
\end{bmatrix}, & &
X_2 =
\begin{bmatrix}
\frac{1}{\sqrt{6}}& 0 \\
\frac{\sqrt{2}}{\sqrt{3}}& 0 \\
\frac{1}{\sqrt{6}} & 0 \\
0 & \frac{1}{\sqrt{8}} \\
0 & \frac{\sqrt{7}}{\sqrt{8}}
\end{bmatrix},
& & \textrm{ and }
X_3 =
\begin{bmatrix}
\frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{6}} \\
\frac{\sqrt{2}}{\sqrt{3}} \\
0 \\
0
\end{bmatrix},
\end{aligned}
\end{equation}
be orthonormal bases for the three points $\*X_1, \*X_2 \in \textrm{Gr}(2,5) \textrm{ and } \*X_3 \in \textrm{Gr}(1,5).$ One can check that the subspace that minimizes the maximum distance to these three points on Gr$(1,5)$ is the mean of their first columns. That is, the optimal primal and dual variables are
\begin{equation}
\begin{aligned}
\*U^{*}(1) = \textrm{col}\left(
\begin{bmatrix}
\frac{1}{\sqrt{3}}&
\frac{1}{\sqrt{3}}&
\frac{1}{\sqrt{3}} &
0 &
0
\end{bmatrix}^T\right),
& & \textrm{ and }
\bm{\lambda}^{*}(1) =
\begin{bmatrix}
\frac{1}{\sqrt{3}}&
\frac{1}{\sqrt{3}}&
\frac{1}{\sqrt{3}}
\end{bmatrix}^T,
\end{aligned}
\end{equation}
with associated primal and dual costs of
\begin{equation}
\label{eq:kOne_duality_gap}
\underset{\*U \in \textrm{Gr}(1,5)}{\min} \max_{i = 1, 2, 3} d_{\textrm{Gr}(1,5)}(\*U,\*X_i) = \max_{\bm{\lambda} \in \mathbb{R}^3} \min_{U^TU = I} 1 - \sum_{i=1}^3 \lambda_i^{} \textrm{Tr}(U^{T}Y_i^{} Y_i^{T}U^{}) = \frac{1}{9}.
\end{equation}
The duality gap in Equation~\eqref{eq:kOne_duality_gap} is zero, indicating that this is a global solution.
On Gr$(2,5)$, however, $\Omega_{+}(\*X_3)$ consists of subspaces that span $X_3$ and any orthogonal direction. In particular there exists $\*Y_3 \in \Omega_{+}(\*X_3)$ such that the second column of $Y_3$ is $\left[0 \ 0 \ 0 \ \nicefrac{1}{\sqrt{2}} \ \nicefrac{1}{\sqrt{2}}\right]^T.$ This leads to a solution for the center of the minimum enclosing ball on Gr$(2,5)$ given by primal and dual variables
\begin{equation}
\begin{aligned}
\*U^{*}(2) = \textrm{col} \left(
\begin{bmatrix}
\frac{3}{\sqrt{22}}& \frac{3}{\sqrt{22}}& \frac{2}{\sqrt{22}} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{bmatrix}^T\right), && \textrm{ and }
\bm{\lambda}^{*}(2) =
\begin{bmatrix}
\frac{1}{2} & \frac{1}{2} & 0
\end{bmatrix}^T.
\end{aligned}
\end{equation}
Notably, $\*X_3$ is not in the support of the minimum enclosing ball on Gr$(2,5)$ and thus does not influence the central subspace. Strong duality also holds for this solution with
\begin{equation}
\label{eq:kTwo_duality_gap}
\underset{\*U \in \textrm{Gr}(2,5)}{\min} \max_{i = 1, 2, 3} d_{\textrm{Gr}(2,5)}(\*U,\*X_i) = \max_{\bm{\lambda} \in \mathbb{R}^3} \min_{U^TU = I} 2 - \sum_{i=1}^3 \lambda_i^{} \textrm{Tr}(U^{T}Y_i^{} Y_i^{T}U^{}) = \frac{14-3\sqrt{7}}{24}.
\end{equation}
Since $\*U^{*}(1)$ is orthogonal to the second dimension of $\*U^{*}(2)$ and noncollinear with the first, and the columns of $U^{*}(2)$ are orthogonal, we have $\*U^{*}(1) \not\subset \*U^{*}(2).$ Additionally we find that the optimal order selected by applying the rule in Equation~\eqref{eq:order_rule} is $k^* = 1,$ because
\begin{equation}
\begin{aligned}
&c_{\textrm{obj}}(0) + c_{\textrm{pen}}(0) = 0 + 1 = 1, \\
&c_{\textrm{obj}}(1) + c_{\textrm{pen}}(1) = \frac{1}{1}\left(\frac{1}{9}\right)+ \frac{1}{1}\left( 1 - \left(\frac{\sqrt{8}}{\sqrt{9}}\right)^2\right)\approx 0.22, \ \textrm{ and}\\
&c_{\textrm{obj}}(2) + c_{\textrm{pen}}(2) = \frac{1}{2}\left(\frac{14-3\sqrt{7}}{24}\right) + \frac{1}{2}\left(\left( \frac{-1}{\sqrt{12}}\right)^2 + \left(\frac{1 - \sqrt{7}}{\sqrt{16}} \right)^2\right)\approx 0.25.
\end{aligned}
\end{equation}
This agrees with the intuition that the center of the minimum enclosing ball represents the common information in all points without over-fitting to any subset of points, but note that the optimal order is not always the dimension of the smallest subspace. The common subspace may have dimension smaller than any of the samples or there may be no common subspace.
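The costs and the non-nestedness claimed in this example are straightforward to verify numerically; in the Python check below, `dist_sq` implements the point-to-set identity $d = \min(k,p) - \|U^TX\|_F^2$:

```python
import numpy as np

def dist_sq(U, X):
    # Squared point-to-set chordal distance: min(k, p) - ||U^T X||_F^2.
    m = min(U.shape[1], X.shape[1])
    return m - np.linalg.norm(U.T @ X, 'fro') ** 2

s = np.sqrt
X1 = np.array([[s(2)/s(3), 0], [1/s(6), 0], [1/s(6), 0],
               [0, s(7)/s(8)], [0, 1/s(8)]])
X2 = np.array([[1/s(6), 0], [s(2)/s(3), 0], [1/s(6), 0],
               [0, 1/s(8)], [0, s(7)/s(8)]])
X3 = np.array([[1/s(6)], [1/s(6)], [s(2)/s(3)], [0], [0]])

U1 = np.array([[1.0, 1, 1, 0, 0]]).T / s(3)                     # U*(1)
U2 = np.column_stack((np.array([3, 3, 2, 0, 0]) / s(22),
                      np.array([0, 0, 0, 1, 1]) / s(2)))        # U*(2)

r1 = max(dist_sq(U1, X) for X in (X1, X2, X3))   # primal cost, k = 1
r2 = max(dist_sq(U2, X) for X in (X1, X2, X3))   # primal cost, k = 2
resid = U1 - U2 @ (U2.T @ U1)    # component of U*(1) outside col(U*(2))
```

The computed costs match $\nicefrac{1}{9}$ and $\nicefrac{(14-3\sqrt{7})}{24}$, and the residual of $\*U^{*}(1)$ outside $\*U^{*}(2)$ is nonzero, confirming $\*U^{*}(1) \not\subset \*U^{*}(2)$.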
Even though the primal solutions are not always nested, a good initial guess for the dual variable will reduce computational overhead. One benefit of the subgradient approach is that $\bm{\lambda}^{*}(k)$ is computed explicitly. Thus we can initialize the algorithm with $\bm{\lambda}^{(0)}(k+1) = \bm{\lambda}^{*}(k)$. The impact of this heuristic warm-start is discussed in the experiments in Section~\ref{subsec:warm_start}.
\subsection{Related literature on order fitting for subspace averaging}
A recent work from Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} also attempts to find a central subspace of ambiguous dimension. The authors minimize the mean-squared error (MSE) between a subspace and a collection of data in the space of $n \times n$ projection matrices using the squared Frobenius norm. That is,
\begin{equation}
\label{eq:mse}
E(k) = \underset{\*U \in \textrm{Gr}(k,n)}{\min}\frac{1}{M}\sum_{i=1}^M \|U U^{T} - X_i^{} X_i^T \|_F^2.
\end{equation}
Putting aside for a moment that the current work is interested in minimizing the maximum deviation rather than the mean-squared error, there remains a central difference between the technique in~\cite{santamaria2016order} and the proposed method. The optimization of Equation~\eqref{eq:mse} is done in a vector space, after which the solution is mapped to the nearest point on the Grassmann manifold. This is subtly different than minimizing the MSE on the Grassmannian with respect to the squared chordal distance using the point-to-set interpretation of~\cite{ye_lim}. To see this, write half of the squared distance from \cite{santamaria2016order} between the central subspace and the $i$th point as
\begin{equation}
\label{eq:santa_dist}
\begin{aligned}
\frac{1}{2}\|U^{*}(k) U^{*T}(k) - X_i^{} X_i^T \|_F^2 &= \frac{k + p_i}{2} - \sum_{r=1}^{\min\{k,p_i\}} \cos^2 (\theta_r(\*U^{*}(k) ,\*X_i))\\
&= \frac{|k - p_i|}{2} + \sum_{r=1}^{\min\{k,p_i\}} \sin^2 (\theta_r(\*U^{*}(k) ,\*X_i)).
\end{aligned}
\end{equation}
In contrast, the point-to-set squared chordal distance on $\textrm{Gr}(k,n)$ is
\begin{equation}
\label{eq:chordal}
\begin{aligned}
d_{\textrm{Gr}(k,n)}(\*U^{*}(k) ,\*X_i) &= \min \big\{ d(\*U^{*}(k) ,\*Y_i) : \*Y_i \in \Omega_{*}(\*X_i)\big\} \\
&= \min \big\{\frac{1}{2}\|U^{*}(k) U^{*T}(k) - Y_i^{} Y_i^T \|_F^2 : \*Y_i \in \Omega_{*}(\*X_i)\big\} \\
&= k - \sum_{r=1}^{k} \cos^2 (\theta_r(\*U^{*}(k) ,\*Y_i))\\
&=\sum_{r=1}^{\min \{k,p_i\}} \sin^2 (\theta_r(\*U^{*}(k) ,\*X_i)) \\
\end{aligned}
\end{equation}
because $0 = \theta_{p_i+1}(\*U^{*}(k) ,\*Y_i) = \theta_{p_i+2}(\*U^{*}(k) ,\*Y_i) = \cdots = \theta_{k}(\*U^{*}(k) ,\*Y_i)$ if $p_i<k$ by the definition of $\*Y_i$ in Equation~\eqref{eq:y_def}. Thus the distances differ by $\frac{|k-p_i|}{2},$ which is half the difference in dimensions between the central subspace and the $i$th data point.
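This relationship is easy to confirm numerically. The Python sketch below (names and randomly drawn orthonormal bases are illustrative assumptions) shows that the two quantities differ by exactly $\nicefrac{|k-p_i|}{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def orth(n, p):
    # Orthonormal basis of a random p-dimensional subspace of R^n.
    return np.linalg.qr(rng.standard_normal((n, p)))[0]

U, X = orth(8, 5), orth(8, 3)                              # k = 5, p = 3
proj = 0.5 * np.linalg.norm(U @ U.T - X @ X.T, 'fro') ** 2 # projection-F
chordal = min(5, 3) - np.linalg.norm(U.T @ X, 'fro') ** 2  # point-to-set
gap = proj - chordal                                       # |k - p| / 2
```

For any pair of subspaces the gap equals $\nicefrac{|k-p|}{2}$, here $1$, independent of the random draw.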
The slight difference in distance measurements lends itself to an interesting interpretation when determining the appropriate rank of the central subspace. The solution to
\begin{equation}
\label{eq:flag}
\*U^{*}(k) = \underset{\*U \in \textrm{Gr}(k,n)}{\argmin}\frac{1}{M}\sum_{i=1}^M \|U^{} U^{T} - X_i^{} X_i^T \|_F^2
\end{equation}
for a fixed $k$ is the dominant $k$-dimensional eigenspace of the sum $\frac{1}{M}\sum_{i=1}^M X_i^{} X_i^T$. That is, if
\begin{equation}
\label{eq:decomp}
\frac{1}{M}\sum_{i=1}^M X_i^{} X_i^T = F D F^T
\end{equation} is an eigendecomposition with eigenvectors $F = [\*f_1, \*f_2, \ldots, \*f_{R}]$ and associated eigenvalues $d_{1} \geq d_{2} \geq \cdots \geq d_{R},$ then the solution to Equation~\eqref{eq:flag} is $\*U^{*}(k) = [\*f_1, \*f_2, \ldots, \*f_k].$ Note that this $\*U^{*}(k)$ is not the same subspace as the center of the minimum enclosing ball. The MSE in Equation~\eqref{eq:mse} can be written as a function of all $R$ eigenvalues,
\begin{equation}
\label{eq:mse_eigs}
E(k) = \sum_{r=1}^k \left(1-d_r\right) + \sum_{r=k+1}^R d_r,
\end{equation}
and the minimum of Equation~\eqref{eq:mse_eigs} is achieved when $k^{*}$ is the smallest value for which $d_{k+1} < 0.5$. This eigenvalue threshold is then fixed regardless of the dimension of the ambient space, and as we will see in Section~\ref{subsec:order_selection}, the selected dimension could differ drastically for noisy data depending on the ambient dimension.
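The threshold behavior follows from $E(k) - E(k-1) = 1 - 2d_k$, so $E$ decreases exactly while $d_k > \nicefrac{1}{2}$. This can be sketched with an assumed spectrum (illustrative values, not data from the cited work):

```python
import numpy as np

def mse_order(d):
    # d: eigenvalues of (1/M) sum_i X_i X_i^T, sorted in descending order.
    # E(k) - E(k-1) = 1 - 2 d_k, so E(k) decreases while d_k > 1/2.
    E = [float(np.sum(1 - d[:k]) + np.sum(d[k:]))
         for k in range(len(d) + 1)]
    return int(np.argmin(E)), E

d = np.array([0.95, 0.80, 0.60, 0.40, 0.15, 0.10])   # assumed spectrum
k_star, E = mse_order(d)
```

The minimizer coincides with the number of eigenvalues above the fixed threshold of $0.5$, here $k^* = 3$.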
For a different interpretation of the $k^{*}$ that minimizes Equation~\eqref{eq:mse} we can rewrite Equation~\eqref{eq:mse_eigs} as a function of the angles between each eigenvector and the subspaces,
\begin{eqnarray}
E(k) &=& \sum_{r=1}^k 1-\*f_r^T(\frac{1}{M}\sum_{i=1}^M X_i^{} X_i^T)\*f_r + \sum_{r=k+1}^R \*f_r^T(\frac{1}{M}\sum_{i=1}^M X_i^{} X_i^T)\*f_r\\
&=& \sum_{r=1}^k 1 - \frac{1}{M}\sum_{i=1}^M \cos^2(\theta(\*f_r,\*X_i)) + \sum_{r=k+1}^R \frac{1}{M}\sum_{i=1}^M \cos^2(\theta(\*f_r,\*X_i))\\
&=&\sum_{r=1}^k \frac{1}{M}\sum_{i=1}^M \sin^2(\theta(\*f_r,\*X_i)) + \sum_{r=k+1}^R \frac{1}{M}\sum_{i=1}^M \sin^2(\frac{\pi}{2}- \theta(\*f_r,\*X_i)) \label{eq:avg_angles3}\\
&=&\sum_{r=1}^k \frac{1}{M}\sum_{i=1}^M \sin^2(\theta(\*f_r,\*X_i)) + \sum_{r=k+1}^R \frac{1}{M}\sum_{i=1}^M \sin^2(\theta(\*f_r,\*X_i^{\perp})) \label{eq:avg_angles4} \\
&=&\sum_{r=1}^k \frac{1}{M}\sum_{i=1}^M d_{\textrm{Gr}(1,n)}(\*f_r,\*X_i) + \sum_{r=k+1}^R \frac{1}{M}\sum_{i=1}^M d_{\textrm{Gr}(1,n)}(\*f_r,\*X_i^{\perp}). \label{eq:avg_angles5}
\end{eqnarray}
The equality between \eqref{eq:avg_angles3} and \eqref{eq:avg_angles4} is due to~\cite[Thm.~2.7]{knyazev2006majorization} which implies that $\frac{\pi}{2}- \theta(\*f_r,\*X_i) = \theta(\*f_r,\*X_i^{\perp}).$ Note, however, that Equation~\eqref{eq:avg_angles5} is \textit{not} equivalent to
\begin{equation}
\frac{1}{M}\sum_{i=1}^M d_{\textrm{Gr}(k,n)}(\*U^{*}(k),\*X_i) + \frac{1}{M}\sum_{i=1}^M d_{\textrm{Gr}(R-k,n)}(\*U^{*\perp}(k),\*X_i^{\perp})
\end{equation}
because linear combinations of the eigenvectors, $\*f_r,$ are not included in the expression. A new interpretation of the MSE-minimizing $k$ becomes fairly apparent in light of Equation~\eqref{eq:avg_angles5}. The optimal $k^{*}$ is the one that minimizes the mean-squared chordal distance between $\left\{ \*f_{1}, \ldots, \*f_k\right\}$ and the data points, plus the mean-squared chordal distance between $\left\{ \*f_{k+1}, \ldots, \*f_R \right\}$ and the orthogonal complements of the data points.
\subsection{Hybrid rule}
\label{subsec:hybrid}
It is possible to create a hybrid of the order-selection rule of~\cite{santamaria2016order} and the proposed method with a slight modification.
In~\cite{garg2019subspace}, a robustification of the technique in~\cite{santamaria2016order} is proposed that leads to a weighted eigenvalue decomposition at optimality. The weights are determined using a variety of robust objective functions via a majorization-minimization scheme, which results in a down-weighting of outliers in the data. By minimizing the mean-squared error of the \textit{weighted} average (similar to Equation~\eqref{eq:mse}), this amounts to a hard eigenvalue threshold with the order chosen to be the number of dimensions with eigenvalues greater than $0.5$.
For the hybrid method, weights will come from the values of the dual variable, $\bm{\lambda}^{*}(k),$ at optimality. Since these values depend on the parameter $k,$ the hard eigenvalue threshold is not applicable. Let $d_1(k) \geq d_2(k) \geq \cdots \geq d_R(k)$ be the eigenvalues of $\sum_{i=1}^{M} \lambda^{*}_i(k) X_i^{} X_i^T$ where $\bm{\lambda}^{*}(k)$ is the vector of optimal dual variables computed for the GMEB on Gr$(k,n)$ using the proposed algorithm. For $k=0,$ let $\lambda_i^{*}(0) =\frac{1}{M}$ for $i=1,\ldots,M.$ We define a modified version of the MSE from Equation~\eqref{eq:mse_eigs} as
\begin{equation}
\label{eq:modified_mse}
\tilde{E}(k) = \sum_{r=1}^k \left(1-d_r(k)\right) + \sum_{r=k+1}^R d_r(k).
\end{equation}
The order-selection rule of~\cite{santamaria2016order} applied to the GMEB center is then
\begin{equation}
\label{eq:modified_santa_rule}
k^* = \underset{k=0,\ldots, \max_i\{\textrm{dim}(\*X_i)\}}{\argmin}\tilde{E}(k).
\end{equation}
It should be clear that the eigenvalues $\{d_r(k)\}_{r=1}^R$ will differ for different values of $\bm{\lambda}^{*}(k).$ In the experiments of Section~\ref{subsec:order_selection}, this combined method is referred to as ``Hybrid'' and performs favorably in all tests, outperforming the other techniques in $2$ out of $3$ scenarios.
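A Python sketch of the hybrid rule follows, with the subgradient solver abstracted behind a callable (an assumption of this illustration); uniform weights are mocked in place of $\bm{\lambda}^{*}(k)$ for the toy data set, so the sketch reduces to the unweighted spectrum:

```python
import numpy as np

def hybrid_order(data, lambda_star):
    # lambda_star(k) should return the dual weights from the GMEB
    # subgradient solver; uniform weights are prescribed for k = 0.
    M, kmax = len(data), max(X.shape[1] for X in data)
    E = {}
    for k in range(kmax + 1):
        lam = np.full(M, 1.0 / M) if k == 0 else lambda_star(k)
        S = sum(l * X @ X.T for l, X in zip(lam, data))
        d = np.sort(np.linalg.eigvalsh(S))[::-1]     # descending spectrum
        E[k] = float(np.sum(1 - d[:k]) + np.sum(d[k:]))
    return min(E, key=E.get), E

I5 = np.eye(5)
data = [I5[:, :2]] * 3                    # three copies of span(e1, e2)
k_star, E = hybrid_order(data, lambda k: np.full(3, 1/3))  # mocked weights
```

With genuine dual weights from the solver, the spectrum, and hence the selected order, changes with $k$ as described above.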
\section{Synthetic data generation}
\label{sec:data}
The numerical experiments in Section~\ref{sec:numerical} require data for which the ground truth is known, and ideally data for which the center of the GMEB is distinct from the other generalized Grassmannian means. Thus, in this section we propose two different models for sampling points nonuniformly from a unit ball on the Grassmannian. The first is an asymmetrical nested ball structure, and the second samples more densely within a randomly selected arc of the boundary of a unit ball.
\subsection{Asymmetrical nested ball model}
\label{subsec:nested}
A collection of subspaces, $\mathcal{D} = \{\*X_i\}_{i=1}^{M},$ is uniformly sampled from two balls, $\mathcal{B}_{\epsilon_2}(\*Z_2) \subset \mathcal{B}_{\epsilon_1}(\*Z_1) \subset \textrm{Gr}(k_0,n),$ with centers $\*Z_1$ and $\*Z_2$ and corresponding radii $\epsilon_1 > \epsilon_2,$ respectively. The larger ball, $\mathcal{B}_{\epsilon_1}(\*Z_1),$ is the minimum enclosing ball of the data, so that $\*U^*(k_0) = \*Z_1$. The smaller ball is fully contained within the larger ball, but $\*Z_1 \notin \mathcal{B}_{\epsilon_2}(\*Z_2)$. Let $M_1$ and $M_2$ be the number of points sampled from $\mathcal{B}_{\epsilon_1}(\*Z_1)$ and $\mathcal{B}_{\epsilon_2}(\*Z_2),$ respectively, with $M = M_1 + M_2$. When $M_2 = 0$, the generalized Grassmannian means are all equal to the point $\*Z_1$. As more points are sampled from $\mathcal{B}_{\epsilon_2}(\*Z_2)$ and the fraction $\nicefrac{M_2}{M_1}$ grows, the generalized Grassmannian means for $p < \infty$ move away from $\*Z_1$ in the direction of $\*Z_2$, making the averages distinct without affecting the center of the GMEB. The radius of the large ball, $\epsilon_1,$ controls the similarity of the data points.
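A simplified sampler conveys the idea (an illustration only: it rotates a single basis vector of the center so that the squared point-to-set chordal distance is at most $\epsilon$, rather than reproducing the uniform sampling scheme used for the paper's experiments; all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_ball(Z, eps):
    # Rotate the first basis vector of Z by a random angle theta with
    # sin^2(theta) <= eps toward a random direction in the orthogonal
    # complement of col(Z). Simplified sampler for illustration only.
    n, k = Z.shape
    w = rng.standard_normal(n)
    w -= Z @ (Z.T @ w)                   # project out col(Z)
    w /= np.linalg.norm(w)
    theta = np.arcsin(np.sqrt(eps * rng.uniform()))
    U = Z.copy()
    U[:, 0] = np.cos(theta) * Z[:, 0] + np.sin(theta) * w
    return U

Z = np.linalg.qr(rng.standard_normal((6, 2)))[0]   # a center on Gr(2, 6)
X = sample_in_ball(Z, 0.2)
d = 2 - np.linalg.norm(Z.T @ X, 'fro') ** 2        # squared chordal distance
```

The returned basis remains orthonormal because the rotated column stays orthogonal to the untouched columns, and the single nonzero principal angle guarantees $d = \sin^2\theta \leq \epsilon$.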
\begin{figure*}[!t]
\centering
\input{gmeb_fig03.tex}
\caption{Two examples of point sets from Gr$(1,3)$ generated using the nested ball model embedded into $\mathbb R^2$ by multidimensional scaling. The points from $\mathcal{B}_1(\*Z_1)$ are indicated with x's, points from $\mathcal{B}_{0.2}(\*Z_2)$ are marked with white circles, the true center is the green square, the Karcher mean is the blue circle, and the estimated GMEB center is the yellow diamond. }
\label{fig:3d}
\end{figure*}
As described, the data points are all sampled from a single manifold, Gr$(k_0,n).$ If $\epsilon_1$ is small enough, then the optimal rank for the GMEB (or any of the generalized Grassmannian means) is $k^* = k_0$. This construction can be generalized in two ways.
\begin{enumerate}
\item For $i=1,\ldots,M$, the basis for $\*X_i$ can be completed to a $p_i$-dimensional subspace by taking the span of $X_i$ and $p_i-k_0$ random dimensions. If the $p_i-k_0$ random dimensions are mutually orthogonal for $i=1,\ldots, M$, then the optimal rank for the GMEB is still $k^* = k_0$.
\item Points from the large ball can be sampled from one manifold, $\mathcal{B}_{\epsilon_1}(\*Z_1) \subset \textrm{Gr}(k_1,n)$ while points from the small ball are sampled from another, $\mathcal{B}_{\epsilon_2}(\*Z_2) \subset \textrm{Gr}(k_2,n).$ If $k_1 \neq k_2$, the optimal rank of the central subspace is ambiguous. Experiments show that using the proposed order selection rule, $k^* = k_1$ independent of other parameters, but using the criteria of \cite{santamaria2016order}, $k^*$ depends on $\epsilon_1$ and $\nicefrac{M_2}{M_1}$.
\end{enumerate}
As an illustrative example, Figure~\ref{fig:3d} shows $2$-dimensional embeddings via multidimensional scaling of data sets on Gr$(1,3)$ that have been generated according to the asymmetrical nested ball model. The yellow diamond indicates the center of the GMEB (computed via the proposed method) and the blue circle marks the Karcher mean of each data collection.
\subsection{Unit ball with higher sampling density from a random arc}
\label{subsec:nonuni}
Another practical scenario where the GMEB center may differ from other generalized Grassmannian means is when data has been sampled unevenly. This setting is simulated by selecting a random arc from the boundary of a unit ball and sampling additional points from that region. A collection of subspaces, $\mathcal{D} = \{\*X_i\}_{i=1}^{M},$ is uniformly sampled from the ball $\mathcal{B}_{\epsilon_1}(\*Z_1) \subset \textrm{Gr}(k_0,n)$ with center at $\*Z_1$ and radius $\epsilon_1$. $M_1$ points are sampled from $\mathcal{B}_{\epsilon_1}(\*Z_1)$ so that $\*U^*(k_0) = \*Z_1$. Two points are randomly selected from the boundary of $\mathcal{B}_{\epsilon_1}(\*Z_1),$ and $M_2$ additional points are uniformly sampled from the arc connecting them on the boundary to create $M = M_1 + M_2$ samples. The data points are all sampled from a single manifold, Gr$(k_0,n),$ and for sufficiently small $\epsilon_1,$ the optimal rank for the GMEB (or any of the generalized Grassmannian means) is $k^* = k_0$. To generalize this construction, additional dimensions can be included to create points from a disjoint union of Grassmannians.
\begin{figure*}[!t]
\centering
\input{gmeb_fig04.tex}
\caption{Two examples of point sets from Gr$(1,3)$ on the unit ball, $\mathcal{B}_1(\*Z_1)$, sampled with nonuniform density on the boundary, embedded into $\mathbb R^2$ by multidimensional scaling. Points from $\mathcal{B}_1(\*Z_1)$ are indicated with x's, the true center is the green square, the Karcher mean is the blue circle, and the estimated GMEB center is the yellow diamond. }
\label{fig:4d}
\end{figure*}
For $i=1,\ldots,M$, the basis for $\*X_i$ can be completed to a $p_i$-dimensional subspace by taking the span of $X_i$ and $p_i-k_0$ random dimensions. If the $p_i-k_0$ random dimensions are mutually orthogonal for $i=1,\ldots, M$, then the optimal rank for the GMEB is still $k^* = k_0$. Figure~\ref{fig:4d} shows $2$-dimensional embeddings via multidimensional scaling of data sets on Gr$(1,3)$ that have been generated as a unit ball with higher sampling density along a random arc. The yellow diamond indicates the center of the GMEB (computed via the proposed method) and the blue circle marks the Karcher mean of each data collection.
It should be noted that, under either data model, the point at the center of $\mathcal{B}_{\epsilon_1}(\*Z_1)$ is the ground-truth center of the minimum enclosing ball of the data collection, $\*U(k^*),$ only if the points have been sampled with a high enough density from the surface of the ball. The minimum number of uniformly distributed points needed grows with the ambient dimension, $n,$ so in high-dimensional spaces the number of points, $M,$ needed to create a ground-truth center may become prohibitively large. The experimental data can be generated exclusively from the boundary of the balls, or interior points can be added.\footnote{Matlab code for the data generation procedures, algorithms, and the numerical experiments are available at \url{https://sites.google.com/site/nicolasgillis/code}.}
\section{Numerical experiments}
\label{sec:numerical}
The experiments in this section are meant to illustrate three properties of the proposed GMEB algorithm and associated order-selection rule. First, we demonstrate the speed and accuracy of the proposed method for estimating the center of the GMEB. Second, we demonstrate that a warm-start on Gr$(k+1,n)$ using the optimal solution from Gr$(k,n)$ can reduce the number of iterations required for the algorithm to converge. And finally, we compare results of the proposed order-selection rule and the rule of~\cite{santamaria2016order} in a variety of scenarios to gain intuition about when and how they differ.
\subsection{Experiment 1: Accuracy of the GMEB}
\label{subsec:accuracy}
\begin{figure*}[!t]
\begin{subfigure}[t]{0.48\linewidth}
\centering
\input{gmeb_fig05a.tex}
\caption{\label{fig:ca_error1}Distance to the groundtruth at the $i$th iteration, $d(\*U^{(i)}(3),\*U^*(3))$}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\linewidth}%
\centering
\input{gmeb_fig05b.tex}
\caption{\label{fig:ca_errorVStime1}Distance to the groundtruth at time $t$, $d(\*U^{(i)}(3),\*U^*(3))$}
\end{subfigure}%
\caption{Median distance to the groundtruth and cumulative time for the GMEB on Gr$(3,10)$ of data generated with the asymmetrical nested ball model from Section~\ref{subsec:nested} over $100$ Monte Carlo trials. The data consists of $100$ points in Gr$(3,10).$ The proposed method is indicated by the dashed purple line and the method of Renard \textit{et al}.\@ ~\cite{renard2018grassmannian} is represented by the solid turquoise line. The shaded regions span the extreme values.}%
\label{fig:center_accuracy1}
\end{figure*}
To test the accuracy and efficiency of the proposed dual subgradient approach, data sets are generated according to each of the two data models from Section~\ref{sec:data}. For each data collection, the GMEB center is approximated using the proposed method and the algorithm of Renard \textit{et al}.\@ ~\cite{renard2018grassmannian}, and the residual error is measured as the distance between the approximate centers and the true centers. For the first data set, $M=100$ points are sampled from Gr$(3,10)$ using the asymmetrical nested ball model in Section~\ref{subsec:nested} with neither of the proposed generalizations. That is, $k_0 = k_1 = k_2=3$ so that all points are sampled from the same Grassmann manifold. $M_1 = 70$ of the points come from the boundary of $\mathcal{B}_1(\*Z_1)$ and $M_2 = 30$ from the boundary of $\mathcal{B}_{0.125}(\*Z_2)$. No points are sampled from the interior of either ball. Both algorithms are initialized using the extrinsic mean of the data~\cite{marrinan2014,rentmeesters2010efficient}, that is, $\bm{\lambda}^{(0)} = [ \nicefrac{1}{100},\nicefrac{1}{100}, \ldots, \nicefrac{1}{100}]^T,$ and $\*U^{(0)}(3)$ is the dominant $3$-dimensional eigenspace of $\sum_{i=1}^{100} \lambda^{(0)}_i X_i^{} X_i^T$. The groundtruth center is $\*U^*(3) = \*Z_1.$
Figure~\ref{fig:ca_error1} shows the median, over $100$ Monte Carlo trials, of the distance between the iterate with the lowest primal cost and the ground-truth center. Figure~\ref{fig:ca_errorVStime1} shows the same median distance relative to cumulative computation time for each algorithm. In both plots the proposed method is indicated by the dashed purple line and the method of~\cite{renard2018grassmannian} is represented by the solid turquoise line. The shaded regions denote the complete range of values across all trials. This is a setting in which all data points live on a single Grassmann manifold. Therefore the point-to-set distances reduce to the traditional Grassmannian distances and the technique of~\cite{renard2018grassmannian} is equivalent to that of~\cite{arnaudon2013approximating}.
The proposed method clearly outperforms the existing technique in terms of accuracy relative to both iterations and computation time for this collection of data. However, the cumulative computation time is affected by many of the parameters in the experimental setup. Let $P = \max_i\{\textrm{dim}(\*X_i)\}.$ For the technique of~\cite{renard2018grassmannian,arnaudon2013approximating}, the per iteration complexity is $\mathcal{O}\left(MP(nk+ k^2)\right)$ due to the $M$ matrix products and subsequent thin SVDs. The proposed method computes these same $M$ products and SVDs, but must additionally compute the compact SVD of a matrix of size $n \times MP$ in order to get the updated center. Assuming that $n \leq MP$ (as it is in all the experiments), the complexity of the proposed algorithm is then $\mathcal{O}\left(MP(nk+k^2 + n^2)\right).$ There are an additional $M$ SVDs for each back-tracking step taken, but those steps are infrequent and thus dominated by the other terms. From these complexities we can see that an increase in the ambient dimension, $n,$ number of subspaces, $M$, or subspace dimension, $P,$ would all lead to a relative decrease in the efficiency of the proposed method.
In the second example we employ the data model from Section~\ref{subsec:nonuni}, with the inclusion of interior points and the generalization that the data points come from a disjoint union of Grassmannians, that is, they are subspaces of differing dimensions. Initially, $M_1=100$ points are sampled from the boundary of $\mathcal{B}_1(\*Z_1)$ on Gr$(3,15)$. An additional $M_2=100$ points are selected from an arc on the boundary of the ball between two randomly selected points. Finally $M_3=100$ points are selected uniformly at random from the interior of the ball. Each of the $M = 300$ points is then completed to a basis for a $p_i$-dimensional subspace where $p_i$ is randomly selected from the set $\mathcal{P} = \{3,4,5,6\}$. Both algorithms are again initialized using the extrinsic mean of the data on Gr$(3,15)$ where $\bm{\lambda}^{(0)} = [ \nicefrac{1}{300},\nicefrac{1}{300}, \ldots, \nicefrac{1}{300}]^T,$ and $\*U^{(0)}(3)$ is the dominant $3$-dimensional eigenspace of $\sum_{i=1}^{300} \lambda^{(0)}_i X_i^{} X_i^T$. Figure~\ref{fig:ca_error2} shows the median, over $100$ Monte Carlo trials, of the distance between the iterate with the lowest primal cost and the ground-truth center, while Figure~\ref{fig:ca_errorVStime2} shows the median error relative to cumulative computation time. The proposed method is indicated by the dashed purple line and the method of Renard \textit{et al}.\@ ~\cite{renard2018grassmannian} is represented by the solid turquoise line. The shaded regions span the extreme values. The groundtruth center is $\*U^*(3) = \*Z_1.$
\begin{figure*}[!t]
\begin{subfigure}[t]{0.48\linewidth}
\centering
\input{gmeb_fig06a.tex}
\caption{\label{fig:ca_error2}Distance to the groundtruth at the $i$th iteration, $d(\*U^{(i)}(3),\*U^*(3))$}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\linewidth}
\centering
\input{gmeb_fig06b.tex}
\caption{\label{fig:ca_errorVStime2}Distance to the groundtruth at time $t$, $d(\*U^{(i)}(3),\*U^*(3))$}
\end{subfigure}%
\caption{Median distance to the groundtruth and cumulative computation time for the GMEB on Gr$(3,15)$ of data generated with the nonuniform sampling model from Section~\ref{subsec:nonuni} over $100$ Monte Carlo trials. The data consists of $300$ points in $\coprod_{p \in \mathcal{P}}{\textrm{Gr}(p,15)}$ for $\mathcal{P} = \{3,4,5,6\}.$ The proposed method is indicated by the dashed purple line and the method of Renard \textit{et al}.\@ ~\cite{renard2018grassmannian} is represented by the solid turquoise line. The shaded regions span the extreme values.}
\label{fig:center_accuracy2}
\end{figure*}
As shown in Figure~\ref{fig:ca_error2}, the proposed method achieves a higher accuracy in fewer iterations than~\cite{renard2018grassmannian}. However, the greater per-iteration complexity of the proposed method means that the primal algorithm initially achieves a lower error, as shown in Figure~\ref{fig:ca_errorVStime2}. The increased number of points in the data set, and specifically in the support of the GMEB, leads to slower overall convergence for the proposed algorithm. This reduced efficiency would grow with the size of the data; however, the subgradient technique consistently achieves a lower overall error given enough time. Moreover, the proposed method provides duality-gap optimality guarantees.
One direction for future work is to combine the two methods to get the best of both worlds: fast initial estimates of the center and high-accuracy solutions over time. Using $\*U^{(t)}(k)$ computed via $t$ iterations of~\cite{renard2018grassmannian} as an estimate of the center, we can find dual-feasible variables that are non-zero only for points in the support set of the enclosing ball centered at $\*U^{(t)}(k).$ For example, let $\mathcal{I} = \{i : d_{\textrm{Gr}(k,n)}(\*U^{(t)}(k),\*X_i) = \max_i d_{\textrm{Gr}(k,n)}(\*U^{(t)}(k),\*X_i)\}.$ Then let $\lambda_i^{(0)} = \nicefrac{1}{|\mathcal{I}|}$ for $i \in \mathcal{I}$ and $\lambda_i^{(0)} = 0 $ otherwise, and proceed with the subgradient algorithm from this warm-start. An alternative initialization strategy is proposed in Section~\ref{subsec:warm_start}.
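A minimal NumPy sketch of this warm-start initialization (our own illustration, not code from the paper; it assumes the distances $d_{\textrm{Gr}(k,n)}(\*U^{(t)}(k),\*X_i)$ have already been computed and stored in an array, and the function name is ours):

```python
import numpy as np

def warm_start_duals(dists, tol=1e-12):
    """Dual-feasible warm start supported on the farthest points.

    dists[i] holds d(U^(t)(k), X_i).  The support set I collects the
    indices attaining the maximum distance; the returned dual vector
    puts uniform weight 1/|I| on I and zero weight elsewhere.
    """
    dists = np.asarray(dists, dtype=float)
    support = np.flatnonzero(dists >= dists.max() - tol)
    lam = np.zeros(dists.size)
    lam[support] = 1.0 / support.size
    return lam
```

The subgradient algorithm would then be run from this dual vector instead of the uniform vector $[\nicefrac{1}{M},\ldots,\nicefrac{1}{M}]^T.$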
\subsection{Experiment 2: Faster convergence by initializing with previous solutions}
\label{subsec:warm_start}
\begin{figure*}[!t]
\begin{subfigure}[t]{0.48\linewidth}
\centering
\input{gmeb_fig07a.tex}
\caption{\label{fig:warm_iter1} Results from $100$ trials with the asymmetrical nested ball model where $k^* = 4$ and $M = 50$ points sampled from Gr$(p_i,10)$ with $p_i \in \{4,5,6\}$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\linewidth}
\centering
\input{gmeb_fig07b.tex}
\caption{\label{fig:warm_iter2} Results from $100$ trials with the nonuniform sampling model where $k^* = 4$ and $M = 300$ points sampled from Gr$(p_i,10)$ with $p_i \in \{4,5,6\}$.}
\end{subfigure}
\caption{\label{fig:warm}Number of iterations needed for the proposed subgradient algorithm to reach a stationary point using a naive initialization, $\bm{\lambda}^{(0)}(k+1) = [ \nicefrac{1}{M},\nicefrac{1}{M}, \ldots, \nicefrac{1}{M}]^T$ (light orange), and a warm start, $\bm{\lambda}^{(0)}(k+1) =\bm{\lambda}^{*}(k)$ (red) for two data sets. }
\end{figure*}
To apply the order selection criteria in Section~\ref{sec:ord_select}, the GMEB center must be computed for $k=1,\ldots,\max_i\{\textrm{dim}(\*X_i)\}.$ The example in Section~\ref{subsec:not_nested} demonstrates that the subspace at the center of the minimum enclosing ball cannot be built in a greedy fashion, because the center $\*U^*(k-1) \in \textrm{Gr}(k-1,n)$ is not in general a subspace of the center $\*U^*(k) \in \textrm{Gr}(k,n)$. However, the solutions are often \textit{nearly} nested. As a result, the vector, $\bm{\lambda}^*(k-1),$ that provides the optimal value of the dual objective function for the problem on Gr$(k-1,n)$ can offer a good initialization for the dual subgradient algorithm used to find the GMEB center on Gr$(k,n),$ significantly reducing the total computation time needed to identify the optimal dimension, $k^*$. By way of comparison, simple initializations of $\bm{\lambda}^{(0)}(k)$ would be to randomly select the dual variables or to set all of the dual variables equal so that $\bm{\lambda}^{(0)}(k)=[\nicefrac{1}{M},\ldots, \nicefrac{1}{M}]^T.$ For these experiments the latter strategy is chosen. The initial iterate for the primal variable when the dual variables are all equal is then the uniformly weighted extrinsic mean of the data, that is, $\*U^{(0)}(k)$ is the dominant $k$-dimensional eigenspace of $\sum_{i=1}^{M} \lambda^{(0)}_i X_i^{} X_i^T.$ On Gr$(1,n),$ no warm-start initialization is possible because $\bm{\lambda}^{*}(0)$ is undefined, so the algorithm is run using only the naive initialization. For $k=2,\ldots,\max_i\{\textrm{dim}(\*X_i)\}$ Figure~\ref{fig:warm} illustrates the relative speed-up due to smart initialization by comparing the number of iterations needed to find a stationary point for different choices of the initial dual variable using each of the data models. Both data models are intentionally structured so that the extrinsic mean is not the center of the GMEB on Gr$(k^*,n)$. 
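The naive initialization just described can be sketched concretely (our own NumPy code, not from the paper; `X_list` holds the orthonormal bases $X_i$ as $n\times p_i$ arrays):

```python
import numpy as np

def extrinsic_mean_basis(X_list, k):
    """Naive primal initialization: with uniform duals lambda_i = 1/M,
    U^(0)(k) is the dominant k-dimensional eigenspace of
    sum_i lambda_i X_i X_i^T (the extrinsic mean of the data)."""
    M = len(X_list)
    n = X_list[0].shape[0]
    S = np.zeros((n, n))
    for X in X_list:
        S += X @ X.T / M
    # eigh returns eigenvalues in ascending order, so the dominant
    # k-dimensional eigenspace is spanned by the last k eigenvectors.
    _, vecs = np.linalg.eigh(S)
    return vecs[:, -k:]
```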
The naive initialization is indicated by the light orange box-and-whisker plots, while the warm-start is denoted with red. The black dots mark the mean number of iterations and the solid line is the median.
In Figure~\ref{fig:warm_iter1} the data has been generated using the asymmetrical nested ball model with $M=50$ points sampled from Gr$(p_i,10)$ for $p_i \in \{4,5,6\}$ and an optimal dimension of $k^*=4$. The warm start converged in fewer iterations than the naive initialization in $359$ out of $500$ possible trials. An experiment using data generated by sampling more densely from a randomly selected arc of a unit ball is displayed in Figure~\ref{fig:warm_iter2}. Here, $M=300$ points were generated on Gr$(p_i,10)$ with $p_i \in \{4,5,6\}$ where $k^*=4$. In $415$ out of $500$ possible trials, the warm start converged in fewer iterations than the naive initialization.
\subsection{Experiment 3: Order-selection comparison}
\label{subsec:order_selection}
The previous experiments demonstrated the effectiveness of the proposed approach for computing the subspace at the center of the GMEB in a noise-free scenario. However, the end goal is to find a central subspace \textit{and} the optimal size to best represent the common dimensions in a collection of data. Adding noise to the subspaces makes it difficult to identify how many common dimensions exist; thus the third experiment compares the ability of the proposed order-selection rule to identify the optimal dimension of the common subspace with that of the technique from Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} as the difficulty of the task varies.
In many machine learning applications, extracting a low-rank common subspace from data is a pre-processing task and the rank is selected with little care. Heuristic solutions often focus on locating the elbow of the scree plot, that is, computing the SVD of the concatenated data sets, finding the singular values that represent the significant information, and keeping the dimensions corresponding to these singular values. This can be done with a variety of techniques, such as the L-method~\cite{salvador2004determining}, which estimates the elbow as the intersection of the two lines that minimize the root mean-squared error of the projection of the points of the scree plot onto the lines, the method of~\cite{zhu2006automatic}, which maximizes the profile log-likelihood under an independence assumption, or even visual inspection of the scree plot to identify the first significant change in the first derivative~\cite{steyvers2006multidimensional}. To justify the need for a more principled way of selecting a subspace dimension, we additionally compare to the elbow of the scree plot computed via the L-method, which we expect to perform poorly. In the experiments this technique is denoted ``SVD.''
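For illustration, here is a minimal sketch of the L-method as described above (our own implementation, not the reference code of~\cite{salvador2004determining}): for every candidate split of the scree plot, fit a least-squares line to each side and keep the split with the smallest size-weighted sum of the two RMSEs.

```python
import numpy as np

def l_method_elbow(sigma):
    """Elbow of a scree plot via the L-method: for every candidate split,
    fit least-squares lines to the left and right portions of the plot and
    return the split minimizing the size-weighted sum of the two RMSEs.
    Each side needs at least two points, so 0 and n are never selected."""
    y = np.asarray(sigma, dtype=float)
    x = np.arange(y.size)
    best_c, best_err = None, np.inf
    for c in range(2, y.size - 1):
        err = 0.0
        for xs, ys in ((x[:c], y[:c]), (x[c:], y[c:])):
            coef = np.polyfit(xs, ys, 1)
            rmse = np.sqrt(np.mean((ys - np.polyval(coef, xs)) ** 2))
            err += rmse * xs.size
        if err < best_err:
            best_c, best_err = c, err
    return best_c
```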
Figure~\ref{fig:opt_dim2} shows a comparison of order-selection rules for $M=20$ points generated using the asymmetrical nested ball model from Section~\ref{subsec:nested} with both generalizations. The data consists of $M_1 = 10$ points sampled uniformly from the boundary of $\mathcal{B}_{1}(\*Z_1) \subset \textrm{Gr}(10,n)$ and $M_2=10$ points sampled from the boundary of $\mathcal{B}_{.5}(\*Z_2) \subset \textrm{Gr}(15,n).$ Each of the points is then completed to a basis for a point on Gr$(p_i,n)$ for $p_i \in \{10,11, \ldots,20\}$ and $n = 20, 30, \ldots, 200.$ Zero-mean Gaussian noise is added to each basis to create noisy data sets. The signal-to-noise ratio (SNR) of the data is the total power of the signal divided by the total power of the noise. In order to have the same SNR for each subspace despite differing dimensions, the noise variance per component is scaled by the number of subspace dimensions. Since $X_i$ is an orthonormal basis for $\*X_i,$ the magnitude of each basis vector is $1.$ Thus the total power of the signal subspace is $k^*,$ and the SNR is computed as $\textrm{SNR} = 10\log_{10}(\nicefrac{k^*}{\sigma_N^2}),$ where $\sigma_N^2$ is the total variance of the noise. In this example the order of the common subspace is $k^* = 10$ and $\sigma_N^2 = 1.259,$ meaning that the data has an SNR of $9$dB.
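The SNR calibration described above can be sketched numerically (the helper names are ours); with $k^*=10$ and $\sigma_N^2 = 1.259$ the formula indeed gives approximately $9$dB:

```python
import math

def snr_db(k_star, sigma_sq):
    """SNR = 10 log10(k* / sigma_N^2): each of the k* basis vectors has
    unit power, so the total signal power is k* and the SNR depends only
    on the ratio of k* to the total noise variance sigma_N^2."""
    return 10.0 * math.log10(k_star / sigma_sq)

def noise_variance_for(k_star, target_db):
    """Invert the SNR formula to calibrate the total noise variance."""
    return k_star / (10.0 ** (target_db / 10.0))
```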
\begin{figure*}[!t]
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig08a.tex}
\caption{\label{fig:optdim_acc2}Accuracy, $\frac{\textrm{Number of times }k^*=10}{\textrm{Number of trials}}$}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig08b.tex}
\caption{\label{fig:optdim_mv2}Mean selected order}
\end{subfigure}
\caption{\label{fig:opt_dim2} Order-selection accuracy and mean selected order relative to the ambient dimension of the data from $100$ Monte Carlo trials using the proposed order-selection rule (purple dashed line with triangle markers), the method of Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} (pink solid line with circle markers), the hybrid method (turquoise dotted line with square markers), and the elbow point of the SVD (orange dash-dotted line with circle markers). The data consists of $20$ points from $\coprod_{p \in \mathcal{P}}{\textrm{Gr}(p,n)}$ for $\mathcal{P} = \{10,11, \ldots,20\}$ and $n = 20, 30, \ldots, 200$ with an SNR of $9$dB generated according to the model in Section~\ref{subsec:nested}.}
\end{figure*}
Figure~\ref{fig:optdim_acc2} shows the percentage of $100$ Monte Carlo trials for which the proposed order-selection rule (purple dashed line with triangle markers), the method of Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} (pink solid line with circle markers), the hybrid method (turquoise dotted line with square markers), and the elbow point of the SVD (orange dash-dotted line with circle markers) were able to correctly identify the optimal order of the common subspace relative to the ambient dimension. Figure~\ref{fig:optdim_mv2} shows the mean selected order, averaged across all trials. We can see that when the ambient dimension is small, all methods other than the SVD tend to overestimate the order of the common subspace. This is a result of the noise dimensions being relatively close in the low-dimensional spaces. The dimension of Gr$(k,n)$ is $k(n-k),$ so for $k\approx \max_i\{p_i\} \approx n$ all samples are very similar regardless of the data model. As the ambient dimension grows and the randomly selected dimensions become further apart on average, the proposed method and the hybrid method correctly select the order with a high degree of accuracy. The proposed method achieves slightly lower accuracy and has less stable performance than the hybrid method because $c_{\textrm{pen}}(k)$ can be significantly affected by even one subspace that is similar to $\*U^{*\perp}(k)$. However, this behavior is consistent with the assumption that every sample is valid and there are no outliers in the collection of data. As expected, \cite{santamaria2016order} initially estimates the order as the dimension of the common subspace for the smaller ball and over-estimates the order as $15$, while the two methods that rely on the minimum enclosing ball estimate the dimension of the common subspace for that support set. Predictably, the elbow point of the SVD has a very low accuracy regardless of the ambient dimension. 
In essence, this method is attempting to preserve all dimensions that are not pure noise.
\begin{figure*}[!t]
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig09a.tex}
\caption{\label{fig:optdim_acc1}Accuracy, $\frac{\textrm{Number of times }k^*=3}{\textrm{Number of trials}}$}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig09b.tex}
\caption{\label{fig:optdim_mv1}Mean selected order}
\end{subfigure}
\caption{\label{fig:opt_dim1} Order-selection accuracy and mean selected order relative to the signal-to-noise ratio of the data (in dB) from $100$ Monte Carlo trials using the proposed order-selection rule (purple dashed line with triangle markers), the method of Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} (pink solid line with circle markers), the hybrid method (turquoise dotted line with square markers), and the elbow point of the SVD (orange dash-dotted line with circle markers). The data consists of $225$ points from $\coprod_{p \in \mathcal{P}}{\textrm{Gr}(p,100)}$ for $\mathcal{P} = \{3,4,5\}$ generated according to the model in Section~\ref{subsec:nonuni}.}
\end{figure*}
Figure~\ref{fig:opt_dim1} shows a comparison using data from the second model, a ball that is sampled more densely along a random arc. For some $\*Z_1 \in \textrm{Gr}(3,100),$ $M_1=200$ points are sampled uniformly from $\mathcal{B}_{0.5}(\*Z_1) \subset \textrm{Gr}(3,100)$ and $M_2 = 25$ additional points are then sampled from a random arc on the same ball. No points were sampled from the interior of the ball. Each of these $M = 225$ subspaces is completed to a basis for a point on Gr$(p_i,100)$ for $p_i \in \{3,4,5\},$ and zero-mean Gaussian noise is added to each basis to create noisy data sets. In this experiment, the ambient dimension is fixed and we allow the SNR to vary from $-5$dB to $10$dB.
With this data the optimal order of the common subspace is $k^{*} = 3$ and the center of the ball is $\*U^{*}(3) = \*Z_1.$ Figure~\ref{fig:optdim_acc1} shows the percentage of $100$ Monte Carlo trials for which the proposed order-selection rule (purple dashed line with triangle markers), the method of Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} (pink solid line with circle markers), the hybrid method (turquoise dotted line with square markers), and the elbow point of the SVD (orange dash-dotted line with circle markers) were able to correctly identify the optimal order of the common subspace relative to the signal-to-noise ratio. Figure~\ref{fig:optdim_mv1} shows the mean selected order in the same trials. This experiment demonstrates the behavior of the different rules when all of the points are in the support of the minimum enclosing ball on Gr$(k^*,n)$. Each of the subspace averaging methods should theoretically select the same order in this experiment, because all of the points share the same number of dimensions and there is no ambiguity about the optimal solution. Thus even though the mean computed by~\cite{santamaria2016order} is not the same point as the center of the GMEB, they lead to the same estimated rank. We see that in this scenario, the behavior of the rules using the $\ell_{\infty}$-norm and the $\ell_2$-norm is similar, with a sharp phase transition when the power of the signal and the power of the noise are almost equal, although the $\ell_2$-norm transitions to the correct order at a slightly higher noise power. This suggests that for situations where the data is free from outliers and the $\ell_{\infty}$-mean is close to the $\ell_2$-mean, either technique will accurately estimate the number of common dimensions. The elbow point of the singular value decomposition fails to identify the common dimension in all trials.
\begin{figure*}[!t]
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig10a.tex}
\caption{\label{fig:optdim_acc3}Accuracy, $\frac{\textrm{Number of times }k^*=0}{\textrm{Number of trials}}$}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\linewidth}
\centering
\input{gmeb_fig10b.tex}
\caption{\label{fig:optdim_mv3}Mean selected order}
\end{subfigure}
\caption{\label{fig:opt_dim3} Order-selection accuracy and mean selected order relative to the ambient dimension of the data when there is no common subspace. Results are from $100$ Monte Carlo trials using the proposed order-selection rule (purple dashed line with triangle markers), the method of Santamar{\'\i}a \textit{et al}.\@ ~\cite{santamaria2016order} (pink solid line with circle markers), the hybrid method (turquoise dotted line with square markers), and the elbow point of the SVD (orange dash-dotted line with circle markers). The data consists of $50$ points from $\coprod_{p \in \mathcal{P}}{\textrm{Gr}(p,n)}$ for $\mathcal{P} = \{3,4,5\}$ and $n = 5, 6, \ldots, 15,20,25,\ldots, 40.$}
\end{figure*}
Finally, in Figure~\ref{fig:opt_dim3} we see the ability of each method to identify when there is no subspace common to a collection of points. This is a valuable test because estimating $k^*=0$ suggests that there is no information shared across all the data and that averaging the points is not an appropriate way to aggregate the information in the data. The data in this experiment consists of $50$ subspaces chosen uniformly at random from Gr$(p_i,n)$ for $p_i \in \{3,4,5\}$ for $i = 1, \ldots, 10$ with ambient dimensions $n = 5, 6, \ldots, 15,20,25,\ldots, 40.$ The noise variance does not affect performance in this task because there is no signal, so the SNR is undefined. In Figure~\ref{fig:optdim_acc3} we see a phase transition similar to that of Figure~\ref{fig:opt_dim1}. The hybrid method is able to achieve perfect accuracy for ambient dimensions greater than $10,$ while \cite{santamaria2016order} and the proposed method transition shortly thereafter. The SVD fails every time, but that is to be expected in this scenario. The elbow point method computes two lines that minimize the residual for the scree plot, and chooses the dimension as the index of the singular value just above the intersection of those lines. A line cannot be fit to zero points, so the method will not select $k^*=0$ or $k^* = n$ as a solution. However, in Figure~\ref{fig:optdim_mv3} we see that the SVD significantly overestimates the dimension of the (non-existent) common subspace, so the poor performance is not an issue of the method being unable to select $0$ as the optimal dimension. When $n$ is small the proposed algorithm incorrectly identifies a relationship between the subspaces, but as the ambient dimension grows the optimal order, $k^*=0$, is selected with increasing accuracy.
As noted in discussion of Figure~\ref{fig:opt_dim2}, the misidentifications in low dimensions are due to the minimum similarity between the points and $\*U^{*\perp}(k)$ being higher when $k\approx \max_i\{p_i\} \approx n$.
\section{Conclusions}
\label{sec:conclusions}
The recent trend of performing machine learning tasks on linear subspace data has created a need for flexible subspace averages, ones that can be computed accurately and in a principled manner for subspaces of differing dimension. In response to this need, we have proposed an algorithm to find the $\ell_{\infty}$-center of mass using a subgradient algorithm to solve the dual problem with respect to a point-to-set distance. We additionally proposed a flexible data generation model to create subspaces of differing dimensions with ground-truth for the GMEB that emulates realistic settings where an $\ell_{\infty}$-average would be appropriate. On this synthetic data, the proposed algorithm provides estimates of the GMEB center with high accuracy. However, the high computational complexity means that an existing primal method can provide low-accuracy solutions more quickly for large data sets. One direction for future expansion is to develop a core-set theory akin to that of~\cite{badoiu2003smaller} in order to estimate the GMEB on a subset of the data with theoretical accuracy guarantees. A related area for further study is to develop an active-set approach for $\ell_{\infty}$-averaging of mixed-dimensional subspaces, \`{a} la John~\cite{john2014extremum}. Active-set methods also attempt to minimize the cost function over a subset of the data. However, the active-set approach looks for a subset of the data that solves the original problem exactly, whereas the core-set technique computes error bounds on the solution provided by \textit{any} subset of a given size. One theoretical hurdle to achieving an active-set method is a theorem on the minimum number of points required to define a Grassmannian ball given a fixed Grassmann manifold and subspaces of differing dimensions.
Finally, we proposed a geometric order-fitting rule that estimates the best dimension for the common subspace. This rule fits the common dimensions of the subspaces in the support set of the minimum enclosing ball, which is appropriate for data where all subspace samples are assumed to be valid examples of the model of interest. We additionally implement a hybrid technique for estimating the dimension of the common subspace that modifies the order-selection rule of~\cite{santamaria2016order} for use with the $\ell_{\infty}$-average. This hybrid method would not be possible for existing techniques that estimate the GMEB, because it uses the values of the dual variables as weights for an eigenvalue decomposition at each potential order. The hybrid approach outperforms the proposed technique and that of~\cite{santamaria2016order} when the ambient dimension is close to the subspace dimension of the data points.
A high-accuracy estimate of the GMEB center combined with an order-selection rule for the number of common dimensions results in a powerful technique for detecting and estimating similarity in a collection of subspaces. We anticipate that many practical applications will arise in the form of distributed large-scale problems, where the subspace averaging can be used for aggregation, for example the sparse subspace clustering of~\cite{abdolali2019scalable}.
The Piano Sonata in D major is a sonata for fortepiano by Joseph Haydn, composed in 1779 and dedicated to Caterina and Marianna Auenbrugger. Drawn from the set of six sonatas published in 1780, it is one of his best-known pre-London sonatas. The collection was dedicated to the talented Auenbrugger sisters, whose playing Leopold Mozart and Haydn himself admired in the aristocratic salons.
Structure
Allegro con brio: with its brilliant main theme, this first movement is highly spirited and recalls Domenico Scarlatti at his most joyful. The movement's main technical difficulty lies in the evenness and regularity of the sixteenth notes.
Largo e sostenuto: a sarabande of grave, desolate feeling, whose dotted rhythms and contrapuntal textures recall a Baroque French overture.
Finale: Presto ma non troppo: marked innocentemente by the composer, this rondo, built around a beguiling tune, could easily have been heard whistled on any street corner in Vienna.
Source
François-René Tranchefort, Guide de la musique de piano et clavecin, éd. Fayard 1987,
External links
Piano sonata by Joseph Haydn
Work in D major
Q: PHP: Check if string is URL

I'm trying to get all the CSS code from a website. I have difficulties trying to determine whether the href is relative or absolute. I was using filter_var(), but it does not work properly with URLs starting with //.
<?php
$file = file_get_contents($url);
$doc = new DOMDocument();
$doc->loadHTML($file);
$domcss = $doc->getElementsByTagName('link'); // get all <link> elements
foreach ($domcss as $links) {
    if (strtolower($links->getAttribute('rel')) == "stylesheet") {
        $href = $links->getAttribute('href');
        if (filter_var($href, FILTER_VALIDATE_URL)) { // check if href is relative or absolute; not working correctly
            $css = file_get_contents($href);
        } else {
            $css = file_get_contents($this->url . $href);
        }
        //echo $css;
    }
}
For example:
var_dump(filter_var('//google.bg', FILTER_VALIDATE_URL)); // false
\section{Omitted Proofs in \cref{sec:preliminaries}}\label{sec:apd-FMM}
\MatMulI*
\begin{proof}
The proof is adapted from \cite[Lemma 7.7]{Blaser13}; readers familiar with tensors and tensor rank may refer to the proof of that lemma.
Let $g(n) = \mathsf{MM}(n^a, n^{b+r}, n^{c+r})$; then it is easy to see that in $g(n)$ operations we can compute $n^r$ matrix multiplication instances of size $n^a\times n^b\times n^c$. We will use induction to prove that for every integer $k$, $n^r$ matrix multiplication instances of size $n^{ka}\times n^{kb}\times n^{kc}$ can be computed in $\lceil g(n)/n^r\rceil^k\cdot n^r$ operations.
The case for $k=1$ is trivial. When $k > 1$, we can compute $n^r$ matrix multiplication instances of size $n^{ka}\times n^{kb}\times n^{kc}$ as follows. First, we partition every size-$(n^{ka}\times n^{kb})$ matrix into size-$(n^{(k-1)a}\times n^{(k-1)b})$ blocks, and partition every size-$(n^{kb}\times n^{kc})$ matrix into size-$(n^{(k-1)b}\times n^{(k-1)c})$ blocks. Then we can reduce the problem to computing $n^r$ matrix multiplication instances of size $n^a\times n^b\times n^c$ using ``big operations'', where each ``big operation'' is a matrix multiplication instance of size $n^{(k-1)a}\times n^{(k-1)b}\times n^{(k-1)c}$. It suffices to perform $g(n)$ ``big operations''. On the other hand, by the induction hypothesis, we can perform each $n^r$ ``big operations'' in $\lceil g(n)/n^r\rceil^{k-1}\cdot n^r$ operations. By partitioning these $g(n)$ ``big operations'' into groups of size $n^r$, we can compute all these ``big operations'' in $\lceil g(n)/n^r\rceil\cdot \lceil g(n)/n^r\rceil^{k-1}\cdot n^r$ operations, and we are done.
Now it is easy to see that
\[\omega(a, b, c) \le \inf_{n, k}\mleft\{\log_{n^k}\mleft(\lceil g(n)/n^r\rceil^k\cdot n^r\mright)\mright\}\le \inf_n\mleft\{\log_n\lceil g(n)/n^r\rceil\mright\}=\omega(a, b+r, c+r) - r.\qedhere\]
\end{proof}
\MatMulII*
\begin{proof}
Let $0\le \tau_1 < \tau_2 \le 1$. Then:\begin{itemize}
\item By \cref{lemma:matmul1}, $(\tau_2-\tau_1) + \omega(1, 1-\tau_2, 1-\tau_2) \le \omega(1, 1-\tau_1, 1-\tau_1)$, which means $\tau_1 + f(\tau_1) \ge \tau_2 + f(\tau_2)$.
\item For every integer $n$, we can compute the product of an $n\times n^{1-\tau_1}$ matrix and an $n^{1-\tau_1}\times n^{1-\tau_1}$ matrix, by using $n^{2(\tau_2-\tau_1)}$ invocations of multiplication algorithms for matrices of dimension $n\times n^{1-\tau_2}$ and $n^{1-\tau_2}\times n^{1-\tau_2}$. Therefore $\omega(1, 1-\tau_1, 1-\tau_1) \le 2(\tau_2 - \tau_1) + \omega(1, 1-\tau_2, 1-\tau_2)$, which means $2\tau_1 + f(\tau_1) \le 2\tau_2 + f(\tau_2)$.\qedhere
\end{itemize}
\end{proof}
\section{Computing Unique Shortest Paths in Directed Graphs}\label{sec:breaking-tie}
In this section, we show how to compute \emph{unique} shortest paths in a directed graph in $\tilde{O}(n^{2+\mu}M)$ time, matching the current best time bound for computing the all-pairs distances \cite{Zwick02}. Here $\mu < 0.5286$ is the solution of $\omega(1, 1, \mu) = 1 + 2\mu$~\cite{GallU18}. This algorithm is needed before we use \cref{lem:fast}.
We may assume that before we proceed, we have already computed the all-pairs distances $\|uv\|$ for every $u, v\in V$, using the APSP algorithm in \cite{Zwick02}.
Our tie-breaking method requires a (random) permutation $\pi$ of all vertices, or equivalently a bijection between the vertex set $V$ and $[n]$, i.e.~$\pi:V\to[n]$. According to $\pi$, for every graph $G$ on $V$ and every $u, v\in V$, we will specify a shortest path $\rho_G(u, v)$ in $G$ from $u$ to $v$ in a certain way. These shortest paths will be \emph{consistent} and \emph{easy to compute}, which is captured by the following theorem. (See also \cite[Theorem 1.3 and 1.4]{Ren20}.)
\begin{restatable}{theorem}{ThmUnique}\label{thm:breaking-tie}
Given a graph $G$ on $V$, a representation of the set of shortest paths $\{\rho_G(u, v)\}_{u, v\in V}$ can be computed in $\tilde{O}(n^{2+\mu}M)$ time, with high probability over the random choice of permutation $\pi$, such that the following hold.
\begin{enumerate}[({Property} a)]
\item Let $G$ be a graph on $V$. For every $u', v'\in\rho_G(u, v)$ such that $u'$ appears before $v'$, the portion of $u'\rightsquigarrow v'$ in $\rho_G(u, v)$ coincides with the path $\rho_G(u', v')$.\label{item:subpath-consistency}
\item Let $G$ be a graph on $V$, $u, v\in V$, and $G'$ be a subgraph of $G$. Suppose $\rho_G(u, v)$ is completely contained in $G'$, then $\rho_{G'}(u, v) = \rho_G(u, v)$.\label{item:subgraph-consistency}
\end{enumerate}
\end{restatable}
From (Property \ref{item:subpath-consistency}), for every vertex $u$, the shortest paths from $u$ to every other vertex in $G$ form a tree, and we call this tree the \emph{outgoing shortest path tree} rooted at $u$, denoted as $T^{\sf out}(u)$. Similarly, the shortest paths to $u$ from every other vertex in $G$ also form a tree, and we call this tree the \emph{incoming shortest path tree} rooted at $u$, denoted as $T^{\sf in}(u)$. Actually, the ``representation'' computed is exactly the set of $n$ outgoing shortest path trees $\{T^{\sf out}(u)\}_{u\in V}$ and the set of $n$ incoming shortest path trees $\{T^{\sf in}(u)\}_{u\in V}$.
\paragraph{The rest of this section.} We first define the paths $\rho_G(u, v)$ in \cref{sec:def-rhouv}. Then we explain how to compute them efficiently in \cref{sec:computing-rhouv}, by presenting an algorithm that computes the incoming and outgoing shortest path trees in $\tilde{O}(Mn^{2+\mu})$ time. Finally, we prove (Property~\ref{item:subpath-consistency}) and (Property~\ref{item:subgraph-consistency}) in \cref{sec:proof-of-consistency}.
\subsection{Defining $\rho_G(u, v)$}
\label{sec:def-rhouv}
Let $G$ be an input graph, and $\pi:V\to[n]$ be a (random) bijection. For $u, v\in V$ and a path $P$ from $u$ to $v$, we say that any vertex on $P$ that is neither $u$ nor $v$ is an \emph{internal vertex of $P$}.
Recall that we defined $|uv|$ as the \emph{largest} number of edges in any shortest path from $u$ to $v$. In particular: \begin{itemize}
\item $|uv| = 0$ if and only if $u = v$;
\item $|uv| = 1$ if and only if the edge $(u,v)$ is the \emph{only} shortest path from $u$ to $v$;
\item $|uv| = \infty$ if and only if there is no path from $u$ to $v$ in $G$;
\item otherwise, we have $2 \le |uv| < \infty$.
\end{itemize}
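As a sanity check, both quantities can be computed by brute force on a small instance. The following Python sketch (an illustration only, not the paper's algorithm; the toy graph is ours) computes the distances $\|uv\|$ by Floyd--Warshall and $|uv|$ by a dynamic program over pairs in increasing order of distance:

```python
from itertools import product

INF = float('inf')

def apsp(n, edges):
    """Floyd-Warshall all-pairs distances; edges maps (u, v) -> weight >= 1."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k, i, j in product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def max_edges(n, edges, dist):
    """|uv|: the largest number of edges on any shortest u -> v path.
    Pairs are processed in increasing order of dist[u][v]; every shortest
    path ends with an edge (z, v) with dist[u][z] + w == dist[u][v]."""
    hops = [[0 if u == v else INF for v in range(n)] for u in range(n)]
    for d, u, v in sorted((dist[u][v], u, v) for u in range(n) for v in range(n)
                          if 0 < dist[u][v] < INF):
        hops[u][v] = max(hops[u][z] + 1 for (z, t), w in edges.items()
                         if t == v and dist[u][z] + w == d)
    return hops

# 0 -> 2 directly (weight 2) and via 1 (weight 1 + 1): two tied shortest paths.
edges = {(0, 2): 2, (0, 1): 1, (1, 2): 1}
dist = apsp(3, edges)
hops = max_edges(3, edges, dist)
```

Here $\|0\,2\| = 2$ and $|0\,2| = 2$: the two-edge path $0\to1\to2$ ties with the direct edge, so in particular $|0\,2| = 1$ fails exactly because the edge is not the \emph{unique} shortest path.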
We claim that the set of vertices mapped to small values by $\pi$ is a good ``hitting set'' w.h.p.:
\begin{claim}\label{claim:breaking-tie-hitting-set}
Fix the graph $G$. For some large constant $C$, with high probability over the choice of $\pi$, the following holds. For every pair of vertices $u, v\in V$ such that $2 \le |uv| < \infty$, there is a shortest path $\rho'(u, v)$ from $u$ to $v$, and an internal vertex $z$ on $\rho'(u, v)$, such that $\pi(z) \le CMn\ln n/\|uv\|$.
\end{claim}
\begin{proof}
Fix two vertices $u, v\in V$ and any shortest path $\rho'(u, v)$ from $u$ to $v$ that has an internal vertex (one exists since $|uv| \ge 2$). Denote $r=\|uv\|$; if $r\le M\ln n$ then the claim is trivial, since the threshold $CMn\ln n/r \ge Cn$ is at least $n$. Otherwise, since every edge has weight at most $M$, the path $\rho'(u, v)$ contains at least $r/M$ edges, and hence at least $r/1.1M$ internal vertices (for large enough $n$). Therefore, the probability over a random bijection $\pi:V\to [n]$ that $\pi$ maps every internal vertex of $\rho'(u, v)$ to an integer greater than $CMn\ln n/r$ is at most
\[(1-CM\ln n/r)^{r/1.1M} \le 1/n^{C/1.1}.\]
Thus by a union bound, the probability that the above condition holds (for every $u,v$) is at least $1-1/n^{C/1.1-2}$, which is a high probability.
\end{proof}
Let $u, v\in V$ such that $2 \le |uv| < \infty$. Define $w(u, v)$ as the intermediate vertex with the smallest label in any shortest path from $u$ to $v$, i.e.
\begin{equation}\label{eq:def-of-wuv}
w(u, v) = \arg_w\min\{\pi(w) : \|uv\| = \|uw\| + \|wv\|, w\ne u\text{ and }w\ne v\}.
\end{equation}
\cref{claim:breaking-tie-hitting-set} states that w.h.p.~for every pair of vertices $u, v\in V$ such that $2 \le |uv| < \infty$, we have that
\begin{equation}\label{eq:wuv-is-small}
\pi(w(u, v)) \le CMn\ln n / \|uv\|.
\end{equation}
In the rest of this section, we assume that \cref{eq:wuv-is-small} holds for every pair of vertices $u, v\in V$ such that $2 \le |uv| < \infty$. Now we define the paths $\rho_G(u, v)$.
\begin{definition}
Let $u, v\in V$ such that $|uv| \ne \infty$. The path $\rho_G(u, v)$ is recursively defined as follows. \begin{itemize}
\item If $u = v$, then $\rho_G(u, v)$ is the empty path that starts and ends at $u$.
\item If $|uv| = 1$, then $\rho_G(u, v)$ consists of a single edge, i.e.~the edge from $u$ to $v$.
\item Otherwise, let $w = w(u, v)$, then $\rho_G(u, v)$ is the concatenation of $\rho_G(u, w)$ and $\rho_G(w, v)$.
\end{itemize}
\end{definition}
For every $u, v$ such that $2 \le |uv| < \infty$, let $w = w(u, v)$; since $w$ is an intermediate vertex on some shortest path from $u$ to $v$, it is easy to see that $|uw| < |uv|$ and $|wv| < |uv|$. Therefore $\rho_G(u, v)$ is well defined --- it is defined by induction in increasing order of $|uv|$.
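To make the recursion concrete, here is a brute-force Python sketch (ours, for illustration; the actual algorithm avoids the linear scan per pair) that finds $w(u, v)$ by scanning all internal vertices and builds $\rho_G(u, v)$ by the recursive definition. On a toy unit-weight digraph with many tied shortest paths, the resulting paths satisfy the subpath-consistency property.

```python
import random
from itertools import product

INF = float('inf')

def apsp(n, edges):
    """Floyd-Warshall all-pairs distances for a positively weighted digraph."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k, i, j in product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def rho(u, v, dist, pi):
    """rho_G(u, v) by the recursive definition: split at the internal vertex
    w(u, v) of smallest label pi(w); if none exists, |uv| <= 1."""
    if u == v:
        return [u]
    ws = [z for z in range(len(pi))
          if z != u and z != v and dist[u][z] + dist[z][v] == dist[u][v]]
    if not ws:                           # |uv| = 1: the edge is the unique shortest path
        return [u, v]
    w = min(ws, key=lambda z: pi[z])     # w(u, v)
    return rho(u, w, dist, pi) + rho(w, v, dist, pi)[1:]

random.seed(7)
n = 6
edges = {(i, j): 1 for i in range(n) for j in range(n)
         if i != j and (i + j) % 3 != 0}     # a toy digraph with many ties
dist = apsp(n, edges)
pi = list(range(n)); random.shuffle(pi)      # the random bijection V -> [n]
```

Note that subpath consistency holds for \emph{any} bijection $\pi$; randomness is only needed for the efficient computation.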
\subsection{Computing Shortest Path Trees in $\tilde{O}(Mn^{2+\mu})$ Time}
\label{sec:computing-rhouv}
We will need the following classical algorithm for computing distance products:
\begin{lemma}[\cite{Zwick02}]\label{lemma:distance-product-algo}
Let $A$ be an $n\times m$ matrix, and $B$ be an $m\times n$ matrix. Suppose every entry in $A$ or $B$ is either $+\infty$ or an integer with absolute value at most $M$. Then the distance product of $A$ and $B$ can be computed in $\tilde{O}(M\cdot \mathsf{MM}(n, m, n))$ time.
\end{lemma}
\paragraph{Computing $w(u, v)$.} We first show how to compute $w(u, v)$ for every $u, v\in V$ such that $2 \le |uv| < \infty$ in $\tilde{O}(Mn^{2+\mu})$ time. Then we use the values of all $w(u, v)$ to compute the incoming and outgoing shortest path trees in $\tilde{O}(n^2)$ additional time. Our strategy for computing $w(u, v)$ is to mimic the algorithm in \cite{KowalukL05, ShapiraYZ11} for computing maximum witnesses of Boolean matrix multiplication. In particular, we divide the possible witnesses into blocks, and use fast matrix multiplication algorithms to find the block containing $w(u, v)$, for every $u, v$. After that, we use brute force to find $w(u, v)$ inside that block. Details follow.
Let $r = 2^k$ be a parameter; we show how to compute $w(u, v)$ for every pair of vertices $u, v\in V$ such that $r \le \|uv\| < 2r$. Let
\[\mathcal{H}_r = \{z \in V : \pi(z) \le CMn\ln n/r\}.\]
By \cref{claim:breaking-tie-hitting-set}, for all vertices $u,v$ such that $\|uv\| \in [r, 2r)$ and $|uv| \ge 2$, we have $w(u, v)\in\mathcal{H}_r$.
We define an $n\times |\mathcal{H}_r|$ matrix $A$ and an $|\mathcal{H}_r|\times n$ matrix $B$ as follows. For every $u\in V$ and $z\in \mathcal{H}_r$, we define
\[A[u, z] = \begin{cases} \|uz\| & \text{if }\|uz\| \le 2r\text{ and }u\ne z\\ +\infty & \text{otherwise}\end{cases},\text{ and }
B[z, u] = \begin{cases}\|zu\| & \text{if }\|zu\| \le 2r\text{ and }u\ne z\\ +\infty & \text{otherwise}\end{cases}.\]
Then we compute the \emph{minimum witness} of the distance product $A\star B$. To be more precise, we compute the matrix $W[\cdot, \cdot]$ such that for every $u, v\in V$,
\[W[u, v] = \arg_z\min \{\pi(z) : \|uv\| = A[u, z] + B[z, v]\}.\]
\subparagraph{Correctness.} Fix $u, v\in V$, where $\|uv\| \in [r, 2r)$. We will show that if $|uv| = 1$, then $W[u, v]$ does not exist; otherwise $W[u, v]$ coincides with $w(u, v)$ defined in \cref{eq:def-of-wuv}.
First, suppose $|uv| = 1$; then there is no intermediate vertex $z$ such that $\|uv\| = \|uz\| + \|zv\|$, which means $W[u, v]$ does not exist.
Now we assume $|uv| \ge 2$. Since $\|uv\| \ge r$, by \cref{claim:breaking-tie-hitting-set}, there is an intermediate vertex $z\in\mathcal{H}_r$ such that $\|uz\| + \|zv\| = \|uv\|$. Since $\|uz\|, \|zv\| \le \|uv\| < 2r$, we can see that $\|uv\| = A[u, z] + B[z, v]$, therefore $W[u, v]$ exists. Let $z = W[u, v]$, then by \cref{eq:def-of-wuv}, $\pi(w(u, v)) \le \pi(z)$. On the other hand, \cref{claim:breaking-tie-hitting-set} shows that $w(u, v)\in\mathcal{H}_r$, so by the definition of $z = W[u, v]$, we have $\pi(z) \le \pi(w(u, v))$. Therefore $z = w(u, v)$ and we have established the correctness of $W[\cdot, \cdot]$.
\subparagraph{Time complexity.} Now we show how to compute the matrix $W[\cdot, \cdot]$ efficiently.
Let $s = n^\mu$, where $\mu \in (0, 1)$ is a parameter to be determined later. If $|\mathcal{H}_r| < s$, then we can compute the matrix $W$ by brute force in $\tilde{O}(n^2s)$ time. Otherwise, we partition $\mathcal{H}_r$ into blocks of size $s$, where the $i$-th block contains vertices that are mapped by $\pi$ to values between $(i-1)\cdot s+1$ and $i\cdot s$. For every block $i$, we compute the distance product of $A$ and $B$ where only vertices in block $i$ are allowed as witnesses. In other words, we compute the following matrix
\[D^i[u, v] = \min\{A[u, z] + B[z, v] : (i-1)\cdot s+1 \le \pi(z) \le i\cdot s\}.\]
By \cref{lemma:distance-product-algo}, this matrix can be computed in $\tilde{O}(r\cdot \mathsf{MM}(n, s, n))$ time. There are $O(|\mathcal{H}_r|/s)=\tilde{O}(Mn/(rs))$ blocks, and we need to compute a distance product $D^i$ for each block $i$. Therefore the total time for computing all these distance products is
\[\tilde{O}(r\cdot \mathsf{MM}(n, s, n) \cdot Mn/(rs)) = \tilde{O}(M\cdot (n/s)\cdot \mathsf{MM}(n, s, n)).\]
Now for every $u, v\in V$ such that $\|uv\| \in [r, 2r)$ and $|uv|\ge 2$, we want to compute $W[u, v]$, which is the vertex $z\in\mathcal{H}_r$ with the minimum $\pi(z)$, such that $\|uv\| = A[u, z] + B[z, v]$. First, we find the smallest $i$ such that $D^i[u, v] = \|uv\|$, and we know that $W[u, v]$ is in the $i$-th block. (If such $i$ does not exist, then $W[u, v]$ does not exist either, and $|uv| = 1$.) This step takes $\tilde{O}(Mn/(rs))$ time. Then we iterate through the vertices in this block, and find the vertex $z$ with the smallest $\pi(z)$ such that $A[u, z] + B[z, v] = \|uv\|$. This step takes $O(s)$ time.
It follows that the time complexity for computing every $w(u, v)$ where $\|uv\| \in [r, 2r)$ is
\begin{align}
&\,\tilde{O}(M\cdot \mathsf{MM}(n, s, n) \cdot (n/s) + n^2\cdot Mn/(rs) + n^2s)\nonumber\\
\le&\,\tilde{O}(M\cdot\mathsf{MM}(n, s, n)\cdot (n/s) + n^2s)\label{eq:step1}\\
\le&\,\tilde{O}(M\cdot n^{\omega(1, \mu, 1) + 1 - \mu} + n^{2+\mu})\nonumber.
\end{align}
Here, \cref{eq:step1} is because $n^2\cdot Mn/(rs) \le n^2\cdot M \cdot (n/s)\le M\cdot \mathsf{MM}(n, s, n)\cdot (n/s)$.
Let $\mu$ be the solution to $\omega(1, \mu, 1) = 1 + 2\mu$, then $\mu < 0.5286$ (\cite{Zwick02, GallU18}). It follows that the time complexity for computing every $w(u, v)$, where $r \le \|uv\| < 2r$, is at most $\tilde{O}(Mn^{2 + \mu})$.
\subparagraph{Putting it together.} We run the above algorithm for $k$ from $0$ to $\lfloor\log (nM)\rfloor$, and for each $k$, we compute the values $w(u, v)$ where $\|uv\| \in [2^k, 2^{k+1})$. The total time to compute $w(u, v)$ for all $u, v$ is thus $\tilde{O}(Mn^{2 + \mu})$.
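The block-scan logic can be illustrated in Python. This is a toy version (ours): brute-force per-block min-plus products stand in for the fast rectangular distance products, and generic random matrices $A, B$ replace the distance-based ones, so a witness always exists.

```python
import random

def min_witness_blocked(D, A, B, cand, pi, s):
    """For each (u, v), find the witness z minimizing pi[z] among those with
    A[u][z] + B[z][v] == D[u][v]: scan blocks of s candidates in increasing
    pi-order, locate the first block whose min-plus product attains D[u][v],
    then search only inside that block."""
    order = sorted(cand, key=lambda z: pi[z])
    blocks = [order[i:i + s] for i in range(0, len(order), s)]
    n = len(A)
    W = [[None] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            for blk in blocks:
                # D^i[u, v]: distance product restricted to this block
                if min(A[u][z] + B[z][v] for z in blk) == D[u][v]:
                    W[u][v] = min((z for z in blk
                                   if A[u][z] + B[z][v] == D[u][v]),
                                  key=lambda z: pi[z])
                    break
    return W

random.seed(1)
n, s = 5, 2
pi = list(range(n)); random.shuffle(pi)
A = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
B = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
D = [[min(A[u][z] + B[z][v] for z in range(n)) for v in range(n)] for u in range(n)]
W = min_witness_blocked(D, A, B, list(range(n)), pi, s)
```

The first block (in $\pi$-order) attaining the target necessarily contains the minimum-label witness, which is why the early `break` is correct.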
\def\mathsf{parent}{\mathsf{parent}}
\paragraph{From $w(u, v)$ to unique shortest paths.} For every $u, v\in V$, we will compute the parent of $u$ in the tree $T^{\sf in}(v)$, denoted as $\mathsf{parent}_v(u)$. In other words, $\mathsf{parent}_v(u)$ is the second vertex in the path $\rho_G(u, v)$ (the first being $u$). After computing $\mathsf{parent}_v(u)$ for every $u, v\in V$, it is easy to construct $T^{\sf in}(v)$ for every vertex $v$. We can compute every $T^{\sf out}(u)$ in a symmetric fashion.
We proceed by nondecreasing order of $\|uv\|$. Suppose that for every $(u', v')$ such that $\|u'v'\| < \|uv\|$, we have already computed $\mathsf{parent}_{v'}(u')$. Now we compute $\mathsf{parent}_v(u)$ as follows. Let $w = w(u, v)$. If $w$ does not exist, let $\mathsf{parent}_v(u) = v$; otherwise $\mathsf{parent}_v(u) = \mathsf{parent}_w(u)$.
This algorithm (which, given all values $w(u, v)$, computes all values $\mathsf{parent}_v(u)$) clearly runs in $\tilde{O}(n^2)$ time. Notice that if $w$ exists, then $w$ is an intermediate vertex in $\rho_G(u, v)$, thus $\|uw\| < \|uv\|$, and the second vertex in the path $\rho_G(u, v)$ coincides with the second vertex in the path $\rho_G(u, w)$. Hence, the correctness of the algorithm can be proved by a simple induction on $\|uv\|$.
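The parent-pointer computation can be sketched as follows (a brute-force $w(u, v)$ and a toy graph of our own; the fast algorithm instead takes the precomputed $w(u, v)$ values as input):

```python
from itertools import product

INF = float('inf')

def apsp(n, edges):
    """Floyd-Warshall all-pairs distances."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k, i, j in product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def parents(n, dist, pi):
    """parent_v(u), the second vertex of rho_G(u, v): process pairs in
    nondecreasing ||uv||; if w = w(u, v) exists, copy parent_w(u) (legal
    since ||uw|| < ||uv||), else the path is the single edge (u, v)."""
    par = {}
    for d, u, v in sorted((dist[u][v], u, v) for u in range(n) for v in range(n)
                          if 0 < dist[u][v] < INF):
        ws = [z for z in range(n)
              if z != u and z != v and dist[u][z] + dist[z][v] == d]
        if not ws:
            par[(u, v)] = v                   # rho_G(u, v) is the edge (u, v)
        else:
            w = min(ws, key=lambda z: pi[z])  # w(u, v), by brute force
            par[(u, v)] = par[(u, w)]
    return par

def path(u, v, par):
    """Read rho_G(u, v) off the parent pointers (i.e. walk down T^in(v))."""
    out = [u]
    while out[-1] != v:
        out.append(par[(out[-1], v)])
    return out

edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 2): 2}   # tie: 0->2 two ways
pi = [3, 0, 1, 2]                                      # pi(1) is smallest
dist = apsp(4, edges)
par = parents(4, dist, pi)
```

With this $\pi$, vertex $1$ has the smallest label, so both $\rho_G(0,2)$ and $\rho_G(0,3)$ are routed through it.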
\subsection{Proof of \cref{thm:breaking-tie}}
\label{sec:proof-of-consistency}
\ThmUnique*
In this subsection, for any path $P$ and vertices $u',v'\in P$ such that $u'$ appears before $v'$ on $P$, we use $P[u',v']$ to denote the portion of $u'\rightsquigarrow v'$ on the path $P$.
\begin{proof}[Proof of (Property \ref{item:subpath-consistency})]
We prove it by induction on the number of edges of $\rho_G(u,v)$. Let $P=\rho_G(u, v)$. If $u=v$ or $P$ has only one edge, (Property \ref{item:subpath-consistency}) is trivial. Now suppose $P$ has $k$ edges where $k > 1$. Let $w = w(u, v)$; then $w$ must lie on $P$. Consider the following three cases:\begin{itemize}
\item Suppose $u'$ appears after (or coincides with) $w$ on $P$. By definition, $P[w,v] = \rho_G(w,v)$. Then $P[u',v'] = \rho_G(u',v')$ by induction hypothesis on $\rho_G(w,v)$ since it has fewer edges than $\rho_G(u,v)$.
\item Suppose $v'$ appears before (or coincides with) $w$. This case is symmetric to the above case.
\item Otherwise, $w$ lies between $u'$ and $v'$ on $P$.
First, we claim that $w = w(u', v')$. As $w$ lies on some shortest path from $u'$ to $v'$ (i.e.~$P[u', v']$), we have $\pi(w(u', v')) \le \pi(w)$. On the other hand, suppose there exists $w'$ such that $\pi(w')<\pi(w)$ and $w'$ is on some shortest path from $u'$ to $v'$. Then, by replacing the portion $P[u', v']$ of $P$ with that path, we see that $w'$ also lies on some shortest path from $u$ to $v$, so it is a better candidate for $w(u, v)$, contradicting the definition of $w$.
Second, by induction hypothesis on $\rho_G(u, w)$, which has fewer edges than $\rho_G(u, v)$, we have $P[u',w] = \rho_G(u',w)$. Similarly, $P[w,v']=\rho_G(w, v')$. Therefore, by definition, $P[u',v']=P[u',w]\circ P[w,v']=\rho_G(u',v')$.\qedhere
\end{itemize}
\end{proof}
\begin{proof}[Proof of (Property \ref{item:subgraph-consistency})]
We prove it by induction on the number of edges of $\rho_G(u,v)$. Let $P=\rho_G(u, v)$. If $u=v$ or $P$ has only one edge, (Property \ref{item:subgraph-consistency}) is trivial.
Now suppose $P$ has more than one edge. Let $w=w_G(u,v)$ (i.e.~the vertex $w(u, v)$ defined in \cref{eq:def-of-wuv} in graph $G$), we claim that $w$ coincides with $w_{G'}(u,v)$ (i.e.~the vertex $w(u, v)$ defined in \cref{eq:def-of-wuv} in graph $G'$). Since $P$ is also a shortest path from $u$ to $v$ in $G'$, we have $\pi(w_{G'}(u, v)) \le \pi(w)$. On the other hand, suppose there exists $w'$ such that $\pi(w') < \pi(w)$ and $w'$ is on some shortest path from $u$ to $v$ in $G'$. Then $w'$ also lies on some shortest path from $u$ to $v$ in $G$, so it is a better candidate for $w_G(u,v)$, contradicting the definition of $w$.
Since $\rho_G(u, w)$ has fewer edges than $\rho_G(u, v)$, and $\rho_G(u, w)$ is completely contained in $G'$, we can use induction hypothesis on $\rho_G(u, w)$ to conclude that $P[u, w]=\rho_{G'}(u, w)$. Similarly, we can use the induction hypothesis on $\rho_G(w, v)$ to conclude that $P[w, v]=\rho_{G'}(w, v)$. Therefore, by definition, $\rho_{G'}(u,v) = \rho_{G'}(u, w) \circ \rho_{G'}(w, v) = P$.
\end{proof}
\section{Conclusions and Open Problems}
We presented an improved DSO for directed graphs with integer weights in $[1, M]$. The preprocessing time is $O(n^{2.5794}M)$ and the query time is $O(1)$. However, there is still a small gap between the preprocessing time of our DSO and the current best time bound for the APSP problem in directed graphs, which is $\tilde{O}(n^{2+\mu}M) \le O(n^{2.5286}M)$ \cite{Zwick02}. Can we improve the preprocessing time to $\tilde{O}(n^{2+\mu} M)$, matching the latter time bound? Another interesting problem is to investigate the complexity of preprocessing a DSO in undirected graphs --- here, the best time bound for APSP is $\tilde{O}(n^\omega M)$ \cite{Seidel95, ShoshanZ99}. Can we preprocess a DSO in $\tilde{O}(n^\omega M)$ time on undirected graphs?
Compared to other DSOs \cite{WeimannY13, GrandoniW12, ChechikC20}, our oracle has two drawbacks. First, our query algorithm only outputs the shortest distance, but we do not know how to find the actual shortest paths. So another open problem is whether we can find the actual shortest path with an additional $O(l)$ query time, where $l$ is the number of edges in the returned shortest path. Second, since we used \cite[Observation 2.1]{Ren20}, our oracle can only deal with positive edge weights. Can we extend our oracle to also deal with negative edge weights?
For every parameter $f$, the $r$-truncated DSO in \cref{sec:r-truncated-DSO} can actually handle $f$ edge/vertex deletions in $\tilde{O}(f^\omega r)$ query time. (See also \cite{vdBS19}.) However, as far as we know, \cite[Observation 2.1]{Ren20} only works for one failure. It would be exciting to extend \cite[Observation 2.1]{Ren20} or our (full) DSO to also handle $f$ failures.
\section{Constructing a DSO in $O(n^{2.5794}M)$ Time}\label{sec:DSO}
In this section, we show how to preprocess a distance sensitivity oracle in $O(n^{2.5794}M)$ time, such that every query can be answered in constant time. Our preprocessing algorithm is randomized; with high probability over the preprocessing algorithm, the query algorithm always returns the correct answer.
\subsection{Preliminaries}
First, our preprocessing algorithm will use the following algorithm for inverting a polynomial matrix. A detailed description of this algorithm will be given in \cref{sec:invert-poly-matrix}.
\ThmInvertAlgo*
Let $G$ be a directed graph whose edge weights are integers in $[1, M]$. We define its \emph{symbolic adjacency matrix} $\mathsf{SA}(G)$ as (see \cite{Sankowski05})
\[\mathsf{SA}(G)_{i, j} = \begin{cases}
1 & \text{if $i = j$},\\
z_{i, j}x^{l} & \text{if there is an edge from $i$ to $j$ with weight $l$ in $G$},\\
0 & \text{otherwise},
\end{cases}\]
where $z_{i, j}$ are unique variables corresponding to edges of $G$.
It would be inefficient to deal with these variables $z_{i, j}$ directly, so we will pick a suitably large field $\mathbb{F}$ and substitute each variable $z_{i, j}$ by a random element of $\mathbb{F}$. However, we still keep the indeterminate $x$. Now, let $\mathbf{Z}$ be a matrix where each $\mathbf{Z}_{i, j} \in \mathbb{F}$; we will use $\mathsf{SA}_{\mathbf{Z}}(G)$ to denote the matrix $\mathsf{SA}(G)$ with each formal variable $z_{i, j}$ substituted by the field element $\mathbf{Z}_{i, j}$. Note that $\mathsf{SA}_{\mathbf{Z}}(G)$ is a polynomial matrix where every entry is a polynomial over $x$ with degree at most $M$.
We recall the definition of \emph{adjoint} matrix that will be crucial to our algorithm. Let $\mathbf{A}$ be an $n\times n$ matrix over a commutative ring $\mathcal{R}$, and $i, j\in [n]$. We denote by $\mathbf{A}^{i, j}$ the matrix $\mathbf{A}$ with every element in the $i$-th row and the $j$-th column set to zero, except that $(\mathbf{A}^{i, j})_{i, j} = 1$. The adjoint matrix of $\mathbf{A}$, denoted as $\adj(\mathbf{A})$, is an $n\times n$ matrix such that $\adj(\mathbf{A})_{i, j} = \det(\mathbf{A}^{j, i})$ for every $i, j\in[n]$. A basic fact about $\adj(\mathbf{A})$ is that if $\det(\mathbf{A})$ is a unit of $\mathcal{R}$, then $\adj(\mathbf{A}) = \det(\mathbf{A}) \cdot \mathbf{A}^{-1}$.
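For concreteness, the $\mathbf{A}^{i, j}$ construction and the identity $\mathbf{A}\cdot\adj(\mathbf{A}) = \det(\mathbf{A})\cdot\mathbf{I}$ can be checked on a small integer matrix (our own example; cofactor expansion is fine at this scale):

```python
def det(M):
    """Determinant by cofactor expansion along the first row (tiny matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adjoint(M):
    """adj(M)[i][j] = det(M^{j,i}), where M^{j,i} zeroes out row j and
    column i except for entry (j, i), which is set to 1 -- as in the text."""
    n = len(M)
    def modified(j, i):
        R = [row[:] for row in M]
        for t in range(n):
            R[j][t] = 0
            R[t][i] = 0
        R[j][i] = 1
        return R
    return [[det(modified(j, i)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
adjA, d = adjoint(A), det(A)   # det(A) = 7
```

Expanding $\det(\mathbf{A}^{j, i})$ along row $j$ shows it equals the classical $(j, i)$ cofactor, so this matches the usual adjugate.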
There is a close relationship between the distances in the graph $G$ and the entries in the adjoint of $\mathsf{SA}(G)$. For a multivariate polynomial $p$, we define $\deg^*_x(p)$ as the lowest degree of the variable $x$ in any monomial of $p$; if $p=0$, then we define $\deg^*_x(p):=+\infty$. We have:
\begin{theorem}[{\cite[Lemma 4]{Sankowski05}}]\label{thm:Sankowski-adjoint}
Let $G$ be a directed graph with positive integer weights, $i,j$ be two vertices. Then the distance from $i$ to $j$ in $G$ is $\deg^*_x(\adj(\mathsf{SA}(G))_{i, j})$.
\end{theorem}
We need the following theorem that allows us to maintain the adjoint of a matrix under \emph{rank-$1$} updates. (This theorem is a special case of \cite[Lemma 1.6]{vdBS19}.)
\begin{theorem}\label{thm:SMW-formula}
Let $\mathcal{R}$ be an arbitrary commutative ring, $\mathbf{A}\in \mathcal{R}^{n\times n}$ be an invertible matrix, $\mathbf{u}, \mathbf{v}\in \mathcal{R}^n$ be column vectors, and $\gamma = 1+\mathbf{v}^\mathsf{T}\mathbf{A}^{-1}\mathbf{u}$. Suppose $\gamma$ is invertible, then $\mathbf{A}+\mathbf{u}\mathbf{v}^\mathsf{T}$ is also invertible, and
\[\adj(\mathbf{A}+\mathbf{u}\mathbf{v}^\mathsf{T}) = \det(\mathbf{A})(\gamma\mathbf{A}^{-1} - (\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^\mathsf{T}\mathbf{A}^{-1})).\]
\vspace{-1.5em}
\end{theorem}
\begin{proof}[Proof Sketch]
By the matrix determinant lemma, we have
\[\det(\mathbf{A}+\mathbf{u}\mathbf{v}^\mathsf{T}) = \gamma\cdot \det(\mathbf{A}).\]
Since $\gamma$ is invertible, we can use the Sherman-Morrison-Woodbury formula \cite{ShermanM50, Woodbury50}:
\[(\mathbf{A}+\mathbf{u}\mathbf{v}^\mathsf{T})^{-1} = \mathbf{A}^{-1} - \gamma^{-1}(\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^\mathsf{T}\mathbf{A}^{-1}).\]
The theorem is proved by multiplying the above two formulas together.
\end{proof}
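The rank-$1$ update formula can be verified numerically over the rationals. The following sketch uses our own small matrix and vectors, with cofactor-based $\det$ and $\adj$:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def adj(M):
    """Classical adjugate: adj(M)[i][j] is the (j, i) cofactor."""
    n = len(M)
    return [[(-1) ** (i + j) * det([r[:i] + r[i + 1:]
                                    for t, r in enumerate(M) if t != j])
             for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]   # det(A) = 7
u = [1, 0, 2]
v = [0, 1, 1]
n, dA = 3, det(A)
Ainv = [[Fraction(c, dA) for c in row] for row in adj(A)]        # A^{-1}
Au = [sum(Ainv[i][k] * u[k] for k in range(n)) for i in range(n)]  # A^{-1} u
vA = [sum(v[k] * Ainv[k][j] for k in range(n)) for j in range(n)]  # v^T A^{-1}
gamma = 1 + sum(v[i] * Au[i] for i in range(n))                  # 1 + v^T A^{-1} u
# left side: adj(A + u v^T) computed directly
lhs = adj([[A[i][j] + u[i] * v[j] for j in range(n)] for i in range(n)])
# right side: det(A) * (gamma A^{-1} - A^{-1} u v^T A^{-1})
rhs = [[dA * (gamma * Ainv[i][j] - Au[i] * vA[j]) for j in range(n)]
       for i in range(n)]
```

Here $\gamma = 13/7 \ne 0$, so the hypothesis of the theorem is satisfied and both sides agree entrywise.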
We need the Schwartz-Zippel lemma that guarantees the correctness of our randomized algorithm.
\begin{theorem}[{Schwartz-Zippel Lemma, \cite{Schwartz80, Zippel79}}]\label{thm:schwartz-zippel}
Let $p(x_1,x_2,\dots,x_m)$ be a non-zero polynomial of (total) degree $d$ over a field $\mathbb{F}$. Let $S$ be a finite subset of $\mathbb{F}$, and $r_1,r_2,\dots,r_m$ be independently and uniformly sampled from $S$. Then
\[\Pr[p(r_1,r_2,\dots,r_m) = 0] \le \frac{d}{|S|}.\]
\end{theorem}
We also need the following algorithm that computes the determinant of a polynomial matrix.
\begin{theorem}[\cite{Storjohann03,labahn2017fast}]\label{lem:det}
Let $\mathbf{B}\in\mathbb{F}[x]^{n\times n}$ be a matrix of degree at most $d$, then we can compute $\det(\mathbf{B})$ in $\tilde{O}(dn^\omega)$ field operations.
\end{theorem}
\subsection{Constructing an $r$-Truncated DSO}\label{sec:r-truncated-DSO}
Recall that for a failure $f$ (which is either a vertex or an edge), $\|uv\diamond f\|$ denotes the length of the shortest path from $u$ to $v$ that avoids $f$. An \emph{$r$-truncated} DSO, as defined in \cite{Ren20}, is a DSO that given a query $(u, v, f)$, outputs the value $\min\{\|uv\diamond f\|, r\}$. The main result of this subsection is that given an integer $r$ and an input graph $G$, an $r$-truncated DSO can be constructed in time
\[\tilde{O}(n^\omega M) + r^2/M\cdot \mathsf{MM}(n, nM/r, nM/r)\cdot n^{o(1)}.\]
\paragraph{Preprocessing algorithm.} Let $C$ be a large enough constant. First, we choose a prime $p \in [n^C, 2n^C]$ and let $\mathbb{F}=\mathbb{Z}_p$. Then we let $\mathbf{Z}$ be an $n\times n$ matrix over $\mathbb{F}$, where every $\mathbf{Z}_{i, j}$ is sampled independently from $\mathbb{F}$ uniformly at random. We substitute $\mathbf{Z}$ into $\mathsf{SA}(G)$ to obtain the matrix $\mathsf{SA}_{\mathbf{Z}}(G)$. Recall that each element of $\mathsf{SA}_{\mathbf{Z}}(G)$ is a polynomial over $x$ with coefficients in $\mathbb{F}$, whose degree is at most $M$. Then we compute $\mathsf{SA}_{\mathbf{Z}}(G)^{-1}$ and $\det(\mathsf{SA}_{\mathbf{Z}}(G))$ using \cref{thm:invert-algo} and \cref{lem:det} respectively.
Since we only want an $r$-truncated DSO, we only need to compute $\mathsf{SA}_{\mathbf{Z}}(G)^{-1}$ modulo $x^r$, i.e.~we only preserve the monomials with degree less than $r$ in every entry of $\mathsf{SA}_{\mathbf{Z}}(G)^{-1}$. Note that $\mathsf{SA}_{\mathbf{Z}}(G)$ is of the form $\mathbf{I} + x\mathbf{M}$ for some matrix $\mathbf{M}\in \mathbb{F}[x]^{n\times n}$, therefore its determinant is of the form $1 + x\cdot p(x)$ for some polynomial $p(x)$. As the determinant is invertible modulo $x^r$, $\mathsf{SA}_{\mathbf{Z}}(G)$ is also invertible modulo $x^r$. By \cref{thm:invert-algo}, we can compute $\mathsf{SA}_{\mathbf{Z}}(G)^{-1}\bmod x^r$ in time
\[\tilde{O}(n^\omega M)+(r^2/M)\cdot \mathsf{MM}(n, nM/r, nM/r)\cdot n^{o(1)}.\]
By \cref{lem:det}, we can compute $\det(\mathsf{SA}_{\mathbf{Z}}(G))$ in $\tilde{O}(n^\omega M)$ time. Again, we only need to store the polynomial $\det(\mathsf{SA}_{\mathbf{Z}}(G)) \bmod x^r$. This concludes the preprocessing algorithm.
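The preprocessing can be traced end to end on a tiny graph: substitute random field elements, invert $\mathsf{SA}_{\mathbf{Z}}(G)$ modulo $x^r$, and read distances off the lowest degrees. The sketch below (our own example graph; a cubic-time Neumann-series inversion stands in for the fast algorithm of \cref{thm:invert-algo}) uses the fact that $\det(\mathsf{SA}_{\mathbf{Z}}(G))$ has constant term $1$, so the entries of the inverse have the same $\deg^*_x$ as those of the adjoint, which by \cref{thm:Sankowski-adjoint} are the distances (w.h.p.).

```python
import random
random.seed(0)

p, r = 10**9 + 7, 8          # field Z_p and truncation threshold x^r (illustrative)

def pmul(f, g):              # product of coefficient lists, modulo (x^r, p)
    h = [0] * r
    for i in range(r):
        if f[i]:
            for j in range(r - i):
                h[i + j] = (h[i + j] + f[i] * g[j]) % p
    return h

def padd(f, g):
    return [(a + b) % p for a, b in zip(f, g)]

def mat_mul(A, B):           # product of n x n matrices of truncated polynomials
    n = len(A)
    C = [[[0] * r for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for j in range(n):
                C[i][j] = padd(C[i][j], pmul(A[i][k], B[k][j]))
    return C

def degstar(f):              # deg*_x: lowest degree with a nonzero coefficient
    return next((i for i, c in enumerate(f) if c), float('inf'))

n = 4
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 1), (1, 3, 5)]   # (a, b, weight)
# -X, where SA_Z(G) = I + X and X[a][b] = Z_{a,b} x^w for each edge
negX = [[[0] * r for _ in range(n)] for _ in range(n)]
for a, b, w in edges:
    negX[a][b][w] = p - random.randrange(1, p)
# SA_Z(G)^{-1} = sum_{k >= 0} (-X)^k mod x^r: X has no constant term,
# so the series is finite modulo x^r
inv = [[[1 if i == j else 0] + [0] * (r - 1) for j in range(n)] for i in range(n)]
term = [[f[:] for f in row] for row in inv]
for _ in range(r - 1):
    term = mat_mul(term, negX)
    inv = [[padd(inv[i][j], term[i][j]) for j in range(n)] for i in range(n)]
```

On this graph $0\to1\to2$ (length $3$) beats the direct weight-$4$ edge, and the lowest degrees of `inv` recover exactly the distances below $r$.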
For the following query algorithms, we use $\mathbf{e}_i$ to denote the $i$-th standard unit vector, i.e.~$(\mathbf{e}_i)_i = 1$, and $(\mathbf{e}_i)_j = 0$ for every index $j \ne i$. %
\paragraph{Query algorithm for an edge failure.} A query consists of vertices $u, v\in V$ and a failed edge $e$. We assume that $e$ goes from vertex $a$ to vertex $b$, and has weight $l$. Let $G'$ be the graph obtained by removing $e$ from $G$, then we have $\mathsf{SA}(G') = \mathsf{SA}(G) + \mathbf{u}\mathbf{v}^\mathsf{T}$, where $\mathbf{u}=\mathbf{e}_a$ and $\mathbf{v}=-z_{a, b}x^l\mathbf{e}_b$. Let
\begin{itemize}
\item $\gamma = 1 + \mathbf{v}^\mathsf{T}\mathsf{SA}(G)^{-1}\mathbf{u} = 1 - z_{a, b}x^l\mathsf{SA}(G)^{-1}_{b, a}$,
\item $\beta = (\mathsf{SA}(G)^{-1}\mathbf{u}\mathbf{v}^{\mathsf{T}}\mathsf{SA}(G)^{-1})_{u, v} = -\mathsf{SA}(G)^{-1}_{u, a}z_{a, b}\mathsf{SA}(G)^{-1}_{b, v}x^l$, and
\item $\alpha = \det(\mathsf{SA}(G))(\gamma\cdot\mathsf{SA}(G)^{-1}_{u, v} - \beta)$,
\end{itemize}
then by \cref{thm:SMW-formula}, we have $\alpha = \adj(\mathsf{SA}(G'))_{u, v}$. (Note that since $l \ge 1$, the constant term of $\gamma$ is $1$, so $\gamma$ is always invertible.) %
\paragraph{Query algorithm for a vertex failure.} A query consists of vertices $u,v\in V$ and a failed vertex $f\in V$ (where $u, v\ne f$). It suffices to remove every outgoing edge of $f$ (we do not need to also remove its incoming edges), since $f$ then cannot appear as an intermediate vertex on any path from $u$ to $v$. Therefore, we need to compute $\adj(\mathsf{SA}(G'))_{u,v}$, where $G'$ is obtained by removing all outgoing edges of $f$ in $G$. Let $\mathbf{u} = \mathbf{e}_f$, and $\mathbf{v}$ be the negation of the transpose of the $f$-th row of $\mathsf{SA}(G)$, except that $\mathbf{v}_f = 0$, i.e.,
\[\mathbf{v}_j = \begin{cases}
-z_{f, j}x^{l} & \text{if there is an edge from $f$ to $j$ with weight $l$ in $G$},\\
0 & \text{otherwise},
\end{cases}\] It is easy to see $\mathsf{SA}(G')=\mathsf{SA}(G)+\mathbf{u}\mathbf{v}^\mathsf{T}$. To compute $\adj(\mathsf{SA}(G'))_{u,v}$ using \cref{thm:SMW-formula}, we let
\begin{itemize}
\item $\gamma=1+\mathbf{v}^\mathsf{T}\mathsf{SA}(G)^{-1}\mathbf{u}$. Note that $(\mathbf{e}_f - \mathbf{v})^\mathsf{T}$ is exactly the $f$-th row of $\mathsf{SA}(G)$, so $(\mathbf{e}_f - \mathbf{v})^\mathsf{T}\mathsf{SA}(G)^{-1} = \mathbf{e}_f^\mathsf{T}$, and $\mathbf{v}^\mathsf{T}\mathsf{SA}(G)^{-1} = \mathbf{e}_f^\mathsf{T}\mathsf{SA}(G)^{-1} - \mathbf{e}_f^\mathsf{T}$. We have $\gamma = 1+\mathbf{e}_f^\mathsf{T}\mathsf{SA}(G)^{-1}\mathbf{u} - \mathbf{e}_f^\mathsf{T}\mathbf{u} = \mathsf{SA}(G)^{-1}_{f,f}$;%
\item $\beta = (\mathsf{SA}(G)^{-1}\mathbf{u}\mathbf{v}^\mathsf{T}\mathsf{SA}(G)^{-1})_{u,v}=(\mathbf{e}^\mathsf{T}_u\mathsf{SA}(G)^{-1}\mathbf{u})(\mathbf{v}^\mathsf{T}\mathsf{SA}(G)^{-1}\mathbf{e}_v)=\mathsf{SA}(G)^{-1}_{u,f}(\mathbf{e}_f^\mathsf{T}\mathsf{SA}(G)^{-1}\mathbf{e}_v - \mathbf{e}_f^\mathsf{T}\mathbf{e}_v) = \mathsf{SA}(G)^{-1}_{u,f}\mathsf{SA}(G)^{-1}_{f,v}$, where the last step uses $v\ne f$;
\item and $\alpha = \det(\mathsf{SA}(G))(\gamma\cdot\mathsf{SA}(G)^{-1}_{u,v}-\beta)$,
\end{itemize} then we have $\alpha = \adj(\mathsf{SA}(G'))_{u, v}$. (Note that $\gamma$ is always invertible since the constant term of $\mathsf{SA}(G)^{-1}_{f,f}$ must be $1$.)
In the actual query algorithm, we will substitute each formal variable $z_{i, j}$ by $\mathbf{Z}_{i, j}$. Let $\gamma_\mathbf{Z}$ denote the resulting polynomial after this substitution. Note that $\gamma_\mathbf{Z}$ is a polynomial in $\mathbb{F}[x]$. Similarly we can define $\beta_\mathbf{Z}$ and $\alpha_\mathbf{Z}$. If $\alpha_\mathbf{Z} \not\equiv 0\pmod{x^r}$, then our query algorithm outputs $\deg_x^*(\alpha_\mathbf{Z})$; otherwise it outputs $r$.
From the above formulas, we can compute $\gamma_\mathbf{Z}$, $\beta_\mathbf{Z}$, and $\alpha_\mathbf{Z}$ in $O(1)$ arithmetic operations over polynomials. Note that we only need to compute these polynomials modulo $x^r$, so each such arithmetic operation takes $\tilde{O}(r)$ time. The total query time is thus $\tilde{O}(r)$.
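A self-contained trace of the edge-failure query (again our own toy graph, with a cubic-time truncated-series inverse in place of the fast preprocessing; we drop the $\det(\mathsf{SA}_{\mathbf{Z}}(G))$ factor from $\alpha_\mathbf{Z}$, which has constant term $1$ and hence does not change $\deg^*_x$):

```python
import random
random.seed(0)

p, r = 10**9 + 7, 8            # field Z_p, truncation x^r

def pmul(f, g):                # polynomial product mod (x^r, p)
    h = [0] * r
    for i in range(r):
        if f[i]:
            for j in range(r - i):
                h[i + j] = (h[i + j] + f[i] * g[j]) % p
    return h

def padd(f, g):
    return [(a + b) % p for a, b in zip(f, g)]

def neg(f):
    return [(p - c) % p for c in f]

def mat_mul(A, B):
    n = len(A)
    C = [[[0] * r for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for j in range(n):
                C[i][j] = padd(C[i][j], pmul(A[i][k], B[k][j]))
    return C

def degstar(f):
    return next((i for i, c in enumerate(f) if c), float('inf'))

n = 4
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 1), (1, 3, 5)]
Z = {(a, b): random.randrange(1, p) for a, b, w in edges}
negX = [[[0] * r for _ in range(n)] for _ in range(n)]   # -(SA_Z(G) - I)
for a, b, w in edges:
    negX[a][b][w] = p - Z[(a, b)]
inv = [[[1 if i == j else 0] + [0] * (r - 1) for j in range(n)] for i in range(n)]
term = [[f[:] for f in row] for row in inv]
for _ in range(r - 1):                       # Neumann series sum_k (-X)^k
    term = mat_mul(term, negX)
    inv = [[padd(inv[i][j], term[i][j]) for j in range(n)] for i in range(n)]

# query (u, v, e): u = 0, v = 3, failed edge e = (1, 2) of weight l = 2
u, v = 0, 3
a, b, l = 1, 2, 2
zx = [0] * r; zx[l] = Z[(a, b)]                     # z_{a,b} x^l
one = [1] + [0] * (r - 1)
gamma = padd(one, neg(pmul(zx, inv[b][a])))         # 1 - z_{a,b} x^l SA^{-1}_{b,a}
beta = neg(pmul(pmul(inv[u][a], zx), inv[b][v]))    # -SA^{-1}_{u,a} z_{a,b} x^l SA^{-1}_{b,v}
alpha = padd(pmul(gamma, inv[u][v]), neg(beta))     # gamma SA^{-1}_{u,v} - beta
```

In $G$ the distance from $0$ to $3$ is $4$ (via $0\to1\to2\to3$); removing the edge $(1,2)$ forces the route $0\to2\to3$ of length $5$, and the degree-$4$ terms of $\gamma\cdot\mathsf{SA}^{-1}_{u,v}$ and $\beta$ cancel exactly.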
\begin{remark}[Query Algorithm for Undirected Graphs]
Our $r$-truncated DSO can also deal with undirected graphs, but the details are a bit different from the case of directed graphs. To remove an undirected edge, we need to update two entries in $\mathsf{SA}(G)$, which corresponds to a rank-$2$ update to $\mathsf{SA}(G)$. To remove a vertex, we need to update one row and one column in $\mathsf{SA}(G)$, which is also a rank-$2$ update to $\mathsf{SA}(G)$. Therefore, we need to use the rank-$2$ version of \cref{thm:SMW-formula} (see \cite[Lemma 1.6]{vdBS19}). Actually, our $r$-truncated DSOs also support deleting $f$ failures, and the query time is $\tilde{O}(f^\omega r)$. We omit the details here and refer the interested readers to \cite{vdBS19}.
\end{remark}
\begin{theorem}\label{thm:r-trunc}
For every integer $r$, we can construct an $r$-truncated DSO with preprocessing time
\[\tilde{O}(n^\omega M)+r^2/M\cdot \mathsf{MM}(n,nM/r,nM/r)\cdot n^{o(1)},\]
and query time $\tilde{O}(r)$. Our $r$-truncated DSO is correct w.h.p.
\end{theorem}
(Recall that by saying our $r$-truncated DSO is correct w.h.p., we mean that w.h.p.~over its randomized preprocessing algorithm, it answers every query correctly.)
\begin{proof}[Proof of \cref{thm:r-trunc}]
We only need to prove the correctness of our $r$-truncated DSO. Consider a query $(u,v,f)$ where $f$ is an edge or a vertex, and let $G'$ be the graph obtained by removing $f$ from $G$. By \cref{thm:SMW-formula}, we have $\alpha_\mathbf{Z} = \adj(\mathsf{SA}_\mathbf{Z}(G'))_{u, v}$. (Note that the constant term of $\gamma_\mathbf{Z}$ is always $1$, so $\gamma_\mathbf{Z}$ is always invertible.)
If $\|uv\diamond f\|\ge r$, then by \cref{thm:Sankowski-adjoint}, $\adj(\mathsf{SA}(G'))_{u, v}$ must be a polynomial whose minimum degree over $x$ is at least $r$. In this case, we have $\alpha_\mathbf{Z} \equiv 0\pmod {x^r}$ for every $\mathbf{Z}$. Therefore, our algorithm returns $r$, which is correct.
If $\|uv\diamond f\|=k<r$, then by \cref{thm:Sankowski-adjoint}, $\adj(\mathsf{SA}(G'))_{u,v}$ must be a polynomial whose minimum degree is exactly $k$. In this case, the coefficient of $x^k$ in $\alpha$ is a polynomial of $z_{i, j}$ with (total) degree at most $n$. (This is because $\adj(\mathsf{SA}(G'))_{u, v}$ is the determinant of a certain $n\times n$ matrix in which every entry has total degree at most one in the variables $z_{i, j}$.) If this polynomial is nonzero at $\mathbf{Z}$, then $\deg_x^*(\alpha_\mathbf{Z}) = k$ and our query algorithm is correct. By \cref{thm:schwartz-zippel}, this polynomial is $0$ with probability at most $1/n^{C-1}$. Therefore, our query algorithm returns the correct answer $k$ with probability at least $1-1/n^{C-1}$.
In conclusion, for every fixed query $(u, v, f)$, our query algorithm is correct with probability at least $1-1/n^{C-1}$ over the choice of $\mathbf{Z}$. By a union bound over the $O(n^4)$ possible queries, the probability (over our randomized preprocessing algorithm) that every query is answered correctly is at least $1-O(1/n^{C-5})$, which is a high probability.
\end{proof}
\subsection{Constructing the Full DSO}\label{sec:full-DSO}
Now we have constructed an $r$-truncated DSO, which we denote by $\caD^{\mathsf{start}}$. In this subsection, we will extend it to a \emph{full} DSO using the techniques in \cite{Ren20}. Specifically, we use the following two algorithms from \cite{Ren20}.
The first algorithm transforms an ($r$-truncated) DSO with a possibly large query time into an ($r$-truncated) DSO with query time $O(1)$. More precisely:
\begin{lemma}[{\cite[Observation 2.1]{Ren20}}]\label{lem:fast}
Given an $r$-truncated DSO $\mathcal{D}$ with preprocessing time $P$ and query time $Q$, we can build an $r$-truncated DSO $\mathsf{Fast}(\mathcal{D})$ with query time $O(1)$ which is correct w.h.p. The preprocessing algorithm of $\mathsf{Fast}(\mathcal{D})$ is as follows:
\begin{itemize}
\item It needs the all-pairs distance matrix of the input graph $G$, as well as the set of \emph{consistent} (incoming and outgoing) shortest path trees rooted at each vertex in $G$. By \cref{thm:unique-shortest-paths}, these shortest path trees can be computed in $O(n^{2.5286}M)$ time. For details, see \cref{sec:breaking-tie}.
\item It invokes the preprocessing algorithm of $\mathcal{D}$ on the input graph $G$ once, and makes $\tilde{O}(n^2)$ queries to $\mathcal{D}$. The preprocessing time is $P+\tilde{O}(n^2)Q$.
\end{itemize}
\end{lemma}
The second algorithm we use is implicit in the argument of \cite[Section 2.3]{Ren20}. We formalize it as the following lemma.
\begin{lemma}\label{lem:extend}
Given an $r$-truncated DSO $\mathcal{D}$ with preprocessing time $P$ and query time $O(1)$, we can build a $(3/2)r$-truncated DSO $\mathsf{Extend}(\mathcal{D})$ with preprocessing time $P+O(n^2)$ and query time $\tilde{O}(nM/r)$. The new DSO is correct w.h.p.
\end{lemma}
Now, we are ready to explain our algorithm to build a full DSO. Given an $r$-truncated DSO $\caD^{\mathsf{start}}$, we first obtain an $r$-truncated DSO $\mathcal{D}_0$ with query time $O(1)$ by applying \cref{lem:fast}.
\defi^{\star}{i^{\star}}
Let $i^{\star} = \lfloor\log_{3/2}(nM/r)\rfloor$. For every $0\le i\le i^{\star}$, we construct an $r(3/2)^{i+1}$-truncated DSO $\mathcal{D}_{i+1}$ by applying \cref{lem:extend} and \cref{lem:fast} sequentially on $\mathcal{D}_i$, i.e.~$\mathcal{D}_{i+1}=\mathsf{Fast}(\mathsf{Extend}(\mathcal{D}_i))$. Let the resulting DSO be $\caD^{\mathsf{final}}=\mathcal{D}_{i^{\star} + 1}$; since $r(3/2)^{i^{\star} + 1} \ge nM$ is an upper bound on every finite distance, $\caD^{\mathsf{final}}$ is a full DSO.
We can also summarize our construction algorithm in one formula:
\[\caD^{\mathsf{final}}=\underbrace{\mathsf{Fast}(\mathsf{Extend}(\mathsf{Fast}(\mathsf{Extend}(\cdots \mathsf{Fast}(\caD^{\mathsf{start}})))))}_{O(\log (nM/r)) \text{ times}}.\]
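The bootstrapping loop can be sketched schematically as follows. This is a toy illustration with hypothetical stubs, not an implementation: each oracle is represented only by its truncation radius, $\mathsf{Extend}$ multiplies the radius by $3/2$, and $\mathsf{Fast}$ leaves it unchanged, as in the two lemmas above.

```python
# Schematic sketch of the bootstrapping loop. Each "oracle" is just its
# truncation radius; the actual preprocessing/query algorithms are elided.
import math

def extend(radius):          # Lemma (extend): r-truncated -> (3/2)r-truncated
    return radius * 3 / 2

def fast(radius):            # Lemma (fast): truncation radius is unchanged
    return radius

def build_full_dso(n, M, r):
    radius = fast(r)                                # D_0 = Fast(D_start)
    i_star = math.floor(math.log(n * M / r, 1.5))
    for _ in range(i_star + 1):                     # D_{i+1} = Fast(Extend(D_i))
        radius = fast(extend(radius))
    return radius

n, M, r = 1024, 1, 32
assert build_full_dso(n, M, r) >= n * M             # radius >= nM: a full DSO
```

Since $(3/2)^{i^{\star}} \le nM/r$, the final radius never overshoots $nM$ by more than a factor $3/2$, so only $O(\log(nM/r))$ rounds are performed.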
\paragraph{Complexity of our DSO.}
Let $r=Mn^\alpha$, where $\alpha\in[0,1]$ is a parameter to be determined.
By \cref{thm:r-trunc}, the preprocessing time of $\caD^{\mathsf{start}}$ is
\[\tilde{O}(n^\omega M)+r^2/M\cdot \mathsf{MM}(n,nM/r,nM/r)\cdot n^{o(1)}\le \tilde{O}(n^\omega M) + n^{2\alpha+\omega(1,1-\alpha,1-\alpha) + o(1)}M,\]
and the query time of $\caD^{\mathsf{start}}$ is $\tilde{O}(r) = \tilde{O}(n^\alpha M)$. By \cref{lem:fast}, the preprocessing time of $\mathcal{D}_0$ is
\[\tilde{O}(n^{2+\alpha}M + n^\omega M)+n^{2\alpha+\omega(1,1-\alpha,1-\alpha) + o(1)}M.\]
Now consider the preprocessing algorithm of $\caD^{\mathsf{final}}$. We need to compute the all-pairs distance matrix and in/out shortest path trees of $G$ as required by \cref{lem:fast}, which takes $\tilde{O}(n^{2+\mu}M)$ time by \cref{thm:unique-shortest-paths}. We also need to run the preprocessing algorithm of $\mathcal{D}_0$. Finally, for every $0\le i\le i^{\star}$, we need to preprocess the oracle $\mathcal{D}_{i+1}$, which takes $n^2\cdot \tilde{O}(nM / (r(3/2)^{i+1})) = \tilde{O}\mleft(\frac{n^{3-\alpha}M}{(3/2)^i}\mright)$ time.
Therefore, the preprocessing time of $\caD^{\mathsf{final}}$ is:
\begin{align*}
&\,\tilde{O}(n^{2 + \alpha}M + n^\omega M + n^{2+\mu}M)+n^{2\alpha + \omega(1,1-\alpha,1-\alpha) + o(1)}M+\sum_{i=0}^{\lfloor \log_{3/2} (nM/r)\rfloor} \tilde{O}\mleft(\frac{n^{3-\alpha}M}{(3/2)^i}\mright)\\
\le &\, n^{\max\{2+\alpha, 2+\mu, 3-\alpha, 2\alpha + \omega(1, 1-\alpha, 1-\alpha)\} + o(1)}M.
\end{align*}
Let $\alpha = 0.420645$ and $\beta = \frac{1}{1-\alpha}$; then $1.5 < \beta < 1.75$. Recall that for any real number $\lambda$, $\omega(\lambda)$ is a shorthand for $\omega(1, 1, \lambda)$. We have
\begin{align}
\omega(1, 1-\alpha, 1-\alpha) =&\, (1-\alpha)\omega(\beta)\nonumber\\
\le&\,(1-\alpha)\cdot\frac{(1.75-\beta)\omega(1.5) + (\beta - 1.5)\omega(1.75)}{1.75 - 1.5}\label{eq:exponent-step2}\\
\le&\,0.579355 \cdot 4\cdot (0.023943\cdot \omega(1.5) + 0.226058\cdot\omega(1.75))\nonumber\\
\le&\,1.738094.\label{eq:exponent-step3}
\end{align}
Here, \cref{eq:exponent-step2} uses the convexity of the $\omega(\cdot)$ function \cite{LottiR83}, and \cref{eq:exponent-step3} uses the recent bounds in \cite{GallU18} that $\omega(1.5) \le 2.796537$ and $\omega(1.75) \le 3.021591$. We can see that
\[\max\{2+\alpha, 2+\mu, 3-\alpha, 2\alpha + \omega(1, 1-\alpha, 1-\alpha)\} = 2\alpha + \omega(1, 1-\alpha, 1-\alpha) \le 2.579384.\]
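The numeric steps above can be checked mechanically. The following floating-point sanity check reproduces the computation; the value $\mu = 0.5286$ (so that the APSP bound reads $O(n^{2+\mu}M)$) is taken as given from the stated bound.

```python
# Floating-point sanity check of the exponent arithmetic in the text.
alpha = 0.420645
beta = 1 / (1 - alpha)
assert 1.5 < beta < 1.75

w15, w175 = 2.796537, 3.021591          # stated bounds on omega(1.5), omega(1.75)
# Convexity: omega(beta) <= ((1.75 - beta) w(1.5) + (beta - 1.5) w(1.75)) / 0.25
omega_rect = (1 - alpha) * ((1.75 - beta) * w15 + (beta - 1.5) * w175) / 0.25
assert omega_rect <= 1.738094 + 1e-6    # bound on omega(1, 1-alpha, 1-alpha)

mu = 0.5286
exponent = max(2 + alpha, 2 + mu, 3 - alpha, 2 * alpha + omega_rect)
assert exponent == 2 * alpha + omega_rect   # the last term dominates the max
assert exponent <= 2.579384 + 1e-6
```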
By \cref{lem:fast}, the query time of $\caD^{\mathsf{final}}$ is $O(1)$. Therefore, we can construct a DSO with $O(n^{2.5794}M)$ preprocessing time and $O(1)$ query time.
As the DSOs constructed in \cref{lem:fast} always have size $\tilde{O}(n^2)$, our final DSO only occupies $\tilde{O}(n^2)$ space. However, we remark that the preprocessing algorithm of our DSO requires $\tilde{O}(rn^2) = O(n^{2.4207}M)$ space (in particular, to store $\mathsf{SA}_{\mathbf{Z}}(G)^{-1}\bmod x^r$).
\section{Introduction}
In this paper, we consider the problem of constructing a \emph{distance sensitivity oracle} (DSO). A DSO is a data structure that preprocesses a directed graph $G = (V, E)$ with $n$ vertices and $m$ edges, and supports queries of the following form: Given a source vertex $u$, a target vertex $v$, and a failure $f$ (which can be either a vertex or an edge), output the length of the shortest path from $u$ to $v$ that does not go through $f$.
One motivation for constructing DSOs is the fact that real-life networks often suffer from failures. Consider a communication network among $n$ servers. When a server $u$ wants to send a message to another server $v$, the most efficient way would be to send the message along the shortest path from $u$ to $v$. However, if a failure happens in a server or a link between two servers, we would need to recompute the shortest path with the failure taken into account. It may be too slow to compute the shortest path from scratch each time a failure happens. A better solution is to construct a DSO for the communication network, and invoke the query algorithm of the DSO whenever a failure happens.
\subsection{Related Work}
The problem of constructing DSOs has received a lot of attention in the literature. A na\"ive solution is to precompute the answers for every possible query $(u, v, f)$, but it requires $\Omega(n^2m)$ space to store this DSO. Demetrescu et al.~\cite{DemetrescuTCR08} constructed a DSO with $O(n^2\log n)$ space that answers a query in constant time. However, the preprocessing time of the DSO in \cite{DemetrescuTCR08} is $O(mn^2 + n^3\log n)$, which is inefficient for large networks. Subsequently, Bernstein and Karger improved the preprocessing time to $\tilde{O}(n^2\sqrt{m})$ \cite{BernsteinK08}, and finally $\tilde{O}(mn)$ \cite{BernsteinK09}.\footnote{$\tilde{O}$ hides $\operatorname{polylog}(n)$ factors.} The preprocessing time $\tilde{O}(mn)$ matches the current best time bound for the easier problem of computing \emph{all-pairs shortest paths} (APSP), and it is conjectured that APSP requires $mn^{1-o(1)}$ time \cite{LincolnWW18}. In this sense, the $\tilde{O}(mn)$ time bound of \cite{BernsteinK09} is optimal. Duan and Zhang \cite{DuanZ17a} improved the space complexity of the DSO to $O(n^2)$, eliminating the last $\log n$ factor, while preserving constant query time and $\tilde{O}(mn)$ preprocessing time.
However, for dense graphs (i.e.~$m=\Theta(n^2)$) with edge weights in $[-M, M]$, it is possible to compute APSP in time faster than $\tilde{O}(mn) = \tilde{O}(n^3)$. The best APSP algorithm for undirected graphs runs in $\tilde{O}(n^\omega M)$ time \cite{Seidel95, ShoshanZ99}, and the best APSP algorithm for directed graphs runs in $O(n^{2.5286}M)$ time \cite{AlonGM97, Zwick02}. (Here $\omega < 2.3728596$ is the exponent of matrix multiplication \cite{CW90, Sto10, Wil12, LeGall, AlmanW21}.) Therefore, it is natural to ask whether one can beat $\tilde{O}(n^3)$ preprocessing time for DSOs in this regime.
The answer turned out to be \emph{yes}. Weimann and Yuster \cite{WeimannY13} showed that for any constant $0 < \alpha < 1$, there is a DSO with $\tilde{O}(n^{1-\alpha + \omega}M)$ preprocessing time and $\tilde{O}(n^{1+\alpha})$ query time. Subsequently, Grandoni and Williams \cite{GrandoniW12} showed that for any constant $0 < \alpha < 1$, there is a DSO with $\tilde{O}(n^{\omega + 1/2}M + n^{\omega + \alpha(4-\omega)}M)$ preprocessing time and $\tilde{O}(n^{1-\alpha})$ query time. Recently, Chechik and Cohen \cite{ChechikC20} constructed the first DSO that achieves both sub-cubic ($O(n^{2.873}M)$) preprocessing time and poly-logarithmic query time simultaneously. For the case that edge weights are positive, Ren \cite{Ren20} improved the previous results by presenting a much simpler DSO with $\tilde{O}(n^{2.7233}M)$ preprocessing time and constant query time.
Note that most DSOs mentioned above are randomized. Recently, there have also been some efforts to derandomize these DSOs; see e.g.~\cite{AlonCC19, KarthikP21}.
\subsection{Our Results}
Our main result is an improved DSO for directed graphs with integer edge weights in $[1, M]$. In particular, our DSO has preprocessing time $O(n^{2.5794}M)$ and constant query time.
\begin{theorem}[Main]\label{thm:main}
Given as input a directed graph $G=(V, E)$ with edge weights in $\{1, 2, \dots, M\}$, we can construct a DSO with $O(n^{2.5794}M)$ preprocessing time and constant query time. With high probability over the randomized preprocessing algorithm, the DSO answers every possible query correctly.
\end{theorem}
\begin{remark}
Our preprocessing algorithm uses fast \emph{rectangular} matrix multiplication algorithms. To express our time bound as a function of $\omega$, we could also simulate rectangular matrix multiplications by square matrix multiplications, e.g.~multiply an $n\times m$ matrix and an $m\times n$ matrix by $\lceil m/n\rceil$ square matrix multiplications of dimension $n$. In this case, the preprocessing time becomes $\tilde{O}(n^{2+1/(4-\omega)}M) < O(n^{2.6146}M)$.
\end{remark}
\begin{remark}[Comparison with Prior Works]
The biggest advantage of our DSO is, of course, its fast preprocessing algorithm. In fact, the preprocessing time bound is only an $O(n^{0.051})$ factor away from the current best time bound for APSP. Our DSO is also the first one to break a barrier of $\tilde{\Omega}(n^{8/3})$ preprocessing time while keeping constant query time.\footnote{There are three previous DSOs with both sub-cubic preprocessing time and constant query time: \cite{GrandoniW12}, \cite{ChechikC20}, and \cite{Ren20}. (The query time of the first two DSOs can be brought down to constant using Observation 2.1 of \cite{Ren20}. In the case of \cite{GrandoniW12}, this increases the preprocessing time by an additive factor of $\tilde{O}(n^{3-\alpha})$.) Even when $\omega = 2$, the preprocessing time bounds of these DSOs are $\tilde{O}(n^{8/3})$ (setting $\alpha$ appropriately), $\tilde{O}(n^{14/5})$, and $\tilde{O}(n^{8/3})$ respectively.} However, our DSO has two drawbacks. First, it can only return the length of the shortest path. It does not suggest an efficient way to produce this path. Second, it does not support negative edge weights.
\end{remark}
We highlight two technical ingredients that are crucial for the preprocessing algorithm of our DSO.
\paragraph{Inverting a polynomial matrix modulo $x^r$.} Let $r$ be an integer parameter, and $\mathbf{F}$ be a polynomial matrix of degree $d$ (i.e.~each entry of $\mathbf{F}$ is a degree-$d$ polynomial over some formal variable $x$) that is invertible. We show how to compute $\mathbf{F}^{-1} \bmod x^r$ in time
\[\tilde{O}(dn^\omega)+(r^2/d)\cdot \mathsf{MM}(n, nd/r, nd/r)\cdot n^{o(1)}.\]
(That is, we only preserve the monomials in $\mathbf{F}^{-1}$ with degrees at most $r-1$.) Here, $\mathsf{MM}(n_1, n_2, n_3)$ is the time complexity of multiplying an $n_1\times n_2$ matrix and an $n_2\times n_3$ matrix.
It is shown in \cite{ZhouLS15} that we can compute the full $\mathbf{F}^{-1}$ (instead of $\mathbf{F}^{-1}\bmod x^r$) in $\tilde{O}(n^3d)$ time. We examine their algorithm carefully and adapt it to our case where we only want to compute $\mathbf{F}^{-1}\bmod x^r$: we reduce each polynomial in the intermediate steps of the algorithm modulo $x^r$, and use fast rectangular matrix multiplication to speed up the algorithm.
\begin{restatable}{theorem}{ThmInvertAlgo}\label{thm:invert-algo}
Let $r$ be an integer and $\mathbb{F}$ a finite field. Let $\mathbf{F}\in (\mathbb{F}[x]/\langle x^r\rangle)^{n\times n}$ be an $n\times n$ matrix over the ring of polynomials modulo $x^r$, and let $d\ge 1$ be an upper bound on the degrees of entries of $\mathbf{F}$. If $\mathbf{F}$ is invertible over $(\mathbb{F}[x]/\langle x^r\rangle)^{n\times n}$, the number of field operations to compute $\mathbf{F}^{-1}\bmod x^r$ is at most
\[\tilde{O}(dn^\omega)+(r^2/d)\cdot\mathsf{MM}(n, nd/r, nd/r) \cdot n^{o(1)}.\]
\end{restatable}
\begin{remark}
A square matrix $\mathbf{F}$ over the commutative ring $\mathcal{R}$ is invertible if and only if $\det(\mathbf{F})$ is a unit in $\mathcal{R}$. In our case where $\mathcal{R} = \mathbb{F}[x] / \langle x^r\rangle$, this is true if and only if the constant term of $\det(\mathbf{F})$ is nonzero.
\end{remark}
\begin{remark}
The idea of using polynomial matrices to capture distances is a common technique in graph algorithms. It has found many applications in static algorithms \cite{Sankowski05}, fault-tolerant algorithms \cite{vdBS19}, and dynamic algorithms \cite{Sankowski05-dynamic, BrandN19, BrandNS19}.
\end{remark}
\paragraph{Computing consistent shortest path trees.} Our DSO needs to invoke \cite[Observation 2.1]{Ren20} (see also \cite{BernsteinK09}), which needs a \emph{consistent} set of (incoming and outgoing) shortest path trees rooted at each vertex. Here, by \emph{consistent}, we mean that for every pair of vertices $u, v$ and any two shortest path trees $T_1$ and $T_2$ (from the $2n$ trees; recall they are \emph{directed} rooted trees), if $u$ can reach $v$ in both $T_1$ and $T_2$, then the $u\rightsquigarrow v$ paths in $T_1$ and $T_2$ are the same path. In other words, we want to specify a \emph{unique} shortest path between each pair of vertices, such that for every vertex $v$, the shortest paths starting from $v$ (or ending at $v$, respectively) form a tree.
Note that this problem is quite nontrivial in small-weighted graphs. There may be many shortest paths between two vertices, and it is not obvious how to pick one shortest path for each vertex pair while guaranteeing consistency. Also, we cannot randomly perturb the edge weights by small values, as that would break the property that edge weights are small integers. It is also unclear how to construct such a set of shortest path trees from the APSP algorithm in \cite{Zwick02}. Previously, combining ideas in \cite[Section 3.4]{DemetrescuI04} and an algorithm in \cite{DuanP09}, \cite{Ren20} showed how to compute such shortest path trees in $\tilde{O}(n^{(3+\omega)/2}M) \le O(n^{2.6865}M)$ time; unfortunately, this time bound is worse than our claimed time bound $O(n^{2.5794}M)$ in \cref{thm:main}.
In this paper, we show how to construct consistent shortest path trees in $O(n^{2.5286}M)$ time, matching the current best time bound for APSP \cite{Zwick02}. Below is an informal statement; see \cref{thm:breaking-tie} for the precise version.
\begin{theorem}[Informal Version]\label{thm:unique-shortest-paths}
Given a directed graph $G=(V, E)$ with edge weights in $\{1, 2, \dots, M\}$, we can compute a set of incoming and outgoing shortest path trees rooted at each vertex that are consistent, in $O(n^{2.5286}M)$ time.
\end{theorem}
\subsection{Warm-Up: DSO in $\tilde{O}(n^{(3+\omega)/2}M)$ Preprocessing Time}
Actually, the ideas in \cite{vdBS19} of maintaining the \emph{adjoint} of the \emph{symbolic adjacency matrix} (see \cref{sec:DSO}), together with ideas in \cite{Ren20}, already give us a DSO with $\tilde{O}(n^{(3+\omega)/2}M)$ preprocessing time and constant query time. As a warm-up, we briefly describe this DSO before we proceed into the details of \cref{thm:main}.
An \emph{$r$-truncated DSO} \cite{Ren20} is a DSO that only needs to be correct for the queries $(u, v, f)$ whose answer (i.e.~length of the corresponding shortest path) is at most $r$. If the answer is greater than $r$, it should return $r$ instead. In what follows, we will describe how to construct an $r$-truncated DSO in $\tilde{O}(rn^\omega)$ preprocessing time and $\tilde{O}(r)$ query time. Using techniques in \cite{Ren20} (see also \cref{sec:full-DSO}), this implies a DSO with $\tilde{O}(n^{(3+\omega)/2}M)$ preprocessing time and constant query time.
Let $\mathbb{F}$ be a sufficiently large finite field, and let $\mathbf{A}$ be the following matrix. For every pair of vertices $u, v$, if there is an edge from $u$ to $v$ with weight $l$, then let $\mathbf{A}_{u, v} = a_{u, v}x^l$, where $a_{u, v}$ is a random element in $\mathbb{F}$, and $x$ is an indeterminate. Furthermore, for every vertex $v$, let $\mathbf{A}_{v, v} = 1$. It is well-known \cite{Sankowski05} that with high probability over the choices of $a_{u, v}$, the \emph{adjoint} matrix of $\mathbf{A}$ encodes the shortest path information of the input graph, as follows. Let $\adj(\mathbf{A})$ be the adjoint matrix of $\mathbf{A}$, and let $u, v$ be two vertices; then the lowest degree appearing in $\adj(\mathbf{A})_{u, v}$ is exactly the distance from $u$ to $v$. For example, if $\adj(\mathbf{A})_{u, v} = 7x^8 + 6x^5 - 9x^4$, then the distance from $u$ to $v$ is $4$.
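This encoding can be verified on a toy instance. The following brute-force sketch (not the DSO's preprocessing algorithm) expands $\mathbf{A}^{-1} = \sum_{k\ge 0}(-\mathbf{N})^k \bmod x^r$ for $\mathbf{A} = \mathbf{I} + \mathbf{N}$; it fixes all $a_{u,v}=1$ on a small graph whose shortest paths are unique, so no cancellation can occur. Since the constant term of $\det(\mathbf{A})$ is $1$ here, $\adj(\mathbf{A}) = \det(\mathbf{A})\cdot\mathbf{A}^{-1}$ and $\mathbf{A}^{-1}$ have the same lowest degrees entrywise.

```python
# Toy check: the lowest degree of (A^{-1} mod x^r)_{u,v} equals dist(u, v).
# Polynomials are coefficient lists of length r (index = degree).
r, n = 8, 4
edges = {(0, 1): 1, (1, 2): 2, (0, 2): 4, (2, 3): 1}   # (u, v): weight

def pmul(p, q):                 # product in Z[x], truncated mod x^r
    out = [0] * r
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j < r:
                    out[i + j] += a * b
    return out

def mmul(A, B):                 # product of n x n polynomial matrices
    C = [[[0] * r for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for j in range(n):
                C[i][j] = [x + y for x, y in zip(C[i][j], pmul(A[i][k], B[k][j]))]
    return C

identity = [[[1 if i == j else 0] + [0] * (r - 1) for j in range(n)]
            for i in range(n)]
negN = [[[0] * r for _ in range(n)] for _ in range(n)]
for (u, v), w in edges.items():
    negN[u][v][w] = -1          # entry of -N is -x^{w(u,v)}

inv, term = identity, identity  # A^{-1} = sum_k (-N)^k, since A = I + N
for _ in range(1, r):           # (-N)^k has lowest degree >= k, so k >= r vanishes
    term = mmul(term, negN)
    inv = [[[x + y for x, y in zip(inv[i][j], term[i][j])] for j in range(n)]
           for i in range(n)]

def dist(u, v):                 # lowest degree with a nonzero coefficient
    return next(d for d, c in enumerate(inv[u][v]) if c != 0)  # reachable pairs only

assert [dist(0, v) for v in range(4)] == [0, 1, 3, 4]
```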
A big advantage of the adjoint matrix, exploited in \cite{vdBS19} and also in this work, is that it is easy to perform \emph{low-rank} updates via the Sherman-Morrison-Woodbury formula (see \cref{thm:SMW-formula}). Given a matrix $\mathbf{A}$, its adjoint $\adj(\mathbf{A})$, and a low-rank matrix $\mathbf{B}$, we can compute a single entry $\adj(\mathbf{A} + \mathbf{B})_{u, v}$ in time much faster than brute force. Therefore, we answer a query $(u, v, f)$ as follows: We first express the failure as a \emph{rank-one} matrix $\mathbf{F}$, such that $\mathbf{A} + \mathbf{F}$ is the matrix corresponding to the graph with $f$ removed. Then we can compute $\adj(\mathbf{A} + \mathbf{F})_{u, v}$ quickly. Given this element (a polynomial over $\mathbb{F}$), we can easily compute the answer to the query.
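As a minimal numeric sanity check of the rank-one case of this formula, the following hypothetical $2\times 2$ example works over the rationals (not over the ring $\mathbb{F}[x]/\langle x^r\rangle$ used by the DSO): $(\mathbf{A} + uv^\top)^{-1} = \mathbf{A}^{-1} - (\mathbf{A}^{-1}u)(v^\top\mathbf{A}^{-1}) / (1 + v^\top\mathbf{A}^{-1}u)$.

```python
# Exact rank-one Sherman-Morrison check over the rationals (toy 2x2 example).
from fractions import Fraction as Fr

A    = [[Fr(2), Fr(1)], [Fr(1), Fr(1)]]
Ainv = [[Fr(1), Fr(-1)], [Fr(-1), Fr(2)]]       # inverse of A (det A = 1)
u, v = [Fr(1), Fr(0)], [Fr(1), Fr(0)]           # update adds u v^T to A

Au  = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]  # A^{-1} u
vA  = [sum(v[k] * Ainv[k][j] for k in range(2)) for j in range(2)]  # v^T A^{-1}
den = 1 + sum(v[k] * Au[k] for k in range(2))                       # must be a unit
upd = [[Ainv[i][j] - Au[i] * vA[j] / den for j in range(2)] for i in range(2)]

# Verify directly against the updated matrix A + u v^T.
Anew = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
prod = [[sum(Anew[i][k] * upd[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```

In the DSO, the same one-entry computation is carried out with truncated polynomial arithmetic, so each query costs $O(1)$ ring operations.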
What is the time complexity of this DSO? Recall that we only want to construct an $r$-truncated DSO, so we can reduce every entry in the process of computing $\adj(\mathbf{A})$ modulo the polynomial $x^r$. Every arithmetic operation in the commutative ring $\mathbb{F}[x] / \langle x^r\rangle$ only takes $\tilde{O}(r)$ time. Computing the adjoint of a matrix reduces to inverting that matrix, which takes $\tilde{O}(n^\omega)$ arithmetic operations \cite{MatInv}. Therefore it takes $\tilde{O}(rn^\omega)$ time to compute $\adj(\mathbf{A})\bmod x^r$. A close inspection of the Sherman-Morrison-Woodbury formula shows that each query can be completed in $O(1)$ arithmetic operations, i.e.~$\tilde{O}(r)$ time.
The $\tilde{O}(rn^\omega)$-time algorithm for inverting a polynomial matrix modulo $x^r$ is not optimal; the time bound in \cref{thm:invert-algo} is better. In \cref{sec:invert-poly-matrix}, we use fast rectangular matrix multiplication algorithms to speed up the algorithm in \cite{ZhouLS15}, obtaining a faster algorithm for inverting polynomial matrices modulo $x^r$.
\section{Inverting a Polynomial Matrix Modulo $x^r$}\label{sec:invert-poly-matrix}
As we see in \cref{sec:DSO}, the algorithm in \cref{thm:invert-algo} for inverting a polynomial matrix modulo $x^r$ is crucial for our results.
\ThmInvertAlgo*
In this section, we work in a (large enough) field $\mathbb{F}$, and regard each polynomial in the matrix as an element of the commutative ring $\mathcal{R} = \mathbb{F}[x] / \langle x^r\rangle$. Without loss of generality, we assume $n$ and $r$ are powers of $2$ throughout this section.
\subsection{An Informal Treatment}\label{sec:informal-invert-algo}
Our algorithm is essentially the algorithm in \cite{ZhouLS15}. In fact, the only difference is that we only consider polynomials modulo $x^r$. In \cref{sec:proof-invert-algo}, we will provide an improved analysis of this algorithm by using rectangular matrix multiplication. Here we present a brief exposition of the algorithm in \cite{ZhouLS15}.
Let $\mathbf{F}$ be an input polynomial matrix where each entry has degree at most $d$. Suppose $\mathbf{F}$ is invertible over $(\mathbb{F}[x]/\langle x^r\rangle)^{n\times n}$. We will compute a \emph{kernel basis decomposition} of $\mathbf{F}$, which is a chain of matrices $\mathbf{A}_1, \mathbf{A}_2, \dots, \mathbf{A}_{\log n}$ and a diagonal matrix $\mathbf{B}$, such that
\begin{equation}
\mathbf{F}^{-1} = \mathbf{A}_1\mathbf{A}_2\dots\mathbf{A}_{\log n}\mathbf{B}^{-1}.\label{eq:kernel-basis-decomp}
\end{equation}
Then, to compute $\mathbf{F}^{-1}$, we simply multiply the above matrices. Note that $\mathbf{B}$ is a \emph{diagonal} matrix that is invertible\footnote{Every diagonal element of $\mathbf{B}$ is a divisor of the \emph{largest invariant factor} of $\mathbf{F}$ (see \cite[Section 5.1]{ZhouLS15}), which is (again) a divisor of $\det(\mathbf{F})$. Since $\det(\mathbf{F})$ is invertible modulo $x^r$, every diagonal element of $\mathbf{B}$ is also invertible modulo $x^r$.}, so its inverse is easy to compute.
To start, we write $\mathbf{F} = \begin{bmatrix}\mathbf{F}_{\mathsf{U}}\\\mathbf{F}_{\mathsf{D}}\end{bmatrix}$, where each $\mathbf{F}_\mathsf{U}$ or $\mathbf{F}_\mathsf{D}$ is an $(n/2)\times n$ matrix. Then we compute two $n\times (n/2)$ matrices $\mathbf{N}_\mathsf{R}$ and $\mathbf{N}_\mathsf{L}$ with full rank, such that $\mathbf{F}_\mathsf{U}\mathbf{N}_\mathsf{R} = {\bf 0}$, and $\mathbf{F}_\mathsf{D}\mathbf{N}_\mathsf{L} = {\bf 0}$. (This can be done by \cite[Theorem 4.2]{ZhouLS12}.) Let $\mathbf{A}_1 = \begin{bmatrix}\mathbf{N}_\mathsf{L} & \mathbf{N}_\mathsf{R}\end{bmatrix}$, then $\mathbf{A}_1$ has full rank, and
\[\mathbf{F}\cdot \mathbf{A}_1 = \begin{bmatrix}\mathbf{F}_\mathsf{U}\mathbf{N}_\mathsf{L} & \mathbf{F}_\mathsf{U}\mathbf{N}_\mathsf{R} \\ \mathbf{F}_\mathsf{D}\mathbf{N}_\mathsf{L} & \mathbf{F}_\mathsf{D}\mathbf{N}_\mathsf{R}\end{bmatrix} = \begin{bmatrix}\mathbf{F}_\mathsf{U}\mathbf{N}_\mathsf{L} & \\ & \mathbf{F}_\mathsf{D}\mathbf{N}_\mathsf{R}\end{bmatrix}.\]
Therefore, $\mathbf{F}\cdot \mathbf{A}_1$ is a block diagonal matrix with two blocks, each of size $(n/2) \times (n/2)$. We can then recursively invoke the kernel basis decomposition of these two blocks, and form the matrices $\mathbf{A}_2, \dots, \mathbf{A}_{\log n}$. The diagonal matrix $\mathbf{B}$ is created at the base case of the recursion, where the diagonal blocks of $\mathbf{F}\cdot \mathbf{A}_1\cdot\dots\cdot \mathbf{A}_{\log n}$ are of size $1\times 1$. It is shown in \cite{ZhouLS15} that the kernel basis decomposition takes only $\tilde{O}(dn^\omega)$ time to compute.
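This splitting step can be illustrated on a hypothetical scalar example. The sketch below works with a $4\times 4$ matrix over the rationals instead of a polynomial matrix, and uses plain Gaussian elimination in place of the fast kernel basis algorithm of \cite{ZhouLS12}: it computes $\mathbf{N}_\mathsf{R}$ spanning $\ker\mathbf{F}_\mathsf{U}$ and $\mathbf{N}_\mathsf{L}$ spanning $\ker\mathbf{F}_\mathsf{D}$, and checks that $\mathbf{F}\cdot[\mathbf{N}_\mathsf{L}\ \mathbf{N}_\mathsf{R}]$ is block diagonal.

```python
# Scalar illustration of the kernel-basis splitting step: F @ [N_L | N_R]
# is block diagonal because F_U N_R = 0 and F_D N_L = 0.
from fractions import Fraction

def nullspace(M):
    """Basis of the right kernel of M, via reduced row echelon form."""
    rows = [[Fraction(x) for x in row] for row in M]
    m, n = len(rows), len(rows[0])
    piv, r0 = [], 0
    for c in range(n):
        pr = next((i for i in range(r0, m) if rows[i][c] != 0), None)
        if pr is None:
            continue
        rows[r0], rows[pr] = rows[pr], rows[r0]
        rows[r0] = [x / rows[r0][c] for x in rows[r0]]
        for i in range(m):
            if i != r0 and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r0])]
        piv.append(c)
        r0 += 1
    basis = []
    for f in (c for c in range(n) if c not in piv):   # one vector per free column
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, c in enumerate(piv):
            v[c] = -rows[i][f]
        basis.append(v)
    return basis

F   = [[1, 2, 0, 1], [0, 1, 1, 0], [1, 0, 1, 2], [2, 1, 0, 1]]
N_R = nullspace(F[:2])    # kernel of F_U (top two rows)
N_L = nullspace(F[2:])    # kernel of F_D (bottom two rows)
A1  = [[col[i] for col in N_L + N_R] for i in range(4)]   # A1 = [N_L | N_R]
prod = [[sum(F[i][k] * A1[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
# Off-diagonal 2x2 blocks of F @ A1 vanish.
assert all(prod[i][j] == 0 for i in (0, 1) for j in (2, 3))
assert all(prod[i][j] == 0 for i in (2, 3) for j in (0, 1))
```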
We still need to compute \cref{eq:kernel-basis-decomp}. From the above algorithm, we can see that each $\mathbf{A}_i$ is a block-diagonal matrix, which consists of $2^{i-1}$ blocks of size $(n/2^{i-1})\times (n/2^{i-1})$. Now we \emph{assume} that each entry in $\mathbf{A}_i$ also has degree at most $d\cdot 2^{i-1}$. (In reality, the behavior of degrees in $\mathbf{A}_i$ may be complicated, and we need the notion of \emph{shifted column degree} (see \cref{def:shifted-column-degree}) to control it.)
To compute \cref{eq:kernel-basis-decomp}, we define $\mathbf{M}_i = \mathbf{A}_1\mathbf{A}_2\dots \mathbf{A}_i$, and compute each $\mathbf{M}_i$ by the formula
\begin{equation}
\mathbf{M}_{i+1} = \mathbf{M}_i\mathbf{A}_{i+1}.
\label{eq:Mi+1}
\end{equation}
The degree of each entry in $\mathbf{M}_i$ will be at most $O(2^i\cdot d)$. As we only need the results modulo $x^r$, we can assume the degrees are actually $O\mleft(\min\{r, 2^i\cdot d\}\mright)$. Note that $\mathbf{A}_{i+1}$ consists of $2^i$ blocks, each of size $(n/2^i)\times (n/2^i)$, and the degree of each (nonempty) entry in $\mathbf{A}_{i+1}$ is also $O\mleft(\min\{r, 2^i\cdot d\}\mright)$. Therefore, we can compute \cref{eq:Mi+1} in
\begin{equation}
O\mleft(\min\{r, 2^i\cdot d\}\mright)\cdot 2^i\cdot \mathsf{MM}(n, n/2^i, n/2^i)
\label{eq:time-for-Mi+1}
\end{equation}
time. (It is basically $2^i$ matrix products of size $n\times (n/2^i)$ and $(n/2^i)\times (n/2^i)$; we need to multiply another factor of $\min\{r, 2^i\cdot d\}$ which is the degree of polynomials in these matrices.)
Now, it is easy to see that the bottleneck of this algorithm occurs when $r = 2^i\cdot d$, and the time for computing \cref{eq:Mi+1} is:
\[(\text{\ref{eq:time-for-Mi+1}}) = (r^2/d)\cdot \mathsf{MM}(n, nd/r, nd/r).\]
\subsection{Proof of \cref{thm:invert-algo}}\label{sec:proof-invert-algo}
As opposed to the informal description above, the \emph{maximum} degrees in the matrices may not behave well. We need to introduce the concept of \emph{column degrees} and \emph{shifted column degrees} to capture the behavior of the degrees in these matrices.
\begin{definition}[{\cite[Section 2.2]{ZhouLS15}}]\label{def:shifted-column-degree}
Let $\vec{\mathbf{p}}$ be a length-$n$ column vector whose entries are polynomials. Then the \emph{column degree} of $\vec{\mathbf{p}}$, denoted as $\cdeg\vec{\mathbf{p}}$, is the maximum of the degrees of the entries in $\vec{\mathbf{p}}$. That is:
\[\cdeg\vec{\mathbf{p}} = \max_{i=1}^n\{\deg(\mathbf{p}_i)\}.\]
Let $\vec{s}$ be a length-$n$ vector of integers, called the \emph{shift} of the degrees. Then the \emph{$\vec{s}$-shifted column degree} of $\vec{\mathbf{p}}$, or simply the \emph{$\vec{s}$-column degree} of $\vec{\mathbf{p}}$, denoted as $\cdeg_{\vec{s}}\vec{\mathbf{p}}$, is defined as
\[\cdeg_{\vec{s}}\vec{\mathbf{p}} = \max_{i=1}^n\{s_i + \deg(\mathbf{p}_i)\}.\]
It is easy to see that $\cdeg\vec{\mathbf{p}} = \cdeg_{\vec{\bf 0}}\vec{\mathbf{p}}$, where $\vec{\bf 0}$ is the all-zero vector.
Let $\mathbf{A}$ be an $m\times n$ polynomial matrix, then the \emph{column degree} (\emph{$\vec{s}$-column degree} resp.) of $\mathbf{A}$, denoted as $\cdeg\mathbf{A}$ ($\cdeg_{\vec{s}}\mathbf{A}$ resp.), is the length-$n$ row vector whose $i$-th entry is the column degree ($\vec{s}$-column degree resp.) of the $i$-th column of $\mathbf{A}$.
\end{definition}
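A tiny worked instance of this definition, with polynomials stored as coefficient lists (index = degree):

```python
# Column degree and shifted column degree of a length-2 column vector.
def deg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def cdeg(col):                      # column degree of a column vector
    return max(deg(p) for p in col)

def cdeg_shifted(col, s):           # s-shifted column degree
    return max(si + deg(p) for si, p in zip(s, col))

col = [[1, 0, 1], [0, 1]]           # entries: x^2 + 1 and x
assert cdeg(col) == 2               # max(2, 1)
assert cdeg_shifted(col, [0, 0]) == cdeg(col)   # zero shift recovers cdeg
assert cdeg_shifted(col, [0, 3]) == 4           # max(0 + 2, 3 + 1)
```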
We need the following theorem. It is essentially Theorem 3.7 of \cite{ZhouLS12}, where we replace the invocations of square matrix multiplication algorithms with (the faster) rectangular matrix multiplication algorithms. It is straightforward to adapt the original proof in \cite{ZhouLS12} to use rectangular matrix multiplication, but for completeness, we will include a proof in \cref{sec:unbalanced-mat-mul}.
\begin{restatable}{theorem}{ThmUnbalancedMatMul}\label{thm:unbalanced-mat-mul}
Let $\mathbf{A}$ be an $n^p\times n^q$ polynomial matrix, and $\mathbf{B}$ be an $n^q\times n^r$ polynomial matrix. Suppose $\vec{s} \ge \cdeg\mathbf{A}$ is a shift that bounds the corresponding column degrees of $\mathbf{A}$, and
\[\xi = \max\mleft\{\frac{1}{n^q}\sum_{i=1}^{n^q} s_i, \frac{1}{n^r}\sum_{i=1}^{n^r} (\cdeg_{\vec{s}}\mathbf{B})_i\mright\} + 1.\]
Then the product $\mathbf{A} \cdot \mathbf{B}$ can be computed in $\xi\cdot n^{\omega(p, q, r) + o(1)}$ field operations.
\end{restatable}
Now we can prove \cref{thm:invert-algo}.
\ThmInvertAlgo*
\begin{proof}[Proof Sketch]
In this sketch, we will use some results in \cite{ZhouLS15} directly. We will also use some notation introduced in \cref{sec:informal-invert-algo}.
Let $\vec{s} = \cdeg\mathbf{F}$. We first invoke the kernel basis decomposition algorithm \textsc{Inverse} of \cite{ZhouLS15}:
\[(\mathbf{A}_1, \mathbf{A}_2, \dots, \mathbf{A}_{\log n}, \mathbf{B}) \gets \textsc{Inverse}(\mathbf{F}, \vec{s}).\]
By \cite[Theorem 8]{ZhouLS15}, the algorithm \textsc{Inverse} takes only $\tilde{O}(dn^\omega)$ time. Then we compute
\[\mathbf{F}^{-1} = \mathbf{A}_1\mathbf{A}_2\dots\mathbf{A}_{\log n}\mathbf{B}^{-1}.\]
Note that $\mathbf{B}$ is a diagonal matrix, so it suffices to compute $\mathbf{A}_1\mathbf{A}_2\dots \mathbf{A}_{\log n}$. Also recall that for every $0\le i < \log n$, $\mathbf{A}_{i+1}$ is a block diagonal matrix that consists of $2^i$ diagonal blocks of size $(n/2^i)\times (n/2^i)$. Letting $\mathbf{A}^{(j)}_{i+1}$ denote the $j$-th block, we write
\[\mathbf{A}_{i+1} = \diag(\mathbf{A}^{(1)}_{i+1}, \dots, \mathbf{A}^{(2^i)}_{i+1}).\]
Let $\mathbf{M}_i = \mathbf{A}_1\mathbf{A}_2\dots \mathbf{A}_i$. Then for every $1\le i < \log n$,
\begin{equation}
\mathbf{M}_{i+1} = \mathbf{M}_i\mathbf{A}_{i+1}.
\tag{\ref{eq:Mi+1}}
\end{equation}
In order to use results in \cite[Lemma 10]{ZhouLS15}, we need to partition each $\mathbf{A}_{i+1}^{(k)}$ into two kernel bases. As with the formation of $\mathbf{A}_1$ in \cref{sec:informal-invert-algo}, we write $\mathbf{A}_{i+1}^{(k)} = \begin{bmatrix}\mathbf{N}^{(k)}_{i+1, \mathsf{L}} & \mathbf{N}^{(k)}_{i+1, \mathsf{R}}\end{bmatrix}$. Here, each $\mathbf{N}^{(k)}_{i+1, \mathsf{L}}$ or $\mathbf{N}^{(k)}_{i+1, \mathsf{R}}$ is of dimension $(n/2^i) \times (n/2^{i+1})$. We divide $\mathbf{M}_i$ into submatrices (``column blocks'') of dimension $n\times (n/2^i)$ accordingly:
\[
\mathbf{M}_i = \begin{bmatrix}
\mathbf{M}_i^{(1)} & \mathbf{M}_i^{(2)} & \dots & \mathbf{M}_i^{(2^i)}
\end{bmatrix}.
\]
Then \cref{eq:Mi+1} is equivalent to
\begin{equation}\label{eq:Mi+1-equiv}
\mathbf{M}^{(2k-1)}_{i+1} = \mathbf{M}^{(k)}_i \cdot \mathbf{N}^{(k)}_{i+1, \mathsf{L}}, \text{ and }\mathbf{M}^{(2k)}_{i+1} = \mathbf{M}^{(k)}_i \cdot \mathbf{N}^{(k)}_{i+1, \mathsf{R}}.
\end{equation}
We use \cref{thm:unbalanced-mat-mul} to multiply these matrices. For each $1\le i < \log n$, in \cref{eq:Mi+1-equiv}, we need to perform $2^{i+1}$ matrix multiplications of the form $\mathbf{M} \cdot \mathbf{N}$. Here $\mathbf{M}=\mathbf{M}^{(k)}_i$, and $\mathbf{N}$ is either $\mathbf{N}^{(k)}_{i+1, \mathsf{L}}$ or $\mathbf{N}^{(k)}_{i+1, \mathsf{R}}$. The dimension of $\mathbf{M}$ is $n\times (n/2^i)$, and the dimension of $\mathbf{N}$ is $(n/2^i)\times (n/2^{i+1})$. Moreover, let $\vec{t} = \cdeg_{\vec{s}}\mathbf{M}_i^{(k)}$, then by \cite[Lemma 10]{ZhouLS15}:
\begin{enumerate}[(a)]
\item $\sum_{j=1}^{n/2^i}t_j \le \sum_{j=1}^n s_j \le dn$.
\item $\sum_{j=1}^{n/2^{i+1}}(\cdeg_{\vec{t}}\mathbf{N}^{(k)}_{i+1, \mathsf{L}})_j \le \sum_{j=1}^n s_j \le dn$; similarly, $\sum_{j=1}^{n/2^{i+1}}(\cdeg_{\vec{t}}\mathbf{N}^{(k)}_{i+1, \mathsf{R}})_j \le dn$.
\end{enumerate}
(Recall that $\vec{s}$ is the column degree of $\mathbf{F}$.)
Let
\[\xi_i = \max\mleft\{\frac{1}{n/2^i}\sum_{j=1}^{n/2^i}t_j, \frac{1}{n/2^{i+1}}\sum_{j=1}^{n/2^{i+1}}(\cdeg_{\vec{t}}\mathbf{N})_j\mright\} \le 2^{i+1}\cdot d.\]
Note that we are only interested in the polynomials modulo $x^r$, thus by definition, every element in $\vec{t}$ and $\cdeg_{\vec{t}}\mathbf{N}$ should be upper bounded by $O(r)$. Therefore if $2^{i+1}d \ge r$, we use the bound $\xi_i \le O(r)$ instead. By \cref{thm:unbalanced-mat-mul}, the time complexity for computing $\mathbf{M} \cdot \mathbf{N}$ is $\xi_i \cdot n^{\omega(1, 1-\tau, 1-\tau) + o(1)}$, where $\tau = \log_n (2^{i+1})$.
Let $\tau^\star = \frac{\log(r/d)}{\log n}$ be the threshold such that $2^{i+1}d\le r$ if and only if $\tau \le \tau^\star$. Suppose $2^{i+1}d \le r$, then the time complexity for computing all $2^{i+1}$ ($=n^\tau$) matrix products is
\begin{align*}
&~n^\tau\cdot \xi_i \cdot n^{\omega(1, 1-\tau, 1-\tau) + o(1)} \\
\le&~d\cdot n^{2\tau + \omega(1, 1-\tau, 1-\tau) + o(1)}\\
\le&~d\cdot n^{2\tau^\star + \omega(1, 1-\tau^\star, 1-\tau^\star) + o(1)} & \text{By \cref{lemma:matmul2}}\\
\le&~(r^2/d)\cdot \mathsf{MM}(n, nd/r, nd/r) \cdot n^{o(1)}.
\end{align*}
On the other hand, suppose $2^{i+1}d > r$, then the time complexity for computing all $n^\tau$ matrix products is
\begin{align*}
&~n^\tau\cdot r\cdot n^{\omega(1, 1-\tau, 1-\tau) + o(1)}\\
\le&~r\cdot n^{\tau^\star + \omega(1, 1-\tau^\star, 1-\tau^\star) + o(1)} & \text{By \cref{lemma:matmul2}}\\
\le&~(r^2/d)\cdot \mathsf{MM}(n, nd/r, nd/r) \cdot n^{o(1)}.
\end{align*}
Summing over every $1\le i < \log n$, we can see that the time complexity for inverting $\mathbf{F}$ is at most
\[\tilde{O}(dn^\omega)+(r^2/d)\cdot \mathsf{MM}(n, nd/r, nd/r) \cdot n^{o(1)}. \qedhere\]
\end{proof}
\subsection{Proof of \cref{thm:unbalanced-mat-mul}}\label{sec:unbalanced-mat-mul}
\ThmUnbalancedMatMul*
\begin{proof}
W.l.o.g.~we assume that $n^p, n^q, n^r$ are powers of $2$. For every $1\le c\le r\log n-1$, let $\mathbf{B}^c$ denote the set of columns of $\mathbf{B}$ whose $\vec{s}$-column degrees are in the range $(2^c\xi, 2^{c+1}\xi]$; let $\mathbf{B}^0$ denote the remaining columns of $\mathbf{B}$, i.e.~those with $\vec{s}$-column degrees at most $2\xi$. Then $\mathbf{B}^0, \mathbf{B}^1, \dots, \mathbf{B}^{r\log n-1}$ form a partition of the columns of $\mathbf{B}$. By the definition of $\xi$, for every $0\le c\le r\log n-1$, there are at most $n^r/2^c$ columns in $\mathbf{B}^c$. To compute $\mathbf{A}\cdot \mathbf{B}$, it suffices to compute $\mathbf{A}\cdot \mathbf{B}^c$ for each $c$.
Now fix an integer $c$; we need to compute $\mathbf{A} \cdot \mathbf{B}^c$. Using the same method as above, we can also partition the columns of $\mathbf{A}$ into $q\log n$ groups. More precisely, for every $1\le c' \le q\log n-1$, let $\mathbf{A}^{c'}$ be the set of columns of $\mathbf{A}$ whose column degrees are in the range $(2^{c'}\xi, 2^{c'+1}\xi]$; let $\mathbf{A}^0$ be the remaining columns of $\mathbf{A}$, i.e.~those with column degrees at most $2\xi$.
\[\mathbf{A} = \begin{bmatrix}\mathbf{A}^0 & \mathbf{A}^1 & \ldots & \mathbf{A}^{q\log n - 1}\end{bmatrix},\]
as otherwise we can rearrange the columns of $\mathbf{A}$ (along with the rows of $\mathbf{B}$ and the entries in $\vec{s}$). We also note that for every $0 \le c' \le q\log n - 1$, there are at most $n^q / 2^{c'}$ columns in $\mathbf{A}^{c'}$.
The partition of columns of $\mathbf{A}$ induces a partition of rows of $\mathbf{B}^c$. In particular, we define $\mathbf{B}^{c, c'}$ as the rows of $\mathbf{B}^c$ corresponding to columns of $\mathbf{A}^{c'}$, so
\[\mathbf{B}^c = \begin{bmatrix}\mathbf{B}^{c, 0}\\ \mathbf{B}^{c, 1}\\ \vdots\\ \mathbf{B}^{c, q\log n-1}\end{bmatrix}.\]
We can see that for every $c' > c$, $\mathbf{B}^{c, c'}$ is the zero matrix. In fact, suppose the entry in the $j$-th row and $k$-th column of $\mathbf{B}^c$ is nonzero, and this entry belongs to $\mathbf{B}^{c, c'}$ for some $c' > c$. Denote this column as $\mathbf{b}_k$, then $\cdeg_{\vec{s}}\mathbf{b}_k \ge s_j$. As the $j$-th column of $\mathbf{A}$ belongs to $\mathbf{A}^{c'}$, we have $s_j > 2^{c'}\xi \ge 2^{c+1}\xi$. However, by definition of $\mathbf{B}^c$, we also have $\cdeg_{\vec{s}}\mathbf{b}_k \le 2^{c+1}\xi$, a contradiction. Therefore
\[\mathbf{A}\cdot \mathbf{B}^c = \sum_{c' = 0}^c\mathbf{A}^{c'} \cdot \mathbf{B}^{c, c'}.\]
Again, fix $c' \in [0, c]$; we want to compute $\mathbf{A}^{c'} \cdot \mathbf{B}^{c, c'}$. Recall that the dimension of $\mathbf{A}^{c'}$ is at most $n^p\times (n^q/2^{c'})$, and each entry in $\mathbf{A}^{c'}$ is a polynomial of degree at most $2^{c'+1}\xi$; the dimension of $\mathbf{B}^{c, c'}$ is at most $(n^q/2^{c'}) \times (n^r/2^c)$, and each entry in $\mathbf{B}^{c, c'}$ is a polynomial of degree at most $2^{c+1}\xi$. Let $\Delta = 2^{c'+1}\xi$; we ``decompose'' $\mathbf{B}^{c, c'}$ into $\ell = 2^{c-c'}$ matrices $\{\mathbf{B}^{c, c', i}\}_{i=0}^{\ell-1}$, such that:
\[\mathbf{B}^{c, c'} = \mathbf{B}^{c, c', 0} + \mathbf{B}^{c, c', 1}\cdot x^\Delta + \mathbf{B}^{c, c', 2}\cdot x^{2\Delta} + \dots + \mathbf{B}^{c, c', \ell-1}\cdot x^{(\ell-1)\Delta},\]
and each entry in each matrix $\mathbf{B}^{c, c', i}$ has degree at most $\Delta$.
We concatenate these degree-$\Delta$ matrices together, to form a matrix
\[\widehat{\mathbf{B}^{c, c'}} = \begin{bmatrix}\mathbf{B}^{c, c', 0} & \mathbf{B}^{c, c', 1} & \ldots & \mathbf{B}^{c, c', \ell-1}\end{bmatrix}.\]
This matrix has at most $(n^r/2^c) \cdot \ell \le (n^r/2^{c'})$ columns.
Then we compute $\widehat{\mathbf{C}^{c, c'}} = \mathbf{A}^{c'}\cdot \widehat{\mathbf{B}^{c, c'}}$. We can see that
\[\widehat{\mathbf{C}^{c, c'}} = \begin{bmatrix}\mathbf{A}^{c'}\mathbf{B}^{c, c', 0} & \mathbf{A}^{c'}\mathbf{B}^{c, c', 1} & \ldots & \mathbf{A}^{c'}\mathbf{B}^{c, c', \ell-1}\end{bmatrix}.\]
And we can directly compute $\mathbf{A}^{c'}\cdot \mathbf{B}^{c, c'}$ from $\widehat{\mathbf{C}^{c, c'}}$, as
\[\mathbf{A}^{c'} \cdot \mathbf{B}^{c, c'} = \sum_{i=0}^{\ell-1}\mathbf{A}^{c'}\mathbf{B}^{c, c', i}\cdot x^{i\cdot \Delta}.\]
This completes the description of the algorithm.
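As a toy sanity check, the degree-splitting construction above can be reproduced numerically; the following Python sketch (entirely ours — dimensions and helper names are illustrative) decomposes $\mathbf{B}$ into degree-$\Delta$ blocks, multiplies the concatenation by $\mathbf{A}$, and reassembles the product:

```python
import numpy as np

def polmatmul(A, B):
    """Multiply polynomial matrices stored as coefficient arrays
    of shape (rows, cols, degree + 1)."""
    m, s, _ = A.shape
    _, n, _ = B.shape
    C = np.zeros((m, n, A.shape[2] + B.shape[2] - 1))
    for i in range(m):
        for j in range(n):
            for t in range(s):
                C[i, j] += np.convolve(A[i, t], B[t, j])
    return C

rng = np.random.default_rng(0)
m, s, n = 3, 4, 2
delta, ell = 3, 4                       # entries of B have degree < ell * delta
A = rng.integers(-5, 5, (m, s, delta))  # entries of A have degree < delta
B = rng.integers(-5, 5, (s, n, ell * delta))

# decompose B = sum_i B_i x^{i * delta} and concatenate the low-degree
# blocks column-wise, mimicking the matrix \widehat{B} above
blocks = [B[:, :, i * delta:(i + 1) * delta] for i in range(ell)]
Bhat = np.concatenate(blocks, axis=1)   # s x (n * ell), entry degree < delta
Chat = polmatmul(A, Bhat)

# recover A * B by shifting the i-th block product by x^{i * delta}
C = np.zeros((m, n, Chat.shape[2] + (ell - 1) * delta))
for i in range(ell):
    C[:, :, i * delta:i * delta + Chat.shape[2]] += Chat[:, n * i:n * (i + 1)]
assert np.allclose(C, polmatmul(A, B))
```

The point of the rearrangement is that all the work happens in a single product of low-degree matrices, which is exactly what the complexity bound below charges for.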
We analyze the time complexity. Fix constants $0\le c' \le c$; we need to multiply $\mathbf{A}^{c'}$ and $\widehat{\mathbf{B}^{c, c'}}$. Let $\tau = \log_n (2^{c'})$. In both of these matrices, the degree of every entry is at most $\Delta = O(2^{c'}\xi) = O(n^\tau\xi)$. The dimensions of $\mathbf{A}^{c'}$ and $\widehat{\mathbf{B}^{c, c'}}$ are upper bounded by $n^p\times (n^{q-\tau})$ and $(n^{q-\tau})\times (n^{r-\tau})$ respectively. Therefore the time complexity for this step is
\[\tilde{O}\mleft(n^\tau\xi \cdot n^{\omega(p, q-\tau, r-\tau)}\mright),\]
which is at most $\xi\cdot n^{\omega(p, q, r) + o(1)}$ by \cref{lemma:matmul1}. As we only need to consider $O(\log^2 n)$ pairs of $(c, c')$, it follows that the total time complexity of our algorithm is $\xi\cdot n^{\omega(p, q, r) + o(1)}$.
\end{proof}
\section*{Acknowledgment}
We thank Ran Duan and Tianyi Zhang for their helpful discussions during the initial stage of this research. We are grateful to anonymous reviewers for their helpful comments. We would also like to thank an anonymous reviewer for suggesting the title of \cref{sec:breaking-tie}, and another anonymous reviewer for pointing out a subtle issue regarding the invertibility of polynomial matrices (and fixing the issue).
\section{Preliminaries}\label{sec:preliminaries}
In this paper, we say an event happens \emph{with high probability} (w.h.p.) if it happens with probability at least $1-1/n^c$, for a constant $c$ that can be made arbitrarily large. Our DSOs (or $r$-truncated DSOs) will have a randomized preprocessing algorithm and a deterministic query algorithm. We say a DSO is \emph{correct with high probability} if w.h.p.~over its (randomized) preprocessing algorithm, it answers every possible query $(u, v, f)$ correctly.
\paragraph{Notation.} We use the following notation in \cite{DuanP09a, Ren20}.\begin{itemize}
\item Let $p$ be a path, we use $|p|$ to denote the number of edges in $p$, and use $\|p\|$ to denote the length of $p$ (i.e.~total weight of edges in $p$).
\item Let $u, v$ be two vertices, we define $\|uv\|$ as the length of the shortest path from $u$ to $v$. Furthermore, let $f$ be a failure (which is either an edge or a vertex), we define $\|uv\diamond f\|$ as the length of the shortest path from $u$ to $v$ that does not go through $f$.
\item Let $u, v$ be two vertices, we define $|uv|$ as the number of edges in the shortest path from $u$ to $v$. In the case that there are many shortest paths from $u$ to $v$, it turns out that the following definition will be convenient in \cref{sec:breaking-tie}: We define $|uv|$ as the \emph{largest} number of edges in any shortest path from $u$ to $v$. %
\end{itemize}
\paragraph{Fast matrix multiplication.} Let $\omega$ be the exponent of matrix multiplication; the current best upper bound is $\omega < 2.3728596$ \cite{AlmanW21}. For positive integers $n_1, n_2, n_3$, let $\mathsf{MM}(n_1, n_2, n_3)$ denote the minimum number of arithmetic operations needed to multiply an $n_1\times n_2$ matrix and an $n_2\times n_3$ matrix. We define $\omega(a, b, c)$ to be the exponent of multiplying an $n^a\times n^b$ matrix and an $n^b\times n^c$ matrix, i.e.
\[\omega(a, b, c) = \inf\{w : \mathsf{MM}(n^a, n^b, n^c) = O(n^w)\}.\]
It is a classical result that $\omega(1, 1, \lambda) = \omega(1, \lambda, 1) = \omega(\lambda, 1, 1)$ for any real number $\lambda > 0$ \cite{LottiR83}; we denote $\omega(\lambda) = \omega(1, 1, \lambda)$.
We will need the following lemmas about the exponent of rectangular matrix multiplication. For completeness, we include proofs for these lemmas in \cref{sec:apd-FMM}.
\begin{restatable}{lemma}{MatMulI}\label{lemma:matmul1}
Let $a, b, c, r$ be positive real numbers, then $r+\omega(a, b, c) \le \omega(a, b+r, c+r)$.
\end{restatable}
\begin{restatable}{lemma}{MatMulII}\label{lemma:matmul2}
Consider the function $f(\tau) = \omega(1, 1-\tau, 1-\tau)$, where $\tau \in [0, 1]$. Then $\tau+f(\tau)$ is monotonically non-increasing in $\tau$, and $2\tau + f(\tau)$ is monotonically non-decreasing in $\tau$.
\end{restatable}
\paragraph{Polynomial operations.} Let $p, q\in\mathbb{F}[x]$ be two polynomials of degree $d$. It is easy to compute $p+q$ or $p-q$ in $O(d)$ field operations. We can also compute $p\cdot q$ in $\tilde{O}(d)$ field operations using fast Fourier transform. (Here, $\tilde{O}$ hides $\operatorname{polylog}(d)$ factors.) When $p(0)\neq 0$ (so that $p$ is invertible as a power series), it is also possible to compute $p^{-1} \bmod x^d$ in $\tilde{O}(d)$ field operations \cite[Section 8.3]{AhoHU74}.
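As an illustration of the last two facts, here is a small Python sketch (over the reals for simplicity; the function names are ours) of FFT-based multiplication and of Newton iteration for $p^{-1}\bmod x^d$:

```python
import numpy as np

def polmul(p, q):
    """Product of two coefficient vectors via FFT, in O(d log d) operations."""
    n = len(p) + len(q) - 1
    N = 1 << (n - 1).bit_length()
    return np.fft.irfft(np.fft.rfft(p, N) * np.fft.rfft(q, N), N)[:n]

def polinv(p, d):
    """Inverse of p modulo x^d via Newton iteration q <- q(2 - pq),
    doubling the precision each round; requires p(0) != 0."""
    q = np.array([1.0 / p[0]])
    prec = 1
    while prec < d:
        prec = min(2 * prec, d)
        pq = polmul(p[:prec], q)[:prec]
        q = np.pad(2 * q, (0, prec - len(q))) - polmul(pq, q)[:prec]
    return q

p = np.array([2.0, -1.0, 3.0, 0.5, -2.0])
inv = polinv(p, 8)
# p * p^{-1} = 1 modulo x^8
assert np.allclose(polmul(p, inv)[:8], [1, 0, 0, 0, 0, 0, 0, 0], atol=1e-9)
```

Each Newton round exactly doubles the number of correct coefficients, so the total cost is dominated by the last multiplication, giving $\tilde{O}(d)$ overall.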
\subsubsection*{Introduction}
In the paper \cite{Du}, Dunkl introduced the following
difference-differential operators acting on functions
on a Euclidean space $V$, related to arbitrary
finite groups $G$ generated by orthogonal reflections in
$V$:
\begin{equation}\label{e1}
\nabla_\xi=\partial_\xi+\sum_{\alpha\in R_+}
k_\alpha(\alpha,\xi){1\over (\alpha,x)}\hat s_\alpha.
\end{equation}
Here $\partial_\xi$ denotes the partial
derivative in direction $\xi\in V$,
$R$ is the root system of the group $G$, i.e., the set
of unit normals to the reflection hyperplanes, $R_+$ is
its positive part with respect to some generic linear form on $V$,
$k_\alpha=k(\alpha)$ is a $G$-invariant function on $R$,
$s_\alpha$ is the reflection corresponding to the root $\alpha\in R$,
and $\hat s_\alpha$ is the operator on the space of functions on
$V$:
\begin{displaymath}
\hat s_\alpha f(x)=f(s_\alpha(x)).
\end{displaymath}
To be precise, Dunkl used slightly different
operators, which are conjugated to \Ref{e1}
by the operator of multiplication by $\prod(\alpha,x)^{k_\alpha}$.
The main property of the Dunkl operators
is given by the following
\begin{thm} (Dunkl)
The operators \Ref{e1} commute with each other:
\begin{equation}\label{e2}
[\nabla_\xi,\nabla_\eta]=0,
\end{equation}
for all $\xi$, $\eta\in V$.
\end{thm}
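This commutativity can be checked symbolically in small cases. The Python/SymPy sketch below (entirely ours: the coupling value, test polynomial, and function names are illustrative) realizes the operators \Ref{e1} for the root system $A_2$, with $S_3$ acting on ${\bf R}^3$ by permuting coordinates, and verifies that the commutator annihilates a test polynomial:

```python
import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')
k = sp.Rational(2, 5)     # arbitrary coupling; G-invariance forces equal k's on A_2

# positive roots of A_2: alpha = e_i - e_j for i < j; s_alpha swaps x_i and x_j
pairs = [(0, 1), (0, 2), (1, 2)]

def reflect(f, i, j):
    """The operator \\hat s_alpha: evaluate f at the reflected point,
    i.e. swap x_i and x_j."""
    t = sp.Dummy('t')
    return f.subs(X[i], t).subs(X[j], X[i]).subs(t, X[j])

def nabla(f, xi):
    """The operator (1): directional derivative plus reflection terms.
    The ratio (alpha, xi)/(alpha, x) is independent of how alpha is normalized."""
    res = sum(xi[m] * sp.diff(f, X[m]) for m in range(3))
    for i, j in pairs:
        res += k * (xi[i] - xi[j]) / (X[i] - X[j]) * reflect(f, i, j)
    return res

xi, eta = (1, 0, 0), (0, 1, 0)
f = x1**2 * x2 + x3**3
comm = nabla(nabla(f, eta), xi) - nabla(nabla(f, xi), eta)
assert sp.simplify(comm) == 0    # [nabla_xi, nabla_eta] f = 0
```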
The goal of this work is to describe certain generalizations
of the Dunkl operators \Ref{e1}, preserving the property
\Ref{e2}. Some of these results were announced in \cite{Ve}.
In Section 1 we consider generalizations of the form
\begin{equation}\label{e3}
\nabla_\xi=\partial_\xi+\sum_{\alpha\in A_+}
k_\alpha(\alpha,\xi){1\over (\alpha,x)}\hat s_\alpha,
\end{equation}
where $A_+$ is the set of unit normals to some set $S$
of hyperplanes in $V$ passing through the origin,
$A_+$ is its positive part, and $k_\alpha=k(\alpha)$ is
some function on $A_+$. We show that the commutativity
of the operators $\nabla_\xi$ implies that $S$ is the set
of reflection hyperplanes of some Coxeter group $G$,
$A_+=R_+$, and $k$ is $G$-invariant.
In Section 2, we consider operators of the form
\begin{equation}\label{e4}
\nabla_\xi=\partial_\xi+\sum_{\alpha\in R_+}
(\alpha,\xi)f_\alpha((\alpha,x))\hat s_\alpha,
\end{equation}
where $f_\alpha(z)$ are functions of one variable,
not identically $0$.
The commutation relations \Ref{e2} are equivalent
to a system of functional equations for the functions
$f_\alpha$, $\alpha\in R_+$.
In the case when $G$ is the Weyl group $W$ of a simple Lie
algebra, we show that, with the exception of
$A_1$, $B_2$, the only $W$-invariant solutions,
i.e., such that
\begin{displaymath}
\hat s_\alpha\nabla_\xi=\nabla_{s_\alpha(\xi)}\hat s_\alpha,
\end{displaymath}
for all $\alpha\in R_+$, are Dunkl's solutions, in accordance
with \cite{Ch}.
While the $A_1$ case is trivial, the $B_2$ case
leads to the classical theory of {\em Landen's and Jacobi's
transformations} of elliptic functions. We give the general
$W$-invariant solution in this case. This result
has an interesting topological application in the theory
of elliptic genera (see \cite{BuVe}).
In Section 3, we give a solution of
the functional equations in terms of elliptic functions,
for arbitrary reduced root systems. This solution gives
families of commuting differential-difference operators
that we call elliptic Dunkl operators.
We then show in Section 4, using techniques from \cite{Bu}, \cite{Bu2},
that these are essentially all solutions in the
$A_{n-1}$ case.
In Section 5, quantum elliptic Dunkl operators are introduced.
These are pairwise commutative families
of difference operators. They depend on a parameter $\mu$, and
are such that elliptic Dunkl operators appear in the
first order term (semiclassical approximation) of their
expansion in powers of $\mu$. These operators
are related to the transfer matrices associated with
the $R$-matrix of \cite{ShiUe}, \cite{FePa}.
We conclude by discussing the possible applications
of our results to the theory of integrable $n$-body
systems.
\subsubsection*{1. Operators of Dunkl type and Coxeter groups}
Let $S$ be a finite set of hyperplanes in a Euclidean
space $V$ passing through the origin, $A$ the set of
unit normals to the hyperplanes in $S$
(two for each hyperplane), $A_+$ the positive half
of $A$ with respect to some linear form on $V$, and
$k_\alpha$ some non-zero coefficients.
Let $\nabla_\xi$ be the operator
\begin{equation}\label{e5}
\nabla_\xi=\partial_\xi+\sum_{\alpha\in A_+}
k_\alpha(\alpha,\xi){1\over (\alpha,x)}\hat s_\alpha,
\qquad k_\alpha\neq 0.
\end{equation}
\begin{thm}
The operators $\nabla_\xi$ and $\nabla_\eta$ commute
for arbitrary $\xi$ and $\eta\in V$ if and only if
$A_+$ coincides with $R_+$ for some Coxeter group
$G$ and $k(\alpha)=k_\alpha$ is a $G$-invariant
function.
\end{thm}
\begin{proof}
The commutator $[\nabla_\xi,\nabla_\eta]$ can be
rewritten in the form I+II+III (see \cite{He}),
where
\begin{eqnarray*}
{\rm I}&=&[\partial_\xi,\partial_\eta]=0,\\
{\rm II}&=&\partial_\xi(\sum_\alpha k_\alpha
\frac{(\alpha,\eta)}{(\alpha,x)}\hat s_\alpha)
-
\partial_\eta(\sum_\alpha k_\alpha
\frac{(\alpha,\xi)}{(\alpha,x)}\hat s_\alpha)=0,
\,\\
{\rm III}&=&
\sum_{\alpha,\beta\in A_+}
k_\alpha k_\beta
\langle \alpha,\beta\rangle_{\xi,\eta}
\frac1{(\alpha,x)(s_\alpha(\beta),x)}\hat s_\alpha\hat s_\beta,
\end{eqnarray*}
where $\langle \alpha,\beta\rangle_{\xi,\eta}
=(\alpha,\xi)(\beta,\eta)-(\alpha,\eta)(\beta,\xi)$.
One sees that, to have a cancellation of the terms in the sum
corresponding to a fixed rotation $s_\alpha s_\beta$, it is necessary
that, together with $\alpha$ and $\beta$, the vector
$\gamma=s_\alpha(\beta)$ also belong to $A$. This means
that $A$ is the root system $R$ for some Coxeter group.
To prove that $k$ is $G$-invariant, let us recall that if
it is so, then $[\nabla_\xi,\nabla_\eta]=0$ according
to Dunkl's theorem.
Suppose that $k$ is not invariant, i.e., there exist two
roots
$\alpha$, $\beta$ such that $k_\beta\neq k_\gamma$,
$\gamma=s_\alpha(\beta)$. Subtracting
from III the Dunkl identity with appropriate
coefficients gives a relation of the form
\begin{displaymath}
\sum_{\alpha,\beta\in A_+}
c_{\alpha,\beta}
\langle \alpha,\beta\rangle_{\xi,\eta}
\frac1{(\alpha,x)(s_\alpha(\beta),x)}\hat s_\alpha\hat s_\beta=0,
\end{displaymath}
where $c_{\alpha,\beta}=0$ while $c_{\gamma,\alpha}\neq 0$.
But this is impossible, since the pole at $(\gamma,x)=0$
cannot be canceled.
\end{proof}
Thus we have shown that Coxeter groups arise naturally
from the commutativity condition for a natural generalization
of Dunkl operators.
\subsubsection*{2. Functional equations for the
coefficients of generalized
Dunkl operators}\label{s2}
Let us now consider, for a given Coxeter group $G$, the
following generalizations of Dunkl's operators:
\begin{equation}\label{e6}
\nabla_\xi=\partial_\xi+\sum_{\alpha\in R_+}
(\alpha,\xi)f_\alpha((\alpha,x))\hat s_\alpha.
\end{equation}
It is convenient to extend the definition of
$f_\alpha$ to all $\alpha\in R$, by setting
\begin{equation}\label{fa}
f_{-\alpha}(z)=-f_\alpha(-z).
\end{equation}
With this definition, $\nabla_\xi$ is independent
of the choice of the positive part $R_+$ of $R$.
In fact, we can replace $R_+$ by any subset of
$R$ consisting of one normal vector for each hyperplane,
without changing $\nabla_\xi$.
We may choose as before $R$ to consist of unit vectors,
but, since Weyl groups will be considered later,
it is convenient to allow normal vectors to have arbitrary
length, still preserving the condition that we have two
normal vectors $\alpha$ and $-\alpha$ for each hyperplane
in $S$. At this point,
this is no generalization, since the operators
do not change if we replace $\alpha$ by a multiple
$\alpha'=c\alpha$ and replace the corresponding function
$f_\alpha$ by the function $f_{\alpha'}(z)=c^{-1}f_\alpha(c^{-1}z)$.
A calculation analogous to the previous one
leads to the following formula
\begin{equation}\label{e7}
[\nabla_\xi,\nabla_\eta]=
\sum_{\alpha,\beta\in R_+}
\langle\alpha,\beta\rangle_{\xi,\eta}
f_\alpha((\alpha,x))
f_\beta((s_\alpha(\beta),x))
\hat s_\alpha\hat s_\beta,
\end{equation}
where, as above, $\langle\alpha,\beta\rangle_{\xi,\eta}
=(\alpha,\xi)(\beta,\eta)-(\alpha,\eta)(\beta,\xi)$.
\begin{thm}
The commutativity condition for the operators \Ref{e6}
\begin{displaymath}
[\nabla_\xi,\nabla_\eta]=0
\end{displaymath}
is equivalent to the system of functional equations
\begin{equation}\label{e8}
\sum_{\alpha,\beta\in R_+:s_\alpha s_\beta=r}
\langle\alpha,\beta\rangle
f_\alpha((\alpha,x))
f_\beta((s_\alpha(\beta),x))
=0,
\end{equation}
for any given rotation $r$.
\end{thm}
Here $\langle\alpha,\beta\rangle$ denotes the oriented
area of the parallelogram with sides $\alpha$, $\beta$,
with respect to some orientation of the plane spanned
by these two vectors, which is perpendicular
to the rotation axis of $r$. Obviously, the equation \Ref{e8}
is independent of the choice of orientation.
\begin{example}
For the root system of type $A_2$ (see Fig.~\ref{a2}),
we have the following
functional equation for the three functions associated with
the three roots labeled in Fig.~\ref{a2}. The functions
associated to the other roots are then determined by \Ref{fa}.
\begin{equation}\label{e9}
f(x-y)g(x-z)+g(y-z)h(y-x)+h(z-x)f(z-y)=0.
\end{equation}
\end{example}
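For the Dunkl solution $f=g=h=C/z$, equation \Ref{e9} reduces to the classical partial-fraction identity $\frac1{(x-y)(x-z)}+\frac1{(y-z)(y-x)}+\frac1{(z-x)(z-y)}=0$; a quick exact check in Python (our own illustration):

```python
from fractions import Fraction

def f(z):                     # the Dunkl solution f(z) = C/z, here with C = 1
    return Fraction(1) / z

x, y, z = Fraction(5), Fraction(2), Fraction(-3)
lhs = (f(x - y) * f(x - z)    # f(x-y) g(x-z)
       + f(y - z) * f(y - x)  # g(y-z) h(y-x)
       + f(z - x) * f(z - y)) # h(z-x) f(z-y)
assert lhs == 0
```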
\setlength{\unitlength}{40pt}
\begin{figure}[t]
\begin{picture}(8,3)(-4,-1.3)
\Root101\put(1.2,-0.05){\it f}
\Root35{0.5}
\Root{-3}5{0.5}\put(-0.7,1){\it g}
\Root{-1}01
\Root{-3}{-5}{0.5}\put(-0.7,-1){\it h}
\Root{3}{-5}{0.5}
\end{picture}
\caption{The root system of type $A_2$}\label{a2}
\end{figure}
Let us now discuss $G$-invariant solutions of
the functional equations \Ref{e9}: $f_{g(\alpha)}=
f_\alpha$ for all $g\in G$, or, equivalently,
\begin{displaymath}
\hat g\nabla_\xi=\nabla_{g(\xi)}\hat g, {\rm\ for\ all\ } g\in G.
\end{displaymath}
We assume
that none of the functions $f_\alpha$ vanishes
identically.
We restrict ourselves
to the case where $G$ is the Weyl group $W$ of a semisimple
Lie algebra. Obviously it is sufficient to consider
the case of a simple Lie algebra. Excluding the
one dimensional case $A_1$ where commutativity does
not give any restriction on $f_\alpha$, the first
case we consider is the case $A_{n-1}$, where $G=W$ is
the symmetric group $S_n$.
\begin{proposition}
For root systems of type $A_{n-1}$, $n\geq 3$, the only
$S_n$-invariant solution of \Ref{e8} meromorphic
in a neighborhood of the origin is
\begin{displaymath}
f_\alpha(z)=\frac Cz,
\end{displaymath}
which corresponds to the usual Dunkl operators.
\end{proposition}
\begin{proof}
Consider the functional equation \Ref{e8}. For given
$r$, it involves only roots in the 2-plane orthogonal
to the rotation axis of $r$. This plane is spanned by
any two roots $\alpha$, $\beta$ with $s_\alpha s_\beta=r$.
In the $A_{n-1}$ case, the roots in any such plane build
a root system of type $A_1\times A_1$ or $A_2$.
In the former case roots are orthogonal and
\Ref{e8} is identically satisfied for all $f$.
It is therefore sufficient to consider the case $n-1=2$.
Let
us rewrite \Ref{e9} in the form
\begin{equation}\label{e10}
f(u)g(u+v)+g(v)h(-u)+h(-u-v)f(-v)=0.
\end{equation}
$S_3$-invariant solutions correspond to $f=g=h$, where
$f$ is an odd function, since $f_\alpha(z)=f_{-\alpha}(z)
=-f_\alpha(-z)$.
This leads to the following equation for
$f$:
\begin{displaymath}
f(u)f(u+v)-f(v)f(u)+f(u+v)f(v)=0.
\end{displaymath}
If $f$ vanishes at a point $u$, then $f(v)f(u+v)=0$ for
all $v$, and $f$ vanishes identically.
If $f$ does not vanish anywhere,
we get,
after substitution $\phi=1/f$,
\begin{displaymath}
\phi(u+v)=\phi(u)+\phi(v),
\end{displaymath}
which has the only solution $\phi(z)=cz$.
\end{proof}
Let us consider some more examples of low rank. It will
be shown below that these examples essentially cover the
whole theory.
\begin{example}
In the case of $G_2$ (see Fig.~\ref{g2}), one has two
$A_2$ systems. Hence the invariant solutions have
the form
\begin{displaymath}
f(z)=\frac Az,\qquad g(z)=\frac Bz,
\end{displaymath}
for arbitrary $A$ and $B$.
\end{example}
\begin{figure}[h]
\begin{picture}(8,4.5)(-4,-2.3)
\Root101 \put(1.2,-0.05){\it f}
\Root35{0.5}\put(1.75, 1){\it g}
\Root{-3}5{0.5}\put(0.7,1){\it f}
\Root{-1}01
\Root{-3}{-5}{0.5}\put(-0.6,1){\it f}
\Root{3}{-5}{0.5}\put(-1.7, 1){\it g}
\Root01{1.7}\put(-0.05,1.9){\it g}
\Root53{1.5}
\Root5{-3}{1.5}
\Root0{-1}{1.7}
\Root{-5}{-3}{1.5}
\Root{-5}{3}{1.5}
\end{picture}
\caption{The root system of type $G_2$}\label{g2}
\end{figure}
\begin{example}
In the $B_2$ case (see Fig.~\ref{b2}) one has the following
functional equation for the symmetric solution:
\begin{equation}\label{e11}
f(x)(g(x+y)+g(x-y))
+f(y)(g(x+y)-g(x-y))=0,
\end{equation}
where $f$ and $g$ are odd functions. In this
case we have more complicated solutions, such
as
\begin{displaymath}
f(z)=\cot(z),
\qquad g(z)=
\frac1{\sin(z)}.
\end{displaymath}
The general solution will be given below.
\end{example}
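That this pair solves \Ref{e11} is easy to confirm numerically; a small Python check of ours, with sample points chosen away from the poles:

```python
import math

def f(z): return math.cos(z) / math.sin(z)   # f = cot
def g(z): return 1.0 / math.sin(z)           # g = 1/sin

for x, y in [(0.3, 0.8), (1.1, 0.4), (0.7, 1.25), (0.95, 0.2)]:
    lhs = f(x) * (g(x + y) + g(x - y)) + f(y) * (g(x + y) - g(x - y))
    assert abs(lhs) < 1e-12
```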
\begin{figure}[t]
\begin{picture}(8,2.5)(-4,-1.3)
\Root101\put(1.2,-0.05){\it f}
\Root111\put(1.2,1.2){\it g}
\Root011\put(0,1.2){\it f}
\Root{-1}11\put(-1.2,1.2){\it g}
\Root{-1}01
\Root{-1}{-1}1
\Root0{-1}1
\Root1{-1}1
\end{picture}
\caption{The root system of type $B_2$}\label{b2}
\end{figure}
\begin{example}
In the $B_3$ case, $R=\{\pm e_i,\pm e_i\pm e_j\}$,
where $\{e_1,\, e_2,\, e_3\}$ is an orthonormal basis of $V$.
Let us find the most general invariant solution. The
roots $\alpha=e_1+e_2$, and $\beta=e_2+e_3$
form a subsystem isomorphic to $A_2$. Thus
$f_\alpha(z)=f_\beta(z)=C/z$. Now consider the
$B_2$ system generated by $\gamma=e_1$ and $\alpha$.
So we have to solve the functional equation \Ref{e11}
with $g=C/z$:
\begin{displaymath}
f_\gamma(x)(\frac C{x+y}+\frac C{x-y})
+
f_\gamma(y)(\frac C{x+y}-\frac C{x-y})=0,
\end{displaymath}
or, equivalently (if $C\neq 0$),
\begin{displaymath}
xf_\gamma(x)-yf_\gamma(y)=0,
\end{displaymath}
which implies $f_\gamma(z)=D/z$. It follows
that the most general invariant
solution is Dunkl's solution.
\end{example}
\begin{thm}\label{T1}
For all root systems of simple Lie algebras
except $A_1$, $B_2$, all $W$-invariant solutions
of the functional equation \Ref{e8} have the form
\begin{displaymath} f_\alpha(z)={k_\alpha\over z},
\end{displaymath}
where $\alpha\mapsto k_\alpha$ is a $W$-invariant function on $R$.
\end{thm}
\begin{proof}
For all root systems except $A_1$, $B_2$, there exists
a subsystem isomorphic to $A_2$ (see \cite{Bou}).
All other two-dimensional subsystems are
isomorphic either to $A_2$ or $B_2$ (the
exceptional case of $G_2$ was considered above).
Continuing as in the $B_3$ case, we complete the proof.
\end{proof}
In the rest of this section we consider the invariant
$B_2$ case. In other words, we want to find the general
solution of the equation \Ref{e11}
\begin{equation}\label{ff}
f(x)(g(x+y)+g(x-y))
+f(y)(g(x+y)-g(x-y))=0,
\end{equation}
which are meromorphic in the vicinity of the
origin.
There are the following obvious solutions:
$f(z)\equiv 0$, $g$ an arbitrary function,
and $g(z)\equiv 0$, $f$ an arbitrary function.
We call these solutions {\em trivial}.
\begin{thm}\label{Tb2}
The general non-trivial solution of the functional
equation \Ref{ff} has the form
\begin{displaymath}
g(z)=\frac A{{\rm sn}(\alpha z,k)}\quad,\qquad
f(z)= B(\log g(z))',
\end{displaymath}
or, more explicitly
\begin{equation}\label{vv4}
g(z)=\frac A{{\rm sn}(\alpha z,k)}\quad,\qquad
f(z)=\frac B{{\rm sn}(\epsilon\alpha z,\tilde k)}\quad,
\end{equation}
where $\tilde k=(1-k)/(1+k)$, $\epsilon=-i(1+k)$, and
$A$, $B$ and $\alpha$ are arbitrary constants.
\end{thm}
\noindent Here ${\rm sn}(z, k)$ is the classical
Jacobi elliptic function (see, e.g., \cite{WW}).
In the degenerate cases we have the following
solutions:
\begin{eqnarray*}
f(z)&=&\frac A{\sin\alpha z}\quad,\qquad g(z)= B\cot\frac{\alpha z}2\quad,\\
f(z)&=&A\cot{\alpha z}\quad,\qquad g(z)=\frac B{\sin\alpha z}\quad,\\
f(z)&=&\frac Az\quad,\qquad g(z)= \frac Bz\quad,
\end{eqnarray*}
with arbitrary constants $A$, $B$, $\alpha$. In particular,
we see that all non-trivial solutions are {\em odd} functions,
and thus lead to invariant Dunkl operators for root
systems of type $B_2$.
We proceed to prove the Theorem.
\begin{lemma}
If $f(z)$, $g(z)$ satisfy the equation
\Ref{ff}, then the same is true
for $\tilde f(z)$, $\tilde g(z)$, where
\begin{equation}\label{v5}
{\rm (i)}\qquad \tilde f(z)=\lambda f(\alpha z),
\qquad \tilde g(z)=\mu g(\alpha z),
\end{equation}
for arbitrary
constants $\lambda$, $\mu$, $\alpha$, or
\begin{equation}\label{v6}
{\rm (ii)} \qquad \tilde f(z)=g(z), \qquad
\tilde g(z)=f(z/2).
\end{equation}
\end{lemma}
\begin{proof}
The first symmetry is evident. To prove
the second one, it is sufficient to
change variables
\begin{displaymath}
u=x+y,\qquad v=x-y.
\end{displaymath}
Then equation \Ref{ff} takes the form
\begin{displaymath}
g(u)
\left(
f\left(\frac{u+v}2\right)
+f\left(\frac{u-v}2\right)
\right)
+g(v)
\left(
f\left(\frac{u+v}2\right)
-f\left(\frac{u-v}2\right)
\right)=0.
\end{displaymath}
which is precisely equation \Ref{ff} for the pair $\tilde f$, $\tilde g$, in the variables $u$, $v$.
\end{proof}
We will use these symmetries again and again,
in particular in the proof of the following
\begin{lemma}
There are no non-trivial solutions of \Ref{ff},
for which $f(z)$ or $g(z)$ is regular at the origin.
\end{lemma}
\begin{proof}
Because
of the symmetry \Ref{v6},
it is enough to consider only the case when $f(z)$ is regular
at the origin.
Putting $y=0$ in \Ref{ff}, we get $2f(x)g(x)=0$,
which means that the solution is trivial.
\end{proof}
\begin{lemma}
Non-trivial solutions of $\Ref{ff}$ are
odd functions:
\begin{displaymath}
f(-z)=-f(z),\qquad g(-z)=-g(z).
\end{displaymath}
\end{lemma}
\begin{proof}
Rewrite the equation \Ref{ff}
in the form
\begin{displaymath}
\phi(y)\left(
g(x+y)+g(x-y)\right)
+\phi(x)\left(
g(x+y)-g(x-y)\right)=0,
\end{displaymath}
where $\phi(z)=1/f(z)$ is regular and vanishes at $z=0$.
By putting $x=0$ in this relation, we obtain
\begin{displaymath}
\phi(y)(g(y)+g(-y))=0
\end{displaymath}
which implies for a non-trivial solution
\begin{displaymath}
g(-y)=-g(y).
\end{displaymath}
The fact that $f$ is odd follows then from the symmetry
\Ref{v6}.
\end{proof}
Let us introduce $\lambda=\phi'(0)$, where
$\phi(z)=1/f(z)$, as before.
\begin{lemma} If $f(z)$, $g(z)$ is a non-trivial
solution of \Ref{ff}, then
\begin{eqnarray}
& {\rm (i)}& \lambda =\phi'(0)\neq 0, \nonumber\\
\label{v7} & {\rm (ii)}& \frac{ g'(x)}{g(x)}=-\lambda f(x),\\
\label{v8} &{\rm (iii)}&
\lambda
f(x+y)=
\frac
{f'(y)f(x)-f'(x)f(y)}
{f^2(x)-f^2(y)}.
\end{eqnarray}
\end{lemma}
\begin{proof}
Rewrite equation \Ref{ff} in the form
\begin{equation}\label{v9}
g(x+y)=\frac{f(y)-f(x)}{f(y)+f(x)}g(x-y).
\end{equation}
Taking the logarithm of both sides and applying
the operator $\partial_x+\partial_y$, gives
\begin{eqnarray}\label{v10}
\frac{g'(x+y)}{g(x+y)}
&=&\frac{f(x)f'(y)-f(y)f'(x)}
{f^2(y)-f^2(x)}\\
\label{v10a} &=&\frac{\phi'(x)\phi(y)-\phi'(y)\phi(x)}
{\phi^2(x)-\phi^2(y)}\quad.
\end{eqnarray}
Putting $y=0$ in \Ref{v10a} implies
\begin{displaymath}
\frac{g'(x)}{g(x)}=-\lambda
f(x).
\end{displaymath}
In particular, if $\lambda=0$, $g$ is constant and therefore
regular, which is impossible. Now the formula \Ref{v8}
follows from \Ref{v7} and \Ref{v10}.
\end{proof}
Using the symmetry \Ref{v5}, \Ref{v6}, we may set without
loss of generality $\lambda=1$, so that
\begin{displaymath}
f(z)=\frac 1z +O(z),\qquad g(z)=\frac 1z+O(z),\qquad (z\to 0).
\end{displaymath}
The function $f$ satisfies the functional equation
(addition theorem)
\begin{equation}\label{v13}
f(x+y)=
\frac{f(x)f'(y)-f(y)f'(x)}
{f^2(x)-f^2(y)}.
\end{equation}
Rewrite it in the following form
\begin{equation}\label{v14}
f(x+y)=
\frac
{\phi'(y)f(x)+f'(x)\phi(y)}
{1-\phi^2(y)f^2(x)},
\end{equation}
and expand the right hand side near $y=0$, using
$\phi(z)=z+az^3+O(z^5)$:
\begin{displaymath}
f(x+y)=f(x)+f'(x)y+(3a\,f(x)+f^3(x))y^2+O(y^3).
\end{displaymath}
By comparing this with the Taylor expansion,
one has
\begin{equation}\label{v15}
f''(x)=2f^3(x)+6a\,f(x),
\end{equation}
implying, after multiplication by $f'$ and integration, that
\begin{equation}\label{v16}
(f')^2=f^4+6af^2+b,
\end{equation}
for some constant $b$. The function $\phi=1/f$ is thus
a regular odd solution of the equation
\begin{equation}\label{v17}
(\phi')^2=1+6a\phi^2+b\phi^4,
\end{equation}
and therefore coincides with the Jacobi
elliptic function
\begin{equation}\label{v18}
\phi(x)=\frac{{\rm sn}(\epsilon x, k)}\epsilon,
\end{equation}
where $(1+k^2)\epsilon^2=-6a$, $k^2\epsilon^4=b$.
Recall that ${\rm sn}$ is the solution of
the equation $(s')^2=(1-s^2)(1-k^2s^2)$, with
initial condition $s(0)=0$.
It satisfies the addition formula discovered by
A. Cayley (see \cite{WW})
\begin{displaymath}
s(x+y)=
\frac{s^2(x)-s^2(y)}
{s(x)s'(y)-s(y)s'(x)}.
\end{displaymath}
This implies the relation \Ref{v13} for
$f(x)=\epsilon/{\rm sn}(\epsilon x,k)$. Note that
by \Ref{v6}, also $g$ has the same form as $f$,
in general with different values of $\epsilon$ and
$k$.
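The addition formula is easy to confirm numerically, e.g.~with Python's mpmath (which parametrizes the Jacobi functions by $m=k^2$, and where ${\rm sn}'={\rm cn}\cdot{\rm dn}$); the sketch is ours:

```python
from mpmath import ellipfun, mp, mpf

mp.dps = 30
m = mpf('0.3') ** 2                  # mpmath uses the parameter m = k^2
sn = lambda u: ellipfun('sn', u, m)
dsn = lambda u: ellipfun('cn', u, m) * ellipfun('dn', u, m)   # sn'(u)

x, y = mpf('0.41'), mpf('0.27')
lhs = sn(x + y)
rhs = (sn(x)**2 - sn(y)**2) / (sn(x) * dsn(y) - sn(y) * dsn(x))
assert abs(lhs - rhs) < mpf('1e-25')
```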
\begin{lemma}
If $f(x)=\epsilon/{\rm sn}(\epsilon x,k)$ and
$g'(x)/g(x)=-f(x)$ then $f(x)$, $g(x)$ are
a non-trivial solution of the functional equation
\Ref{ff}.
\end{lemma}
\begin{proof}
We have
\begin{eqnarray*}
2\,\frac{g'(x+y)}{g(x+y)}
&=&
-2\,f(x+y)\\
&=& 2\,\frac{f'(x)f(y)-f'(y)f(x)}{f^2(x)-f^2(y)}\\
&=&\frac{f'(y)-f'(x)}{f(y)-f(x)}-
\frac{f'(y)+f'(x)}{f(y)+f(x)}.
\end{eqnarray*}
So
\begin{displaymath}
(\partial_x+\partial_y)\log g(x+y)
=
(\partial_x+\partial_y)\log\frac{f(y)-f(x)}{f(y)+f(x)}.
\end{displaymath}
This means that
\begin{displaymath}
g(x+y)=\frac{f(y)-f(x)}{f(y)+f(x)}\psi(x-y),
\end{displaymath}
for some function $\psi$. Rewriting this equation
as
\begin{displaymath}
g(x+y)=
\frac{\phi(x)-\phi(y)}{\phi(x)+\phi(y)}\psi(x-y),
\qquad \phi=1/f,
\end{displaymath}
and putting $y=0$, gives $\psi(x)=g(x)$, and therefore
\begin{displaymath}
g(x+y)=\frac{f(y)-f(x)}{f(y)+f(x)}g(x-y),
\end{displaymath}
which is equivalent to \Ref{ff}.
\end{proof}
To finish the proof of Theorem \ref{Tb2},
we have to prove that the relation between
$k$, $\tilde k$, $\epsilon$ in
\begin{displaymath}
g(x)=\frac1{{\rm sn}(x,k)},\qquad
f(x)=\frac\epsilon{{\rm sn}(\epsilon x,\tilde k)}\quad,
\end{displaymath}
where $g'/g=-f$, is the one given in
Theorem \ref{Tb2}.
The function $1/f=-g/g'={\rm sn}(\epsilon x,\tilde k)/\epsilon$
satisfies the equation
\begin{equation}\label{vv20}
[(g/g')']^2
=
[1-\epsilon^2(g/g')^2]
[1-\tilde k^2\epsilon^2(g/g')^2],
\end{equation}
or, equivalently,
\begin{displaymath}
\left(
(g')^2-gg''
\right)^2
=\left((g')^2-\epsilon^2g^2\right)
\left((g')^2-\tilde k^2\epsilon^2g^2\right).
\end{displaymath}
Substituting $(g')^2=(g^2-1)(g^2-k^2)$,
$g''=2g^3-(k^2+1)g$, into \Ref{vv20}, gives
the following two possibilities
\begin{enumerate}
\item[(i)]
$2k=-(k^2+1)-\epsilon^2$, $-2k=-(k^2+1)-\tilde k^2\epsilon^2$
\item[(ii)]
$-2k=-(k^2+1)-\epsilon^2$, $2k=-(k^2+1)-\tilde k^2\epsilon^2$
\end{enumerate}
In the first case, we get $\tilde k=(1-k)/(1+k)$,
$\epsilon=i(k+1)$, as required. The second case leads
to an equivalent answer. Theorem \ref{Tb2} is proved.
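The modulus relation can also be confirmed numerically: with $g(x)=1/{\rm sn}(x,k)$ one checks $-g'/g={\rm sn}'(x,k)/{\rm sn}(x,k)=\epsilon/{\rm sn}(\epsilon x,\tilde k)$ directly, e.g.~with Python's mpmath (our own sketch; mpmath accepts complex arguments and is parametrized by $m=k^2$):

```python
from mpmath import ellipfun, mp, mpf, mpc

mp.dps = 25
k = mpf('0.3')
kt = (1 - k) / (1 + k)        # \tilde k = (1-k)/(1+k)
eps = mpc(0, 1) * (1 + k)     # epsilon = i(1+k); since sn is odd,
                              # the overall sign of eps is immaterial

def sn(u, kk):
    return ellipfun('sn', u, kk**2)

def dsn(u, kk):               # sn'(u) = cn(u) dn(u)
    return ellipfun('cn', u, kk**2) * ellipfun('dn', u, kk**2)

x = mpf('0.37')
lhs = dsn(x, k) / sn(x, k)    # -g'/g for g = 1/sn(x, k)
rhs = eps / sn(eps * x, kt)
assert abs(lhs - rhs) < mpf('1e-18')
```

In the degenerate case $k=0$ this reduces to the trigonometric pair above: $-g'/g=\cot x$ and $i/{\rm sn}(ix,1)=i\coth(ix)=\cot x$.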
\par\vspace{.5\baselineskip}
\noindent{\em Remark.} The transformation of elliptic
functions $g\to f$ is of
second order and therefore can be reduced to the well-known Landen
transformation (see \cite{WW}).
One can check that it is the composition of the
``imaginary Jacobi transformation'', the unimodular
transformation whose action on the homology of the curve
is described by the matrix
\begin{displaymath}
J=\left(\begin{array}{cc}0 & 1\\-1& 0\end{array}\right),
\end{displaymath}
and Landen's transformation with matrix
\begin{displaymath}
L=\left(\begin{array}{cc}2 & 0\\0& 1\end{array}\right).
\end{displaymath}
Let us call it the LJ-transformation. Thus the functional equation
\Ref{ff}
describes a pair of elliptic functions related by the
LJ-transformation. This fact has recently found an interesting
topological application in the theory of elliptic genera
\cite{BuVe}.
Note that $f$ and $g$, as elliptic functions of second order
``live'' on different elliptic curves,
one of which is a double cover of the other. In particular,
the solutions found here are not special cases of the
elliptic solutions considered in the next Section.
\subsubsection*{3. Elliptic Dunkl operators}
In this section we consider only the
case where $R$ is the root system
of a semisimple Lie algebra with
Weyl group $G=W$.
Let us consider the elliptic curve
with modular parameter $\tau$, Im$(\tau)>0$,
and the family of functions
\begin{equation}\label{sigma}
\sigma_\lambda(z)=\frac
{\theta_1(z-\lambda)\theta_1'(0)}
{\theta_1(z)\theta_1(-\lambda)}, \qquad \lambda\in{\bf C}\setminus
({\bf Z}+\tau{\bf Z}),
\end{equation}
given in terms of Jacobi's theta function
\begin{displaymath}
\theta_1(z)=-\sum_{n=-\infty}^{\infty}
e^{2\pi i(z+\frac12)(n+\frac12)+\pi i\tau(n+\frac12)^2}.
\end{displaymath}
The functions $\sigma_\lambda$ have the following
defining properties:
\begin{enumerate}
\item[(i)] $\sigma_\lambda(z+1)=\sigma_\lambda(z)$.
\item[(ii)] $\sigma_\lambda(z+\tau)=
e^{2\pi i\lambda}\sigma_\lambda(z)$.
\item[(iii)] $\sigma_\lambda$ is meromorphic, its poles
are on the lattice ${\bf Z}+\tau {\bf Z}$, and $\sigma_\lambda(z)
=1/z+{O}(1)$ as $z\to 0$.
\end{enumerate}
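Properties (i) and (ii) follow from the quasi-periodicity of
$\theta_1$, which for the series above reads
\begin{displaymath}
\theta_1(z+1)=-\theta_1(z),\qquad
\theta_1(z+\tau)=-e^{-\pi i\tau-2\pi iz}\theta_1(z);
\end{displaymath}
for instance,
\begin{displaymath}
\sigma_\lambda(z+\tau)=
\frac{-e^{-\pi i\tau-2\pi i(z-\lambda)}\theta_1(z-\lambda)\,\theta_1'(0)}
{-e^{-\pi i\tau-2\pi iz}\theta_1(z)\,\theta_1(-\lambda)}
=e^{2\pi i\lambda}\sigma_\lambda(z).
\end{displaymath}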
More properties of these functions are given in the Appendix.
\begin{thm}\label{tfa}
For any generic $\lambda\in V_{\bf C}=V\otimes_{\bf R}{\bf C}$,
and $W$-invariant function
$k_\alpha$,
the functions
\begin{displaymath} f_\alpha(z)=k_\alpha\sigma_{(\alpha^\vee,\lambda)}(z),\end{displaymath}
where $\alpha^\vee=2\alpha/(\alpha,\alpha)$, satisfy
the functional equations \Ref{e8}.
\end{thm}
\begin{proof}
Fix the rotation $r$. All roots involved in the left hand
side of the functional equation \Ref{e8}
\begin{equation}\label{I}
I(x)=\sum_{\alpha,\beta\in R_+:s_\alpha s_\beta=r}
\langle\alpha,\beta\rangle
\sigma_{(\alpha^\vee,\lambda)}((\alpha,x))
\sigma_{(\beta^\vee,\lambda)}((s_\alpha(\beta),x))
\end{equation}
lie on the same two-dimensional plane. Consider $I(x)$
as a meromorphic function of $x\in V_{\bf C}$,
and let $P^\vee=\{p\in V\,|(p,\alpha)\in{\bf Z}\; \forall\alpha\in R\}$.
Then if $p\in P^\vee$, $I(x+p)=I(x)$, and as
$x\to x+p\tau$, the term labeled by $(\alpha,\beta)$ in
the sum \Ref{I} gets multiplied by
\begin{displaymath}
e^{2\pi i\left( (\alpha^\vee,\lambda)(\alpha,p)
+(\beta^\vee,\lambda)(s_\alpha(\beta),p)\right)}.
\end{displaymath}
Since $s_\alpha(\lambda)=\lambda-(\alpha^\vee,\lambda)\alpha$,
we see that the multiplier can be rewritten as
\begin{displaymath}
e^{2\pi i(\lambda-s_\alpha s_\beta(\lambda),p)},
\end{displaymath}
and is therefore the same for all terms in the sum. It
follows that $I(x)$ has the quasi-periodicity property
\begin{equation}\label{per}
I(x+q+p\tau)=
e^{2\pi i(\lambda-r(\lambda),p)}
I(x), \qquad q+p\tau\in P^\vee+\tau P^\vee.
\end{equation}
Let us now consider the poles of the function $I$.
Poles may
appear when $x$ is on the hyperplanes $(\alpha,x)=0$,
$\alpha\in R_+$, or
their translates by $P^\vee+\tau P^\vee$.
If $x$ approaches the hyperplane $(\alpha,x)=0$,
the singular terms in the sum \Ref{I} are the
terms indexed by $\alpha,\beta$ and $\gamma,\delta$ where
$s_\gamma(\delta)=\pm\alpha$. As all roots are on a plane
this implies that $\gamma=\mp \beta$. Since $\gamma>0$, only
$\gamma=\beta$ is possible, and thus $s_\gamma(\delta)=-\alpha$.
In particular only two terms are singular in \Ref{I}.
The coefficient of the (simple) pole is (cf.\ (iii) above)
\begin{displaymath}
\langle\alpha,\beta\rangle
\sigma_{(\beta^\vee,\lambda)}
((s_\alpha(\beta),x))
-
\langle\beta,\delta\rangle
\sigma_{(\beta^\vee,\lambda)}
((\beta,x)).
\end{displaymath}
This expression vanishes on the hyperplane $(\alpha,x)=0$
because $s_\alpha(x)=x$ there, and
\begin{displaymath}
\langle \beta,\delta\rangle=
-\langle s_\beta(\beta),s_\beta(\delta)\rangle
=-\langle\beta,\alpha\rangle=
\langle\alpha,\beta\rangle.
\end{displaymath}
It follows that the singularity at $(\alpha,x)=0$ (and thus on
all affine hyperplanes $(\alpha,x)=n+m\tau$, $n$, $m\in {\bf Z}$
by \Ref{per}) is removable. We conclude that $I$ has no
singularity on $V_{\bf C}$, and has the quasi-periodicity property
\Ref{per}. It therefore vanishes, by Fourier series theory.
\end{proof}
\begin{corollary}
The operators
\begin{displaymath}
\nabla^\lambda_\xi=\partial_\xi+\sum_{\alpha\in R_+}
k_\alpha(\alpha,\xi)
\sigma_{(\alpha^\vee,\lambda)}((\alpha,x))\hat s_\alpha,
\end{displaymath}
form a commutative family:
\begin{displaymath}
[\nabla^\lambda_\xi,\nabla^\lambda_\eta]=0.
\end{displaymath}
\end{corollary}
Let us call these operators {\em elliptic Dunkl operators}.
They are not $W$-invariant but $W$-equivariant
\begin{equation}
\label{equi}
\hat w\nabla_\xi^\lambda\hat w^{-1}
=
\nabla^{w(\lambda)}_{w(\xi)}.
\end{equation}
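The equivariance is checked directly: using
$\hat w\partial_\xi\hat w^{-1}=\partial_{w(\xi)}$,
$\hat w\hat s_\alpha\hat w^{-1}=\hat s_{w(\alpha)}$,
$(\alpha,\xi)=(w(\alpha),w(\xi))$,
$(\alpha^\vee,\lambda)=(w(\alpha)^\vee,w(\lambda))$
and the $W$-invariance of $k_\alpha$, we obtain
\begin{displaymath}
\hat w\nabla^\lambda_\xi\hat w^{-1}
=\partial_{w(\xi)}+\sum_{\alpha\in R_+}
k_{w(\alpha)}(w(\alpha),w(\xi))\,
\sigma_{(w(\alpha)^\vee,w(\lambda))}((w(\alpha),x))\,\hat s_{w(\alpha)};
\end{displaymath}
relabeling $\beta=w(\alpha)$ and converting the terms with
$\beta<0$ into terms with $\beta>0$ by means of
$\sigma_{-\mu}(-z)=-\sigma_\mu(z)$ gives $\nabla^{w(\lambda)}_{w(\xi)}$.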
\begin{example}
In the $A_{n-1}$ case, we identify functions on
$V=\{{\bf R}^n\,|\, \Sigma_ix_i=0\}$ with functions
$f$ on ${\bf R}^n$ such that $f(x_1+a,\dots,x_n+a)$
is independent of $a\in{\bf R}$. Let $e_1,\dots,e_n$
be the standard basis of ${\bf R}^n$ and put
\begin{displaymath}
\bar e_i=e_i-\frac1n\sum_{j=1}^ne_j\in V.
\end{displaymath}
Then the elliptic Dunkl operators are linear combinations of
the commuting operators $\nabla^\lambda_i=\nabla^\lambda_{\bar e_i}$:
\begin{equation}\label{edo}
\nabla_i^\lambda=
\frac\partial{\partial x_i}+k\sum_{j:j\neq i}
\sigma_{\lambda_i-\lambda_j}(x_i-x_j)\hat s_{ij},
\end{equation}
where $\hat s_{ij}$ is the operator that interchanges the
$i$th and $j$th variable.
\end{example}
\par\vspace{.5\baselineskip}
\noindent{\em Remark.} It is interesting to note that
the usual Dunkl operator is reminiscent of Moser's $L$-matrix
for the Calogero problem \cite{Mo}, whereas the elliptic
Dunkl operator \Ref{edo} is reminiscent of Krichever's generalization
\cite{Kr}, see also \cite{OP1}.
The general solution of the corresponding
functional equation (different from ours) was found
by Bruschi and Calogero \cite{BC}. More general functional
equations motivated by these problems, as well as
some topological problems, were introduced and solved by
one of the authors \cite{Bu}, \cite{Bu2}.
\subsubsection*{4. General solution of the functional equation in the
$A_{n-1}$ case}
In this section we show that elliptic Dunkl operators
are essentially the only solutions of the functional
equation \Ref{e8}, in the $A_{n-1}$ case, $n-1\geq 2$.
As before, we start with $A_2$. In this case, the
functional equation is \Ref{e10}
\begin{equation}\label{f}
f(u)g(u+v)+g(v)h(-u)+h(-u-v)f(-v)=0.
\end{equation}
We must find the most general solutions of this
functional equation, where $f$, $g$, and $h$ are assumed
to be meromorphic functions defined in a neighborhood
of the origin.
First of all, if one of the three functions vanishes identically,
then the functional equation says that the product of the
other two vanishes, and we get a solution where two functions
vanish and the third is arbitrary. We call these solutions
trivial, and consider from now on only solutions where
$f$, $g$ and $h$ are not identically zero.
\begin{lemma}\label{le1}
If $f$, $g$, $h$ satisfy \Ref{f}, then
{\rm (i)}
$\tilde f(z)=g(z)$,
$\tilde g(z)=h(z)$,
$\tilde h(z)=f(z)$,
{\rm (ii)}
$\tilde f(z)=af(bz)e^{\alpha z}$,
$\tilde g(z)=ag(bz)e^{\beta z}$,
$\tilde h(z)=ah(bz)e^{\gamma z}$,
for arbitrary constants $a,\dots,\gamma$ such that
$\alpha+\beta+\gamma=0$,
\noindent also satisfy \Ref{f}.
\end{lemma}
\begin{proof}
Replacing $(u,v)$ by $(-u-v,u)$ in \Ref{f} implies (i).
Property (ii) is easy to check.
\end{proof}
\begin{proposition}\label{pr2}
The only non-trivial solutions of \Ref{f} holomorphic around
the origin are
\begin{displaymath}
f(u)=ae^{\alpha u},\qquad
g(u)=be^{\beta u},\qquad
h(u)=ce^{\gamma u},
\end{displaymath}
where $a$, $b$, $c\neq 0$,
$ab+bc+ac=0$, $\alpha+\beta+\gamma=0$.
\end{proposition}
\begin{proof} It is easy to check that these are solutions.
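Indeed, since $\alpha+\beta+\gamma=0$, all three exponents in
\Ref{f} coincide:
\begin{displaymath}
f(u)g(u+v)+g(v)h(-u)+h(-u-v)f(-v)
=(ab+bc+ca)\,e^{(\alpha+\beta)u+\beta v}=0.
\end{displaymath}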
We prove uniqueness. Suppose $f$, $g$, $h$ are a non-trivial solution
of \Ref{f}, defined and holomorphic in a neighborhood of the
origin. Taking $v=0$ in \Ref{f}, we get
\begin{equation}\label{p1}
f(u)g(u)+(f(0)+g(0))h(-u)=0.
\end{equation}
We have $f(0)+g(0)\neq 0$ since $fg$ does not vanish identically.
Similarly, by Lemma \ref{le1},
\begin{eqnarray}
\label{p2}g(u)h(u)+(g(0)+h(0))f(-u)&=&0,\\
\label{p3}h(u)f(u)+(h(0)+f(0))g(-u)&=&0.
\end{eqnarray}
Introduce the functions $F(u)=f(u)f(-u)$,
$G(u)=g(u)g(-u)$, $H(u)=h(u)h(-u)$, and
$S(u)=f(u)g(u)h(u)$. Multiplying \Ref{p1},
\Ref{p2}, \Ref{p3} by $h(u)$, $f(u)$ and
$g(u)$, respectively, we obtain
\begin{equation}\label{p4}
S(u)=\lambda F(u)=\mu G(u)=\nu H(u),
\end{equation}
for some non-zero constants $\lambda$, $\mu$, $\nu$.
In particular $S$ is an even function, and thus
\begin{displaymath}
S(u)^2=S(u)S(-u)=F(u)G(u)H(u).
\end{displaymath}
Hence $F(u)^3={\rm const}\,F(u)^2$, and $F$ is a constant. Similarly,
$G$ and $H$ are constant functions.
Thus we have
\begin{displaymath}
f(u)f(-u)=a^2,\qquad
g(u)g(-u)=b^2,\qquad
h(u)h(-u)=c^2,
\end{displaymath}
for some constants $a$, $b$, $c\neq 0$.
We can therefore write
\begin{equation}\label{p6}
f(u)=ae^{\phi(u)},\qquad g(u)=be^{\psi(u)},
\qquad h(u)=ce^{\eta(u)},
\end{equation}
for some odd functions $\phi$, $\psi$, $\eta$, holomorphic
in a neighborhood of $0$.
Since $S(u)=f(u)g(u)h(u)$ is constant, we have
$\phi+\psi+\eta=0$.
Inserting \Ref{p6}
in the functional equation gives
\begin{displaymath}
ab\,e^{\phi(u)+\psi(u+v)}
+bc\,e^{\psi(v)-\eta(u)}
+ac\,e^{-\eta(u+v)-\phi(v)}
=0,
\end{displaymath}
which, after elimination of $\eta=-\phi-\psi$ can be
recast in the more convenient form
\begin{equation}\label{p7}
ab+bc\,\Psi(u,v)+ac\,\Phi(u,v)^{-1}=0,
\end{equation}
with $\Phi(u,v)=\exp(\phi(u+v)-\phi(u)-\phi(v))$, and
$\Psi(u,v)=\exp(\psi(u+v)-\psi(u)-\psi(v))$. In particular,
setting $u=v=0$, we obtain the condition $ab+bc+ac=0$.
Moreover, we have $\Phi(-u,-v)=\Phi(u,v)^{-1}$, and
$\Psi(-u,-v)=\Psi(u,v)^{-1}$, since $\phi$ and $\psi$
are odd functions. Replacing $(u,v)$ by $(-u,-v)$ in
\Ref{p7} yields the equation
\begin{equation}\label{p8}
ab+bc\,\Psi(u,v)^{-1}+ac\,\Phi(u,v)=0.
\end{equation}
Elimination of $\Psi$ from \Ref{p7}, \Ref{p8} gives
a non trivial quadratic equation with constant coefficients
for $\Phi$. Thus $\Phi$ is constant, implying that
$\Psi$ is constant as well. We conclude that
$\phi(u+v)=\phi(u)+\phi(v)$, and $\psi(u+v)=\psi(u)+\psi(v)$,
which leaves us with the solution $\phi(u)=\alpha u$,
$\psi(u)=\beta u$.
\end{proof}
{}From now on, we consider the case when $f(u)$ has a
pole of order $p>0$ at the origin. It is easy to see that
$h$ and $g$ also have a pole of the same order $p$ at
$u=0$.
Inserting the expansion
\begin{eqnarray}
f(u)&=&\frac a{u^p}+\cdots,\\
g(u)&=&\frac b{u^p}+\cdots,\\
h(u)&=&\frac c{u^p}+\cdots,
\end{eqnarray}
into \Ref{f}, and multiplying the relation by
$u^p(u+v)^pv^p$, yields a series in $u$ and $v$
which starts as
\begin{displaymath}
ac\, u^p+ ab\,v^p+(-1)^pbc\,(u+v)^p.
\end{displaymath}
We see that $p=1$ is the only case in which cancellation is possible.
In this case,
\begin{displaymath}
ac-bc=0,\qquad ab-bc=0,
\end{displaymath}
which implies $a=b=c$. Without loss of generality,
we consider the case $a=b=c=1$.
Suppose that $f$, $g$, $h$ are a solution of \Ref{f} with
a simple pole at the origin with unit residue:
\begin{eqnarray}\label{o1}
f(u)&=&\frac1u+f_0+\cdots,\\
g(u)&=&\frac1u+g_0+\cdots,\\
h(u)&=&\frac1u+h_0+\cdots .
\end{eqnarray}
The left hand side of \Ref{f} has a Laurent expansion
at $v=0$:
\begin{displaymath}
f(u)g(u)+(\frac1v+g_0)h(-u)
+(h(-u)-vh'(-u))(-\frac1v+f_0)+O(v).
\end{displaymath}
The constant term gives the equation
\begin{equation}
\label{o2}f(u)g(u)+(f_0+g_0)h(-u)+h'(-u)=0.
\end{equation}
Similarly, by Lemma \ref{le1} (i),
\begin{eqnarray}
\label{o3}g(u)h(u)+(g_0+h_0)f(-u)+f'(-u)&=&0,\\
\label{o4}h(u)f(u)+(h_0+f_0)g(-u)+g'(-u)&=&0.
\end{eqnarray}
Introduce $F(u)=f(u)f(-u)$,
$G(u)=g(u)g(-u)$, $H(u)=h(u)h(-u)$, and
$S(u)=f(u)g(u)h(u)$, as above. Note that
$F$, $G$ and $H$ are even functions.
Then $F'(u)=f'(u)f(-u)-f'(-u)f(u)$, and \Ref{o3}
implies that $F'(u)=S(u)-S(-u)$. More generally,
(\ref{o2}--\ref{o4}) imply
\begin{displaymath}
F'(u)=G'(u)=H'(u)=S(u)-S(-u).
\end{displaymath}
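Indeed, substituting $u\to-u$ in \Ref{o3} gives
$f'(u)=-g(-u)h(-u)-(g_0+h_0)f(u)$, whence
\begin{displaymath}
F'(u)=f'(u)f(-u)-f(u)f'(-u)=S(u)-S(-u),
\end{displaymath}
the terms proportional to $g_0+h_0$ cancelling; the relations for
$G'$ and $H'$ follow in the same way from \Ref{o4} and \Ref{o2}.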
We can then write
\begin{equation}\label{abc}
F(u)=a-P(u), \qquad
G(u)=b-P(u),\qquad H(u)=c-P(u),
\end{equation}
where $P(u)=1/u^2+O(u^2)$ has no constant term,
and $a$, $b$, $c$ are integration constants. The
even function $P$ obeys
\begin{equation}\label{o8}
P'(u)=-S(u)+S(-u).
\end{equation}
Let us compute the derivative
$S'=(f'/f+g'/g+h'/h)S$ using (\ref{o2}--\ref{o4})
\begin{eqnarray*}
S'(u)&=&\alpha S(u)-G(u)H(u)-H(u)F(u)-F(u)G(u)
\\
&=&\alpha S(u)-3P(u)^2+\beta P(u)+\gamma,
\end{eqnarray*}
for some constants $\alpha$, $\beta$, $\gamma$.
Let us consider the even and odd part of this equation
separately:
\begin{eqnarray}
\label{o9}
\frac d{du}(S(u)+S(-u))&=&\alpha(S(u)-S(-u))=-\alpha P'(u)\\
\label{o10}
\frac d{du}(S(u)-S(-u))&=&\alpha(S(u)+S(-u))
-6P(u)^2+2\beta P(u)+2\gamma.
\end{eqnarray}
Integrating \Ref{o9} gives
\begin{equation}\label{o9a}
S(u)+S(-u)=-\alpha P(u)+\delta,
\end{equation}
for some $\delta$.
By subtracting \Ref{o8} from this equation, we obtain
\begin{equation}\label{o11}
S(u)=\frac12(-\alpha P(u)+\delta-P'(u)).
\end{equation}
Inserting \Ref{o9a} and \Ref{o8}
into \Ref{o10}
yields finally the differential equation
\begin{displaymath}
P''(u)=6P(u)^2+(\alpha^2-2\beta) P(u)-\alpha\delta-2\gamma.
\end{displaymath}
Let us multiply this equation by $P'(u)$ and integrate.
We get
\begin{displaymath}
P'(u)^2=4P(u)^3+c_1P(u)^2+c_2 P(u)+c_3.
\end{displaymath}
In fact $c_1=0$, since, by construction,
the Laurent expansion of $P$ has no constant term.
This equation is well-known to have a unique meromorphic
solution with double pole at the origin. It is
the Weierstrass function $\wp$ for some elliptic
curve determined by the coefficients $c_2$, $c_3$,
or one of its degenerations $\omega^2(1/\sin(\omega u)^2-1/3)$,
$1/u^2$ (see \cite{WW}).
After rescaling of the variables as in Lemma \ref{le1}
if necessary, we may assume that the periods of $\wp$
are $1$ and $\tau$, and that $\omega=\pi$.
By writing the constants $a$, $b$, $c$ in
\Ref{abc}
as $P(\lambda)$, $P(\mu)$, $P(\nu)$ respectively (this is
always possible since $P$ defines a surjective map
from the elliptic curve
${\bf C}/({\bf Z}+\tau{\bf Z})$, or, in the degenerate cases, $({\bf C}/{\bf Z})\cup\{i\infty\}$,
${\bf C}\cup\{i\infty\}$,
onto the Riemann sphere), we see that the equation
$f(u)f(-u)=P(\lambda)-P(u)$ has the solution (see the Appendix)
\begin{displaymath}
f(u)=\sigma_\lambda(u),
\end{displaymath}
and, in the degenerate cases,
\begin{eqnarray*}
f(u)&=&\frac{\pi\sin(\pi(u-\lambda))}
{\sin(\pi u)\sin(-\pi\lambda)}=\pi(\cot(\pi u)-\cot(\pi\lambda)),\\
f(u)&=&\frac1u-\frac1\lambda.
\end{eqnarray*}
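For instance, in the rational case, where $P(u)=1/u^2$, one checks
directly that
\begin{displaymath}
f(u)f(-u)=\Bigl(\frac1u-\frac1\lambda\Bigr)
\Bigl(-\frac1u-\frac1\lambda\Bigr)
=\frac1{\lambda^2}-\frac1{u^2}=P(\lambda)-P(u).
\end{displaymath}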
The general solution in a neighborhood of the origin
is $f(u)\exp\phi(u)$ for some odd
function $\phi(u)$ regular at the origin.
Similar formulas hold for $g$ and $h$: the functions
\begin{displaymath}
f(u)=\sigma_\lambda(u),\qquad
g(u)=\sigma_\mu(u),\qquad
h(u)=\sigma_\nu(u),
\end{displaymath}
and their degenerations, are a solution of the functional
equation, provided that $\lambda+\mu+\nu=0$ (modulo the lattice).
We now show that these are the only (up to transformations
of Lemma \ref{le1} (ii))
solutions such that \Ref{abc} holds
with $a=P(\lambda)$, $b=P(\mu)$, $c=P(\nu)$.
Suppose $\tilde f$, $\tilde g$, $\tilde h$ is another solution. Then
\begin{equation}\label{o19}
\tilde f(u)=e^{\phi(u)}f(u),\qquad
\tilde g(u)=e^{\psi(u)}g(u),\qquad
\tilde h(u)=e^{\eta(u)}h(u),
\end{equation}
for some odd functions $\phi$, $\psi$, $\eta$. Since the product
$S=fgh$ is expressed in terms of $P$ (see \Ref{o11}), the
sum $\phi+\psi+\eta$ vanishes identically. Inserting
\Ref{o19} in \Ref{o3}, \Ref{o4}, we deduce immediately
that $\phi'(u)=\phi'(0)$ and $\psi'(u)=\psi'(0)$ for all $u$.
Thus $\phi$ and $\psi$ are linear functions.
Combining this result with Prop.\ \ref{pr2}, we
obtain the following theorem.
\begin{thm}\label{Ta2}
The following list exhausts all non-trivial solutions
of the functional equation
\begin{displaymath}
f(u)g(u+v)+g(v)h(-u)+h(-u-v)f(-v)=0.
\end{displaymath}
\noindent Elliptic solutions:
\begin{displaymath}
f(u)=a\sigma_\lambda(bu)e^{\alpha u},\qquad
g(u)=a\sigma_\mu(bu) e^{\beta u},\qquad
h(u)=a\sigma_\nu(bu) e^{\gamma u},
\end{displaymath}
where the function
$\sigma_\lambda$ is defined in \Ref{sigma}.
\noindent Trigonometric solutions:
\begin{eqnarray}\label{deg}
f(u)=\frac{a\, \sin(b(u-\lambda))}
{\sin(bu)\sin(-b\lambda)}e^{\alpha u},& &
g(u)=\frac{a\, \sin(b(u-\mu))}
{\sin(bu)\sin(-b\mu)}e^{\beta u},\\
h(u)&=&\frac{a\, \sin(b(u-\nu))}
{\sin(bu)\sin(-b\nu)}e^{\gamma u}.\nonumber
\end{eqnarray}
Rational solutions:
\begin{equation}\label{ra}
f(u)=(\frac au-\frac1\lambda)e^{\alpha u},\qquad
g(u)=(\frac au-\frac1\mu)e^{\beta u},\qquad
h(u)=(\frac au-\frac1\nu)e^{\gamma u}.
\end{equation}
Regular solutions:
\begin{equation}\label{re}
f(u)=-\frac1\lambda e^{\alpha u},\qquad
g(u)=-\frac1\mu e^{\beta u},\qquad
h(u)=-\frac1\nu e^{\gamma u}.
\end{equation}
The parameters in these solutions are complex
numbers $a$, $b$, $\alpha$, $\beta$, $\gamma$,
$\lambda$, $\mu$, $\nu$ satisfying
$\alpha+\beta+\gamma=0$, $\lambda+\mu+\nu=0$.
In the trigonometric and rational cases, the limiting
cases where $\lambda$, $\mu$, $\nu$ take the value
$\pm i\infty$ are permitted.
\end{thm}
\noindent{\em Remark.} Notice that all these solutions
are limiting cases (degenerations) of the elliptic solutions.
This theorem covers the case $A_2$, but immediately gives
the answer in the case $A_{n-1}$, $n-1\geq 2$.
In this case, we identify, as above, functions
on $V$ with translation invariant functions on
${\bf R}^n$.
\begin{corollary} Let $n\geq 3$. The $n$ operators
\begin{displaymath}
\nabla_i=\partial_i+\sum_{j:j\neq i}f_{ij}(x_i-x_j)\hat s_{ij},
\qquad i=1,\dots,n,
\end{displaymath}
with $f_{ij}\not\equiv 0$, $f_{ij}(u)=-f_{ji}(-u)$
are pairwise commutative iff
$f_{ij}(x)=k\,\sigma_{\lambda_i-\lambda_j}(bx)
e^{(\alpha_i-\alpha_j)x}$
where $\sigma_\lambda$ is
the function defined in \Ref{sigma}, or one of its
degenerations (see above).
\end{corollary}
\subsubsection*{5. Quantum Dunkl operators}
Fix complex parameters $\tau$, $\mu$ and $\kappa$ such
that Im$(\tau)>0$, $\mu\not\in{\bf Z}+\tau{\bf Z}$ and $\kappa\neq 0$.
Consider the operator $R(\lambda)$, depending on the ``spectral
parameter'' $\lambda\in{\bf C}$, and acting on the space of,
say, meromorphic functions of two complex variables $x_1$
and $x_2$:
\begin{displaymath}
R(\lambda)f(x_1,x_2)=
\frac1{\sigma_\mu(\lambda)}
\{\sigma_\mu(x_{12}+{\scriptstyle\frac\mu\kappa})
f(x_1+{\scriptstyle\frac\mu\kappa},x_2-{\scriptstyle\frac\mu\kappa})
-
\sigma_\lambda(x_{12}+{\scriptstyle\frac\mu\kappa})
f(x_2,x_1)\}.
\end{displaymath}
We use the notation $x_{12}$ to denote the difference
$x_1-x_2$.
The operator $R$ obeys the quantum Yang--Baxter equation
\begin{displaymath}
\RR 12\RR 13\RR 23=\RR 23\RR 13\RR 12.
\end{displaymath}
The two sides of this equation are operators
acting on functions of three variables, and the
notation $R(\lambda)^{(ij)}$ indicates the operator
$R(\lambda)$ acting on a function of several variables,
by viewing it as a function of the $i$th and $j$th
variable. This solution of the Yang--Baxter equation
is essentially the one introduced in \cite{FePa} as a three-parameter
generalization of the two-parameter solution of
Shibukawa and Ueno \cite{ShiUe}. For positive integer
values of $\kappa$ it admits a restriction to a finite
dimensional
subspace coinciding with Belavin's solution \cite{Be}.
With the normalization used here we have ``unitarity''
\begin{displaymath}\RR 12\RR 21={\rm Id},\end{displaymath} and $R(0)f(x_1,x_2)=f(x_2,x_1)$.
Moreover $R(\lambda)$ tends to the identity as $\mu$
goes to zero.
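The evaluation at $\lambda=0$ can be read off from the expansions
$\sigma_\mu(\lambda)=1/\lambda+O(1)$ (property (iii) of
$\sigma_\lambda$) and $\sigma_\lambda(z)=-1/\lambda+O(1)$ as
$\lambda\to 0$: the first term of $R(\lambda)$ is suppressed by the
factor $1/\sigma_\mu(\lambda)=O(\lambda)$, and
\begin{displaymath}
R(0)f(x_1,x_2)=-\lim_{\lambda\to 0}
\frac{\sigma_\lambda(x_{12}+{\scriptstyle\frac\mu\kappa})}
{\sigma_\mu(\lambda)}\,f(x_2,x_1)
=f(x_2,x_1).
\end{displaymath}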
Solutions of the quantum Yang--Baxter equation
with these properties can be used to construct
commuting operators $T_i(\lambda_1,\dots,\lambda_n)$
(related to transfer matrices of integrable models
of statistical mechanics):
\begin{equation}\label{ti}
T_i(\lambda_1,\dots,\lambda_n)=
R^{(i,i+1)}\cdots R^{(i,n)}R^{(i,1)}\cdots R^{(i,i-1)}.
\end{equation}
These operators act on functions of $n$ variables, and,
as before, $R^{(ij)}=R(\lambda_i-\lambda_j)^{(ij)}$ is
the operator $R(\lambda_i-\lambda_j)$
acting on a function of $n$ variables
by viewing it as a function of the $i$th and $j$th variable.
Our result is that the elliptic Dunkl operators $\nabla_i^\lambda$
defined in \Ref{edo} can be
obtained as a semiclassical limit of the quantum operators,
if $k$ is an integer.
\begin{thm} Let $T_i(\lambda_1,\dots,\lambda_n)$ be
the operators \Ref{ti} acting on functions $f$ on ${\bf R}^n$,
such that $f(x_1+a,\dots,x_n+a)=f(x_1,\dots,x_n)$ for all
$a\in {\bf R}$.
For every integer $k$, we have
\begin{displaymath}
T_i(\lambda_1,\dots,\lambda_n)={\rm Id}+\frac nk\mu
g^{-1}_{\kappa/n}\nabla_i^\lambda
g_{\kappa/n}+O(\mu^2),
\end{displaymath}
where $g_m$ is the function
\begin{displaymath}
g_m(x)=
\prod_{i<j}\theta_1(x_{ij})^m
\exp(m\sum_{i\neq j}x_i
\theta'_1(\lambda_{ij})/\theta_1(\lambda_{ij})
),
\end{displaymath}
viewed as multiplication operator, and the parameter
$\kappa$ of $R$ is given by $\kappa=(-1)^knk$.
\end{thm}
\begin{proof} Let $\rho(x)=\theta_1'(x)/\theta_1(x)$.
We first compute the expansion of $R$ to first order.
\begin{displaymath}
R(\lambda)={\rm Id}+\mu\, r(\lambda)+O(\mu^2),
\end{displaymath}
where $r$, the ``classical $r$-matrix'', is
the differential-difference operator
\begin{displaymath}
r(\lambda)=
\frac 1\kappa(\partial_1-\partial_2)+
\sigma_\lambda(x_{12})\hat s_{12}
+\rho(\lambda)-\rho(x_{12}).
\end{displaymath}
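This expansion follows from the behavior of $\sigma_\mu$ as
$\mu\to 0$, obtained from \Ref{sigma} and
$\theta_1(-\mu)=-\mu\,\theta_1'(0)+O(\mu^3)$,
\begin{displaymath}
\sigma_\mu(z)=-\frac1\mu+\rho(z)+O(\mu),\qquad
\frac1{\sigma_\mu(\lambda)}=-\mu+O(\mu^2),
\end{displaymath}
together with the Taylor expansion of the shifted argument,
\begin{displaymath}
f(x_1+{\scriptstyle\frac\mu\kappa},x_2-{\scriptstyle\frac\mu\kappa})
=f(x_1,x_2)+\frac\mu\kappa(\partial_1-\partial_2)f(x_1,x_2)+O(\mu^2).
\end{displaymath}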
This implies that
\begin{displaymath}
T_i(\lambda_1,\dots,\lambda_n)={\rm Id}
+
\mu\sum_{j:j\neq i}r^{(ij)}(\lambda_{ij})
+O(\mu^2).
\end{displaymath}
Since $\Sigma_1^n\partial_if=0$, we have
\begin{displaymath}
\sum_{j:j\neq i}(\partial_i-\partial_j)f=n\partial_if,
\end{displaymath}
and therefore
\begin{displaymath}
T_i(\lambda_1,\dots,\lambda_n)=
{\rm Id}+
\frac n\kappa\mu
\left(
\partial_i+\sum_{j:j\neq i}\frac\kappa n
\sigma_{\lambda_{ij}}(x_{ij})\hat s_{ij}
+\sum_{j:j\neq i}
\frac\kappa n(\rho(\lambda_{ij})-\rho(x_{ij}))\right).
\end{displaymath}
The claim follows then from the relations
\begin{eqnarray*}
\partial_ig_m(x)&=&m\sum_{j:j\neq i}\left(\rho(\lambda_{ij})-
\rho(x_{ij})\right),\\
\hat s_{ij}g_m&=&(-1)^mg_m\hat s_{ij},
\end{eqnarray*}
and the elementary fact that $m=(-1)^kk$ obeys $(-1)^mm=k$.
\end{proof}
\subsubsection*{6. Quantum $n$-body problems}
Trigonometric and rational Dunkl operators in the $A_{n-1}$
case are used
in the theory of integrable quantum $n$-body problems
of the Calogero-Sutherland type, see \cite{He},
\cite{Ch}. Let us consider the
trigonometric Dunkl operators (in a slightly different
normalization)
\begin{displaymath}
\nabla_i=\partial_i+\sum_{j:j\neq i}
k(\coth(x_i-x_j)-1)\hat s_{ij}.
\end{displaymath}
Then the operators $L_j={\rm Res}\sum_{i=1}^n(\nabla_i)^j$,
$j=1,\dots,n$ form a set of pairwise commuting $S_n$-invariant
differential operators, and generate the algebra
of $S_n$-invariant differential operators commuting
with the Schr\"odinger operator
\begin{displaymath}
L_2=\sum_{i=1}^n\partial_i^2-k(k+1)\sum_{i\neq j}\frac 1{\sinh^2(x_i-x_j)}.
\end{displaymath}
Here ${\rm Res}\,M$ is the differential operator whose
restriction to $S_n$-invariant functions coincides with
the differential-difference operator $M$.
In the elliptic case, we are led to consider the commuting operators
$M_j(\lambda)=\sum_{i=1}^n(\nabla^\lambda_i)^j$, where
$\nabla_i^\lambda$ is the elliptic Dunkl operator \Ref{edo}.
These operators are not $S_n$-invariant. Instead, they obey
$\hat w M_j(\lambda)\hat w^{-1}=M_j(w(\lambda))$. However,
let us consider the singular limit when $\lambda$ tends to
the symmetric point 0,
for $j=2$. We use the shorthand notation $x_{ij}=x_i-x_j$.
\begin{eqnarray*}
M_2(\lambda)&=&
\sum_{i=1}^n
(\nabla_i^\lambda)^2\\
&=&\sum_{i=1}^n\partial_i^2
+k\sum_{i\neq j}\sigma'_{\lambda_{ij}}(x_{ij})\hat s_{ij}
+k^2\sum_{i\neq j}
\sigma_{\lambda_{ij}}(x_{ij})\sigma_{\lambda_{ij}}(x_{ji}).
\end{eqnarray*}
In this calculation, we used the functional equation
of Prop.\ \ref{aa} (iii) for $\sigma_\lambda$ and the fact that
$\sigma_{-\lambda}(-x)=-\sigma_\lambda(x)$. Since
\begin{displaymath}
\sigma_\lambda(x)\sigma_\lambda(-x)=\wp(\lambda)-\wp(x),
\end{displaymath}
and $\lim_{\lambda\to 0}\sigma'_\lambda(x)=-\wp(x)-2\eta_1$,
(see the Appendix)
we see that
\begin{displaymath} M_2(\lambda)-\sum_{i\neq j}\wp(\lambda_{ij})\end{displaymath}
has a limit
as $\lambda\to 0$:
\begin{displaymath}
L_2=\lim_{\lambda\to 0}
\left(M_2(\lambda)-\sum_{i\neq j}\wp(\lambda_{ij})\right)
=\sum_{i=1}^n\partial_i^2
-k(k+1)\sum_{i\neq j}\wp(x_i-x_j) +{\rm const}.
\end{displaymath}
This is (minus) the Schr\"odinger operator of the
so-called elliptic Calogero--Moser integrable $n$-body problem
(see, e.g., \cite{OP}).
It is reasonable to conjecture that the
higher $S_n$-invariant differential operators
commuting with $L_2$ can be obtained as $\lambda\to 0$
limits of suitable equivariant polynomials in
$\nabla_i^\lambda$ with $\lambda$ dependent coefficients.
For integer $k$ there is a conjecture of one of the authors
(see \cite{ChVe})
that there are additional integrals of motion, such that
the whole ring of quantum integrals is supercomplete.
We hope that elliptic Dunkl operators will help
to prove it.
Another interesting problem is to understand
the analogue of Opdam's shift operator in the elliptic case
(cf.\ \cite{He2}). The one-dimensional case shows that it can not be a
pure differential operator, because the genera of the spectral
curves for Lam\'e operators depend on the integer parameter $k$.
In the $B_2$ case our results imply the quantum integrability
of the system with Hamiltonian
\begin{displaymath}
H=-\triangle+F(x)+F(y)+G(x+y)+G(x-y),
\end{displaymath}
where $F=f'-f^2$, $G=g'-g^2$, and $f$, $g$, are given
by the formula \Ref{vv4}. The commuting operator
$K$ has the form Res$(\nabla_1^2\nabla_2^2)$, where
$\nabla_i$ are the corresponding generalized Dunkl operators
\Ref{e6}.
The quantum integrability of this system was
independently established in \cite{OOS}.
It would be interesting to understand the relation
between this construction and the recent construction
of elliptic Dunkl operators of \cite{Ch2}, which are
formal infinite linear combinations
of affine Weyl group reflections.
\paragraph{Acknowledgments.} Two of us (V. B. and A. V.)
are grateful to the University of Maryland at College
Park and especially to Prof.\ S. P. Novikov for the
hospitality during February 1994, when this work was
completed. G. F. is grateful to IHES, where part of
this work was done, for hospitality, and
thanks V. Pasquier for explanations and discussions.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,452 |
{"url":"http:\/\/math.stackexchange.com\/questions\/444222\/prove-that-9-mid-4n15n-1-for-all-n-in-mathbb-n","text":"# Prove that $9\\mid (4^n+15n-1)$ for all $n\\in\\mathbb N$\n\nFirst of all I would like to thank you for all the help you've given me so far.\n\nOnce again, I'm having some issues with a typical exam problem about divisibility. The problem says that:\n\nProve that $\\forall n \\in \\mathbb{N}, \\ 9\\mid4^n + 15n -1$\n\nI've tried using induction, but that didnt work. I've tried saying that:\n\n$4^n + 15n-1 \\equiv 0 \\pmod{9}$. Therefore, I want to prove that $4^{n+1} + 15(n+1) -1 \\equiv 0 \\pmod{9}$.\n\nI've prooved for $n=1$, it's $18\\equiv 0 \\pmod{9}$, which is OK.\n\nBut for the inductive step, I get:\n\n$4\\cdot4^n + 15n+15-1 \\equiv 0 \\pmod{9}$\n\nAnd from there, I don't know where to replace my inductive hypotesis, and therefore, that's why I think induction is not the correct tool to use here. I guess I might use some tools of congruence or divisibility, but I'm not sure which are they.\n\nI do realize that all $n\\in \\mathbb{N}\/ \\ 3 \\ |\\ n \\Rightarrow 4^n \\equiv 1 \\pmod{9} \\text{ and } 15n \\equiv 0 \\pmod{9}$. In that case, where 3 divides n, then I have prove that $4^n + 15n-1 \\equiv 0 \\pmod{9}$. But I don't know what to do with other natural numbers that are not divisible by 4, that is, all $n \\in \\mathbb{N} \/ n \\equiv 1 \\pmod{3} \\text{ or } n \\equiv 2 \\pmod{3}$.\n\n-\nFind the remainder of $4^{3k+1}$ modulo $9$, and that of $4^{3k+2}$. Since you already know $4^{3k} = 1 + 9\\cdot x$, that's not difficult. \u2013\u00a0Daniel Fischer Jul 15 '13 at 14:47\nYou can do a proof by induction here you just need to show the additional result that $4^k+5 = 0$ mod $3$. \u2013\u00a0Cameron Williams Jul 15 '13 at 14:48\nand what should I do with the $15n$? Thanks! 
\u2013\u00a0pmartelletti Jul 15 '13 at 14:50\n\nBy the Inductive Hypothesis, $4^n + 15n -1 \\equiv 0$ so $4^n \\equiv 1-15n$ and thus $$4^{n+1}+15(n+1)-1 = 4 \\cdot 4^n + 15n + 14 \\equiv 4 \\cdot (1-15n) + 15n + 14 = 18 -45n \\equiv 0$$ since both $18$ and $45$ are divisible by 9.\n\n-\nThanks! I liked this answer by induction... \u2013\u00a0pmartelletti Jul 15 '13 at 14:53\n\nYou can use the fact that any natural number can be written in three forms : $n = 3k$ or $n = 3k+1$ or $n = 3k+2$.\n\nIn the case : $n = 3k$\n\n$$4^n + 15 n - 1 = 64^k + 45 k -1 \\equiv 0 \\pmod{9}$$\n\nAnd then you do the same for the other cases\n\n-\n\nConsider $$4\\left(4^n+15n -1\\right)-\\left(4^{n+1} +15(n+1)-1\\right) .$$\n\n-\n\nWe don't really need induction as $$4^n+15n-1=(1+3)^n+15n-1$$ $$=1+\\binom n13+\\binom n23^2+\\cdots+\\binom n{n-1}3^{n-1}+3^n+15n-1$$ using Binomial Expansion\n\n$$\\implies 4^n+15n-1=18n+3^2\\left(\\binom n2+\\cdots+\\binom n{n-1}3^{n-3}+3^{n-2}\\right)$$ which is clearly multiple of $9$ as we know Binomial coefficients are integers for positive integer $n$\n\n-\n\nYou can do it by induction, skipping three steps. First check that it is correct for $n=1,2,3$. Then $4^{n+3}+15(n+1)-1=4^3\\cdot 4^n+15n-45-1.$ As you have shown $4^3 \\equiv 1 \\pmod 9$ you are home.\n\n-\n\nI'm going to do it by induction. First I need to prove that $4^k+5=0$ mod $3$.\n\nBase case: $4^1+5 = 9$ and clearly $3$ divides this quantity.\n\nInductive hypothesis: $3|4^k+5$. Then $4^{k+1}+5 = 4*4^k+5 = 4^k+5+3*4^k$. Since $3|4^k+5$, it divides $4^{k+1}+5$ and we have shown the result.\n\nNow we wish to prove the overall result that $3|4^k+15k-1$. Clearly the base case is true so we'll work on the inductive step. Suppose $3|4^k+15k-1$. Then $4^{k+1}+15(k+1)-1 = 4*4^k+15k+15-1 = (4^k+15k-1)+3(4^k+5)$. 
Since $3$ divides both of these terms individually, it divides their sum and we are done.\n\n-\n\nNote that $4^3\\equiv1\\pmod{9}$ and $15\\cdot3\\equiv0\\pmod{9}$\n\nTherefore, we only need to verify $4^n+15n\\equiv1\\pmod{9}$ for $n\\in\\{0,1,2\\}$ since any $n$ is equivalent to one of these $\\bmod{\\,3}$. Since $\\{1,19,46\\}$ are all $\\equiv1\\pmod{9}$, the equation holds for all $n$.\n\n-\n\nConsider $$4^n=9q-15n+1;$$ So $$4.4^n+15n+15-1=4(9q-15n+1)+15n+14$$ And go on.\n\n-","date":"2016-04-30 05:26:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8886263966560364, \"perplexity\": 149.1605138970131}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-18\/segments\/1461860111612.51\/warc\/CC-MAIN-20160428161511-00072-ip-10-239-7-51.ec2.internal.warc.gz\"}"} | null | null |
# find the area bounded by the curves.

• May 12th 2011, 07:43 AM
lollycc
find the area bounded by the curves.
Find the area bounded by the curve $y=\cos x$ and the lines $y=0$ and $y=\frac{3}{2\pi}x$. Could you please show me a more detailed solution of this? Thanks!

• May 12th 2011, 08:04 AM
Sudharaka
Quote:

Originally Posted by lollycc
Find the area bounded by the curve $y=\cos x$ and the lines $y=0$ and $y=\frac{3}{2\pi}x$. [...]

Dear lollycc,

Find the intersection point of the two curves. Suppose the intersection point is $x_0$. Then you have to compute the integral

$Area=\int_{0}^{x_0}\cos x~dx-\int_{0}^{x_0}\frac{3x}{2\pi}~dx$

Note that $x=\frac{\pi}{3}$ satisfies $\cos x=\frac{3x}{2\pi}$. Hence $x_0=\frac{\pi}{3}$.

• May 12th 2011, 08:09 AM
Ackbeet
I'm not sure it's quite as simple as that, Sudharaka. Here's a plot of the two functions. It seems to me that you have to integrate the straight line up to the point of intersection, and then cosine from the intersection to pi/2.

• May 12th 2011, 08:17 AM
Sudharaka
Quote:

Originally Posted by Ackbeet
I'm not sure it's quite as simple as that, Sudharaka. [...]

Dear Ackbeet,

I have mistakenly taken the y axis (x=0 instead of y=0) as a boundary curve. (Headbang) Thanks for pointing out the mistake.

• May 12th 2011, 08:25 AM
Ackbeet
Sure, no problem. Happens to the best of us.

• May 13th 2011, 04:36 AM
lollycc
Thanks guys for your help... I really appreciate it!
Dear Ackbeet,
Can you explain this question to me a little bit more? Maybe you can show me some detailed working out — I am still kind of confused. (Bow)
I just did the integration of cos x minus the integration of (3/2pi)x, both from zero to their intersection... not sure if that is right, but I found their intersection cannot be expressed as a fraction; it looks like some kind of irrational number. (Wondering)

• May 13th 2011, 05:00 AM
Ackbeet
Sudharaka found the point of intersection in Post # 2. It is pi/3. So integrate the straight line from zero to the point of intersection, and then ADD the integral of cosine from the point of intersection to pi/2, where the cosine function hits the x-axis (equivalent to y = 0, which is one of the boundaries of your region). Does that make sense?

• May 13th 2011, 05:29 AM
lollycc
Great, I see now — whoops, I realised I just made the same mistake as Sudharaka did... my friend confused me by telling me the area we are looking for is bounded by the two equations and the y-axis... (Giggle) Now I can totally work this out, thanks so much! (Clapping)

• May 13th 2011, 05:35 AM
Ackbeet
You're welcome. Let me know if you have any further difficulties.

• May 14th 2011, 09:25 PM
lollycc
Dear Ackbeet,
I think I need your help now.
Someone said there are actually two enclosed areas bounded by these 3 equations: one from -pi/2 to pi/3, and the other one from 0 to pi/2. I double-checked and the question said find the AREA bounded by the curves, so that means there is just one area, right? I am confused now...
The area of region 1 is:
The area of region 2 is:
(Wondering) I think region 2 is right, but I am not sure whether I should include region 1 or not.

• May 16th 2011, 12:33 AM
Ackbeet
Quote:

Originally Posted by lollycc
Dear Ackbeet,
I think I need your help now. [...] I am confused now...
The area of region 1 is:
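For what it's worth, the answer the thread converges on is easy to check numerically: integrate the line from 0 to the intersection at pi/3, then cosine from pi/3 to pi/2 (plain Python, nothing beyond the standard library):

```python
import math

# Region bounded by y = cos x, y = 0, and y = (3/(2*pi)) x, for x >= 0:
# integrate the line from 0 to the intersection x0 = pi/3,
# then cosine from pi/3 to pi/2 (where cos x hits the x-axis).
x0 = math.pi / 3
assert math.isclose(math.cos(x0), 3 * x0 / (2 * math.pi))  # intersection check

area_line = 3 * x0**2 / (4 * math.pi)   # integral of (3x/2pi) from 0 to pi/3
area_cos = 1 - math.sin(x0)             # integral of cos x from pi/3 to pi/2
area = area_line + area_cos

# Closed form: pi/12 + 1 - sqrt(3)/2, roughly 0.3958.
assert math.isclose(area, math.pi / 12 + 1 - math.sqrt(3) / 2)
print(round(area, 4))  # → 0.3958
```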
# Help solving differential equations

I would like to know how to classify the following equation:

$y''+4y'+5y=2e^{-2x}\cos(x)$.

Is it a second order linear equation?

• See this. – Git Gud Apr 24 '15 at 23:30
• @GitGud I did something similar, but I have $-2A+3Ax-3B+Bx=0$ and $-3A+Ax+2B-3Bx=2$. Not too sure what to set $x$ equal to in this case. – Chan Hunt Apr 25 '15 at 1:56

The characteristic equation is $$m^2+4m+5=0$$ $$m=-2\pm i$$ $$y_c=e^{-2x}[C_1\cos x+C_2\sin x]$$ Now we should find the particular solution. Let $$y_p=e^{-2x}[A\cos x+B\sin x]$$ Because of the similarity between the particular and complementary solutions, the particular solution should be multiplied by $x$: $$y_p=xe^{-2x}[A\cos x+B\sin x]$$ Then you can find $A$ and $B$.
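Carrying the ansatz above to completion gives $A=0$, $B=1$, i.e. $y_p = x e^{-2x}\sin x$. That claim can be sanity-checked numerically with central finite differences (plain Python; step size chosen small enough for the tolerance used):

```python
import math

def y_p(x):
    # Candidate particular solution: A = 0, B = 1 in the ansatz above.
    return x * math.exp(-2 * x) * math.sin(x)

def lhs(x, h=1e-4):
    # y'' + 4y' + 5y, with derivatives approximated by central differences.
    d1 = (y_p(x + h) - y_p(x - h)) / (2 * h)
    d2 = (y_p(x + h) - 2 * y_p(x) + y_p(x - h)) / h**2
    return d2 + 4 * d1 + 5 * y_p(x)

def rhs(x):
    return 2 * math.exp(-2 * x) * math.cos(x)

# The residual should vanish up to discretization error at every test point.
for x in [0.0, 0.3, 0.7, 1.2, 2.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-5
```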
Boček II of Poděbrady, or Boček the Elder of Kunštát and Poděbrady (d. 1417), was a medieval Bohemian statesman from the house of the lords of Kunštát and Poděbrady. He was Supreme Chamberlain of the Kingdom of Bohemia in 1377–1387, one of the founders and leaders of the League of Lords that opposed King Wenceslaus IV, and the grandfather of King George of Poděbrady.

Biography

Boček the Elder was the son of Boček I of Kunštát and Poděbrady. The first written mention of Boček II dates to 1375 and is connected with the division of his father's inheritance. As the eldest son of Boček I, he inherited the greater part of his father's holdings. Two years later Boček married Anna Eliška of Lipá, receiving the castle of Potštát as her dowry.

In 1377–1387 Boček II held the office of Supreme Chamberlain of the Kingdom of Bohemia (according to other sources, that of Under-Chamberlain). During the internecine wars of the Luxembourgs, Boček the Elder sided with the Moravian margrave Jobst of Luxembourg.

In 1394 Boček II, together with Vilém III of Landštejn, Jindřich III of Hradec, Jindřich III of Rožmberk and five other highly influential Bohemian lords, founded the League of Lords, directed against King Wenceslaus IV. Having organized an open revolt that same year, the lords captured the king and sent him into confinement in one of the Austrian castles. In 1404 Boček II took the office of Supreme Scribe of the kingdom, and after the previously released Wenceslaus IV was captured again that same year, he joined the regency council of four lords that governed the kingdom during the king's absence.

In 1415 Boček II of Poděbrady was one of the lords who attached their seals to the protest against the burning of Jan Hus. After Hus's execution, Boček the Elder was elected one of the three leaders of the Hussite movement.

Boček considerably expanded the family holdings, adding to them, in particular, the castles of Rychmburk, Litice and Bouzov (in 1408). King Charles I had pawned the castle of Lipnice to him for 4,000 threescore of groschen in 1376. After the Kunštát branch of the family died out in 1414, Boček II inherited its estates. In the same year Boček acquired the Náchod domain.

Towards the end of his life Boček the Elder effectively divided his holdings among his sons, placing them in charge of different domains: to Boček III the Younger he gave Bouzov, to Viktorín Boček — Litice, to Hynek — Poděbrady.

Family

In 1377 Boček II married Anna Eliška of Lipá.

Children:
Jan of Poděbrady (d. 1409) — from 1406 he signed himself as Jan of Kost,
Boček III of Poděbrady, or Boček the Younger — from 1416 known as Boček of Bouzov,
Viktorín Boček of Poděbrady — from 1421 known as Viktorín Boček of Litice; father of King George of Poděbrady,
Hynek of Poděbrady (d. 1426);
Kateřina — entered the order of the Poor Clares in Prague.

External links
Boček st. z Kunstatu
Rodina pana Jiřího //podebradskenoviny.cz
Didaphne cyanomela is a species of moth described by Berthold Neumoegen in 1894. Didaphne cyanomela belongs to the genus Didaphne and the family of tiger moths (Arctiidae). No subspecies are listed in the Catalogue of Life.
Peugeot L500R HYbrid concept celebrates the past while looking to the future
By Scott Collie
The L500R HYbrid with the L45 that won the Indy 500 in 1916
Peugeot has created the L500R to celebrate its three Indy wins between 1913 and 1919
The car might have full bodywork and a hybrid powertrain, but it weighs just 1,000 kg
The paintjob is designed to hark back to the L45's
The L500R's body is just 1 m tall
The L500R is up there with the best looking concepts of the past few years
The car's massive wheels and slammed stance make for a seriously attractive concept
Up front, the L500R's red splitter is designed to hark back to the L45's red detailing
The three-element rear light is based on the style of some of Peugeot's road cars, albeit executed with a bit more flair
The L500R has a hybrid powertrain with two electric motors and a petrol engine
0-100 km/h takes just 2.5 seconds
The car dispatches the standing kilometer in just 19 seconds
The cabin is all about making the driver feel the center of attention
Peugeot has fitted the car with a tiny wheel
A holographic version of the i-Cockpit features in the L500R's cabin
The Peugeot L500R HYbrid from the rear
Peugeot didn't front up at this year's Indianapolis 500, but that doesn't mean the French brand doesn't have a strong connection to the Brickyard. In fact, the Peugeot "Charlatans" team won the Indy 500 in 1913, 1916 and 1919. The L500R HYbrid is a concept designed to celebrate this history, while still keeping an eye on the future of motorsports.
It might be designed to celebrate the past, but the L500R HYbrid's powertrain is a thoroughly modern hybrid setup with 500 hp (373 kW) and 730 Nm (538 lb-ft) of torque. With just 1,000 kg (2,205 lb) to shift, the combination of a 270 hp (201 kW) gasoline engine and electric motors on both axles will catapult the car to 100 km/h in just 2.5 seconds. It'll also devour the standing kilometer (0.62 mi) in just 19 seconds.
Unlike the open-topped racers of the past, Peugeot's latest concept sits the driver in a floating capsule within the bodywork. Information about your speed, gear and revs is transmitted through a holographic version of the dual-display i-Cockpit system which debuted earlier this year, and the driver grasps a tiny steering wheel.
On the outside, the first thing you notice is just how close to the ground the L500R is. As well as helping with aerodynamics, the car's ultra-low one-meter (3.28-ft) height makes for a wedgy, purposeful shape. Combined with a heritage three-tone paintjob, we think the L500R is the best looking concept we've seen from Peugeot in years. Considering it's up against cars like the Fractal and Exalt, that's no mean feat.
Peugeot revealed the L500R HYbrid ahead of this year's Indianapolis 500, which was won by Alexander Rossi.
Source: Peugeot
Scott Collie
Based in Melbourne, Australia, Scott grew up with a passion for cars and a love of writing. He now combines the two by covering all things automotive for New Atlas. When he's got a spare moment, you can usually find him freezing himself silly in search of fresh powder to ski.
#ifndef CHROME_TEST_BASE_CHROME_RENDER_VIEW_TEST_H_
#define CHROME_TEST_BASE_CHROME_RENDER_VIEW_TEST_H_

// Include guard reconstructed from the closing #endif; the original file's
// copyright header and #include block are omitted in this excerpt.

namespace autofill {
class AutofillAgent;
class TestPasswordAutofillAgent;
class TestPasswordGenerationAgent;
}
namespace extensions {
class DispatcherDelegate;
}
class ChromeRenderViewTest : public content::RenderViewTest {
public:
ChromeRenderViewTest();
virtual ~ChromeRenderViewTest();
protected:
// testing::Test
virtual void SetUp() OVERRIDE;
virtual void TearDown() OVERRIDE;
virtual content::ContentClient* CreateContentClient() OVERRIDE;
virtual content::ContentBrowserClient* CreateContentBrowserClient() OVERRIDE;
virtual content::ContentRendererClient*
CreateContentRendererClient() OVERRIDE;
scoped_ptr<extensions::DispatcherDelegate> extension_dispatcher_delegate_;
autofill::TestPasswordAutofillAgent* password_autofill_;
autofill::TestPasswordGenerationAgent* password_generation_;
autofill::AutofillAgent* autofill_agent_;
// Naked pointer as ownership is with content::RenderViewTest::render_thread_.
ChromeMockRenderThread* chrome_render_thread_;
};
#endif // CHROME_TEST_BASE_CHROME_RENDER_VIEW_TEST_H_
A superfood with a super taste! Natera Hemp is the most delicious way they know to give your body the gift of excellent nutrition. Their Hemp Seeds and Hemp Protein Powders are packed with a powerhouse of complete protein, omegas 3, 6 & 9, as well as an extraordinary abundance of minerals, vitamins, antioxidants, fibre and more. Nature delivers all this pure hemp goodness in a rare perfect balance for optimum nutrition and taste. Sustainably grown in Canada, Natera Hemp Seeds and Hemp Protein Powders are the ideal centrepiece for the vegan diet. They are all naturally Gluten-Free, Non-GMO, with no preservatives, additives, or artificial flavours - just pure nutrition straight from the good earth to you.
George Lucas on Star Wars: Who Shot First?
***Think what you will about George Lucas, but in terms of Star Wars, it can all be traced back to him. That's why I always find it so interesting to listen to him talk about it. His creative process, the reason certain decisions were made, and how these movies became the pop cultural staples they are. This space is dedicated to just that. This is "George Lucas on Star Wars."***
***New around here? Check out Primary Ignition's "George Lucas on Star Wars" archive!***
Fanboy Wonder
The Scene: The bounty hunter Greedo confronts Han Solo over money he owes Jabba the Hutt. The two sit at a table.
In the original version of the film, Han shoots Greedo dead under the table.
In all versions following the 1997 Special Edition release, Greedo shoots at Han first and misses, prompting Han to fire back and kill him.
George Lucas Says: "It was always meant that Greedo fired first, and in the [original release] you don't get that too well. And then there was a discussion about, "Well it's good that it's left amorphous and everything." … In terms of Han's character and everything, I didn't like the fact that when he was introduced the first thing he did is just gun somebody down in cold blood. That wasn't what was meant to be there."
I Say: Like a lot of (Dare I say most?) Star Wars fans, I'm a "Han shot first" guy, and call BS on the idea that Greedo shot and missed at point blank range. If Greedo was supposed to fire his gun first, then why have the two of them sitting at a table? The notion that Greedo, or anybody, could miss a shot like that is laughable.
What's more, I'd argue Han gunning someone down in cold blood fits perfectly with what George describes as his character arc. He's talked at length over the years about how Han Solo starts out very selfish, cold, and out for himself. But through his relationship with Luke and Leia, he gradually starts to become compassionate and care about others. As this is Han at the beginning of that arc, it's more than fitting for him to kill Greedo to save his own skin.
Email Rob at primaryignition@yahoo.com, or check us out on Twitter.
A Star Wars: Han Solo & Chewbacca #2 Micro-Review – Shared History
***This is where we keep it nice and simple. Comic book reviews in 100 words or less. Straight, concise, and to the point.***
TITLE: Star Wars: Han Solo & Chewbacca #2
AUTHOR: Marc Guggenheim
ARTISTS: David Messina, Alex Sinclair (Colorist), Joe Caramagna (Letterer). Cover by Phil Noto.
This issue opens with a scene that might have been in the Solo movie: Han as a child, talking to his dad on the Corellian shipyards. We get the line about how father makes ships, but one day son will fly them…
For better or worse, every little nuance of the Star Wars movies is something to be explored. Case in point, in the original movie it was evident that Han and Greedo knew each other. This story dives into their shared history. Not a bad issue. Definitely an improvement from the first.
A Solo Bullet-Point Review – "Unnecessary" Excellence
***WARNING: The following contains some minor, fairly harmless spoilers for Solo: A Star Wars Story.***
I loved this movie. No, seriously. I loved it. It surpassed my expectations in almost every conceivable way. The characters (yes, even the new ones) were fun and engaging. The thrilling Star Wars action component was on point. Alden Ehrenreich and Donald Glover nailed the Han and Lando characters, while at the same time adding a little something themselves. It had the obligatory scenes you expected to see, i.e. Han meeting Chewie, winning the Millennium Falcon, etc. But it didn't pile on the nostalgia the way Rogue One did. I left Solo with a smile on my face, which is more than I can say for either Rogue One or The Last Jedi.
So let's do this. Punch it!
– Ron Howard. The production of Solo was mired in controversy. Directors Phil Lord and Christopher Miller departed during filming, citing "creative differences." Word broke of Lucasfilm bringing in an acting coach for Alden Ahrenreich, the actor who plays Han. That didn't exactly inspire confidence. Toss in the polarizing reaction The Last Jedi received, and it was looking like it was going to be a disaster.
I'd be very curious to learn what exactly Ron Howard changed about this movie. Because I don't think we can deny just how vital his touch was to the creative success of Solo. Not just because he's directed movies like Apollo 13, A Beautiful Mind, and Frost/Nixon. But because he's got such a long-lasting friendship with George Lucas. He's had direct access to the mind that sparked the creation of this whole phenomenon. So I would imagine few filmmakers are more qualified to create something faithful to his vision.
– "Unnecessary." I don't understand the critique that Solo is unnecessary, or adds nothing new to the franchise. Yes, the movie largely plays into pre-established exposition. But if you go by that logic, what was the point of even attempting to make the prequels? Or Rogue One? What exactly qualifies one of these movies "necessary?" What does that even mean?
Furthermore, Solo is hardly devoid of fresh ideas. But we also learn new information about Han, Chewie, and Lando. We're also introduced to new faces, like Qi'ra, L3-37, Tobias Beckett, Enfys Nest, and Crimson Dawn. Hell, I was even partial to Rio Durant.
In the end, Solo is fun. That's what matters. It's certainly all the "necessity" I require.
– When Han met Chewie. Laying the groundwork for the Han Solo/Chewbacca friendship was a vital component here. Their relationship is one of the most important in the entire Star Wars saga. I was struck by the believability and downright simplicity of how Solo sets that up. They save each other's asses a few times and build up trust to the point that a genuine friendship forms.
Actually, I was surprised by how well Solo handled most of the pre-established stuff. Lando owning the Falcon, the card game, the Kessel Run. It all pretty much worked. At least it did for me. Considering how fickle fanboys like me can get about this stuff, that's nothing to sneeze at.
– No Jabba. No Mos Eisley. No Luke or Ben. Solo has no shortage of references, winks, or nods. The folks over at Red Letter Media speculated that the movie would end somewhere during the events of A New Hope, much like Rogue One did. Specifically, with Han in the Mos Eisley Cantina. It could very well have ended with Han sitting at the table, and a shot of Obi-Wan and Luke walking over. I was very pleased they restrained themselves in that respect. For that matter, while he's referenced, we don't see Jabba the Hutt in Solo. There isn't even a mention of Boba Fett or Greedo.
But I imagine one of the reasons they were a little more conservative with this one is because they're saving those tricks for later…
– Sequels. Solo leaves a lot of room for sequels, and even spin-offs. There's already been talk of a Lando movie. There's also a surprise return that comes about as far out of left field as you can get. If you've seen it, you know who I'm talking about. They can go in that direction for another Solo movie, but the returning character would also make for a heck of a box office draw in their own right.
In the end, Solo wound up being the best case scenario for one of these "anthology" movies. It's a hell of a lot of fun, stands up on its own, and paved the way for continued storytelling.
To put it another way, "Great shot, kid! That was one in a million!"
package com.lichfaker.plugin.rtpl.settings;
import com.intellij.openapi.editor.colors.TextAttributesKey;
import com.intellij.openapi.fileTypes.SyntaxHighlighter;
import com.intellij.openapi.options.colors.AttributesDescriptor;
import com.intellij.openapi.options.colors.ColorDescriptor;
import com.intellij.openapi.options.colors.ColorSettingsPage;
import com.lichfaker.plugin.rtpl.highlighter.RtplSyntaxHighlighter;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;
import javax.swing.*;
import java.util.Map;
/**
* @author lichfaker
* @email lichfaker@gmail.com
* @time 16/5/30
*/
public class RtplColorSettingsPage implements ColorSettingsPage {
private static final AttributesDescriptor[] DESCRIPTORS = new AttributesDescriptor[]{
new AttributesDescriptor("Tag", RtplSyntaxHighlighter.RTPL_XML_TAG),
new AttributesDescriptor("Attribute", RtplSyntaxHighlighter.RTPL_XML_ATTRIBUTE),
new AttributesDescriptor("String", RtplSyntaxHighlighter.RTPL_XML_STRING),
new AttributesDescriptor("Js Keywords", RtplSyntaxHighlighter.RTPL_JS_KEYWORDS),
new AttributesDescriptor("Js Variable", RtplSyntaxHighlighter.RTPL_JS_GLOBAL_VARIABLE),
};
private static final String text = "<Input type=\"hidden\" name=\"curPage\" ref=\"curPage\" value={curPage} />\n" +
"<Template name=\"lastColumn\">\n" +
" <Action \n" +
" confirm={var content='viewContent';$getTemplate(content)}\n" +
" data={ {record:record} }>查看</Action>\n" +
"</Template>";
@Nullable
@Override
public Icon getIcon() {
return null;
}
@NotNull
@Override
public SyntaxHighlighter getHighlighter() {
return new RtplSyntaxHighlighter();
}
@NotNull
@Override
public String getDemoText() {
return text;
}
@Nullable
@Override
public Map<String, TextAttributesKey> getAdditionalHighlightingTagToDescriptorMap() {
return null;
}
@NotNull
@Override
public AttributesDescriptor[] getAttributeDescriptors() {
return DESCRIPTORS;
}
@NotNull
@Override
public ColorDescriptor[] getColorDescriptors() {
return new ColorDescriptor[0];
}
@NotNull
@Override
public String getDisplayName() {
return "Rtpl";
}
}
Q: Enable file key plugin of MariaDB on CentOS 7.5

I'm trying to enable the file_key_management plugin on MariaDB.
I'm working on CentOS 7.5 and MariaDB 15.1.
Here is the centos-release:
CentOS Linux release 7.5.1804 (Core)
And the MariaDB version:
Ver 15.1 Distrib 5.5.60-MariaDB
I've used these commands to prepare the keys:
openssl rand -hex 16 >> /etc/mysql/keys
openssl rand -hex 16 >> /etc/mysql/keys
openssl rand -hex 16 >> /etc/mysql/keys
chown mysql:mysql /etc/mysql/keys
chmod 400 /etc/mysql/keys
After that, I've edited the etc/mysql/keys file to be in format:
1;key_1
2;key_2
3;key_3
I encrypted the etc/mysql/keys file with openssl enc -aes-256-cbc -md sha1 -k "password" -in /etc/mysql/keys -out /etc/mysql/keys.enc. Last of all, I edited my my.cnf file to be like this:
[mysqld]
...
# File Key Management
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/keys.enc
file_key_management_filekey = FILE:/etc/mysql/.key
file_key_management_encryption_algorithm = aes_cbc
[mysqld_safe]
...
After all that, when I reboot the mariadb service, it says the following:
Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details
This is the MariaDB log:
180826 17:06:19 InnoDB: highest supported file format is Barracuda.
180826 17:06:19 InnoDB: Waiting for the background threads to start
180826 17:06:20 Percona XtraDB (http://www.percona.com) 5.5.59-MariaDB-38.11 started; log sequence number 429373685
180826 17:06:20 [Note] Plugin 'FEEDBACK' is disabled.
180826 17:06:20 [ERROR] Can't open shared library '/usr/lib64/mysql/plugin/file_key_management.so' (errno: 17, cannot open shared object file: No such file or directory)
180826 17:06:20 [ERROR] Couldn't load plugins from 'file_key_management.so'.
180826 17:06:20 server_audit: MariaDB Audit Plugin version 1.4.3 STARTED.
180826 17:06:20 [ERROR] /usr/libexec/mysqld: unknown variable 'file_key_management_filename=/etc/mysql/keys.enc'
180826 17:06:20 [ERROR] Aborting
180826 17:06:20 server_audit: STOPPED
180826 17:06:20 InnoDB: Starting shutdown...
180826 17:06:24 InnoDB: Shutdown completed; log sequence number 429373685
180826 17:06:24 [Note] /usr/libexec/mysqld: Shutdown complete
180826 17:06:24 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended
I cannot find how to download the file_key_management.so file to use it. Can somebody help me find any solution? Thanks in advance.
A: Data-at-rest encryption (including the file_key_management plugin) was added in the 10.1 series of MariaDB. Since you have 5.5.60, this plugin isn't available.

10.1 packages are available for CentOS 7 (as are 10.2 and 10.3).
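For reference, the configuration the asker wrote is essentially the one that works once a plugin-capable version is installed; a minimal [mysqld] section on MariaDB 10.1+ would look roughly like this (variable names as documented for the 10.1 file_key_management plugin; the key-file paths are taken from the question, and innodb_encrypt_tables is an optional extra):

```ini
[mysqld]
# Load the encryption key management plugin (available from MariaDB 10.1).
plugin_load_add = file_key_management

# Key file encrypted beforehand with: openssl enc -aes-256-cbc -md sha1 ...
file_key_management_filename = /etc/mysql/keys.enc
file_key_management_filekey = FILE:/etc/mysql/.key
file_key_management_encryption_algorithm = aes_cbc

# Optional: encrypt new InnoDB/XtraDB tables by default.
innodb_encrypt_tables = ON
```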
\section{Introduction}
One of the fundamental goals of financial economics is to investigate at which price(s) a rational agent should contemplate transacting a financial contract. The pricing problem can be addressed in a number of ways depending on the assumptions on the agent as well as on the contract considered. The point of departure of classical arbitrage pricing is the assumption that agents are wealth maximizers and have access to an outstanding market where a number of basic financial securities are traded for a known price in an arbitrage-free way. The task is to find at which prices an agent would be willing to transact a given financial contract outside of the market. As is well known, the corresponding range of rational prices coincides with the interval of arbitrage-free prices. Since the pioneering contributions of Black and Scholes (1973), Merton (1973), Cox and Ross (1976), Rubinstein (1976), Ross (1978), Harrison and Kreps (1979), Kreps (1981), this framework has successfully been extended in several directions. A prominent line of research has contributed to what may be broadly called a general theory of ``subjective pricing''. This has been achieved by investigating the pricing problem under suitable relaxations of the classical notion of an arbitrage opportunity. A key contribution in this direction is the theory of good deal pricing initiated by Cochrane and Saa Requejo (2000) and Bernardo and Ledoit (2000) and based on the idea of restricting the interval of arbitrage-free prices by incorporating individual ``preferences'' into the pricing problem. This leads to tighter pricing bounds called good deal bounds. In this setting, arbitrage opportunities are replaced by good deals, i.e., investment opportunities that require no funding costs and deliver terminal payoffs that are sufficiently attractive based on the agent's ``preferences''. 
The crucial point is that, differently from arbitrage opportunities, good deals may expose the agent to downside risk, and the agent's task is therefore that of determining acceptable risk thresholds. Several ways to define
risk thresholds have been considered in the literature, e.g., by means of Sharpe ratios in Cochrane and Saa Requejo (2000), Bj\"{o}rk and Slinko (2006), and Bion-Nadal and Di Nunno (2013), gain-loss ratios in Bernardo and Ledoit (2000), test probabilities in Carr et al.\ (2001), utility functions in \v{C}ern\'{y} and Hodges (2002), \v{C}ern\'{y} (2003), Kl\"{o}ppel and Schweizer (2007), and Arai (2011), expected shortfall in Cherny (2008), distance functions in Bondarenko and Longarela (2009), and acceptability indices in Madan and Cherny (2010). A theory for general acceptance sets has been developed by Jaschke and K\"{u}chler (2001), \v{C}ern\'{y} and Hodges (2002), Staum (2004), Cherny (2008), and Cheridito et al.\ (2017). We also refer to Arai and Fukasawa (2014) and Arai (2017) for a study of optimal good deal pricing bounds. One can distinguish between two research directions in the field. A first strand of literature starts by imposing suitable constraints on price deflators or, equivalently, martingale measures with the aim of restricting the interval of arbitrage-free prices. The resulting good deal bounds can be therefore expressed in dual terms. The rationale for discarding some arbitrage-free prices is that transacting at those prices would create good deals with respect to a suitable acceptance set. The task is precisely to characterize the corresponding acceptance set. A second strand of literature starts by tightening the superreplication price through a suitable enlargement of the cone of positive random variables, which is replaced by a larger acceptance set. The task is to establish a dual description of the resulting good deal bounds. This is achieved by extending the Fundamental Theorem of Asset Pricing to a good deal pricing setting.
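Schematically, writing $\mathcal{M}$ for the set of payoffs attainable in the market, $\pi$ for the corresponding pricing functional, and $\mathcal{A}$ for the acceptance set encoding the agent's ``preferences'', the good deal bounds of this second strand take the superreplication form
\[
\pi^+(X)=\inf\{\pi(Z)\,:\,Z\in\mathcal{M},\ Z-X\in\mathcal{A}\},
\]
which reduces to the classical superreplication price when $\mathcal{A}$ is the cone of positive random variables.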
\smallskip
In this paper we follow the second strand of research mentioned above. Our goal is to contribute to the literature on good deal pricing in a static setting by establishing a version of the Fundamental Theorem of Asset Pricing in incomplete markets with frictions where agents use general acceptance sets to define good deals based on their individual ``preferences''. The presence of general acceptance sets poses technical challenges and requires pursuing a new strategy as the standard change of numeraire and exhaustion arguments behind the classical proof of the Fundamental Theorem can no longer be exploited. The highlights of our contribution are the following:
\begin{itemize}[leftmargin=*]
\item The point of departure is a clear and economically motivated definition of rational prices that is missing in the good deal pricing literature with the exception of Cherny (2008). Our approach is different and inspired by Koch-Medina and Munari (2020). We assume that an agent willing to purchase a financial contract outside of the market will never accept to buy at a price at which he or she could find a better replicable payoff in the market. In the spirit of good deal pricing, the agent is prepared to accept a suitable ``replication error'', which is formally captured by an acceptance set. The corresponding rational prices are called market-consistent prices. In a frictionless setting where agents accept no ``replication error'' our notion boils down to the classical notion of an arbitrage-free price.
\item We work under general convex transaction costs and portfolio constraints, which allows us to model both proportional and nonproportional frictions. The bulk of the literature has focused on frictionless markets or markets with proportional transaction costs. Portfolio constraints have been rarely considered. Moreover, instead of focusing on the set of attainable payoffs at zero cost as a whole, we state our results by explicitly highlighting the specific role played by each source of frictions.
\item The payoff space is taken to be the space of random variables over a general probability space. This is different from the bulk of the literature, with the exception of Cherny (2008), where regularity conditions on payoffs, e.g., integrability, are stipulated upfront in view of the application of special mathematical results, e.g., duality theory. The advantage of our approach is that we are able to highlight where and why a restriction to a special class of payoffs is needed, e.g., to apply duality theory, and what its consequences are for the original pricing problem. This also allows us to point out the failure of change of numeraire techniques applied to acceptance sets. This important aspect, which distinguishes good deal pricing from arbitrage pricing, has never been discussed in the literature.
\item We introduce the notion of scalable good deals, i.e., payoffs that are good deals independently of their size, which extends to a good deal pricing setting the notion of a scalable arbitrage opportunity by Pennanen (2011a). The absence of scalable good deals is key to deriving our characterizations of market-consistent prices. This condition is weaker than the absence of good deals commonly stipulated in the literature. In particular, there are situations where absence of arbitrage is sufficient to ensure absence of scalable good deals. We also argue that absence of scalable good deals is economically sounder than absence of good deals.
\item We adapt the classical notion of a price deflator to our good deal setting with frictions and introduce the class of strictly-consistent price deflators, which correspond to the Riesz densities of a pricing rule in a complete frictionless market where the basic traded securities are ``priced'' in accordance with their (suitably adjusted in the presence of nonproportional frictions) bid-ask spreads and every nonzero acceptable payoff has a strictly positive ``price''. This is different from similar notions in the literature, where no bid-ask spread adjustments are considered and acceptable payoffs, including positive payoffs, are often assumed to have a nonnegative ``price'' only.
\item We establish direct and dual characterizations of market-consistent prices. The direct characterization is based on the analysis of superreplication prices and extends to a good deal pricing setting the classical findings of Bensaid et al.\ (1992) in markets with frictions. The dual characterization is based on a general version of the Fundamental Theorem of Asset Pricing and underpins the appropriate extension of the classical superreplication duality. Under suitable assumptions on the underlying model space, the Fundamental Theorem establishes equivalence between absence of scalable good deals and existence of strictly-consistent price deflators. This extends to a good deal pricing setting the static version of the Fundamental Theorem obtained by Pennanen (2011a). We provide a detailed comparison with the literature to highlight in which sense our result extends and sharpens the various formulations of the Fundamental Theorem in the good deal pricing literature. The only work on good deal pricing featuring a strong result with strictly-consistent price deflators is \v{C}ern\'{y} and Hodges (2002). In that paper the market is frictionless and the acceptance set is assumed to be boundedly generated, a condition that often forces the underlying probability space to be finite.
\end{itemize}
The paper is organized as follows. In Section~\ref{sect: market model} we describe the market model and the agent's acceptance set, and we introduce the notion of market-consistent prices with acceptable risk. In Section~\ref{sect: acceptable deals} we focus on good deals and show a number of sufficient conditions for the absence of scalable good deals (Proposition~\ref{prop: no acc deals}). Our main results are recorded in Section~\ref{sect: FTAP}. We establish a direct and a dual characterization of market-consistent prices with acceptable risk (Propositions~\ref{theo: characterization mcp superreplication}
and~\ref{theo: dual MCP}). The dual characterization is based on our general version of the Fundamental Theorem of Asset Pricing (Theorem~\ref{theo: FTAP})
and the corresponding Superreplication Theorem (Theorem~\ref{theo: superhedging theorem}), which are, from a technical perspective, the highlights of the paper. Throughout, we prove the sharpness of our results by means of suitable examples, which are always presented in the simplest possible setting, namely that of a two-state model, to demonstrate their general validity.
\section{The pricing problem}
\label{sect: market model}
In this section we state the pricing problem and describe the underlying mathematical framework. The bulk of the presentation is aligned with our reference literature on good deal pricing, e.g., Carr et al.\ (2001), Jaschke and K\"{u}chler (2001), \v{C}ern\'{y} and Hodges (2002), Staum (2004), Cherny (2008), Madan and Cherny (2010). We will highlight discrepancies where needed.
\smallskip
We consider an agent who has access to a financial market where a finite number of basic securities are traded. The agent's problem is to determine the range of prices at which he or she should contemplate transacting a financial contract outside of the market. The candidate prices should satisfy the following rationality conditions. On the one hand, they should be {\em consistent with the market}, i.e., the agent should not be willing to transact if the market offers a better contract for a better price. On the other hand, they should be {\em consistent with individual ``preferences''}, i.e., the agent should determine when a marketed contract is better based on a pre-specified criterion of acceptability. To define market-consistent prices with acceptable risk we thus have to describe the underlying market model and the agent's acceptance set. From now on, we always take a buyer's perspective and we therefore focus on ask prices. The conversion to a seller's perspective and to bid prices is straightforward.
\subsection{The market model}
\label{sect: market model details}
We consider a one-period financial market where uncertainty about the terminal state of the economy is captured by a probability space $(\Omega,{\mathcal{F}},\mathbb{P})$. We denote by $L^0$ the space of random variables modulo almost-sure equality under $\mathbb{P}$ and equip it with its canonical algebraic operations and partial order. The set of positive random variables is denoted by $L^0_+$ and is referred to as the standard positive cone. Similarly, for $\mathcal{L}\subset L^0$ we define $\mathcal{L}_+:=\mathcal{L}\cap L^0_+$. The expectation under $\mathbb{P}$ is denoted by $\mathbb{E}$. For every $X\in L^0$ we define $\mathbb{E}[X] := \mathbb{E}[X^+]-\mathbb{E}[X^-]$, where $X^+$ and $X^-$ are the positive and negative parts of $X$, and we follow the sign convention $\infty-\infty=-\infty$. The standard Lebesgue spaces are denoted by $L^p$ for $p\in[1,\infty]$. The elements of $L^0$ represent {\em payoffs} of financial contracts at the terminal date. We identify the elements of $\mathbb{R}$ with constant payoffs and refer to them as {\em risk-free payoffs}.
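The sign convention above can be illustrated by a minimal numerical sketch (our own illustration, not part of the paper's formal development), restricted to a finite probability space: the extended expectation splits a payoff into its positive and negative parts and returns $-\infty$ whenever both parts have infinite expectation. The function name and the representation of payoffs as probability-value pairs are our own choices.

```python
import math

def extended_expectation(outcomes):
    """Extended expectation E[X] = E[X+] - E[X-] on a finite probability
    space, with the sign convention infinity - infinity = -infinity.
    `outcomes` is a list of (probability, value) pairs; values may be +/-inf."""
    e_pos = sum(p * max(x, 0.0) for p, x in outcomes if p > 0)
    e_neg = sum(p * max(-x, 0.0) for p, x in outcomes if p > 0)
    if math.isinf(e_pos) and math.isinf(e_neg):
        return -math.inf  # the paper's convention: infinity - infinity = -infinity
    return e_pos - e_neg
```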
\smallskip
We assume that a finite number of basic securities are traded in the market and denote by ${\mathcal{S}}\subset L^0$ the vector space spanned by their payoffs. The elements of ${\mathcal{S}}$ are called {\em replicable payoffs}. Contrary to most of the good deal pricing literature, we do not assume the existence of risk-free replicable payoffs. To each replicable payoff we associate an ask price via a {\em pricing rule} $\pi:{\mathcal{S}}\to(-\infty,\infty]$. In line with the literature, we allow for nonfinite prices to account for the existence of physical limitations in the availability of replicable payoffs. These limitations affect every agent. Moreover, we fix a nonempty set $\mathcal{M}\subset{\mathcal{S}}$ consisting of those replicable payoffs that can be effectively bought by our agent. The elements of $\mathcal{M}$ are called {\em attainable payoffs} and account for the existence of, e.g., regulatory limitations in the purchase of replicable payoffs. These limitations are specific to our agent. Even though the agent has access to a (possibly strict) subset of ${\mathcal{S}}$ only, it is mathematically convenient to define $\pi$ on the entire set ${\mathcal{S}}$ to exploit its natural vector space structure. Recall that, by finite dimensionality, every linear Hausdorff topology on ${\mathcal{S}}$ is normable and any two norms on ${\mathcal{S}}$ are equivalent. We stipulate the following assumptions on the market primitives.
\begin{assumption}
We denote by $\|\cdot\|$ a fixed norm on ${\mathcal{S}}$. We assume that $\pi$ is convex, lower semicontinuous, and satisfies $\pi(0)=0$. Moreover, we assume that $\mathcal{M}$ is convex, closed, and satisfies $0\in\mathcal{M}$.
\end{assumption}
Our general setting is compatible with a variety of market models encountered in the literature.
\begin{example}
\label{ex: market models}
Let $S_1,\dots,S_N\in L^0$ be the payoffs of the basic securities. To avoid redundant securities, assume that they are linearly independent. Through their trading activity, agents can set up portfolios of basic securities at the initial date. A portfolio of basic securities is represented by a vector $x=(x_1,\dots,x_N)\in\mathbb{R}^N$. We adopt the standard convention according to which a positive entry refers to a long position and a negative entry to a short position. Since in our setting no trading occurs at the terminal date and each security delivers its terminal state-contingent contractual payoff, portfolio $x$ generates the payoff $\sum_{i=1}^Nx_iS_i$, and the set of replicable payoffs ${\mathcal{S}}$ coincides with the linear space generated by $S_1,\dots,S_N$. To each portfolio we associate an ask price via $V_0:\mathbb{R}^N\to(-\infty,\infty]$. As no basic security is redundant, two portfolios generating the same payoff must coincide and, hence, command the same ask price. This ``law of one price'' allows us to define for every replicable payoff $X\in{\mathcal{S}}$
\[
\pi(X) = V_0(x)
\]
where $x\in\mathbb{R}^N$ is any portfolio satisfying $X=\sum_{i=1}^Nx_iS_i$. The pricing rule $\pi$ satisfies the stipulated assumptions whenever $V_0$ is convex, lower semicontinuous, and satisfies $V_0(0)=0$. This is the case in any of the following situations.
\begin{itemize}[leftmargin=*]
\item {\em No transaction costs}. In a frictionless market the bid-ask spread associated with every basic security is zero so that every unit of the $i$th basic security can be bought or sold for the same price $p_i\in\mathbb{R}$. This yields the classical linear pricing functional
\[
V_0(x) = \sum_{i=1}^Np_ix_i.
\]
\item {\em Proportional transaction costs}. In a market with proportional transaction costs every unit of the $i$th basic security can be bought for the price $p^b_i\in\mathbb{R}$ and sold for the price $p^s_i\in\mathbb{R}$. It is natural to assume that $p^b_i\geq p^s_i$ so that the corresponding bid-ask spread is nonnegative. In this setting, it is natural to consider the sublinear pricing functional used, e.g., in Jouini and Kallal (1995)
\[
V_0(x) = \sum_{x_i\geq0}p^b_ix_i+\sum_{x_i<0}p^s_ix_i.
\]
\item {\em Nonproportional transaction costs}. In a market with nonproportional transaction costs the unitary buying and selling prices for the $i$th basic security vary with the volume traded according to some functions $p^b_i,p^s_i:\mathbb{R}_+\to\mathbb{R}\cup\{\infty\}$. Again, it makes sense to assume that $p^b_i(x)\geq p^s_i(x)$ for every $x\in\mathbb{R}_+$ so that the corresponding bid-ask spread is nonnegative. In many market models, see, e.g., the careful discussion about limit-order markets in Pennanen (2011a), it is natural to assume that $p^b_i$ is convex and $p^s_i$ is concave and that both are null and right continuous at zero as well as left continuous at the point where they jump to infinity. In addition, their one-sided derivatives should satisfy $\partial^+p^b_i(0)\geq\partial^+p^s_i(0)$. The assumption that $p^b_i$ and $p^s_i$ may take nonfinite values represents a cap on the total number of units available in the market. In this setting, it is natural to consider the convex pricing functional used, e.g., in \c{C}etin and Rogers (2007)
\[
V_0(x) = \sum_{x_i\geq0}p^b_i(x_i)-\sum_{x_i<0}p^s_i(-x_i).
\]
\item {\em General convex pricing functional}. By standard convex duality, all the preceding examples are special instances of the general convex pricing functional defined by
\[
V_0(x) = \sup_{p\in\mathbb{R}^N}\left\{\sum_{i=1}^Np_ix_i-\delta(p)\right\},
\]
where $\delta:\mathbb{R}^N\to[0,\infty]$ is a map attaining the value zero. The map $\delta$ can be used to generate pre-specified deviations from frictionless prices. In particular, differently from the previous rules, this general pricing rule allows for a nonadditive structure across the different basic securities. We refer to Kaval and Molchanov (2006) and Pennanen (2011a) for concrete examples in the setting of link-save trading and limit-order markets.
\end{itemize}
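On a concrete market the first two pricing rules above are easy to evaluate. The following Python sketch is purely illustrative: the two-security market, its prices, and the bid-ask spread are assumptions of this illustration, not part of the model.

```python
import numpy as np

def linear_price(x, p):
    """Frictionless pricing rule: V0(x) = sum_i p_i * x_i."""
    return float(np.dot(p, x))

def proportional_price(x, p_buy, p_sell):
    """Proportional transaction costs: units bought (x_i >= 0) cost the
    buying price p^b_i, units sold (x_i < 0) earn the selling price p^s_i:
    V0(x) = sum_{x_i>=0} p^b_i x_i + sum_{x_i<0} p^s_i x_i."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.where(x >= 0, p_buy * x, p_sell * x)))

# Hypothetical two-security market with a nonnegative bid-ask spread.
p_buy, p_sell = np.array([10.0, 6.0]), np.array([9.0, 5.0])

# Sublinearity of the proportional rule: a round trip in the first
# security costs the bid-ask spread, so V0(x) + V0(-x) >= 0 = V0(0).
buy_one  = proportional_price([ 1.0, 0.0], p_buy, p_sell)   # pay the buying price
sell_one = proportional_price([-1.0, 0.0], p_buy, p_sell)   # receive the selling price
assert buy_one + sell_one == 10.0 - 9.0                     # spread = 1
```

The round-trip check makes the friction visible: with a zero spread the two rules coincide and $V_0$ is linear.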
We model portfolio constraints such as borrowing and short selling restrictions on specific basic securities by restricting the set of admissible portfolios to a subset $\mathcal{P}\subset\mathbb{R}^N$. The set $\mathcal{M}$ thus corresponds to
\[
\mathcal{M} = \left\{\sum_{i=1}^Nx_iS_i \,; \ x\in\mathcal{P}\right\}.
\]
The set $\mathcal{M}$ satisfies the stipulated assumptions whenever $\mathcal{P}$ is convex, closed, and satisfies $0\in\mathcal{P}$. This is the case in any of the following situations. We refer to Pennanen (2011a) and the references therein for additional examples of portfolio constraints that are compatible with our setting.
\begin{itemize}[leftmargin=*]
\item {\em No portfolio constraints}. This corresponds to $\mathcal{P}=\mathbb{R}^N$.
\item {\em No short selling}. This corresponds to $\mathcal{P}=\mathbb{R}^N_+$.
\item {\em Caps on short and long positions}. This corresponds to $\mathcal{P}=[\underline{x}_1,\overline{x}_1]\times\cdots\times[\underline{x}_N,\overline{x}_N]$ for suitable $\underline{x},\overline{x}\in\mathbb{R}^N$ such that $\underline{x}_i\leq\overline{x}_i$ for every $i=1,\dots,N$. In particular, this allows us to impose no short selling and caps on long positions at the same time.
\end{itemize}
\end{example}
\begin{comment}
\begin{remark}[{\em From buyer to seller}]
\label{rem: from buyer to seller}
In this paper we take the perspective of a buyer and $\pi$ and $\mathcal{M}$ are therefore interpreted as an ask pricing functional and a set of attainable payoffs from a buyer's perspective. To switch to the seller's perspective, one has simply to consider the pricing rule $X\mapsto-\pi(-X)$ and the set of attainable payoffs $-\mathcal{M}$.
\end{remark}
\end{comment}
\begin{comment}
\begin{remark}[{\em On the market model}]
Let ${\mathcal{S}}\subset L^0(\mathbb{P})$ be a vector space of replicable payoffs. The range of market models that are compatible with ${\mathcal{S}}$ depends on the dimensionality of ${\mathcal{S}}$. If the space of replicable payoffs is finite dimensional, like in the model of this paper, the eligible models are standard one-period markets or multi-period markets where the only admissible trading strategies are of buy-and-hold type. In the infinite-dimensional case, we may consider any (discrete or continuous) multi-period market. Many results in this paper do not require ${\mathcal{S}}$ to be finite dimensional. However, the finite dimensionality of ${\mathcal{S}}$ will play a decisive role in a key closedness result, namely Lemma~\ref{lem: closedness C}, that is the basis for our general versions of the Fundamental Theorem of Asset Pricing. For this reason, we have opted to formulate the entire paper in the setting of a standard one-period market.
\end{remark}
\end{comment}
\subsection{The acceptance set}
As said, the agent's problem is to determine a range of prices at which he or she should be prepared to acquire a financial contract with payoff $X\in L^0$ outside of the market. To tackle this problem, the agent will identify among all attainable payoffs $Z\in\mathcal{M}$ those that are ``preferable'' to $X$ (from a buyer's perspective) and use the corresponding market prices to determine an upper bound on the candidate prices for $X$. In line with good deal pricing theory, we define said ``preference'' relationship by means of an acceptance set $\mathcal{A}\subset L^0$. More precisely, we assume that $Z$ is ``preferable'' to $X$ whenever
$Z-X\in\mathcal{A}$. It should be noted that the relation induced by $\mathcal{A}$ is not a preference relation in a technical sense unless $\mathcal{A}$ is a convex cone. The bulk of the good deal pricing literature has focused on this special case. This is, however, unsatisfactory as there exist relevant acceptance sets that are convex but fail to be conic, e.g., acceptance sets defined through utility functions or stochastic dominance. To include these examples, we join \v{C}ern\'{y} and Hodges (2002) and Staum (2004) and dispense with conicity. In this case, we find it necessary to also dispense with the language of ``preferences'' and to provide a new, more general, interpretation of the acceptance set. In this paper, we interpret $\mathcal{A}$ as the set of all replication errors that are deemed acceptable by the agent. In other words, the agent will try to replicate $X$ by means of attainable payoffs $Z$ available in the market and will use the acceptance set to determine whether the residual payoff $Z-X$ is acceptable. If $\mathcal{A}=L^0_+$, the agent will target perfect superreplication, thereby accepting no downside risk in the replication procedure. This choice corresponds to the classical setting of arbitrage pricing. If $\mathcal{A}$ is strictly larger than $L^0_+$, the agent will be prepared to accept a suitable amount of downside risk. This may be achieved, e.g., by setting a cap on the downside risk alone or by balancing upside and downside risk.
\smallskip
The elements of $\mathcal{A}$ are called {\em acceptable payoffs}. We assume that every payoff dominating an acceptable payoff is also acceptable and that the notion of acceptability is well behaved with respect to aggregation in the sense that every convex combination of acceptable payoffs remains acceptable. The first property corresponds to the usual monotonicity requirement stipulated in risk measure theory; see, e.g., Artzner et al.\ (1999). Formally, we stipulate the following assumptions
on the acceptance set.
\begin{assumption}
\label{ass: acceptance set}
The set $\mathcal{A}$ is a convex strict subset of $L^0$ and satisfies $0\in\mathcal{A}$ as well as $\mathcal{A}+L^0_+\subset\mathcal{A}$.
\end{assumption}
Our assumptions are compatible with many relevant acceptability criteria.
\begin{example}
\label{ex: acceptance sets}
The following sets fulfill the defining properties of an acceptance set.
\begin{itemize}[leftmargin=*]
\item {\em Expected shortfall}. Let $\alpha\in(0,1)$. For given $X\in L^0$ we define the Value at Risk of $X$ at level $\alpha$ as the negative of the upper $\alpha$-quantile of $X$, i.e.,
\[
\mathop {\rm VaR}\nolimits_\alpha(X) := \inf\{x\in\mathbb{R} \,; \ \mathbb{P}(X+x<0)\leq\alpha\} = -\inf\{x\in\mathbb{R} \,; \ \mathbb{P}(X\leq x)>\alpha\}.
\]
The Expected Shortfall of $X$ at level $\alpha$ and the corresponding acceptance set are defined as
\[
\mathop {\rm ES}\nolimits_\alpha(X) := \frac{1}{\alpha}\int_0^\alpha \mathop {\rm VaR}\nolimits_p(X)dp, \ \ \ \ \ \ \mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha) := \{X\in L^0 \,; \ \mathop {\rm ES}\nolimits_\alpha(X)\leq0\}.
\]
The set $\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha)$ consists of those payoffs that are positive on average on the left tail beyond their upper $\alpha$-quantile. This acceptability criterion has been used in a pricing context by Cherny (2008).
\item {\em Gain-loss ratios}. Let $\alpha\in\big(0,\frac{1}{2}\big]$. For a given $X\in L^0$ we define the expectile of $X$ at level $\alpha$ as the unique solution $e_\alpha(X)\in[-\infty,\infty]$ of the equation
\[
\alpha\mathbb{E}[(X-e_\alpha(X))^+]=(1-\alpha)\mathbb{E}[(e_\alpha(X)-X)^+]
\]
provided that either $X^+$ or $X^-$ is integrable, and $e_\alpha(X)=-\infty$ otherwise. The corresponding acceptance set is defined by
\[
\mathcal{A}_e(\alpha) := \{X\in L^0 \,; \ e_\alpha(X)\geq0\} = \left\{X\in L^0 \, ; \ \frac{\mathbb{E}[X^+]}{\mathbb{E}[X^-]}\geq\frac{1-\alpha}{\alpha}\right\},
\]
with the convention $\frac{\infty}{\infty}=-\infty$ and $\frac{0}{0}=\infty$. This set consists of all the payoffs for which the ratio between the expected inflow of money (gains) and the expected outflow of money (losses) is sufficiently large. In particular, note that $\frac{1-\alpha}{\alpha}\geq1$, which implies that the expected gain must be at least as large as the expected loss. This type of acceptability criterion has been investigated in a pricing context by Bernardo and Ledoit (2000), even though the link with expectiles was not discussed there.
\item {\em Test scenarios}. Let $E\in{\mathcal{F}}$ such that $\mathbb{P}(E)>0$. The acceptance set given by
\[
\mathcal{A}_E := \{X\in L^0 \,; \ X\mathbbm 1_E\geq0\}
\]
consists of all the payoffs that are positive on the event $E$. In this case, the elements of $E$ can be seen as pre-specified test or control scenarios and the acceptability criterion boils down to requiring a positive payment in each of these scenarios. Clearly, the set $\mathcal{A}_E$ corresponds to the standard positive cone provided that we take $E=\Omega$ or more generally $\mathbb{P}(E)=1$.
\item {\em Test probabilities}. Let $\mathbb{Q}=(\mathbb{Q}_1,\dots,\mathbb{Q}_n)$ be a vector of probability measures on $(\Omega,{\mathcal{F}})$ that are absolutely continuous with respect to $\mathbb{P}$. For a given vector $\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb{R}^n$ with nonpositive components we define the acceptance set
\[
\mathcal{A}_\mathbb{Q}(\alpha) := \left\{X\in L^0 \,; \ \mathbb{E}\left[\frac{d\mathbb{Q}_i}{d\mathbb{P}}X\right]\geq\alpha_i, \ \forall \ i\in\{1,\dots,n\}\right\},
\]
which consists of all the payoffs whose expected value under each of the pre-specified test probabilities is above the corresponding floor. The test probabilities may be designed, e.g., based on expert opinions or may correspond to appropriate distortions of the underlying probability measure $\mathbb{P}$. This type of acceptability criterion has been investigated in a pricing context by Carr et al.\ (2001). In that paper, the probability measures used to define the acceptance set are called valuation test measures or stress test measures depending on whether the associated floor is zero or not.
\item {\em Utility functions}. Let $u:\mathbb{R}\to[-\infty,\infty)$ be a nonconstant, increasing, concave function satisfying $u(0)=0$, which is interpreted as a von Neumann-Morgenstern utility function. For $\alpha\in(-\infty,0]$ we define the acceptance set by
\[
\mathcal{A}_u(\alpha) := \{X\in L^0 \, ; \ \mathbb{E}[u(X)]\geq\alpha\},
\]
which consists of all the payoffs that yield a sufficiently large expected utility. In particular, the level $\alpha$ could coincide with some utility level, in which case $\mathcal{A}_u(\alpha)$ would consist of all the payoffs that are preferable, from the perspective of the utility function $u$, to a pre-specified deterministic monetary loss. This type of acceptability criterion has been considered in a pricing context by \v{C}ern\'{y} and Hodges (2002), \v{C}ern\'{y} (2003), Kl\"{o}ppel and Schweizer (2007), and Arai (2011).
\item {\em Stochastic dominance}. Recall that a random variable $X\in L^0$ with cumulative distribution function $F_X$ dominates a random variable $Y\in L^0$ with cumulative distribution function $F_Y$ in the sense of second-order stochastic dominance whenever for every $t\in\mathbb{R}$ we have
\[
\int_{-\infty}^t F_X(x)dx \leq \int_{-\infty}^t F_Y(y)dy.
\]
In this case, we write $X\succeq_{SSD}Y$. Now, fix $Z\in L^0$ with $0\succeq_{SSD}Z$ and define the acceptance set
\[
\mathcal{A}_{SSD}(Z):=\{X\in L^0 \,; \ X\succeq_{SSD}Z\}.
\]
The reference payoff $Z$ may represent the terminal value of a pre-specified benchmark portfolio. Note that, by definition, we have $\mathbb{E}[Z]\leq0$. The use of stochastic dominance rules in pricing problems dates back at least to Levy (1985).
\end{itemize}
\end{example}
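On a finite equiprobable sample space, the first two acceptability criteria above can be checked directly. The sketch below rests on assumptions of mine: sample-average expectations and the Rockafellar--Uryasev representation of Expected Shortfall; it is an illustration, not part of the model.

```python
import numpy as np

def expected_shortfall(x, alpha):
    """ES_alpha(X) on an equiprobable sample, via the Rockafellar-Uryasev
    formula ES_alpha(X) = min_s { (1/alpha) E[(s - X)^+] - s }.  The map is
    convex and piecewise linear with kinks at the sample points, so scanning
    the sample values is exact."""
    x = np.asarray(x, dtype=float)
    return min(np.mean(np.maximum(s - x, 0.0)) / alpha - s for s in x)

def acceptable_es(x, alpha):
    """Membership in A_ES(alpha): ES_alpha(X) <= 0."""
    return expected_shortfall(x, alpha) <= 0.0

def acceptable_gain_loss(x, alpha):
    """Membership in A_e(alpha): E[X^+] / E[X^-] >= (1 - alpha) / alpha."""
    x = np.asarray(x, dtype=float)
    gain, loss = np.mean(np.maximum(x, 0.0)), np.mean(np.maximum(-x, 0.0))
    if loss == 0.0:
        return True                      # no downside at all: acceptable
    return gain / loss >= (1.0 - alpha) / alpha

# A symmetric bet is rejected by ES but sits exactly on the gain-loss boundary.
X = [-1.0, 1.0]
assert expected_shortfall(X, 0.5) == 1.0     # average loss on the worst half
assert not acceptable_es(X, 0.5)
assert acceptable_gain_loss(X, 0.5)          # ratio 1 >= (1-0.5)/0.5 = 1
```

The symmetric bet illustrates that the two criteria carve out genuinely different acceptance sets from the same sample.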
\subsection{Market-consistent prices with acceptable risk}
As already said, to determine a range of rational prices, the agent will identify among all attainable payoffs available in the market those that deliver an acceptable replication error and will use the corresponding market prices to assess whether a candidate buying price is too high or not. These prices are called market-consistent prices (with acceptable risk) and constitute the natural range of prices for a buyer who has access to the market, respects the existing portfolio constraints, and is willing to take up replication risk according to the chosen acceptance set. Indeed, if a price is not market consistent, then the agent can always invest that amount (or less) in the market to purchase an attainable payoff that ensures an acceptable replication error. This leads to the following formal definition, which, to the best of our knowledge, has never been explicitly formulated in the literature. We refer to Remark~\ref{rem: MCP} for a comparison with the literature.
\begin{definition}
A number $p\in\mathbb{R}$ is a {\em market-consistent price (with acceptable risk)} for $X\in L^0$ if:
\begin{enumerate}
\item[(1)] $p<\pi(Z)$ for every $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}\setminus\{0\}$;
\item[(2)] $p\leq\pi(X)$ whenever $X\in\mathcal{M}$.
\end{enumerate}
We denote by $\mathop {\rm MCP}\nolimits(X)$ the set of market-consistent prices for $X$.
\end{definition}
The set of market-consistent prices for a payoff $X\in L^0$ is an interval that is bounded to the right. The upper bound is the natural generalization of the classical superreplication price to our setting, i.e.,
\[
\pi^+(X) := \inf\{\pi(Z) \,; \ Z\in\mathcal{M}, \ Z-X\in\mathcal{A}\}.
\]
We call $\pi^+(X)$ the {\em superreplication price (with acceptable risk)} of $X$. In other words, the superreplication price is the natural pricing threshold for a buyer who prices in a market-consistent way according to the underlying acceptance set. We record this observation in the next proposition.
\begin{proposition}
\label{prop: interval MCP}
For every $X\in L^0$ the set $\mathop {\rm MCP}\nolimits(X)$ is an interval such that $\inf\mathop {\rm MCP}\nolimits(X)=-\infty$ and $\sup\mathop {\rm MCP}\nolimits(X)=\pi^+(X)$.
\end{proposition}
\begin{proof}
It is clear that $(-\infty,p)\subset\mathop {\rm MCP}\nolimits(X)$ for every market-consistent price $p\in\mathop {\rm MCP}\nolimits(X)$. Now, take any $p\in(-\infty,\pi^+(X))$ and note that, by definition of $\pi^+$, we have $p<\pi(Z)$ for every $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}$. This shows that $p$ is a market-consistent price for $X$ and implies that $\pi^+(X)\leq\sup\mathop {\rm MCP}\nolimits(X)$. Conversely, take an arbitrary market-consistent price $p\in\mathop {\rm MCP}\nolimits(X)$. If $Z\in\mathcal{M}$ is such that $Z-X\in\mathcal{A}$, then $\pi(Z)\geq p$. Taking the infimum over such $Z$'s and the supremum over such $p$'s delivers the inequality $\pi^+(X)\geq\sup\mathop {\rm MCP}\nolimits(X)$. This shows that $\pi^+(X)$ is the supremum of the set $\mathop {\rm MCP}\nolimits(X)$.
\end{proof}
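When the sample space is finite and the pricing rule is linear, the superreplication price with acceptable risk can be computed directly. The sketch below is a toy instance under assumptions of mine: frictionless prices, the test-scenario acceptance set $\mathcal{A}_E$ (so acceptability reduces to linear constraints), and optional box constraints on portfolios; SciPy's `linprog` solves the resulting linear program.

```python
import numpy as np
from scipy.optimize import linprog

def superrep_price(S, p, X, E, lb=None, ub=None):
    """pi^+(X) = min_x p.x  s.t.  (S x - X)_k >= 0 on every test state k in E,
    optionally with box constraints lb <= x <= ub on the portfolio.
    S: (K, N) state-by-security payoff matrix, p: (N,) frictionless prices,
    X: (K,) target payoff, E: boolean mask selecting the test states."""
    K, N = S.shape
    A_ub, b_ub = -S[E], -np.asarray(X, dtype=float)[E]   # (S x)_k >= X_k on E
    lb = [None] * N if lb is None else lb
    ub = [None] * N if ub is None else ub
    res = linprog(p, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(lb, ub)),
                  method="highs")
    if res.status == 3:        # unbounded below: sup MCP(X) = -infinity
        return -np.inf
    return res.fun if res.success else np.inf            # infeasible: +inf

# Toy two-state market: a bond (1, 1) at price 1 and a stock (2, 0.5) at price 1.
S = np.array([[1.0, 2.0], [1.0, 0.5]])
p = np.array([1.0, 1.0])
X = np.array([1.0, 0.0])                  # digital payoff to be priced

# Testing both states recovers the classical superreplication price 1/3.
assert abs(superrep_price(S, p, X, np.array([True, True])) - 1 / 3) < 1e-6
# Testing only the first state without constraints leaves a scalable good
# deal, so the program is unbounded; box constraints restore a finite price.
one = np.array([True, False])
assert superrep_price(S, p, X, one) == -np.inf
assert abs(superrep_price(S, p, X, one, lb=[-1, -1], ub=[1, 1])) < 1e-6
```

The unbounded case makes the role of the absence of scalable good deals (and of portfolio constraints) in the results below concrete.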
\begin{remark}
\label{rem: MCP}
(i) In line with our pricing problem, the notion of a market-consistent price is stated from a buyer's perspective. Following the same logic, one could define market-consistent prices from a seller's perspective and restrict the focus to prices that are simultaneously market consistent for both parties. One may wonder why we focus only on buyer's prices. From an economic perspective, this is because, as stressed above, the choice of the acceptance set is based on individual ``preferences'', implying that the general financial situation is that of a buyer and a seller equipped with {\em different} acceptance sets. From a mathematical perspective, the buyer's and seller's problems are related to each other and one can easily adapt our results to obtain the corresponding results for seller's prices.
\smallskip
(ii) In the good deal pricing literature the focus is typically on superreplication prices and a notion of rational price is not explicitly discussed. The exception is Cherny (2008), where, in line with classical arbitrage pricing theory, rational prices are defined through extensions of the pricing rule preserving the absence of (suitably defined) good deals. Even though the pricing rule is not linear, the extension is assumed to be linear in the direction of the payoff that is ``added'' to the market. Our definition is not based on market extensions and does not require the absence of good deals, which, differently from the absence of arbitrage opportunities, is a debatable requirement; see Section~\ref{sect: acceptable deals}. Our approach extends that of Koch-Medina and Munari (2020) beyond the setting of frictionless markets and to general acceptance sets beyond the standard positive cone. We believe this approach is preferable from an economic perspective to the usual approach pursued in the good deal pricing literature.
\smallskip
(iii) Note that, in the definition of a market-consistent price, condition (1) need not imply condition (2), which is a natural requirement for a market-consistent price of an attainable payoff. The implication holds if, for instance, for every payoff $X\in\mathcal{M}$ there exist a nonzero $U\in\mathcal{A}$ and $c\in\mathbb{R}$ such that $X+\frac{1}{n}U\in\mathcal{M}$ and $\pi(X+\frac{1}{n}U)\leq\pi(X)+\frac{1}{n}c$ for every $n\in\mathbb{N}$. In particular, this holds if $\mathcal{A}$ and $\mathcal{M}$ have nonzero intersection and $\pi$ and $\mathcal{M}$ are both conic.
\end{remark}
\begin{comment}
\begin{remark}
\label{rem: on prices}
In a frictionless arbitrage-free market, $p\in\mathbb{R}$ is said to be an {\em arbitrage-free price} for $X\in L^0(\mathbb{P})$ if the linear extension of $\pi$ to the enlarged marketed space ${\mathcal{S}}+\mathop{\rm span}\nolimits(X)$ obtained by assigning to $X$ the value $p$ is strictly positive; see, e.g., (Harrison and Kreps, 1979). In this setting, one can easily verify that every market-consistent (buyer and seller) price is also an arbitrage-free price. If the acceptance set is the standard positive cone, then the two notions coincide. We find that the definition above is preferable to that of arbitrage-free prices for at least three reasons.
\begin{itemize}
\item The common interpretation attached to arbitrage-free prices is that, by exchanging the payoff at one of those prices, the market for the basic securities can be extended without creating arbitrage opportunities. As already highlighted in (Kreps, 1981) and later repeated in (Jouini and Kallal, 1999), this interpretation is at odds with the fact that introducing a new security in the market will generally alter the prices of the existing securities. The definition of a market-consistent price does not require any market extension of this type.
\item The market extension could be interpreted in a weak sense, i.e., one could assume that the market is only extended to embed the private transaction between the two parties. However, in this case, there is no economically compelling reason to define rational prices through equilibrium-like conditions, such as the absence of arbitrage opportunities. The definition of a market-consistent price is not based on equilibrium-like conditions.
\item The notion of an arbitrage-free price becomes problematic in markets with frictions as there are multiple ways to ``extend'' the market preserving the prices of the basic securities together with the absence of arbitrage opportunities. We refer to (Jouini and Kallal, 1999) and (Cherny, 2008) for different approaches. The definition of a market-consistent price can unambiguously be extended to markets with frictions.
\end{itemize}
\end{remark}
\end{comment}
\section{Good deals}
\label{sect: acceptable deals}
In this section we introduce the notion of good deals and discuss some important types of good deals. The absence of (a suitable class of) good deals will prove essential to establish our desired characterizations of market-consistent prices with acceptable risk. As explained below, this section deviates considerably from the path that is usually taken in the good deal pricing literature. In particular, we question the economic plausibility of the absence of good deals and replace it with the weaker condition of absence of scalable good deals.
\smallskip
A good deal is any nonzero acceptable payoff that is attainable and can be acquired at zero cost. As such, a good deal constitutes a natural generalization of an arbitrage opportunity, which corresponds to the situation where the acceptance set reduces to the standard positive cone. An important class of good deals is that of scalable good deals, i.e., payoffs that are good deals independently of their size. The notion of a good deal has appeared, sometimes with a slightly different meaning, under various names in the literature including good deal in Cochrane and Saa Requejo (2000), \v{C}ern\'{y} and Hodges (2002), Bj\"{o}rk and Slinko (2006), Kl\"{o}ppel and Schweizer (2007), Bion-Nadal and Di Nunno (2013), Baes et al.\ (2020), good deal of first kind in Jaschke and K\"{u}chler (2001), good opportunity in Bernardo and Ledoit (2000), acceptable opportunity in Carr et al.\ (2001). The notion of a scalable good deal is a direct extension of that of a scalable arbitrage opportunity introduced by Pennanen (2011a) and appeared, in a frictionless setting, in Baes et al.\ (2020). In this paper, we extend it to markets with frictions. The formal notions are recorded in the next definition, where we use recession cones and recession functionals as recalled in the appendix. Under our standing assumptions, $\mathcal{A}^\infty$ and $\mathcal{M}^\infty$ are the largest convex cones contained in $\mathcal{A}$ and $\mathcal{M}$, respectively. Similarly, $\pi^\infty$ is the smallest sublinear functional dominating $\pi$.
\begin{definition}
We say that a nonzero replicable payoff $X\in{\mathcal{S}}$ is:
\begin{enumerate}
\item[(1)] a {\em good deal (with respect to $\mathcal{A}$)} if $X\in\mathcal{A}\cap\mathcal{M}$ and $\pi(X)\leq0$.
\item[(2)] a {\em scalable good deal (with respect to $\mathcal{A}$)} if $X\in\mathcal{A}^\infty\cap\mathcal{M}^\infty$ and $\pi^\infty(X)\leq0$.
\item[(3)] a {\em strong scalable good deal (with respect to $\mathcal{A}$)} if $X$ is a scalable good deal while $-X$ is not.
\end{enumerate}
We replace the term ``good deal'' with ``arbitrage opportunity'' whenever $\mathcal{A}=L^0_+$.
\end{definition}
\begin{remark}
Note that if $X\in L^0$ is a strong scalable good deal, by definition there exists $\lambda>0$ such that $-\lambda X$ is not a good deal. However, this ``short'' position can be completely offset at zero cost by acquiring the attainable payoff $\lambda X$. This is what makes the scalable good deal ``strong''.
\end{remark}
It is clear that every strong scalable good deal is a scalable good deal, which in turn is a good deal. It is also clear that every (scalable) arbitrage opportunity is a (scalable) good deal. The absence of (scalable) good deals will be critical in our study of market-consistent prices. This condition plays the role of the absence of arbitrage opportunities in classical arbitrage pricing theory. In that setting, an arbitrage opportunity constitutes an anomaly in the market because every rational agent will seek to exploit it, thereby raising its demand until prices rise and the arbitrage opportunity eventually vanishes. The situation is quite different when we consider good deals: there might be no consensus across agents on a common criterion of acceptability, which casts doubt on the economic foundation of the absence of good deals. In our opinion, this crucial point has not been appropriately highlighted in the literature. The key observation in our paper is that the absence of good deals is not needed to develop our theory. Indeed, all we need to ensure is that no (strong) scalable good deal exists. As shown by the next proposition, this weaker condition holds in a number of standard situations. In particular, we show that the absence of scalable arbitrage opportunities is sometimes sufficient to rule out scalable good deals. The condition $\mathcal{M}^\infty\subset{\mathcal{S}}_+$ is typically implied by caps on short positions (it holds if the payoffs of the basic securities are positive and the set of admissible portfolios is bounded from below, so that short selling is possible but restricted for each security), while the condition $\mathcal{M}^\infty=\{0\}$ is satisfied whenever there are caps on short and long positions alike (it is equivalent to the boundedness of the set of admissible portfolios); see Example~\ref{ex: market models}.
\begin{proposition}
\label{prop: no acc deals}
Assume that one of the following conditions holds:
\begin{enumerate}
\item[(i)] $\mathcal{A}^\infty=L^0_+$ and there exists no scalable arbitrage opportunity.
\item[(ii)] $\mathcal{M}^\infty\subset{\mathcal{S}}_+$ and there exists no scalable arbitrage opportunity.
\item[(iii)] $\mathcal{M}^\infty=\{0\}$.
\end{enumerate}
Then, there exists no scalable good deal.
\end{proposition}
\begin{proof}
It is clear that no scalable good deal can exist if (iii) holds. Now, take a nonzero $X\in\mathcal{A}^\infty\cap\mathcal{M}^\infty$. Under either (i) or (ii), we have $X\in L^0_+$. Hence, we must have $\pi^\infty(X)>0$, for otherwise $X$ would be a scalable arbitrage opportunity. As a result, there cannot exist any scalable good deal.
\end{proof}
The next proposition records a simple equivalent condition for the absence of strong scalable good deals that will be used in the sequel. The condition is a one-period analogue of the condition in Theorem 8 of Pennanen (2011b). In that paper the condition is expressed in terms of portfolios instead of payoffs and the acceptance set is the standard positive cone.
\begin{proposition}
\label{prop: characterization no strong acc}
There exists no strong scalable good deal if and only if $\mathcal{A}^\infty\cap\{X\in\mathcal{M}^\infty \,; \ \pi^\infty(X)\leq0\}$
is a vector space.
\end{proposition}
\begin{proof}
Let ${\mathcal{N}}=\mathcal{A}^\infty\cap\{X\in\mathcal{M}^\infty \,; \ \pi^\infty(X)\leq0\}$. Note that ${\mathcal{N}}$ is a convex cone. To prove the ``only if'' implication, assume that there exists no strong scalable good deal. This implies that $-{\mathcal{N}}\subset{\mathcal{N}}$, showing that ${\mathcal{N}}$ is a vector space. To prove the ``if'' implication, assume that ${\mathcal{N}}$ is a vector space. Take a nonzero $X\in\mathcal{A}^\infty\cap\mathcal{M}^\infty$ such that $\pi^\infty(X)\leq0$. Note that $X\in{\mathcal{N}}$, so that $-X\in{\mathcal{N}}$ as well. It follows that no strong scalable good deal can exist, concluding the proof.
\end{proof}
\section{Fundamental Theorem of Asset Pricing}
\label{sect: FTAP}
This section contains our main results. We establish a direct and a dual characterization of market-consistent pricing with acceptable risk. In line with classical results on arbitrage-free prices, the dual characterization relies on an appropriate extension of the Fundamental Theorem of Asset Pricing. Most of this section, including our dual results, is new and both extends and sharpens the corresponding results in the good deal pricing literature. We refer to the dedicated remarks for a detailed comparison with the literature.
\subsection{The reference payoff space}
\label{sect: payoff space}
The remainder of the paper is concerned with establishing characterizations of market-consistent prices with acceptable risk. In line with classical results on arbitrage-free prices, our characterizations will be obtained by means of topological methods. This will sometimes force us to restrict the set of payoffs we are able to price. This set is called the {\em payoff space} and is denoted by ${\mathcal{X}}\subset L^0$. Note that, for consistency, the payoff space should always contain those payoffs that already carry a market price, i.e., ${\mathcal{S}}\subset{\mathcal{X}}$. Of course, the natural choice is to take ${\mathcal{X}}=L^0$ endowed with its canonical topology of convergence in probability. This choice, however, becomes problematic whenever we wish to apply duality theory. In this case, the payoff space has to be a strict subspace of $L^0$.
\smallskip
At first sight, the introduction of the payoff space at this stage may seem cumbersome and one may wonder why we have not focused our attention on ${\mathcal{X}}$ instead of $L^0$ right from the beginning. In fact, this is the standard approach of the entire good deal pricing literature, where ${\mathcal{X}}$ is taken to be a Lebesgue space or, more generally, a locally-convex topological vector space equipped with a partial order. Our approach has two main advantages. On the one hand, we do not want to rule out the natural choice ${\mathcal{X}}=L^0$. On the contrary, we aim to develop a pricing theory for general contracts as far as our methodology permits. On the other hand, the distinction between $L^0$ and ${\mathcal{X}}$ makes it possible to unveil a number of critical aspects of good deal pricing that have never been highlighted in the literature.
\smallskip
A first aspect has to do with the assumption ${\mathcal{S}}\subset{\mathcal{X}}$. This inclusion implies that the choice of the payoff space has an impact on the range of basic securities, thereby inducing restrictions on the underlying market model. For example, the choice ${\mathcal{X}}=L^1$ implies that each basic security has an integrable payoff, a condition that need not be satisfied by all realistic market models. Ensuring a flexible choice of the payoff space is therefore desirable from a modelling perspective. A second aspect has to do with the comparison between arbitrage pricing and good deal pricing. In arbitrage pricing one works under ${\mathcal{X}}=L^0$, so that the explicit introduction of the payoff space is not necessary. The application of duality theory is nevertheless possible through a change of probability making the payoffs of the basic securities integrable. A similar trick cannot be reproduced in the setting of good deal pricing because the structure of the acceptance set, differently from the standard positive cone underpinning arbitrage pricing, is often disrupted through a change of probability. The analysis of the interplay between the payoff space and the acceptance set should thus be an integral part of good deal pricing. We refer to Remark~\ref{rem: on S contained in X} for more details about this critical, yet neglected, aspect of the theory.
\smallskip
We collect the standing assumptions on the payoff space below. As said, the choice ${\mathcal{X}}=L^0$ endowed with the topology of convergence in probability is not ruled out for the moment. We will be careful to highlight when we are forced to restrict our analysis to a strict subspace of $L^0$.
\begin{assumption}
\label{assumption direct}
We assume that ${\mathcal{X}}$ is a linear subspace of $L^0$ equipped with a linear Hausdorff topology. We also assume that ${\mathcal{S}}\subset{\mathcal{X}}$ and $\mathcal{A}\cap{\mathcal{X}}$ is closed.
\end{assumption}
\begin{remark}
\label{rem: on S contained in X}
(i) The assumption ${\mathcal{S}}\subset{\mathcal{X}}$ will be crucial to establish our characterizations of market-consistent prices. At the same time, we will sometimes need to apply duality theory and we will therefore have to assume that ${\mathcal{X}}$ is a strict subspace of $L^0$. In this case, our assumption will force the payoffs of the basic traded securities to display some degree of regularity, e.g., integrability with respect to $\mathbb{P}$. Inspired by standard arguments from arbitrage pricing theory, one may wonder whether this issue can be overcome by a simple change of probability. Indeed, define
\[
d\mathbb{Q} = \frac{c}{1+\sum_{i=1}^N|S_i|}d\mathbb{P}, \ \ \ \ \ \ c=\mathbb{E}\left[\frac{1}{1+\sum_{i=1}^N|S_i|}\right]^{-1},
\]
where $S_1,\dots,S_N\in L^0$ are the payoffs of the basic securities. It is immediate to see that the probability $\mathbb{Q}$ is equivalent to $\mathbb{P}$ and every payoff in ${\mathcal{S}}$ is integrable with respect to $\mathbb{Q}$. As a result, it would seem possible to work with ${\mathcal{X}}=L^1(\mathbb{Q})$, where $L^1(\mathbb{Q})$ is the space of $\mathbb{Q}$-integrable random variables. This is precisely what is done in arbitrage pricing theory to make the application of duality theory possible; see, e.g., the proof of Theorem 1.7 in F\"{o}llmer and Schied (2016). The problem with this approach is that the acceptance set often depends explicitly on the natural probability $\mathbb{P}$ and its (topological) properties are typically lost after we pass to $\mathbb{Q}$. Most importantly for our applications, the set $\mathcal{A}\cap L^1(\mathbb{Q})$ is seldom closed with respect to the norm topology of $L^1(\mathbb{Q})$. Interestingly, this issue does not arise in arbitrage pricing theory because the acceptance set used there, namely $L^0_+$, is invariant with respect to changes of equivalent probability. More generally, the change of probability would not be problematic if the acceptance set is invariant with respect to changes of the numeraire. Unfortunately, as shown in Koch-Medina et al.\ (2017), numeraire invariance is only compatible with acceptance sets based on test scenarios as defined in Example~\ref{ex: acceptance sets}.
\smallskip
(ii) For technical reasons we need to require that the restriction of the acceptance set to ${\mathcal{X}}$ is closed. This implies that the natural choice ${\mathcal{X}}=L^0$ is feasible only if the chosen acceptance set is closed with respect to the topology of convergence in probability. This condition is sometimes satisfied, e.g., by $L^0_+$, but often fails. As a result, the choice of ${\mathcal{X}}$ will generally depend on the underlying acceptance set.
\end{remark}
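The change of probability discussed in the preceding remark can be illustrated numerically. The following sketch (the Cauchy-distributed payoff and the sample size are illustrative assumptions) renormalizes the density so that $\mathbb{Q}$ has total mass one and checks that a payoff without a $\mathbb{P}$-expectation becomes $\mathbb{Q}$-integrable:

```python
import math
import random

random.seed(0)

# Illustrative payoff with no P-expectation: a standard Cauchy sample.
N = 200_000
S = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(N)]

# Density dQ/dP proportional to 1/(1+|S|); c normalizes total mass to one.
w = [1.0 / (1.0 + abs(s)) for s in S]
c = N / sum(w)                       # Monte Carlo estimate of the constant c
q = [c * wi / N for wi in w]         # Q-probabilities of the sampled scenarios

total_mass = sum(q)                  # equals 1 by construction of c
EQ_abs_S = sum(qi * abs(si) for qi, si in zip(q, S))

print(total_mass)  # ≈ 1.0
print(EQ_abs_S)    # finite: |S|/(1+|S|) < 1 pointwise, so E_Q[|S|] < c
```

Since $|S|/(1+|S|)<1$ pointwise, the $\mathbb{Q}$-expectation of $|S|$ is bounded by $c$, even though the $\mathbb{P}$-expectation does not exist.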
\subsection{A key auxiliary set}
\label{sect: superreplication C}
This subsection features a technical result that will play a crucial role in the sequel. We show that, under the absence of strong scalable good deals, the set
\[
\mathcal{C} := \{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ \exists Z\in\mathcal{M} \,:\, Z-X\in\mathcal{A}, \ \pi(Z)\leq-m\}
\]
is closed (in the natural product topology). This set consists of all the payoff-price couples $(X,m)\in{\mathcal{X}}\times\mathbb{R}$ such that $X$ can be superreplicated with acceptable risk by means of admissible payoffs available in the market for less than $-m$. The set $\mathcal{C}$ plays the same role that in classical arbitrage pricing theory is played by the set of payoffs that can be superreplicated at zero cost. To see the link, consider a frictionless market, i.e., a market where $\pi$ is linear and $\mathcal{M}={\mathcal{S}}$, and assume that $\mathcal{A}=L^0_+$. The set of payoffs that can be superreplicated at zero cost is given by
\[
\mathcal{K} := \{X\in{\mathcal{X}} \,; \ \exists Z\in{\mathcal{S}}, \ Z\geq X, \ \pi(Z)\leq0\}.
\]
It is easily verified that, taking any $U\in{\mathcal{S}}$ satisfying $\pi(U)=1$, we can rewrite $\mathcal{C}$ as
\[
\mathcal{C} = \{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ X+mU\in\mathcal{K}\}.
\]
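For completeness, we record the verification behind this rewriting of $\mathcal{C}$; the argument only uses the stated frictionless assumptions. If $(X,m)\in\mathcal{C}$, then there exists $Z\in{\mathcal{S}}$ with $Z\geq X$ and $\pi(Z)\leq-m$, in which case $Z+mU\in{\mathcal{S}}$ satisfies $Z+mU\geq X+mU$ as well as $\pi(Z+mU)=\pi(Z)+m\leq0$ by linearity of $\pi$, so that $X+mU\in\mathcal{K}$. Conversely, if $X+mU\in\mathcal{K}$, then there exists $Z'\in{\mathcal{S}}$ with $Z'\geq X+mU$ and $\pi(Z')\leq0$, in which case $Z=Z'-mU$ satisfies $Z\geq X$ and $\pi(Z)=\pi(Z')-m\leq-m$, so that $(X,m)\in\mathcal{C}$.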
In this classical setting, it is well known that the absence of arbitrage opportunities implies closedness of $\mathcal{K}$ and, hence, of $\mathcal{C}$. This is key to establish the classical Fundamental Theorem of Asset Pricing; see, e.g., F\"{o}llmer and Schied (2016). The closedness of $\mathcal{C}$ in our general framework will allow us to establish a general version of the Fundamental Theorem in the next subsections.
\begin{lemma}
\label{lem: closedness C}
If there is no strong scalable good deal, then $\mathcal{C}$ is closed and $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Set ${\mathcal{N}}=\{X\in\mathcal{A}^\infty\cap\mathcal{M}^\infty \,; \ \pi^\infty(X)\leq 0\}$ and denote by ${\mathcal{N}}^\perp$ the orthogonal complement of ${\mathcal{N}}$ in ${\mathcal{S}}$. We claim that for every $(X,m)\in\mathcal{C}$ there exists $Z\in\mathcal{M}\cap{\mathcal{N}}^\perp$ such that $Z-X\in\mathcal{A}$ and $\pi(Z)\leq-m$. To see this, note that we find $W\in\mathcal{M}$ such that $W-X\in\mathcal{A}$ and $\pi(W)\leq-m$. We can write $W=W_{{\mathcal{N}}}+W_{{\mathcal{N}}^\perp}$ for unique elements $W_{{\mathcal{N}}}\in{\mathcal{N}}$ and $W_{{\mathcal{N}}^\perp}\in{\mathcal{N}}^\perp$. Note that $W_{{\mathcal{N}}}$ belongs to $-{\mathcal{N}}$ because the set ${\mathcal{N}}$ is a vector space by Proposition~\ref{prop: characterization no strong acc}. Hence, setting $Z=W_{{\mathcal{N}}^\perp}$, we infer that $Z=W-W_{{\mathcal{N}}}\in\mathcal{M}+\mathcal{M}^\infty\subset\mathcal{M}$ as well as $Z-X=(W-X)-W_{{\mathcal{N}}}\in\mathcal{A}+\mathcal{A}^\infty\subset\mathcal{A}$ by \eqref{eq: recession cones 1}. Moreover, $\pi(Z)=\pi(W-W_{{\mathcal{N}}})\leq-m$ by combining \eqref{eq: recession cones 1} with \eqref{eq: recession cones 2}. This shows the desired claim.
\smallskip
Next, we establish closedness. To this end, take a net $(X_\alpha,m_\alpha)\subset\mathcal{C}$ indexed on the directed set $(A,\succeq)$ and a point $(X,m)\in{\mathcal{X}}\times\mathbb{R}$ and assume that $(X_\alpha,m_\alpha)\to(X,m)$. By assumption, we find a net $(Z_\alpha)\subset\mathcal{M}$ such that $Z_\alpha-X_\alpha\in\mathcal{A}$ and $\pi(Z_\alpha)\leq-m_\alpha$ for every $\alpha\in A$. Without loss of generality we can assume that $(Z_\alpha)\subset{\mathcal{N}}^\perp$. Now, suppose that $(Z_\alpha)$ has no convergent subnet. In this case, we find a subnet of $(Z_\alpha)$ consisting of nonzero elements with strictly-positive diverging norms. (Indeed, it suffices to consider the index set $B=\{(\alpha,n) \,; \ \alpha\in A, \ n\in\mathbb{N}, \ \|Z_\alpha\|>n\}$ equipped with the direction defined by $(\alpha,n)\succeq(\beta,m)$ if and only if $\alpha\succeq\beta$ and $n\geq m$, and take $Z_{(\alpha,n)}=Z_\alpha$ for every $(\alpha,n)\in B$). We still denote this subnet by $(Z_\alpha)$. Since the unit sphere in ${\mathcal{S}}$ is compact, we can assume that $\frac{Z_\alpha}{\|Z_\alpha\|} \to Z$ for a suitable nonzero $Z\in\mathcal{M}^\infty$ by~\eqref{eq: recession cones 1}. As $(X_\alpha)$ is a convergent net by assumption,
\[
\frac{Z_\alpha-X_\alpha}{\|Z_\alpha\|} \to Z.
\]
This implies that $Z\in\mathcal{A}^\infty$ again by~\eqref{eq: recession cones 1}. We claim that $\pi^\infty(Z)\leq0$. Otherwise, we must find $\lambda>0$ such that $\pi(\lambda Z)>0$. Without loss of generality we may assume that $\|Z_\alpha\|>\lambda$ for every $\alpha\in A$. Since $(m_\alpha)$ is a convergent net, we can use the lower semicontinuity and convexity of $\pi$ to get
\[
0 < \pi(\lambda Z) \leq \liminf_{\alpha}\pi\left(\frac{\lambda Z_\alpha}{\|Z_\alpha\|}\right) \leq \liminf_{\alpha}\frac{\lambda\pi(Z_\alpha)}{\|Z_\alpha\|} \leq \liminf_{\alpha}\frac{-\lambda m_\alpha}{\|Z_\alpha\|} = 0.
\]
This yields $\pi^\infty(Z)\leq0$. As a result, it follows that $Z$ belongs to ${\mathcal{N}}$. However, this is not possible because $Z$ is a nonzero element in ${\mathcal{N}}^\perp$. To avoid this contradiction, the net $(Z_\alpha)$ must admit a convergent subnet, which we still denote by $(Z_\alpha)$ for convenience. By closedness of $\mathcal{M}$, the limit $Z$ also belongs to $\mathcal{M}$. As we clearly have $Z_\alpha-X_\alpha\to Z-X$, it follows that $Z-X\in\mathcal{A}$ by closedness of $\mathcal{A}\cap{\mathcal{X}}$. Moreover,
\[
\pi(Z) \leq \liminf_{\alpha}\pi(Z_\alpha) \leq \liminf_{\alpha}-m_\alpha = -m
\]
by lower semicontinuity of $\pi$. This shows that $(X,m)\in\mathcal{C}$ and establishes that $\mathcal{C}$ is closed.
\smallskip
Finally, we show that $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$. To this effect, assume to the contrary that for every $n\in\mathbb{N}$ there exists $Z_n\in\mathcal{A}\cap\mathcal{M}$ such that $\pi(Z_n)\leq-n$. If the sequence $(Z_n)$ is bounded, then we may assume without loss of generality that $Z_n\to Z$ for some $Z\in\mathcal{A}\cap\mathcal{M}$. The lower semicontinuity of $\pi$ implies $\pi(Z) \leq \liminf_{n\to\infty}\pi(Z_n) = -\infty$, which cannot hold. Hence, the sequence $(Z_n)$ must be unbounded. As argued above, we can assume that $(Z_n)\subset{\mathcal{N}}^\perp$ without loss of generality. Moreover, we find a suitable subsequence, which we still denote by $(Z_n)$, that has strictly-positive divergent norms satisfying $\frac{Z_n}{\|Z_n\|}\to Z$ for some nonzero $Z$ belonging to $\mathcal{A}^\infty\cap\mathcal{M}^\infty$. We claim that $\pi^\infty(Z)\leq0$. Otherwise, we must find $\lambda>0$ such that $\pi(\lambda Z)>0$. Without loss of generality we may assume that $\|Z_n\|>\lambda$ for every $n\in\mathbb{N}$. The lower semicontinuity and convexity of $\pi$ imply
\[
0 < \pi(\lambda Z) \leq \liminf_{n\to\infty}\pi\left(\frac{\lambda Z_n}{\|Z_n\|}\right) \leq \liminf_{n\to\infty}\frac{\lambda\pi(Z_n)}{\|Z_n\|} \leq \liminf_{n\to\infty}\frac{-\lambda n}{\|Z_n\|} \leq 0.
\]
This shows that $\pi^\infty(Z)\leq0$ must hold. As a result, it follows that $Z$ belongs to ${\mathcal{N}}$. However, this is not possible because $Z$ is a nonzero element in ${\mathcal{N}}^\perp$. Hence, we must have $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$.
\end{proof}
\begin{comment}
\begin{remark}
(i) The closedness of $\mathcal{C}$ established in the last lemma may also be obtained by applying a generalization of the famous result in (Dieudonn\'e, 1966) about the closure of the difference of two convex closed sets. Indeed, $\mathcal{C}$ can be equivalently written as
\[
\mathcal{C}=\{(X,m)\in\mathcal{M}\times\mathbb{R} \,; \ \pi(Z)\leq-m\}-((\mathcal{A}\cap{\mathcal{X}})\times\mathbb{R}_+).
\]
The absence of strong scalable good deals is equivalent to the recession cones of the two sets in the right-hand side having zero intersection and the finite dimensionality of the set involving $\pi$ and $\mathcal{M}$ ensures the local compactness needed for closedness to hold; see, e.g., Theorem 1.1.8 in (Z\u{a}linescu, 2002). The advantage of the above proof is that it provides a direct argument for closedness and allows us to establish an additional property that will be needed in what follows.
\smallskip
(ii) Consider the classical frictionless case where $\pi$ is linear and $\mathcal{M}={\mathcal{S}}$ and assume we find $U\in\mathcal{M}\cap{\mathcal{X}}_+$ such that $\pi(U)=1$. In this case, $\mathcal{C}$ can be reduced to
\[
\mathcal{C}=\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \, ; \ mU+X\in\ker(\pi)-(\mathcal{A}\cap{\mathcal{X}})\}.
\]
It is clear that $\mathcal{C}$ is closed precisely when $\ker(\pi)-(\mathcal{A}\cap{\mathcal{X}})$ is so. Note that, if $\mathcal{A}$ is taken to be the standard positive cone, this difference coincides with the set of payoffs that can be superreplicated at zero cost.
\end{remark}
\end{comment}
\subsection{Direct characterization of market-consistent prices}
As observed in Proposition~\ref{prop: interval MCP}, the set of market-consistent prices with acceptable risk is an interval that is bounded from above by the corresponding superreplication price.
In this subsection we are concerned with establishing when the superreplication price is itself market consistent. This will yield a direct characterization of market-consistent prices. In Example \ref{ex: superreplication under replicability} we show that, in general, the superreplication price may or may not be market consistent, regardless of whether the underlying payoff is attainable. This is based on the following simple characterization of market consistency.
\begin{proposition}
\label{prop: characterization mcp superreplication}
For every $X\in{\mathcal{X}}$ such that $\pi^+(X)\in\mathbb{R}$ we have $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ if and only if $(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=\pi^+(X)\}\subset\{X\}$.
\end{proposition}
\begin{proof}
First, assume that $(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=\pi^+(X)\}\subset\{X\}$. Then, for every $Z\in\mathcal{M}$ satisfying $Z-X\in\mathcal{A}\setminus\{0\}$ we must have $\pi(Z)>\pi^+(X)$. Since $\pi^+(X)\leq\pi(X)$ whenever $X\in\mathcal{M}$, it follows that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$, proving the ``if'' implication. Conversely, assume that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ and take any payoff $Z\in(\mathcal{A}+X)\cap\mathcal{M}$. If we happen to have $\pi(Z)=\pi^+(X)$, then $Z$ must be equal to $X$ by market consistency of $\pi^+(X)$. This proves the ``only if'' implication.
\end{proof}
The previous proposition shows that market consistency of the superreplication price is strongly linked with the attainability of the infimum in the definition of superreplication price. We therefore target sufficient conditions for attainability to hold. The next lemma suggests a strategy to tackle this problem.
\begin{lemma}
\label{lem: superreplication C}
For every $X\in{\mathcal{X}}$ we have $\pi^+(X) = \inf\{m\in\mathbb{R} \,; \ (X,-m)\in\mathcal{C}\}$.
\end{lemma}
\begin{proof}
For every $m\in\mathbb{R}$ we have $(X,-m)\in\mathcal{C}$ if and only if there exists $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}$ and $\pi(Z)\leq m$. As a result, we get
\[
\pi^+(X)
=
\inf\{m\in\mathbb{R} \,; \ \exists Z\in\mathcal{M} \,:\, Z-X\in\mathcal{A}, \ \pi(Z)\leq m\}
=
\inf\{m\in\mathbb{R} \,; \ (X,-m)\in\mathcal{C}\}.\qedhere
\]
\end{proof}
The closedness criterion for the set $\mathcal{C}$ established in Lemma~\ref{lem: closedness C} yields the following attainability result.
\begin{proposition}
\label{prop: direct FTAP}
If there exists no strong scalable good deal, then for every $X\in{\mathcal{X}}$ with $\pi^+(X)<\infty$ there exists $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}$ and $\pi(Z)=\pi^+(X)$.
\end{proposition}
\begin{proof}
First of all, we note that $\pi^+$ is lower semicontinuous as, by virtue of Lemma~\ref{lem: closedness C}, $\mathcal{C}$ is closed and the epigraph of $\pi^+$ coincides with $\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \, ; \ (X,-m)\in\mathcal{C}\}$. Next, we claim that $\pi^+$ does not attain the value $-\infty$. To this end, note first that $\pi^+(0)>-\infty$ by Lemma \ref{lem: closedness C}. Since $\pi^+(0)\leq0$, it follows that $\pi^+$ is finite at $0$. It is readily seen that $\pi^+$ is convex. Hence, being lower semicontinuous, $\pi^+$ can never attain the value $-\infty$ on the space ${\mathcal{X}}$.
To show the desired attainability, take a payoff $X\in{\mathcal{X}}$ such that $\pi^+(X)<\infty$. Since $\pi^+(X)$ is finite, it follows from the closedness of $\mathcal{C}$ established in Lemma \ref{lem: closedness C} that the infimum in Lemma~\ref{lem: superreplication C} is attained. By definition of $\mathcal{C}$, this implies that $\pi^+(X)=\pi(Z)$ for a suitable $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}$.
\end{proof}
The next result provides a characterization of market-consistent prices under the assumption that the market does not admit strong scalable good deals. In this case, we show that for a payoff outside $\mathcal{M}$ the superreplication price is never market consistent and, hence, the set of market-consistent prices is an open interval. For a payoff in $\mathcal{M}$ the superreplication price may or may not be market consistent, so that the corresponding set of market-consistent prices may or may not be a closed interval.
\begin{proposition}[{\bf Direct characterization of market-consistent prices}]
\label{theo: characterization mcp superreplication}
If there exists no strong scalable good deal, then for every $X\in{\mathcal{X}}$ we have $\mathop {\rm MCP}\nolimits(X)\neq\emptyset$ and the following statements hold:
\begin{enumerate}
\item[(i)] If $X\in\mathcal{M}$, then $\pi^+(X)\leq\pi(X)$ and both $\pi^+(X)\notin\mathop {\rm MCP}\nolimits(X)$ and $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ can hold.
\item[(ii)] If $X\in\mathcal{M}$ and $\pi^+(X)\notin\mathop {\rm MCP}\nolimits(X)$, then both $\pi^+(X)=\pi(X)$ and $\pi^+(X)<\pi(X)$ can hold.
\item[(iii)] If $X\in\mathcal{M}$ and $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$, then $\pi^+(X)=\pi(X)$.
\item[(iv)] If $X\notin\mathcal{M}$, then $\pi^+(X)\notin\mathop {\rm MCP}\nolimits(X)$.
\end{enumerate}
The alternatives in (i) and (ii) can hold even if there exists no good deal.
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop: direct FTAP} that for every $X\in{\mathcal{X}}$ we must have $\pi^+(X)>-\infty$, showing that $\mathop {\rm MCP}\nolimits(X)\neq\emptyset$. Now, take $X\in\mathcal{M}$. Since $X-X=0\in\mathcal{A}$, we easily infer from the definition of superreplication price that $\pi^+(X)\leq\pi(X)$. It is shown in Example~\ref{ex: superreplication under replicability} that all the situations in (i) and (ii) may hold (even if there exist no good deals). To establish (iii) and (iv), take an arbitrary $X\in{\mathcal{X}}$ and assume that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$. As Proposition~\ref{prop: direct FTAP} implies that $(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=\pi^+(X)\}$ is not empty, it follows from Proposition~\ref{prop: characterization mcp superreplication} that $X$ must belong to $\mathcal{M}$ and that the infimum in the definition of superreplication price must be attained by $X$ alone, establishing the desired implications.
\end{proof}
\begin{example}
\label{ex: superreplication under replicability}
Let $\Omega=\{\omega_1,\omega_2\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}$ is specified by $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$. In this simple setting, we take ${\mathcal{X}}=L^0$ and identify every element of ${\mathcal{X}}$ with a vector of $\mathbb{R}^2$. Set ${\mathcal{S}}=\mathbb{R}^2$ and consider the acceptance set defined by
\[
\mathcal{A} = \{(x,y)\in\mathbb{R}^2 \,; \ y\geq\max\{-x,0\}\}.
\]
\smallskip
(i) Set $\pi(x,y)=\max\{2x+y,x+2y\}$ for every $(x,y)\in\mathbb{R}^2$ and $\mathcal{M}=\mathbb{R}^2$. It is immediate to verify that no good deal exists. Set $X=(-2,1)\in\mathcal{M}$ and observe that $\pi^+(X)=0$ and
\[
(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=0\}=\{X\}.
\]
It follows from Proposition \ref{prop: characterization mcp superreplication} that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$. Next, take $Y=(1,-2)\in\mathcal{M}$. In this case, an explicit calculation shows that
\[
\pi^+(Y) = \inf_{x\in\mathbb{R}}\max\{2x-2+\max\{1-x,0\},x-4+2\max\{1-x,0\}\} = -\frac{3}{2}.
\]
Moreover, setting $W=(-\frac{1}{2},-\frac{1}{2})\in\mathcal{M}$, we have
\[
(\mathcal{A}+Y)\cap\bigg\{Z\in\mathcal{M} \,; \ \pi(Z)=-\frac{3}{2}\bigg\}=\{W\}.
\]
It follows from Proposition \ref{prop: characterization mcp superreplication} that $\pi^+(Y)\notin\mathop {\rm MCP}\nolimits(Y)$. Note also that $\pi(X)=\pi^+(X)$ and $\pi(Y)>\pi^+(Y)$.
\smallskip
(ii) Set $\pi(x,y)=\max\{x+y,x+2y\}$ for every $(x,y)\in\mathbb{R}^2$ and $\mathcal{M}=\{(x,y)\in\mathbb{R}^2\,; \ x\leq1\}$. Observe that no good deal exists. Set $X=(1,-1)\in\mathcal{M}$ and $Y=(2,-2)\notin \mathcal{M}$. Then, $\pi^+(X)=\pi^+(Y)=0$ and
\[
(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=0\}=(\mathcal{A}+Y)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=0\}=\{\lambda X \,; \ \lambda\in[0,1]\}.
\]
It follows from Proposition \ref{prop: characterization mcp superreplication} that $\pi^+(X)\notin \mathop {\rm MCP}\nolimits(X)$ and $\pi^+(Y)\notin \mathop {\rm MCP}\nolimits(Y)$. Note also that $\pi(X)=0$ so that $\pi(X)=\pi^+(X)$.
\smallskip
(iii) Set $\pi(x,y)=e^x-1$ for every $(x,y)\in\mathbb{R}^2$ and $\mathcal{M}=\mathbb{R}\times\mathbb{R}_+$. Any $X\in{\mathcal{X}}$ satisfies $\pi^+(X)=-1$ and
\[
(\mathcal{A}+X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)=-1\} = \emptyset.
\]
It follows from Proposition \ref{prop: characterization mcp superreplication} that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ regardless of whether $X$ belongs to $\mathcal{M}$ or not. Note that in this case there exist strong scalable good deals.
\end{example}
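The closed-form values computed in part (i) of the preceding example can be spot-checked numerically. A minimal sketch based on the reduced one-dimensional problem displayed above (the grid range and step are illustrative choices):

```python
# Spot check of Example (i): pi^+(Y) for Y = (1, -2) via the reduced
# one-dimensional problem, using a brute-force grid search.
def pi(x, y):
    return max(2 * x + y, x + 2 * y)

def reduced(x):
    # For fixed first coordinate x, the cheapest acceptable shift of Y
    # has second coordinate y = max(1 - x, 0) - 2.
    y = max(1.0 - x, 0.0) - 2.0
    return pi(x, y), (x, y)

xs = [i / 1000.0 for i in range(-4000, 4001)]   # grid on [-4, 4]
val, W = min(reduced(x) for x in xs)

print(val)        # -1.5
print(W)          # (-0.5, -0.5)
print(pi(1, -2))  # 0, so pi(Y) > pi^+(Y)
```

The grid search recovers both the superreplication price $-\frac{3}{2}$ and the unique minimizer $W=(-\frac{1}{2},-\frac{1}{2})$.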
The previous proposition unveils a stark contrast between our general setting and the classical frictionless setting. In a frictionless market, the superreplication price of every replicable payoff is market consistent and coincides with the associated replication cost. In our case, for an attainable payoff, the superreplication price may be strictly lower than the associated replication cost. This is in line with the findings in Bensaid et al.\ (1992), where the focus was on a multi-period Cox-Ross-Rubinstein model with proportional transaction costs and no portfolio constraints and the acceptance set was taken to be the standard positive cone. As explained in that paper, the discrepancy between the superreplication price and the replication cost is a direct consequence of the fact that trading is costly and it may therefore ``pay to weigh the benefits of replication against those of potential savings on transaction costs''. What also follows from the previous result and was only implicitly highlighted in Bensaid et al.\ (1992) is that, contrary to the frictionless case, the superreplication price of an attainable payoff and, {\em a fortiori}, its replication cost may fail to be market consistent. This is another implication of transaction costs, which allow the infimum in the definition of superreplication price to be attained by multiple replicable payoffs even if the market admits no good deals. Motivated by this discussion, we provide sufficient conditions for the replication cost of a payoff in $\mathcal{M}$ to be market consistent and, hence, to coincide with the corresponding superreplication price. More precisely, we show that this holds for every payoff with ``zero bid-ask spread'' provided the market admits no good deals.
\begin{proposition}
If there exists no good deal, then $\pi(X)=\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ for every $X\in\mathcal{M}\cap(-\mathcal{M})$ such that $\pi(-X)=-\pi(X)$.
\end{proposition}
\begin{proof}
Take an arbitrary $X\in\mathcal{M}\cap(-\mathcal{M})$ such that $\pi(-X)=-\pi(X)$. Since $\pi^+(X)$ is the supremum of the set $\mathop {\rm MCP}\nolimits(X)$ and $\pi^+(X)\leq\pi(X)$, it suffices to show that $\pi(X)\in\mathop {\rm MCP}\nolimits(X)$. To this effect, take any $Z\in\mathcal{M}$ satisfying $Z-X\in\mathcal{A}\setminus\{0\}$. Note that $\frac{1}{2}Z-\frac{1}{2}X\in\mathcal{M}$ by convexity of $\mathcal{M}$, because $-X\in\mathcal{M}$, and that $\frac{1}{2}Z-\frac{1}{2}X=\frac{1}{2}(Z-X)+\frac{1}{2}\cdot0\in\mathcal{A}$ by convexity of $\mathcal{A}$. As a result, the absence of good deals implies that
\[
0 < \pi\left(\frac{1}{2}Z-\frac{1}{2}X\right) \leq \frac{1}{2}\pi(Z)+\frac{1}{2}\pi(-X) = \frac{1}{2}\pi(Z)-\frac{1}{2}\pi(X).
\]
This yields $\pi(X)<\pi(Z)$ and proves that $\pi(X)$ is a market-consistent price for $X$.
\end{proof}
\begin{comment}
\begin{corollary}
Let $\mathcal{M}$ and $\pi$ be linear. If there exists no acceptable deal, then $\pi(X)=\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ for every replicable payoff $X\in\mathcal{M}$.
\end{corollary}
\end{comment}
\subsection{Consistent price deflators}
In this subsection we start our journey towards a dual characterization of market-consistent prices with acceptable risk. As already mentioned, a key step is to establish the appropriate extension of the Fundamental Theorem of Asset Pricing. Both results will be expressed in terms of suitable dual elements, called consistent price deflators. Here, consistency refers to the acceptance set. We mainly distinguish between two types of price deflators, namely consistent and strictly consistent ones. These notions are encountered in the literature under special assumptions on the market model and/or on the acceptance set. In a frictionless setting, a consistent price deflator corresponds to a representative state pricing function in Carr et al.\ (2001) and to a Riesz density of a no-good-deal pricing functional in \v{C}ern\'{y} and Hodges (2002). In a market with proportional frictions, it corresponds to a Riesz density of an underlying frictionless pricing rule in Jouini and Kallal (1995), to a consistent price system in Jaschke and K\"{u}chler (2001), to a consistent pricing kernel in Staum (2004), and is related to a risk-neutral measure in Cherny (2008). In a market with nonproportional frictions, it corresponds to a marginal price deflator in Pennanen (2011a). Strictly consistent price deflators have been considered in Jouini and Kallal (1995), \v{C}ern\'{y} and Hodges (2002), and Pennanen (2011a). Note that the acceptance set in Jouini and Kallal (1995) and Pennanen (2011a) is the standard positive cone. The formal definition of a price deflator is as follows.
\begin{definition}
\label{def: pricing density}
A random variable $D\in L^0$ is a {\em price deflator} if the following conditions hold:
\begin{enumerate}
\item[(1)] $DX\in L^1$ for every $X\in{\mathcal{S}}$.
\item[(2)] $\sup\{\mathbb{E}[DX]-\pi(X) \,; \ X\in\mathcal{M}\}<\infty$.
\end{enumerate}
In this case, we say that $D$ is:
\begin{enumerate}
\item[(3)] {\em weakly consistent} if $\inf\{\mathbb{E}[DX] \,; \ X\in\mathcal{A}\cap{\mathcal{X}}\}>-\infty$.
\item[(4)] {\em consistent} if $\mathbb{E}[DX]\geq0$ for every $X\in\mathcal{A}\cap{\mathcal{X}}$.
\item[(5)] {\em strictly consistent} if $\mathbb{E}[DX]>0$ for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$.
\end{enumerate}
\end{definition}
It should be clear that a price deflator is a natural extension of a classical price deflator to our market with frictions. To illustrate this, consider a price deflator $D\in L^0$ and define $\mathcal{L}=\{X\in L^0 \,; \ DX\in L^1\}$. Note that every replicable payoff belongs to the vector space $\mathcal{L}$. Moreover, define $\psi(X)=\mathbb{E}[DX]$ for $X\in\mathcal{L}$. By definition, there exists a constant $\gamma_{\pi,\mathcal{M}}\geq0$ such that for every attainable payoff $X\in\mathcal{M}\cap(-\mathcal{M})$
\[
-\pi(-X)-\gamma_{\pi,\mathcal{M}}\leq\psi(X)\leq\pi(X)+\gamma_{\pi,\mathcal{M}}.
\]
The functional $\psi$ can therefore be viewed as the pricing rule of an ``artificial'' frictionless market where every payoff in $\mathcal{L}$ is ``replicable'' and the attainable payoffs are ``priced'', up to a suitable enlargement, consistently with their market bid-ask spread. No enlargement is needed when $\psi$ is already dominated from above by $\pi$. This happens, for instance, if both $\pi$ and $\mathcal{M}$ are conic in the first place. In particular, this holds if $\pi$ is linear and $\mathcal{M}$ coincides with the entire ${\mathcal{S}}$, in which case $\psi$ is a linear extension of the pricing rule beyond the space of replicable payoffs. This shows that, in a frictionless setting, the notion of a price deflator boils down to the classical notion from arbitrage pricing theory. Consistency with the acceptance set is, of course, specific to good deal pricing theory. The interpretation is as follows. If $D$ is weakly consistent, then we find a constant $\gamma_\mathcal{A}\leq0$ such that for every acceptable payoff $X\in\mathcal{A}\cap{\mathcal{X}}\cap\mathcal{L}$
\[
\psi(X) \geq \gamma_\mathcal{A}.
\]
This means that prices of acceptable payoffs in the ``artificial'' frictionless market with pricing rule $\psi$ cannot be arbitrarily negative. A simple situation where such ``artificial'' prices are nonnegative is when $\mathcal{A}$ is a cone in the first place. In this case, weak consistency is equivalent to consistency. In particular, if $\mathcal{A}$ is taken to be the standard positive cone, then (strict) consistency boils down to the (strict) positivity of $\psi$. Hence, consistency with the acceptance set requires that the pricing rule in the ``artificial'' frictionless market assigns prices to acceptable payoffs that are bounded from below, positive, or strictly positive depending on the type of consistency. This shows that a (strictly) consistent price deflator is a direct extension of a (strictly) positive price deflator, or equivalently of an (equivalent) martingale measure, in the classical theory. We summarize this discussion in the following proposition, which highlights the role of conicity in simplifying the formulation of a consistent price deflator.
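The sandwich inequality $-\pi(-X)-\gamma_{\pi,\mathcal{M}}\leq\psi(X)\leq\pi(X)+\gamma_{\pi,\mathcal{M}}$ can be made concrete on a toy two-state market; the specific choices of $\pi$ and $D$ below are illustrative assumptions, not taken from the text:

```python
# Toy two-state market (P(w1) = P(w2) = 1/2) illustrating the sandwich
# inequality -pi(-X) - gamma <= psi(X) <= pi(X) + gamma.
def pi(x, y):
    return x + y * y               # convex, pi(0) = 0, not conic

D = (2.0, 4.0)                     # candidate price deflator

def psi(x, y):                     # psi(X) = E[DX] under P
    return 0.5 * (D[0] * x + D[1] * y)

# gamma = sup_X { E[DX] - pi(X) } = sup_y { 2y - y^2 } = 1, at y = 1
# (every payoff is attainable here, and the sup is over y only).
grid = [j / 100.0 for j in range(-500, 501)]
gamma = max(psi(0.0, y) - pi(0.0, y) for y in grid)

eps = 1e-9                         # tolerance for rounding at y = +/- 1
ok = all(
    -pi(-x, -y) - gamma - eps <= psi(x, y) <= pi(x, y) + gamma + eps
    for x in grid
    for y in grid
)
print(gamma)  # 1.0
print(ok)     # True
```

The bounds are tight: the upper bound is attained along $y=1$ and the lower bound along $y=-1$, reflecting that $\gamma_{\pi,\mathcal{M}}$ cannot be improved for this deflator.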
\begin{proposition}
\label{prop: pricing density}
Let $D\in L^0$ be a price deflator. Then, the following statements hold:
\begin{enumerate}
\item[(i)] $\mathbb{E}[DX]\leq\pi(X)$ for every $X\in\mathcal{M}^\infty$ such that $\pi$ is conic on $\mathop{\rm cone}\nolimits(X)$.
\item[(ii)] $\mathbb{E}[DX]=\pi(X)$ for every $X\in\mathcal{M}^\infty\cap(-\mathcal{M}^\infty)$ such that $\pi$ is linear on $\mathop{\rm span}\nolimits(X)$.
\end{enumerate}
If $D$ is weakly consistent, then the following statement holds:
\begin{enumerate}
\item[(iii)] $\mathbb{E}[DX]\geq0$ for every $X\in\mathcal{A}^\infty\cap{\mathcal{X}}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Take an arbitrary $X\in{\mathcal{X}}$. Since $\mathop{\rm span}\nolimits(X)=\mathop{\rm cone}\nolimits(X)\cup\mathop{\rm cone}\nolimits(-X)$, it is clear that (i) implies (ii). To prove (i), assume that $X\in\mathcal{M}^\infty$ and $\pi$ is conic on $\mathop{\rm cone}\nolimits(X)$. Then, by definition of a price deflator,
\[
\sup_{n\in\mathbb{N}}\{n(\mathbb{E}[DX]-\pi(X))\} = \sup_{n\in\mathbb{N}}\{\mathbb{E}[D(nX)]-\pi(nX)\} < \infty.
\]
This is only possible if $\mathbb{E}[DX]-\pi(X)\leq0$, showing the desired claim. Finally, to establish (iii), assume that $D$ is weakly consistent and $X\in\mathcal{A}^\infty$. Then, by definition of weak consistency,
\[
\inf_{n\in\mathbb{N}}\{n\mathbb{E}[DX]\} = \inf_{n\in\mathbb{N}}\mathbb{E}[D(nX)] > -\infty.
\]
This is only possible if $\mathbb{E}[DX]\geq0$, proving the desired claim and concluding the proof.
\end{proof}
\begin{comment}
\begin{remark}
In a market where some attainable payoff is frictionless, every price deflator can be represented in terms of a probability measure. To see this, let $D\in L^0$ be a (strictly positive) price deflator and consider a strictly positive payoff $U\in\mathcal{M}^\infty\cap(-\mathcal{M}^\infty)$ such that $\pi$ is linear along $\mathop{\rm span}\nolimits(U)$ and satisfies $\pi(U)>0$. It follows from the preceding proposition that $\mathbb{E}_\mathbb{P}[DU]=\pi(U)$. Then, we find a probability measure $\mathbb{Q}$ that is absolutely continuous with (equivalent to) $\mathbb{P}$ and satisfies $\frac{d\mathbb{Q}}{d\mathbb{P}}=\frac{DU}{\pi(U)}$. In this case,
\[
\frac{\mathbb{E}_\mathbb{P}[DX]}{\pi(U)} = \mathbb{E}_\mathbb{Q}\left[\frac{X}{U}\right]
\]
for every $X\in L^0$ such that $DX\in L^1$.
The probability $\mathbb{Q}$ thus plays the role of an (equivalent) pricing measure from the classical arbitrage pricing theory.
\end{remark}
\end{comment}
Later, we will show that, contrary to the focus on consistent price deflators of the bulk of the literature, strictly consistent price deflators are the right dual objects to use in order to obtain a version of the Fundamental Theorem of Asset Pricing in a good deal pricing setting. For the time being, we show that the existence of strictly consistent price deflators always implies the absence of scalable good deals. However, contrary to the classical frictionless setting, it does not generally imply the absence of good deals unless the price deflators satisfy suitable extra assumptions.
\begin{proposition}
\label{prop: density implies no good deal}
If there exists a strictly consistent price deflator $D\in L^0$, then there exists no scalable good deal. If, additionally, $\mathbb{E}[DX]\leq\pi(X)$ for every $X\in\mathcal{M}$, then there exists no good deal either.
\end{proposition}
\begin{proof}
Take a nonzero payoff $X\in\mathcal{A}\cap\mathcal{M}^\infty$. To show that no scalable good deal exists, we have to show that $\pi^\infty(X)>0$. To this effect, note that, by definition of a price deflator,
\[
\sup_{n\in\mathbb{N}}\{n(\mathbb{E}[DX]-\pi^\infty(X))\} =
\sup_{n\in\mathbb{N}}\{\mathbb{E}[D(nX)]-\pi^\infty(nX)\} \leq \sup_{n\in\mathbb{N}}\{\mathbb{E}[D(nX)]-\pi(nX)\} < \infty,
\]
where we used that $\pi^\infty$ dominates $\pi$. This is only possible if $\mathbb{E}[DX]-\pi^\infty(X)\leq0$. As a result, we obtain $\pi^\infty(X)\geq\mathbb{E}[DX]>0$. Next, assume that $\mathbb{E}[DX]\leq\pi(X)$ for every payoff $X\in\mathcal{M}$ and take a nonzero payoff $X\in\mathcal{A}\cap\mathcal{M}$. Then, $\pi(X)\geq\mathbb{E}[DX]>0$, showing that no good deal exists.
\end{proof}
\begin{example}
\label{ex: scpricingdens with acc deals}
Let $\Omega=\{\omega_1,\omega_2\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}$ is specified by $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$. In this simple setting, we take ${\mathcal{X}}=L^0$ and identify every element of ${\mathcal{X}}$ with a vector of $\mathbb{R}^2$. Set ${\mathcal{S}}=\mathbb{R}^2$ and consider the acceptance set defined by
\[
\mathcal{A} = \{(x,y)\in\mathbb{R}^2 \,; \ y\geq\max\{-x,0\}\}.
\]
We show that the existence of strictly consistent price deflators is not sufficient to rule out good deals. In view of the previous proposition, this can occur only if either the pricing rule or the set of attainable payoffs fails to be conic and the supremum in Definition~\ref{def: pricing density} is strictly positive. We provide an example of each situation.
\smallskip
(i) Set $\pi(x,y)=x+y^2$ for every $(x,y)\in\mathbb{R}^2$ and $\mathcal{M}=\mathbb{R}^2$. Note that $\mathcal{M}$ is conic while $\pi$ is not. It is clear that $D=(2,4)$ is a strictly consistent price deflator. In particular, we have
\[
\sup_{X\in\mathcal{M}}\{\mathbb{E}[DX]-\pi(X)\} = \sup_{y\in\mathbb{R}}\{2y-y^2\} = 1.
\]
However, $X=(-1,1)\in\mathcal{A}\cap\mathcal{M}$ satisfies $\pi(X)=0$ and is thus a good deal.
\smallskip
(ii) Set $\pi(x,y)=x+y$ for every $(x,y)\in\mathbb{R}^2$ and $\mathcal{M}=\{(x,y)\in\mathbb{R}^2 \,; \ x\geq-1, \ 0\leq y\leq 1\}$. Note that $\pi$ is conic while $\mathcal{M}$ is not. It is clear that $D=(2,4)$ is a strictly consistent price deflator. In particular,
\[
\sup_{X\in\mathcal{M}}\{\mathbb{E}[DX]-\pi(X)\} = \sup_{0\leq y\leq1}y = 1.
\]
However, $X=(-1,1)\in\mathcal{A}\cap\mathcal{M}$ satisfies $\pi(X)=0$ and is thus a good deal.
\end{example}
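The two-state example lends itself to a direct numerical check. The following Python sketch verifies weak and strict consistency of $D=(2,4)$ and confirms that $X=(-1,1)$ is a good deal in both cases; the grid range, step size, and tolerance are our own illustrative choices, not part of the example.

```python
import itertools

P = (0.5, 0.5)  # P(w1) = P(w2) = 1/2

def E(D, X):
    """Expectation E[D X] under the uniform two-state measure."""
    return P[0] * D[0] * X[0] + P[1] * D[1] * X[1]

def in_A(X, tol=1e-12):
    """Acceptance set A = {(x, y) : y >= max(-x, 0)}."""
    return X[1] >= max(-X[0], 0.0) - tol

D = (2.0, 4.0)
grid = [i / 10 for i in range(-50, 51)]  # coarse grid on [-5, 5]
box = list(itertools.product(grid, grid))

# Case (i): pi(x, y) = x + y^2 on M = R^2.
pi_i = lambda X: X[0] + X[1] ** 2
sup_i = max(E(D, X) - pi_i(X) for X in box)   # sup_y (2y - y^2) = 1 < infinity
assert abs(sup_i - 1.0) < 1e-9                # D is weakly consistent
assert all(E(D, X) > 0 for X in box if in_A(X) and X != (0.0, 0.0))  # strictly
X = (-1.0, 1.0)                               # acceptable, nonzero, price 0:
assert in_A(X) and pi_i(X) == 0.0             # a good deal in case (i)

# Case (ii): pi(x, y) = x + y on M = {x >= -1, 0 <= y <= 1}.
pi_ii = lambda Z: Z[0] + Z[1]
M_ii = [Z for Z in box if Z[0] >= -1 and 0 <= Z[1] <= 1]
sup_ii = max(E(D, Z) - pi_ii(Z) for Z in M_ii)  # sup_{0<=y<=1} y = 1 < infinity
assert abs(sup_ii - 1.0) < 1e-9
assert in_A(X) and pi_ii(X) == 0.0            # the same X is a good deal here
print("both cases verified")
```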
\subsection{The reference set of price deflators}
We turn to the more challenging problem of investigating if and under which assumptions the converse implication holds, i.e., the absence of (scalable) good deals implies the existence of strictly consistent price deflators. To this effect, we rely on duality theory and we therefore have to choose a suitable topology on the reference payoff space. As remarked below, our framework is flexible enough to accommodate the standard model spaces. We refer to the appendix for the necessary details on weak topologies.
\begin{assumption}
\label{standing assumption}
We denote by ${\mathcal{X}}'$ a linear subspace of $L^0$. We assume that ${\mathcal{X}}$ and ${\mathcal{X}}'$ contain $L^\infty$ and satisfy $XY\in L^1$ for all $X\in{\mathcal{X}}$ and $Y\in{\mathcal{X}}'$. These spaces are in separating duality through the bilinear form $(X,Y) \mapsto \mathbb{E}[XY]$. The topology on ${\mathcal{X}}$ fixed in Assumption~\ref{assumption direct} is taken to be $\sigma({\mathcal{X}},{\mathcal{X}}')$. Similarly, we equip ${\mathcal{X}}'$ with the topology $\sigma({\mathcal{X}}',{\mathcal{X}})$. In addition, we assume that ${\mathcal{X}}'$ is the norm dual of a normed space ${\mathcal{Y}}\subset L^0$ (which need not coincide with ${\mathcal{X}}$) and that $\sigma({\mathcal{X}}',{\mathcal{X}})$ is weaker than the associated weak-star topology $\sigma({\mathcal{X}}',{\mathcal{Y}})$.
\end{assumption}
\begin{remark}
\label{rem: setting}
(i) Under our assumption the payoff space ${\mathcal{X}}$ could be any Lebesgue space or, more generally, any Orlicz space and the dual space ${\mathcal{X}}'$ could be $L^\infty$, in which case ${\mathcal{Y}}$ is taken to be $L^1$.
\smallskip
(ii) Note that, under our assumption, both topologies $\sigma({\mathcal{X}},{\mathcal{X}}')$ and $\sigma({\mathcal{X}}',{\mathcal{X}})$ are Hausdorff and locally convex. Hence, the standard machinery of convex duality applies to them.
\smallskip
(iii) Under our standing assumptions the set $\mathcal{A}\cap{\mathcal{X}}$ has to be $\sigma({\mathcal{X}},{\mathcal{X}}')$-closed. For the common payoff spaces and acceptance sets, this is fulfilled even in the (generally restrictive) situation where ${\mathcal{X}}'$ is a small space. For concreteness, let $(\Omega,{\mathcal{F}},\mathbb{P})$ be nonatomic and let ${\mathcal{X}}$ be an Orlicz space. Moreover, let ${\mathcal{X}}'=L^\infty$. The set $\mathcal{A}\cap{\mathcal{X}}$ is closed with respect to $\sigma({\mathcal{X}},{\mathcal{X}}')$ in any of the following cases:
\begin{enumerate}
\item[(a)] $\mathcal{A}\cap L^1$ is closed with respect to the norm topology of $L^1$.
\item[(b)] $\mathcal{A}$ is either law invariant under $\mathbb{P}$ or surplus invariant, and for all $(X_n)\subset\mathcal{A}\cap{\mathcal{X}}$ and $X\in{\mathcal{X}}$ such that $X_n\to X$ $\mathbb{P}$-almost surely and $\sup_{n\in\mathbb{N}}|X_n|\in{\mathcal{X}}$ it follows that $X\in \mathcal{A}$.
\end{enumerate}
The condition in point (a) clearly implies $\sigma({\mathcal{X}},{\mathcal{X}}')$-closedness of $\mathcal{A}\cap{\mathcal{X}}$. In point (b), law invariance stipulates that acceptability is only driven by the probability distribution of a payoff while surplus invariance, introduced in Koch-Medina et al.\ (2015) and studied more thoroughly in Koch-Medina et al.\ (2017), stipulates that acceptability is only driven by the downside profile of a payoff. The closedness under dominated $\mathbb{P}$-almost sure convergence is sometimes referred to as Fatou closedness. In these cases the desired $\sigma({\mathcal{X}},{\mathcal{X}}')$-closedness of $\mathcal{A}\cap{\mathcal{X}}$ follows from the results in Svindland (2010) and Gao et al.\ (2018) under law invariance and from those in Gao and Munari (2020) under surplus invariance.
\end{remark}
We define the sets of weakly and strictly consistent price deflators belonging to ${\mathcal{X}}'$ as follows:
\[
\mathcal{D} := \{D\in{\mathcal{X}}' \,; \ \mbox{$D$ is a weakly consistent price deflator}\},
\]
\[
\mathcal{D}_{str} := \{D\in{\mathcal{X}}' \,; \ \mbox{$D$ is a strictly consistent price deflator}\}.
\]
It is also convenient to introduce the maps $\gamma_{\pi,\mathcal{M}}:{\mathcal{X}}'\to(-\infty,\infty]$ and $\gamma_\mathcal{A}:{\mathcal{X}}'\to[-\infty,\infty)$ defined by
\[
\gamma_{\pi,\mathcal{M}}(Y) := \sup_{X\in\mathcal{M}}\{\mathbb{E}[XY]-\pi(X)\},
\]
\[
\gamma_\mathcal{A}(Y) := \inf_{X\in\mathcal{A}\cap{\mathcal{X}}}\mathbb{E}[XY].
\]
Note that $\gamma_{\pi,\mathcal{M}}$ coincides with the conjugate function of the restriction to $\mathcal{M}$ of the pricing rule $\pi$ whereas $\gamma_\mathcal{A}$ is, up to a sign, the support function of the set $-(\mathcal{A}\cap{\mathcal{X}})$. These maps appear in the definition of a weakly consistent price deflator. A key role in our analysis is again played by the set $\mathcal{C}$ introduced in Section~\ref{sect: superreplication C}. In particular, weakly consistent price deflators appear naturally in the dual representation of $\mathcal{C}$. We denote by $\mathop{\rm cl}\nolimits(\mathcal{C})$ the closure of $\mathcal{C}$ (with respect to the natural product topology on ${\mathcal{X}}\times\mathbb{R}$) and refer to the appendix for the notation on support functions and barrier cones.
\begin{lemma}
\label{lem: elementary properties C}
The sets $\mathcal{C}$ and $\mathcal{D}$ are convex and the following statements hold:
\begin{enumerate}
\item[(i)] $-((\mathcal{A}\cap{\mathcal{X}})\times\mathbb{R}_+)\subset\mathcal{C}$ and $\mathop{\rm bar}\nolimits(\mathcal{C})\subset{\mathcal{X}}'_+\times\mathbb{R}_+$.
\item[(ii)] $\sigma_\mathcal{C}(Y,1)=\gamma_{\pi,\mathcal{M}}(Y)-\gamma_\mathcal{A}(Y)$ for every $Y\in{\mathcal{X}}'$.
\item[(iii)] $\mathcal{D}=\{Y\in{\mathcal{X}}'_+ \,; \ \sigma_\mathcal{C}(Y,1)<\infty\} = \{Y\in{\mathcal{X}}'_+ \,; \ (Y,1)\in\mathop{\rm bar}\nolimits(\mathcal{C})\}$.
\item[(iv)] If $(0,n)\notin\mathop{\rm cl}\nolimits(\mathcal{C})$ for some $n\in\mathbb{N}$, then we can represent $\mathop{\rm cl}\nolimits(\mathcal{C})$ as
\[
\mathop{\rm cl}\nolimits(\mathcal{C}) = \bigcap_{Y\in\mathcal{D}}\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ \mathbb{E}[XY]+m\leq\gamma_{\pi,\mathcal{M}}(Y)-\gamma_\mathcal{A}(Y)\}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
The convexity of $\mathcal{C}$ and $\mathcal{D}$ is clear. Points (i), (ii), and (iii) follow easily from rewriting $\mathcal{C}$ as
\begin{equation*}
\label{eq: C difference}
\mathcal{C} = \{(Z,m)\in\mathcal{M}\times\mathbb{R} \, ; \ \pi(Z)\leq-m\}-(\mathcal{A}\cap{\mathcal{X}})\times\mathbb{R}_+.
\end{equation*}
Note that no problems with nonfinite values arise as $0\in\mathcal{M}$, $\pi(0)=0$, and $\mathcal{A}$ contains the cone of positive random variables.
\begin{comment}
The convexity of $\mathcal{C}$ is clear. Now, take an arbitrary $(X,m)\in-((\mathcal{A}\cap{\mathcal{X}})\times\mathbb{R}_+)$ and set $Z=0\in\mathcal{M}$. Then, we clearly have $Z-X=-X\in\mathcal{A}$ as well as $\pi(Z)=0\leq-m$, showing that $(X,m)\in\mathcal{C}$. Next, take any $(Y,r)\in\mathop{\rm bar}\nolimits(\mathcal{C})$ and note that
\[
\sup_{m\in\mathbb{N}}\{-m\mathbb{E}_\mathbb{P}[\mathbbm 1_{\{Y<0\}}Y]\}+\sup_{n\in\mathbb{N}}\{-nr\} = \sup_{m,n\in\mathbb{N}}\{\mathbb{E}_\mathbb{P}[-m\mathbbm 1_{\{Y<0\}}Y]-nr\} \leq \sigma_\mathcal{C}(Y,r) < \infty,
\]
where we used that $-(m\mathbbm 1_{\{Y<0\}},n)\in-((\mathcal{A}\cap{\mathcal{X}})\times\mathbb{R}_+)\subset\mathcal{C}$ by monotonicity of $\mathcal{A}$. This shows that $(Y,r)$ must belong to ${\mathcal{X}}'_+\times\mathbb{R}_+$ and concludes the proof of (i). An explicit calculation shows that
\begin{align*}
\sigma_\mathcal{C}(Y,1)
&=
\sup_{m\in\mathbb{R}}\sup_{Z\in\mathcal{M},\,\pi(Z)\leq-m}\sup_{X\in Z-\mathcal{A}\cap{\mathcal{X}}}\{\mathbb{E}_\mathbb{P}[XY]+m\}
=
\sup_{Z\in\mathcal{M}}\sup_{X\in Z-\mathcal{A}\cap{\mathcal{X}}}\{\mathbb{E}_\mathbb{P}[XY]-\pi(Z)\} \\
&=
\sup_{Z\in\mathcal{M}}\{\mathbb{E}_\mathbb{P}[ZY]-\pi(Z)\}+\sup_{X\in-(\mathcal{A}\cap{\mathcal{X}})}\mathbb{E}_\mathbb{P}[XY]
=
\gamma_{\pi,\mathcal{M}}(Y)-\gamma_\mathcal{A}(Y)
\end{align*}
for every $Y\in{\mathcal{X}}'$. This establishes (ii) and (iii) and implies that $\mathcal{D}$ is convex by convexity of $\sigma_\mathcal{C}$.
\end{comment}
To show (iv), note that the assumption $(0,n)\notin\mathop{\rm cl}\nolimits(\mathcal{C})$ for some $n\in\mathbb{N}$ implies, in particular, that $\mathop{\rm cl}\nolimits(\mathcal{C})$ is strictly contained in ${\mathcal{X}}\times\mathbb{R}$. The dual representation of closed convex sets recorded in Theorem 7.51 of Aliprantis and Border (2006) yields
\begin{equation}
\label{eq: repr C}
\mathop{\rm cl}\nolimits(\mathcal{C}) = \bigcap_{(Y,r)\in{\mathcal{X}}'\times\mathbb{R}}\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ \mathbb{E}[XY]+mr\leq \sigma_\mathcal{C}(Y,r)\}.
\end{equation}
Here, we have used that $\sigma_{\mathop{\rm cl}\nolimits(\mathcal{C})}=\sigma_\mathcal{C}$. We claim that $\mathop{\rm bar}\nolimits(\mathcal{C})\cap({\mathcal{X}}'\times(0,\infty))\neq\emptyset$. To show this, take $n\in\mathbb{N}$ such that $(0,n) \notin \mathop{\rm cl}\nolimits(\mathcal{C})$. Then, it follows from~\eqref{eq: repr C} that there must exist $(Y,r)\in\mathop{\rm bar}\nolimits(\mathcal{C})$ satisfying $nr = \mathbb{E}[0\cdot Y]+nr > \sigma_\mathcal{C}(Y,r) \geq 0$. This establishes the desired claim. Now, recall from point (i) that $\mathop{\rm bar}\nolimits(\mathcal{C})\subset{\mathcal{X}}'_+\times\mathbb{R}_+$. Fix $(Y_0,r_0)\in\mathop{\rm bar}\nolimits(\mathcal{C})$ with $r_0>0$. For every $(Y,0)\in\mathop{\rm bar}\nolimits(\mathcal{C})$ and $n\in\mathbb{N}$ we have $(nY+Y_0,r_0)\in\mathop{\rm bar}\nolimits(\mathcal{C})$ and, by sublinearity, $\sigma_\mathcal{C}(nY+Y_0,r_0)\leq n\sigma_\mathcal{C}(Y,0)+\sigma_\mathcal{C}(Y_0,r_0)$; dividing the corresponding inequality in~\eqref{eq: repr C} by $n$ and letting $n\to\infty$ shows that the half-spaces associated with $r=0$ are redundant. Since $\sigma_\mathcal{C}$ is positively homogeneous, we may moreover normalize $r$ to $1$ and obtain
\[
\mathop{\rm cl}\nolimits(\mathcal{C}) = \bigcap_{Y\in{\mathcal{X}}'_+}\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ \mathbb{E}[XY]+m\leq \sigma_\mathcal{C}(Y,1)\}.
\]
The desired representation is now a direct consequence of point (ii).
\end{proof}
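In the finite-dimensional setting of case (i) of the example above, the identity $\sigma_\mathcal{C}(Y,1)=\gamma_{\pi,\mathcal{M}}(Y)-\gamma_\mathcal{A}(Y)$ from point (ii) of the lemma can be checked by brute force. The Python sketch below does so for $Y=D=(2,4)$; the grid bounds are illustrative, and since $\mathcal{A}$ is a cone the infimum defining $\gamma_\mathcal{A}$ is attained at $0$ whenever it is finite, so a bounded grid suffices for this particular $Y$.

```python
import itertools

P = (0.5, 0.5)                               # uniform two-state measure
E = lambda Y, X: P[0]*Y[0]*X[0] + P[1]*Y[1]*X[1]
pi = lambda X: X[0] + X[1]**2                # case (i): pi(x, y) = x + y^2
grid = [i / 2 for i in range(-6, 7)]         # step 0.5 on [-3, 3]
box = list(itertools.product(grid, grid))    # stands in for M = R^2
A = [X for X in box if X[1] >= max(-X[0], 0.0)]  # acceptance set on the grid

Y = (2.0, 4.0)
gamma_piM = max(E(Y, X) - pi(X) for X in box)    # conjugate of pi on M
gamma_A = min(E(Y, X) for X in A)                # support-type functional
# sigma_C(Y, 1): sup of E[XY] + m over C, writing elements of C as
# (Z - a, -pi(Z)) with Z in M, a in A, and m at its extreme value -pi(Z).
sigma_C = max(E(Y, (Z[0] - a[0], Z[1] - a[1])) - pi(Z)
              for Z in box for a in A)

assert abs(gamma_piM - 1.0) < 1e-9               # matches the example
assert abs(gamma_A - 0.0) < 1e-9                 # infimum attained at 0
assert abs(sigma_C - (gamma_piM - gamma_A)) < 1e-9   # Lemma, point (ii)
```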
\subsection{Fundamental theorem of asset pricing}
The key tool to establish the Fundamental Theorem of Asset Pricing is the following convenient version of the classical results by Yan (1980) and Kreps (1981). We refer to Clark (1993), Jouini et al.\ (2005), Rokhlin (2005), Cassese (2007), Rokhlin (2009), and Gao and Xanthos (2017) for a variety of versions of the same principle. A simple inspection of our formulation shows that, taking $\mathcal{L}'=\mathcal{D}$, the theorem delivers existence of price deflators that assign a strictly positive ``price'' to every nonzero payoff in $\mathcal{L}$. Depending on the choice of $\mathcal{L}$ we obtain different types of price deflators. The choice $\mathcal{L}={\mathcal{X}}_+$ yields strictly positive price deflators whereas the choice $\mathcal{L}=\mathcal{A}\cap{\mathcal{X}}$ yields strictly consistent ones.
\begin{theorem}[{\bf Kreps-Yan}]
\label{theo: kreps yan in our setting}
Let $\mathcal{L}\subset{\mathcal{X}}$ and $\mathcal{L}'\subset{\mathcal{X}}'$ and assume that the following properties hold:
\begin{enumerate}
\item[(i)] Completeness: For every sequence $(Y_n)\subset\mathcal{L}'$ there exist a sequence $(\lambda_n)\subset(0,\infty)$ and $Y\in\mathcal{L}'$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$.
\item[(ii)] Countable separation: There exists a sequence $(Y_n)\subset\mathcal{L}'\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{L})))$ such that for every nonzero $X\in\mathcal{L}$ we have $\mathbb{E}[XY_n]>0$ for some $n\in\mathbb{N}$.
\end{enumerate}
Then, there exists $Y\in\mathcal{L}'$ such that $\mathbb{E}[XY]>0$ for every nonzero $X\in\mathcal{L}$.
\end{theorem}
\begin{proof}
By the countable separation property, there exists a sequence $(Y_n)\subset\mathcal{L}'\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{L})))$ such that for every nonzero $X\in\mathcal{L}$ we have $\mathbb{E}[XY_n]>0$ for some $n\in\mathbb{N}$. In particular, note that $\mathbb{E}[XY_n]\geq0$ for all $X\in\mathcal{L}$ and $n\in\mathbb{N}$ because $(Y_n)\subset-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{L}))$. Moreover, by the completeness property, there exist a sequence $(\lambda_n)\subset(0,\infty)$ and $Y\in\mathcal{L}'$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$. Then, $\mathbb{E}[XY]>0$ for every nonzero $X\in\mathcal{L}$: indeed, choosing $n\in\mathbb{N}$ with $\mathbb{E}[XY_n]>0$, we obtain $\mathbb{E}[XY]=\lim_{m\to\infty}\sum_{k=1}^m\lambda_k\mathbb{E}[XY_k]\geq\lambda_n\mathbb{E}[XY_n]>0$, where we used that $\mathbb{E}[XY_k]\geq0$ for every $k\in\mathbb{N}$.
\end{proof}
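The aggregation step in the proof can be visualized in a toy finite-dimensional setting. In the Python sketch below we take ${\mathcal{X}}={\mathcal{X}}'=\mathbb{R}^3$ with the dot product as bilinear form and $\mathcal{L}=\mathbb{R}^3_+$; the coordinate functionals are our illustrative choice of a countable separating family, and mixing them with summable positive weights yields a single strictly separating element.

```python
# Each coordinate functional is nonnegative on L = R^3_+, and every nonzero
# X in L is detected by at least one of them (countable separation); the
# weighted sum Y = sum_k lambda_k Y_k with lambda_k = 2^{-k} then separates
# every nonzero X in L at once, mirroring the proof of the theorem.
family = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
lam = [2.0 ** -(k + 1) for k in range(len(family))]
Y = tuple(sum(l * e[i] for l, e in zip(lam, family)) for i in range(3))

dot = lambda a, b: sum(u * v for u, v in zip(a, b))
# No single member separates strictly (e.g. e_1 misses (0, 1, 0)) ...
assert dot(family[0], (0.0, 1.0, 0.0)) == 0
# ... but the aggregate Y does, on a sample of nonzero points of L:
for X in [(1, 0, 0), (0, 2, 0), (0, 0, 0.5), (1, 1, 1)]:
    assert dot(Y, X) > 0
```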
\begin{remark}
The preceding theorem holds for every pair of vector spaces ${\mathcal{X}}$ and ${\mathcal{X}}'$ equipped with a bilinear mapping $\langle\cdot,\cdot\rangle:{\mathcal{X}}\times{\mathcal{X}}'\to\mathbb{R}$. In this respect, our statement is a minor extension of the abstract version of the result obtained by Jouini et al.\ (2005). In that paper, the set $\mathcal{L}$ was assumed to be a pointed convex cone satisfying $\mathcal{L}-\mathcal{L}={\mathcal{X}}$ and the dual set $\mathcal{L}'$ was taken to coincide with $-\mathop{\rm bar}\nolimits(\mathcal{L}) = \{Y\in{\mathcal{X}}' \,; \ \langle X,Y\rangle\geq0, \ \forall X\in\mathcal{L}\}$. Incidentally, note that pointedness is automatically implied by the countable separation property (regardless of the special choice of $\mathcal{L}$).
\end{remark}
The ``conification'' in the Kreps-Yan theorem leads us to work with the modified acceptance set
\[
\mathcal{K}(\mathcal{A}) := \mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}\cap{\mathcal{X}}))+L^0_+
\]
where $\mathop{\rm cl}\nolimits$ is the closure operator with respect to the reference topology $\sigma({\mathcal{X}},{\mathcal{X}}')$. A similar conification was considered in \v{C}ern\'{y} and Hodges (2002) and Staum (2004) and is necessary to obtain a version of the Fundamental Theorem that applies to nonconic acceptance sets. The next proposition contains a list of useful properties of this enlarged acceptance set.
\begin{proposition}
\label{prop: conified A}
The set $\mathcal{K}(\mathcal{A})$ satisfies the properties in Assumption~\ref{ass: acceptance set} provided it is a strict subset of $L^0$. Moreover, it is a cone and satisfies $\mathcal{K}(\mathcal{A})\cap{\mathcal{X}}=\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A})\cap{\mathcal{X}})$. In particular, if $\mathcal{A}$ is a cone, then $\mathcal{K}(\mathcal{A})\cap{\mathcal{X}}=\mathcal{A}\cap{\mathcal{X}}$.
\end{proposition}
\begin{proof}
It is readily seen that $\mathcal{K}(\mathcal{A})$ satisfies the properties in Assumption~\ref{ass: acceptance set} and is a cone. Note that $\mathcal{K}(\mathcal{A})\cap{\mathcal{X}}=\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}\cap{\mathcal{X}}))+{\mathcal{X}}_+$. Hence, it remains to show that $\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}\cap{\mathcal{X}}))+{\mathcal{X}}_+\subset\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}\cap{\mathcal{X}}))$. To this end, take arbitrary $X\in\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}\cap{\mathcal{X}}))$ and $U\in{\mathcal{X}}_+$. By assumption, we find nets $(X_\alpha)\subset\mathcal{A}\cap{\mathcal{X}}$ and $(\lambda_\alpha)\subset\mathbb{R}_+$ such that $\lambda_\alpha X_\alpha\to X$. Clearly, $\lambda_\alpha X_\alpha+U\to X+U$. We conclude by showing that for every $\alpha$ we have $\lambda_\alpha X_\alpha+U\in\mathop{\rm cone}\nolimits(\mathcal{A})$. This is obvious if $\lambda_\alpha=0$ because $U\in{\mathcal{X}}_+\subset\mathcal{A}$. Otherwise, assume that $\lambda_\alpha>0$. In this case, we have $X_\alpha+\frac{1}{\lambda_\alpha}U\in\mathcal{A}+{\mathcal{X}}_+\subset\mathcal{A}$ by monotonicity of $\mathcal{A}$. Hence, it follows that $\lambda_\alpha X_\alpha+U=\lambda_\alpha(X_\alpha+\frac{1}{\lambda_\alpha}U)\in\mathop{\rm cone}\nolimits(\mathcal{A})$. This concludes the proof.
\end{proof}
To establish the existence of strictly consistent price deflators through the Kreps-Yan theorem we have to verify the completeness and countable separation properties. We start by showing that completeness always holds in our setting. This is a direct consequence of the fact that, by assumption, the space ${\mathcal{X}}'$ is a norm dual and $\sigma({\mathcal{X}}',{\mathcal{X}})$ is weaker than the corresponding weak-star topology.
\begin{proposition}
\label{prop: completeness property}
For every sequence $(Y_n)\subset\mathcal{D}$ there exist a sequence $(\lambda_n)\subset(0,\infty)$ and $Y\in\mathcal{D}$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$.
\end{proposition}
\begin{proof}
Recall that $\mathcal{D}\subset{\mathcal{X}}'_+$ by Lemma~\ref{lem: elementary properties C} and note that $\sigma_\mathcal{C}(Y,1)\geq0$ for every $Y\in\mathcal{D}$. Moreover, recall that ${\mathcal{X}}'$ is a norm dual and denote by $\|\cdot\|_{{\mathcal{X}}'}$ the corresponding dual norm. For every $n\in\mathbb{N}$ set $\alpha_n=(1+\|Y_n\|_{{\mathcal{X}}'})^{-1}(1+\sigma_\mathcal{C}(Y_n,1))^{-1}2^{-n}>0$ and $S_n=\sum_{k=1}^n\alpha_kY_k$. Since $\alpha_k\|Y_k\|_{{\mathcal{X}}'}\leq2^{-k}$ for every $k\in\mathbb{N}$, the sequence $(S_n)$ is Cauchy and, since ${\mathcal{X}}'$ is complete with respect to its norm topology, we have $S_n\to Z$ for a suitable $Z\in{\mathcal{X}}'$ with respect to said topology. Hence, by our standing assumptions, we also have $S_n\to Z$ with respect to the reference topology $\sigma({\mathcal{X}}',{\mathcal{X}})$. To conclude the proof, note that $\sum_{k=1}^n\alpha_k\to r$ for some $r>0$ and
\[
\sigma_\mathcal{C}(Z,r) \leq \liminf_{n\to\infty}\sum_{k=1}^n\alpha_k\sigma_\mathcal{C}(Y_k,1) < \infty
\]
by lower semicontinuity and sublinearity of $\sigma_\mathcal{C}$. This yields $(Z,r)\in\mathop{\rm bar}\nolimits(\mathcal{C})$. The desired statement follows by setting $\lambda_n=\frac{\alpha_n}{r}>0$ for every $n\in\mathbb{N}$ and $Y=\frac{Z}{r}\in\mathcal{D}$.
\end{proof}
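The specific weights $\alpha_n$ are chosen precisely so that both $\sum_n\alpha_n\|Y_n\|_{{\mathcal{X}}'}$ and $\sum_n\alpha_n\sigma_\mathcal{C}(Y_n,1)$ are dominated by a geometric series. The following Python sketch illustrates this mechanism with arbitrary rapidly growing sequences standing in for $\|Y_n\|_{{\mathcal{X}}'}$ and $\sigma_\mathcal{C}(Y_n,1)$; the data are purely illustrative.

```python
# alpha_n = 2^{-n} / ((1 + ||Y_n||)(1 + sigma_n)) tames arbitrarily fast
# growth of both ||Y_n|| and sigma_n = sigma_C(Y_n, 1) at once, since
# alpha_n * ||Y_n|| <= 2^{-n} and alpha_n * sigma_n <= 2^{-n}.
norms = [float(10 ** n) for n in range(1, 40)]   # stand-in for ||Y_n||
sigmas = [float(3 ** n) for n in range(1, 40)]   # stand-in for sigma_n
alphas = [2.0 ** -(n + 1) / ((1 + nrm) * (1 + s))
          for n, (nrm, s) in enumerate(zip(norms, sigmas))]

assert all(a > 0 for a in alphas)                       # weights are positive
# Both weighted series are dominated by sum_n 2^{-n} = 1:
assert sum(a * nrm for a, nrm in zip(alphas, norms)) < 1.0
assert sum(a * s for a, s in zip(alphas, sigmas)) < 1.0
```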
Establishing the countable separation property is more challenging and requires an additional assumption, namely the absence of scalable good deals. In the next proposition we provide a useful equivalent formulation of this condition in the case of a pointed conic acceptance set: there are no scalable good deals if and only if, for every nonzero acceptable payoff, a sufficiently large multiple of it can no longer be superreplicated by an attainable payoff with nonpositive price. If the acceptance set is the standard positive cone, this condition corresponds to the ``no scalable arbitrage'' condition in Pennanen (2011a).
\begin{proposition}
\label{prop: on scalable deals}
Let $\mathcal{A}$ be a pointed cone. Then, there exists no scalable good deal if and only if for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there is $\lambda>0$ such that $(\mathcal{A}+\lambda X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)\leq0\}=\emptyset$.
\end{proposition}
\begin{proof}
To prove the ``if'' implication, assume that for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there is $\lambda>0$ such that $(\mathcal{A}+\lambda X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)\leq0\}=\emptyset$. Take a nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$. By assumption, we find $\lambda>0$ such that $\lambda X\notin\{Z\in\mathcal{M} \,; \ \pi(Z)\leq0\}$. This implies that $X\notin\{Z\in\mathcal{M}^\infty \,; \ \pi^\infty(Z)\leq0\}$. In particular, for every $X\in\mathcal{A}^\infty\cap\mathcal{M}^\infty$ such that $\pi^\infty(X)\leq0$ we must have $X\in{\mathcal{S}}\subset{\mathcal{X}}$ and, hence, $X=0$. This shows that no scalable good deal can exist. To prove the ``only if'' implication, assume conversely that no scalable good deal exists. First, we claim that $\{Z\in\mathcal{A}\cap\mathcal{M} \,; \ \pi(Z)\leq0\}$ is bounded. If this is not the case, for every $n\in\mathbb{N}$ we find $Z_n\in\mathcal{A}\cap\mathcal{M}$ such that $\pi(Z_n)\leq0$ and $\|Z_n\|\geq n$. As the unit sphere in ${\mathcal{S}}$ is compact, there exists a nonzero $Z\in{\mathcal{S}}$ such that $\frac{Z_n}{\|Z_n\|}\to Z$. Note that $Z\in\mathcal{A}^\infty\cap\mathcal{M}^\infty$ by~\eqref{eq: recession cones 1}. Note also that the lower semicontinuity and convexity of $\pi$ yield
\[
\pi(Z) \leq \liminf_{n\to\infty}\pi\left(\frac{Z_n}{\|Z_n\|}\right) \leq \liminf_{n\to\infty}\frac{\pi(Z_n)}{\|Z_n\|} \leq 0.
\]
This shows that $Z$ is a scalable good deal, contradicting our assumption. Hence, $\{Z\in\mathcal{A}\cap\mathcal{M} \,; \ \pi(Z)\leq0\}$ is bounded. Now, suppose we find a nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ such that for every $\lambda>0$ there exists $Z_\lambda\in\mathcal{M}$ with $\pi(Z_\lambda)\leq0$ and $Z_\lambda-\lambda X\in\mathcal{A}$. In particular, $Z_\lambda\in\mathcal{A}$ and $\frac{Z_\lambda}{\lambda}\in\mathcal{A}+X$ for every $\lambda>0$. As $(\mathcal{A}+X)\cap{\mathcal{S}}$ is closed and does not contain the zero payoff, the norm $\|\cdot\|$ must be bounded from below by a suitable $\varepsilon>0$ on this set. In particular, $\frac{\|Z_\lambda\|}{\lambda}\geq\varepsilon$ for every $\lambda>0$. This implies that $\{Z_\lambda \,; \ \lambda>0\}\subset\{Z\in\mathcal{A}\cap\mathcal{M} \,; \ \pi(Z)\leq0\}$ is unbounded, contradicting the boundedness established above. Hence, for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there must be $\lambda>0$ such that $(\mathcal{A}+\lambda X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)\leq0\}=\emptyset$.
\end{proof}
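In the setting of case (ii) of the earlier example, the equivalent condition of the proposition can be tested on a grid. The Python sketch below shows that, for the acceptable payoff $X=(-1,1)$, the intersection $(\mathcal{A}+\lambda X)\cap\{Z\in\mathcal{M} \,; \ \pi(Z)\leq0\}$ is nonempty for small $\lambda$ but empty once $\lambda>1$, consistent with the absence of scalable good deals there; the grid resolution is an illustrative choice.

```python
import itertools

# Case (ii) data: A = {y >= max(-x, 0)}, M = {x >= -1, 0 <= y <= 1},
# pi(x, y) = x + y.
in_A = lambda x, y: y >= max(-x, 0.0)
pi = lambda x, y: x + y
grid = [i / 20 for i in range(-20, 21)]          # step 0.05 on [-1, 1]
M = [(x, y) for x, y in itertools.product(grid, grid)
     if x >= -1 and 0 <= y <= 1]

def intersection_nonempty(X, lam):
    # Is there Z in M with pi(Z) <= 0 and Z - lam*X acceptable,
    # i.e. Z in (A + lam*X) ∩ {Z in M : pi(Z) <= 0}?
    return any(pi(zx, zy) <= 0 and in_A(zx - lam * X[0], zy - lam * X[1])
               for zx, zy in M)

X = (-1.0, 1.0)
assert intersection_nonempty(X, 0.5)     # small multiple: still attainable
assert not intersection_nonempty(X, 1.5) # large multiple: intersection empty
```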
We are finally in a position to state
sufficient conditions for the existence of strictly consistent price deflators. As a first step, we provide two sets of sufficient conditions for the existence of consistent price deflators that are strictly positive. This is achieved by proving the countable separation property for $\mathcal{L}={\mathcal{X}}_+$ and $\mathcal{L}'=\mathcal{D}$. In order to move from strict positivity to strict consistency, we need an additional assumption on the dual space ${\mathcal{X}}'$, namely the separability of its norm predual. In this case, we are able to establish the countable separation property for $\mathcal{L}=\mathcal{A}\cap{\mathcal{X}}$ and $\mathcal{L}'=\mathcal{D}$. We refer to the accompanying remark for a detailed discussion about the proof strategy and the separability assumption.
\begin{theorem}
\label{theo: dual ftap A convex}
Assume that one of the following holds:
\begin{enumerate}
\item[(i)] $\mathcal{A}$ is a pointed cone and there exists no scalable good deal.
\item[(ii)] $\mathcal{K}(\mathcal{A})$ is pointed and there exists no scalable good deal with respect to $\mathcal{K}(\mathcal{A})$.
\end{enumerate}
Then, there exists a strictly positive consistent price deflator $D$ in ${\mathcal{X}}'$. If, in addition, the norm predual of ${\mathcal{X}}'$ is separable with respect to its norm topology, then $D$ can be taken to be strictly consistent.
\end{theorem}
\begin{proof}
It follows from Proposition \ref{prop: conified A} that $\mathcal{K}(\mathcal{A})$ is a conic acceptance set such that $\mathcal{K}(\mathcal{A})\cap{\mathcal{X}}$ is closed and coincides with $\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A})\cap{\mathcal{X}})$. Note that every price deflator $D$ that is (strictly) consistent with $\mathcal{K}(\mathcal{A})$ is also (strictly) consistent with $\mathcal{A}$. As a result, it suffices to prove the stated claims under condition (i). Hence, assume that $\mathcal{A}$ is a pointed cone and there exists no scalable good deal.
\smallskip
We first show that we can always find a strictly positive consistent price deflator in ${\mathcal{X}}'$. To this effect, we apply Theorem \ref{theo: kreps yan in our setting} to $\mathcal{L}={\mathcal{X}}_+$ and $\mathcal{L}'=\mathcal{D}$, in which case $\mathcal{L}'\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{L})))=\mathcal{D}$ by Lemma~\ref{lem: elementary properties C}. In view of this result and of Proposition~\ref{prop: completeness property}, to establish our claim it suffices to exhibit a sequence $(Y_n)\subset\mathcal{D}$ of price deflators such that
\begin{equation}
\label{eq: countable separation}
\mbox{for every nonzero $X\in{\mathcal{X}}_+$ there exists $n\in\mathbb{N}$ such that $\mathbb{E}[XY_n]>0$}.
\end{equation}
By Proposition~\ref{prop: on scalable deals}, for every nonzero $X\in{\mathcal{X}}_+$ there exists $\lambda>0$ such that $(\lambda X,0)\notin\mathcal{C}$. Since $\mathcal{C}$ is closed and $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$ by Lemma~\ref{lem: closedness C}, we can use the representation of (the closure of) $\mathcal{C}$ in Lemma~\ref{lem: elementary properties C} to find an element $Y_X\in\mathcal{D}$ such that $\mathbb{E}[\lambda XY_X]>\sigma_\mathcal{C}(Y_X,1)\geq0$. Equivalently, we have that
\begin{equation}
\label{eq: countable separation preliminary}
\mbox{for every nonzero $X\in{\mathcal{X}}_+$ there exists $Y_X\in\mathcal{D}$ such that $\mathbb{E}[XY_X]>0$}.
\end{equation}
To establish \eqref{eq: countable separation}, we start by showing that the family ${\mathcal{G}} = \{\{Y>0\} \,; \ Y\in\mathcal{D}\}$ is nonempty and closed under countable unions. That ${\mathcal{G}}$ is nonempty follows from~\eqref{eq: countable separation preliminary}. To show that ${\mathcal{G}}$ is closed under countable unions, take an arbitrary sequence $(Y_n)\subset\mathcal{D}\setminus\{0\}$. By Proposition~\ref{prop: completeness property}, we find a sequence $(\lambda_n)\subset(0,\infty)$ and an element $Y\in\mathcal{D}$ such that $S_n=\sum_{k=1}^n\lambda_kY_k\to Y$. It is easy to see that
\begin{equation}
\label{eq: exhaustion 1}
\{Y>0\} = \bigcup_{n\in\mathbb{N}}\{Y_n>0\} \ \ \ \mbox{$\mathbb{P}$-almost surely}.
\end{equation}
Indeed, consider first the event $E=\{Y>0\}\cap\bigcap_{n\in\mathbb{N}}\{Y_n=0\}$. We must have $\mathbb{P}(E)=0$ for otherwise
\[
0 < \mathbb{E}[\mathbbm 1_EY] = \lim_{n\to\infty}\mathbb{E}[\mathbbm 1_ES_n] = 0.
\]
As a result, the inclusion ``$\subset$'' in~\eqref{eq: exhaustion 1} must hold. Next, we claim that $\mathbb{P}(Y\geq S_n)=1$ for every $n\in\mathbb{N}$. If not, we find $k\in\mathbb{N}$ and $\varepsilon>0$ such that the event $E=\{Y\leq S_k-\varepsilon\}$ satisfies
\[
0 < \varepsilon\mathbb{P}(E) \leq \mathbb{E}[\mathbbm 1_E(S_k-Y)] \leq \lim_{n\to\infty}\mathbb{E}[\mathbbm 1_E(S_n-Y)] = 0.
\]
This delivers the inclusion ``$\supset$'' in~\eqref{eq: exhaustion 1} and shows that ${\mathcal{G}}$ is closed under countable unions as desired. Now, set $s=\sup\{\mathbb{P}(E) \,; \ E\in{\mathcal{G}}\}$. Take any sequence $(Y_n)\subset\mathcal{D}$ such that $\mathbb{P}(Y_n>0)\uparrow s$. By closedness under countable unions, there must exist $Y^\ast\in\mathcal{D}$ such that $\{Y^\ast>0\}=\bigcup_{n\in\mathbb{N}}\{Y_n>0\}$ $\mathbb{P}$-almost surely. Take an arbitrary nonzero $X\in{\mathcal{X}}_+$ and assume that $\mathbb{E}[XY_n]=0$ for every $n\in\mathbb{N}$. This would imply that $\mathbb{E}[XY^\ast]=0$ and, thus, the element $\frac{1}{2}Y^\ast+\frac{1}{2}Y_X\in\mathcal{D}$ would satisfy
\[
\mathbb{P}\left(\frac{1}{2}Y^\ast+\frac{1}{2}Y_X>0\right) \geq \mathbb{P}(Y^\ast>0)+\mathbb{P}(\{Y^\ast=0\}\cap\{Y_X>0\}) > \mathbb{P}(Y^\ast>0) = s,
\]
which cannot hold. In conclusion, we must have $\mathbb{E}[XY_n]>0$ for some $n\in\mathbb{N}$, showing~\eqref{eq: countable separation}.
\smallskip
To conclude the proof, we show that there exists a strictly consistent price deflator in ${\mathcal{X}}'$ if we additionally assume that the norm predual of ${\mathcal{X}}'$ is separable with respect to its norm topology. To this end, we apply Theorem \ref{theo: kreps yan in our setting} to $\mathcal{L}=\mathcal{A}\cap{\mathcal{X}}$ and $\mathcal{L}'=\mathcal{D}$, in which case $\mathcal{L}'\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{L})))=\mathcal{D}$ by Lemma~\ref{lem: elementary properties C}. In view of this result and of Proposition~\ref{prop: completeness property}, we are done if we exhibit a sequence $(Y_n)\subset\mathcal{D}$ such that
\begin{equation}
\label{eq: countable separation A}
\mbox{for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there exists $n\in\mathbb{N}$ such that $\mathbb{E}[XY_n]>0$}.
\end{equation}
By Proposition~\ref{prop: on scalable deals}, for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there exists $\lambda>0$ such that $(\lambda X,0)\notin\mathcal{C}$. Since $\mathcal{C}$ is closed and $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$ by Lemma~\ref{lem: closedness C}, we can use the representation of (the closure of) $\mathcal{C}$ in Lemma~\ref{lem: elementary properties C} to find an element $Y_X\in\mathcal{D}$ such that $\mathbb{E}[\lambda XY_X]>\sigma_\mathcal{C}(Y_X,1)\geq0$. Equivalently, we have that
\begin{equation}
\label{eq: countable separation A preliminary}
\mbox{for every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ there exists $Y_X\in\mathcal{D}$ such that $\mathbb{E}[XY_X]>0$}.
\end{equation}
Recall that ${\mathcal{X}}'$ is a norm dual and denote by $\|\cdot\|_{{\mathcal{X}}'}$ the corresponding dual norm. For every nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$ consider the rescaled couple
\[
(Z_X,r_X) = \bigg(\frac{Y_X}{\|Y_X\|_{{\mathcal{X}}'}},\frac{1}{\|Y_X\|_{{\mathcal{X}}'}}\bigg) \in \mathop{\rm bar}\nolimits(\mathcal{C}).
\]
As the norm predual of ${\mathcal{X}}'$ is separable by assumption, the unit ball in ${\mathcal{X}}'$ is weak-star metrizable by Theorem 6.30 in Aliprantis and Border (2006). Being weak-star compact by virtue of the Banach-Alaoglu Theorem, see e.g.\ Theorem 6.21 in Aliprantis and Border (2006), the unit ball together with any of its subsets is therefore weak-star separable. In particular, this is true for $\{Z_X \,; \ X\in(\mathcal{A}\cap{\mathcal{X}})\setminus\{0\}\}$. Since our reference topology on ${\mathcal{X}}'$, namely $\sigma({\mathcal{X}}',{\mathcal{X}})$, was assumed to be weaker than the weak-star topology, it follows that $\{Z_X \,; \ X\in(\mathcal{A}\cap{\mathcal{X}})\setminus\{0\}\}$ is also separable with respect to $\sigma({\mathcal{X}}',{\mathcal{X}})$. Let $\{Z_{X_n} \,; \ n\in\mathbb{N}\}$ be a countable dense subset. Now, take any nonzero $X\in\mathcal{A}\cap{\mathcal{X}}$. By~\eqref{eq: countable separation A preliminary} we have $\mathbb{E}[XZ_X]=\frac{\mathbb{E}[XY_X]}{\|Y_X\|_{{\mathcal{X}}'}}>0$, so that, by density, $\mathbb{E}[XZ_{X_n}]>0$, and hence $\mathbb{E}[XY_{X_n}]>0$, for some $n\in\mathbb{N}$. This delivers~\eqref{eq: countable separation A}.
\end{proof}
\begin{remark}
\label{rem: assumptions dual ftap}
(i) The pointedness requirement can be slightly weakened. Indeed, it suffices that $\mathcal{A}\cap{\mathcal{X}}$ and $\mathcal{K}(\mathcal{A})\cap{\mathcal{X}}$, respectively, are pointed. In view of Proposition~\ref{prop: conified A}, the latter condition is equivalent to the pointedness of $\mathop{\rm cl}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{A}))\cap{\mathcal{X}}$. Note that, under pointedness, the absence of scalable good deals is equivalent to the generally weaker absence of strong scalable good deals. Note also that pointedness of $\mathcal{A}\cap{\mathcal{X}}$ is necessary for the existence of strictly consistent price deflators. One can verify that pointedness of $\mathcal{A}\cap{\mathcal{X}}$ is satisfied by most of the standard acceptance sets. For instance, by Proposition 5.9 in Bellini et al.\ (2021), this holds whenever ${\mathcal{X}}$ is law invariant and $\mathcal{A}$ is a law-invariant cone such that $\mathcal{A}\cap{\mathcal{X}}\neq\{X\in{\mathcal{X}} \,; \ \mathbb{E}[X]\geq0\}$.
\smallskip
(ii) The separability of the norm predual of ${\mathcal{X}}'$ is typically ensured by suitable assumptions on the underlying $\sigma$-field. For concreteness, consider the case where ${\mathcal{X}}'=L^\infty$, which is interesting because it delivers bounded price deflators. In this case, the norm predual is $L^1$. A simple sufficient condition for separability is that ${\mathcal{F}}$ is countably generated. A characterization of separability in the nonatomic setting can be found, e.g., in Theorem 13.16 in Aliprantis and Border (2006). It is worth highlighting that separability is not required of the reference payoff space ${\mathcal{X}}$ and may hold even if ${\mathcal{X}}$ is not separable with respect to a pre-specified natural topology. For instance, if ${\mathcal{X}}$ is an Orlicz space, then separability with respect to the norm topology may fail even if ${\mathcal{F}}$ is countably generated; see, e.g., Theorem 1 in Section 3.5 in Rao and Ren (1991).
\smallskip
(iii) To establish the existence of a strictly consistent price deflator we had to ``conify'' the acceptance set $\mathcal{A}$ so as to obtain another acceptance set $\mathcal{K}(\mathcal{A})$ satisfying the same standing assumptions. A direct way to see that a ``conification'' is necessary is to observe that every strictly consistent price deflator is automatically strictly consistent for the acceptance set $\mathcal{K}(\mathcal{A})$. This is also true for the more natural ``conified'' acceptance set $\mathop{\rm cone}\nolimits(\mathcal{A})$, but the intersection $\mathop{\rm cone}\nolimits(\mathcal{A})\cap{\mathcal{X}}$ need not be closed and, hence, our standing assumptions need not hold.
\smallskip
(iv) The proof of the existence of {\em strictly positive} consistent price deflators builds on the exhaustion argument underpinning the classical result on equivalent probability measures in Halmos and Savage (1949). In fact, a direct application of that result provides an alternative proof of the countable separation property in~\eqref{eq: countable separation}. To see this, note that every element $Y_X\in\mathcal{D}$ in~\eqref{eq: countable separation preliminary} is associated with a probability measure on $(\Omega,{\mathcal{F}})$ defined by $d\mathbb{P}_X = \frac{Y_X}{\mathbb{E}_\mathbb{P}[Y_X]}d\mathbb{P}$. Since the family of such probability measures is dominated by $\mathbb{P}$, it follows from Lemma 7 in Halmos and Savage (1949) that there exists a sequence $(X_n)\subset{\mathcal{X}}_+\setminus\{0\}$ such that for every $E\in{\mathcal{F}}$ we have that $\mathbb{P}_{X_n}(E)=0$ for every $n\in\mathbb{N}$ if and only if $\mathbb{P}_X(E)=0$ for every nonzero $X\in{\mathcal{X}}_+$. For every nonzero $X\in{\mathcal{X}}_+$ we clearly have $\mathbb{P}_X(X>0)>0$ and, hence, there must exist $n\in\mathbb{N}$ such that $\mathbb{P}_{X_n}(X>0)>0$ or, equivalently, $\mathbb{E}[XY_{X_n}]>0$. The countable separation property is thus fulfilled by the sequence $(Y_{X_n})$. It is worth noting that neither this argument nor the argument in the proof above can be used to ensure the existence of {\em strictly consistent} price deflators when the acceptance set is strictly larger than the standard positive cone and, thus, contains nonpositive payoffs. This is because controlling probabilities alone is not sufficient to control the sign of expectations. To deal with strict consistency in the general case we therefore had to pursue a different strategy based on the separability of the norm predual of ${\mathcal{X}}'$, which was inspired by the original work by Kreps (1981) and by the related work by Clark (1993) in the setting of frictionless markets.
\end{remark}
We are finally in a position to establish the announced version of the Fundamental Theorem of Asset Pricing for markets with frictions and general acceptance sets. The previous theorem is the heart of our Fundamental Theorem, which we state in the usual form of an equivalence and where, for concreteness, we focus on bounded price deflators to mimic its classical formulation. The theorem follows at once by combining Proposition~\ref{prop: density implies no good deal} and Theorem~\ref{theo: dual ftap A convex}. We split the theorem into three parts. In the first part, we focus on the situation where the acceptance set is the standard positive cone. In this case, we obtain a different proof of the one-period version of the Fundamental Theorem in markets with frictions established in Theorem 5.4 in Pennanen (2011a). As already noted, the absence of scalable arbitrage opportunities corresponds to the ``no scalable arbitrage'' condition and a price deflator corresponds to a marginal price deflator in that paper. In the second and third parts, we focus on conic and general acceptance sets, respectively. The corresponding versions of the Fundamental Theorem are new. We refer to the accompanying remark for a detailed embedding in the literature and to the example below for a proof of the necessity of our assumptions on the acceptance set.
\begin{theorem}[{\bf Fundamental Theorem of Asset Pricing}]
\label{theo: FTAP}
\begin{enumerate}
\item[(i)] There exists no scalable arbitrage opportunity if and only if there exists a strictly positive price deflator in $L^\infty$.
\item[(ii)] Let $L^1$ be separable with respect to its norm topology (e.g., ${\mathcal{F}}$ is countably generated).
\begin{enumerate}
\item[(a)] Let $\mathcal{A}$ be a pointed cone. Then, there exists no scalable good deal if and only if there exists a strictly consistent price deflator in $L^\infty$.
\item[(b)] Let $\mathcal{K}(\mathcal{A})$ be pointed. If there exists no scalable good deal with respect to $\mathcal{K}(\mathcal{A})$, then there exists a strictly consistent price deflator in $L^\infty$. If there exists a strictly consistent price deflator in $L^\infty$, then there exists no scalable good deal.
\end{enumerate}
\end{enumerate}
\end{theorem}
\vspace{0.01cm}
\begin{remark}
\label{rem: FTAP}
We provide a detailed comparison of our version of the Fundamental Theorem of Asset Pricing with the various versions obtained in the good deal pricing literature.
\smallskip
(i) The focus of Carr et al.\ (2001) is on one-period frictionless markets. The reference acceptance set is convex and defined in terms of finitely many test probabilities. The reference probability space is finite. In Theorem 1 the authors establish a Fundamental Theorem under the absence of a special type of good deals that is specific to the polyhedral structure of the acceptance set and that is stronger than the absence of scalable good deals. The statement is in terms of representative state pricing functions, which correspond to special (in general not strictly) consistent price deflators.
\smallskip
(ii) The focus of Jaschke and K\"{u}chler (2001) is on multi-period markets with proportional frictions admitting a frictionless asset. The reference acceptance set is assumed to be a convex cone. The reference probability space is general. In fact, the payoff space is an abstract topological vector space. In Corollary 8 the authors establish a Fundamental Theorem under the assumption of absence of good deals of the second kind. In our setting, this is equivalent to the absence of payoffs $X\in\mathcal{A}\cap\mathcal{M}$ such that $\pi(X)<0$. The statement is in terms of consistent (not strictly consistent) price deflators. To deal with the infinite dimensionality of $\mathcal{M}$, which follows from the multi-period nature of the market model, the Fundamental Theorem is stated under an additional assumption that corresponds to the closedness of $\mathcal{C}$. No sufficient conditions for this are provided. It should be noted that the absence of good deals of the second kind is not sufficient to ensure closedness of $\mathcal{C}$ even when $\mathcal{M}$ is finite dimensional. To show this, let $\Omega=\{\omega_1,\omega_2,\omega_3\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\mathbb{P}(\omega_3)=\frac{1}{3}$. We take ${\mathcal{X}}=L^0$ and identify every element of $L^0$ with a vector of $\mathbb{R}^3$. Let $\mathcal{M}$ coincide with ${\mathcal{S}}=\{(x,y,z)\in\mathbb{R}^3 \, ; \ x=0\}$ and let $\pi:{\mathcal{S}}\to\mathbb{R}$ be defined by $\pi(x,y,z)=y$. Consider the closed convex conic acceptance set
\[
\mathcal{A}=\left\{(x,y,z)\in\mathbb{R}^3 \,; \ x^2+y^2+6xy+2\sqrt{6}xz+2\sqrt{6}yz\geq0, \ \sqrt{3}x+\sqrt{3}y+\sqrt{2}z\geq0 \right\},
\]
obtained by rotating the cone $\mathcal{A}'=\{(x,y,z)\in\mathbb{R}^3 \, ; \ x^2+y^2\leq 3 z^2, \ z\geq0\}$ by $\pi/3$ around the direction $(-1,1,0)$. It is easy to verify that if $X\in\mathcal{A}\cap\mathcal{M}$, then $\pi(X)\geq0$ and, hence, there are no good deals of second kind. We show that $\mathcal{C}$ is not closed. For every $n\in\mathbb{N}$ define $X_n=\left(1-\frac{1}{n},-1,0\right)$ and note that $(X_n,0)\in\mathcal{C}$ because $Z_n=(0,0,n^2)\in\mathcal{M}$ satisfies $\pi(Z_n)=0$ and $Z_n-X_n\in\mathcal{A}$. Clearly, we have $(X_n,0)\to(X,0)$ with $X=(1,-1,0)$. We conclude that $\mathcal{C}$ is not closed as $(X,0)\notin\mathcal{C}$.
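The membership claims in this counterexample are elementary but tedious to check by hand. As an informal numerical sanity check (not part of the formal argument), the following Python snippet verifies that $Z_n-X_n\in\mathcal{A}$ for several values of $n$ and searches a coarse grid for a zero-cost superreplication of the limit payoff $X=(1,-1,0)$; the helper \texttt{in\_A}, the tolerance, and the grid are ad hoc choices:

```python
import math

def in_A(x, y, z, tol=1e-9):
    # Membership test for the rotated cone A (ad hoc numerical tolerance).
    q = x*x + y*y + 6*x*y + 2*math.sqrt(6)*x*z + 2*math.sqrt(6)*y*z
    l = math.sqrt(3)*x + math.sqrt(3)*y + math.sqrt(2)*z
    return q >= -tol and l >= -tol

# (X_n, 0) belongs to C: Z_n = (0, 0, n^2) satisfies pi(Z_n) = 0 and Z_n - X_n in A.
for n in [1, 2, 5, 10, 100]:
    Xn = (1 - 1/n, -1.0, 0.0)
    v = (-Xn[0], -Xn[1], n**2 - Xn[2])  # Z_n - X_n
    assert in_A(*v), f"Z_n - X_n not in A for n={n}"

# For the limit X = (1, -1, 0), no Z = (0, y, z) in M with pi(Z) = y <= 0 satisfies
# Z - X = (-1, y + 1, z) in A: a coarse grid search finds no candidate.
found = any(in_A(-1.0, y/10 + 1.0, z/10)
            for y in range(-100, 1) for z in range(0, 1000))
print("zero-cost superreplication of the limit payoff found:", found)
```

On this grid the search returns no candidate, in line with $(X,0)\notin\mathcal{C}$.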
\smallskip
(iii) The focus of \v{C}ern\'{y} and Hodges (2002) is on one-period frictionless markets. The reference acceptance set is convex. The reference probability space is general. In fact, the payoff space is an abstract locally convex topological vector space. In Theorem 2.5 the authors establish a Fundamental Theorem under the absence of good deals with respect to the ``conified'' acceptance set. The statement is expressed in terms of strictly consistent price deflators and is proved under the additional assumption that ${\mathcal{X}}$ is an $L^p$ space for some $1<p<\infty$ and that $\mathcal{A}$ is boundedly generated, i.e., is included in the cone generated by a bounded set. This condition typically fails when the underlying probability space is not finite.
\smallskip
(iv) The focus of Staum (2004) is on multi-period markets with convex frictions. The reference acceptance set is convex. The reference probability space is general. In fact, the payoff space is an abstract locally convex topological vector space. In Theorem 6.2 the author establishes a Fundamental Theorem under the assumption that for all payoffs $X\in{\mathcal{X}}$ and nonzero $Z\in{\mathcal{X}}_+$
\[
\inf\{\pi(Z) \,; \ Z\in\mathcal{M}, \ Z-X\in\mathcal{A}\}+\inf\{\pi(Z) \,; \ Z\in\mathcal{M}, \ Z-X\in{\mathcal{X}}_+\} > 0.
\]
The link with the absence of good deals is not discussed. The statement is in terms of strictly positive (not strictly consistent) price deflators. To deal with the infinite dimensionality of $\mathcal{M}$, which follows from the multi-period nature of the market model, the Fundamental Theorem is stated under the additional assumption that $\pi^+$ is lower semicontinuous. Sufficient conditions for this are provided when ${\mathcal{X}}=L^\infty$ (with respect to the standard norm topology). Unfortunately, the proof of Lemma 6.1, which is key to deriving the Fundamental Theorem, is flawed. On the one hand, Zorn's Lemma is invoked to infer that a family of sets that is closed under countable unions admits a maximal element. However, this is not true as illustrated, for instance, by the family of all countable subsets of $\mathbb{R}$. On the other hand, it is tacitly assumed that, for a generic dual pair $({\mathcal{X}},{\mathcal{X}}')$, the series $\sum_{n\in\mathbb{N}}2^{-n}Y_n$ converges in the topology $\sigma({\mathcal{X}}',{\mathcal{X}})$ for every choice of $(Y_n)\subset{\mathcal{X}}'$, which cannot hold unless special assumptions are required of the pair $({\mathcal{X}},{\mathcal{X}}')$ (such as those stipulated, e.g., in Assumption~\ref{standing assumption}). The underlying strategy of reproducing the exhaustion argument used in the classical proof of the Fundamental Theorem seems unlikely to work because it heavily relies on the existence of a (dominating) probability measure and, as highlighted in Remark~\ref{rem: assumptions dual ftap}, breaks down in the presence of nonpositive acceptable payoffs.
\smallskip
(v) The focus of Cherny (2008) is on one-period markets with convex frictions. The reference acceptance set is a convex cone. The reference payoff space is tailored to the chosen acceptance set by way of a duality construction, which often delivers standard $L^p$ spaces, for example, when the acceptance set is based on expected shortfall. In Theorem 3.1 the author establishes a version of the Fundamental Theorem under the absence of special good deals. In our setting, they correspond to payoffs $X\in\mathcal{M}$ with $\pi(X)\leq0$ and
\[
\inf\{m\in\mathbb{R} \,; \ X+m\in\mathcal{A}\} < 0.
\]
The statement is in terms of a special class of (not necessarily strictly positive) price deflators. The proof uses the additional assumption that the barrier cone of the acceptance set is compactly generated.
\smallskip
(vi) The focus of Madan and Cherny (2010) is on one-period frictionless markets. The reference acceptance set is induced by an acceptability index. The reference payoff space consists of suitably integrable random variables. In Theorem 1 the authors provide a version of the Fundamental Theorem under the absence of good deals. The statement is in terms of (not necessarily strictly positive) price deflators.
\smallskip
(vii) The focus of Cheridito et al.\ (2017) is on multi-period markets with general frictions admitting a frictionless asset. The reference acceptance set is also general but is required to ensure convexity of a set that, in our notation, corresponds to
\[
\{X\in{\mathcal{X}} \,; \ \exists Z\in\mathcal{M}, \ \pi(Z)\leq0, \ Z-X\in\mathcal{A}\} = \{X\in{\mathcal{X}} \,; \ (X,0)\in\mathcal{C}\}.
\]
The payoff space consists of suitable regular stochastic processes. Notably, no dominating probability measure is assumed to exist. In Theorem 2.1 the authors establish a Fundamental Theorem under the absence of a suitable class of strong good deals. To deal with the infinite dimensionality of $\mathcal{M}$, which follows from the multi-period nature of the market model, the Fundamental Theorem is stated under additional regularity assumptions on the market model and the acceptance set ensuring finiteness of superreplication prices of special call options. The statement is in terms of (not necessarily strictly) consistent
price deflators.
\end{remark}
\begin{example}
Let $\Omega=\{\omega_1,\omega_2\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}$ satisfies $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$. In this setting, we take ${\mathcal{X}}={\mathcal{X}}'=L^0$ and identify every element of $L^0$ with a vector of $\mathbb{R}^2$.
\smallskip
(i) Set $\mathcal{M}=\mathbb{R}^2$ and $\pi(x,y)=\max\{x,y\}$ for every $(x,y)\in\mathbb{R}^2$ and define
\[
\mathcal{A} = \mathbb{R}^2_+\cup\{(x,y)\in\mathbb{R}^2 \,; \ x<0, \ y\geq x^2\}.
\]
Note that $\mathcal{A}$ is not a cone. Note also that no scalable good deal exists. However, there exists no strictly consistent price deflator $D=(d_1,d_2)$. Indeed, for every $\lambda>0$ we could otherwise take $X_\lambda=(-\lambda,\lambda^2)\in\mathcal{A}$ and note that $\mathbb{E}[DX_\lambda]>0$ implies $d_2\lambda>d_1$. Letting $\lambda\downarrow0$ yields $d_1\leq0$, which contradicts the strict positivity, hence the strict consistency, of $D$. This shows that, if we remove conicity, then the ``only if'' implication in assertion (a) in Theorem \ref{theo: FTAP} generally fails. It also shows that the converse of the second implication in assertion (b) in the same result generally fails as well.
\smallskip
(ii) Set $\mathcal{M}=\mathbb{R}^2$ and $\pi(x,y)=x+y$ for every $(x,y)\in\mathbb{R}^2$ and define
\[
\mathcal{A} = \mathbb{R}^2_+\cup\{(x,y)\in\mathbb{R}^2 \,; \ x<0, \ y\geq e^{-x}-1\}.
\]
Note that $\mathcal{A}$ is not a cone and $\mathcal{K}(\mathcal{A})=\mathbb{R}^2_+\cup\{(x,y)\in\mathbb{R}^2 \,; \ x<0, \ y\geq -x\}$ is pointed. Note also that $D=(2,2)$ is a (in fact, the only) strictly consistent price deflator. However, $X=(-1,1)\in\mathcal{K}(\mathcal{A})\cap\mathcal{M}$ satisfies $\pi(X)=0$ and is therefore a scalable good deal with respect to $\mathcal{K}(\mathcal{A})$. This shows that the converse of the first implication in assertion (b) in Theorem \ref{theo: FTAP} generally fails.
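Both parts of this example can be probed numerically. The following Python snippet is an informal sanity check (not part of the formal argument): for part (i) it verifies that, for randomly sampled strictly positive $D=(d_1,d_2)$, the acceptable payoff $X_\lambda=(-\lambda,\lambda^2)$ with $\lambda=d_1/(2d_2)$ has nonpositive $D$-expectation; for part (ii) it verifies strict consistency of $D=(2,2)$ on sampled acceptable payoffs and exhibits the scalable good deal $X=(-1,1)$:

```python
import math, random

random.seed(0)
E = lambda D, X: 0.5 * (D[0]*X[0] + D[1]*X[1])  # P(w1) = P(w2) = 1/2

# Part (i): for any strictly positive D, the acceptable payoff X_lam = (-lam, lam^2)
# with lam = d1/(2*d2) has E[D X_lam] <= 0, so D cannot be strictly consistent.
for _ in range(1000):
    d1, d2 = random.uniform(0.01, 10), random.uniform(0.01, 10)
    lam = d1 / (2 * d2)
    X = (-lam, lam**2)
    assert X[1] >= X[0]**2      # X_lam is acceptable
    assert E((d1, d2), X) <= 0  # ... but not strictly positively priced by D

# Part (ii): D = (2, 2) assigns strictly positive expectation to sampled nonzero
# acceptable payoffs (x, e^{-x} - 1 + slack) with x < 0 ...
D = (2.0, 2.0)
for _ in range(1000):
    x = random.uniform(-5, -0.01)
    X = (x, math.exp(-x) - 1 + random.uniform(0, 1))
    assert E(D, X) > 0
# ... while X = (-1, 1) in K(A) has pi(X) = x + y = 0 and E[DX] = 0.
X = (-1.0, 1.0)
print("pi(X) =", X[0] + X[1], " E[DX] =", E(D, X))
```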
\end{example}
\subsection{Superreplication duality}
In this section, we first derive a dual representation of superreplication prices based on consistent price deflators under the assumption that the market is free of strong scalable good deals. We refer to Corollary 8 in Jaschke and K\"{u}chler (2001), Theorem 4.1 in Staum (2004), and Theorem 2.1 in Cheridito et al.\ (2017) for similar representations under the assumption of absence of good deals. We also refer to Proposition 3.9 in Frittelli and Scandolo (2006) for a similar representation in a risk measure setting. These representations were obtained under the assumption of lower semicontinuity of $\pi^+$. As mentioned in the proof of Proposition~\ref{prop: direct FTAP}, a sufficient condition for this to hold is precisely the absence of strong scalable good deals. In a second step, we improve the dual representation by replacing consistency with strict consistency. In a frictionless setting where the acceptance set is the standard positive cone, this is equivalent to moving from price deflators to strictly positive price deflators (equivalently, from martingale measures to equivalent martingale measures). This sharper result therefore extends the classical result on superreplication duality to markets with frictions and general acceptance sets.
\begin{theorem}[{\bf Superreplication duality}]
\label{theo: superhedging theorem}
The following statements hold:
\begin{enumerate}
\item[(i)] If there exists no strong scalable good deal, then for every $X\in{\mathcal{X}}$
\[
\pi^+(X) = \sup_{D\in\mathcal{D}}\{\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)+\gamma_\mathcal{A}(D)\}.
\]
\item[(ii)] If there exists no scalable good deal and if either $\mathcal{A}=L^0_+$ or $\mathcal{A}$ is a pointed cone and the norm predual of ${\mathcal{X}}'$ is separable with respect to its norm topology, then for every $X\in{\mathcal{X}}$
\begin{equation}
\label{eq: dual representation superreplication}
\pi^+(X) = \sup_{D\in\mathcal{D}_{str}}\{\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)\}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Assume the market is free of strong scalable good deals. It follows from Lemma~\ref{lem: closedness C} that $\mathcal{C}$ is closed and $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$. Now, take an arbitrary $X\in{\mathcal{X}}$. Combining the representation of $\pi^+(X)$ in Lemma~\ref{lem: superreplication C} with the representation of (the closure of) $\mathcal{C}$ obtained in Lemma~\ref{lem: elementary properties C}, we infer that
\begin{align*}
\pi^+(X)
&=
\inf\{m\in\mathbb{R} \,; \ \mathbb{E}[DX]-m-\gamma_{\pi,\mathcal{M}}(D)+\gamma_\mathcal{A}(D)\leq0, \ \forall D\in\mathcal{D}\} \\
&=
\inf\{m\in\mathbb{R} \,; \ m\geq\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)+\gamma_\mathcal{A}(D), \ \forall D\in\mathcal{D}\} \\
&=
\sup\{\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)+\gamma_\mathcal{A}(D) \,; \ D\in\mathcal{D}\}.
\end{align*}
This proves (i). Now, let the assumptions in point (ii) hold. It follows from Theorem~\ref{theo: dual ftap A convex} that $\mathcal{D}_{str}$ is nonempty. Moreover, by Lemma~\ref{lem: closedness C}, $\mathcal{C}$ is closed and $(0,n)\notin\mathcal{C}$ for some $n\in\mathbb{N}$. We claim that the representation in Lemma~\ref{lem: elementary properties C} for (the closure of) $\mathcal{C}$ can be rewritten as
\begin{equation}
\label{eq: dual repc C Dstr}
\mathcal{C} = \bigcap_{Y\in\mathcal{D}_{str}}\{(X,m)\in{\mathcal{X}}\times\mathbb{R} \,; \ \mathbb{E}[XY]+m\leq\gamma_{\pi,\mathcal{M}}(Y)\}.
\end{equation}
Note that $\gamma_\mathcal{A}(Y)=0$ for every $Y\in\mathcal{D}$ by conicity of $\mathcal{A}$. Clearly, we only need to establish the inclusion ``$\supset$''. To this end, take any $(X,m)\in{\mathcal{X}}\times\mathbb{R}$ such that $\mathbb{E}[XY]+m\leq\gamma_{\pi,\mathcal{M}}(Y)$ for every $Y\in\mathcal{D}_{str}$. Fix $Y^\ast\in\mathcal{D}_{str}$ and take any $Y\in\mathcal{D}$. For every $\lambda\in(0,1)$ we have $\lambda Y^\ast+(1-\lambda)Y\in\mathcal{D}_{str}$ so that
\begin{align*}
\lambda(\mathbb{E}[XY^\ast]+m)+(1-\lambda)(\mathbb{E}[XY]+m)
&=
\mathbb{E}[X(\lambda Y^\ast+(1-\lambda)Y)]+m \\
&\leq
\gamma_{\pi,\mathcal{M}}(\lambda Y^\ast+(1-\lambda)Y) \\
&\leq
\lambda\gamma_{\pi,\mathcal{M}}(Y^\ast)+(1-\lambda)\gamma_{\pi,\mathcal{M}}(Y).
\end{align*}
Letting $\lambda\downarrow0$ delivers $\mathbb{E}[XY]+m\leq\gamma_{\pi,\mathcal{M}}(Y)$ and shows the desired inclusion. Now, take any payoff $X\in{\mathcal{X}}$. It follows from Lemma~\ref{lem: superreplication C} and \eqref{eq: dual repc C Dstr} that
\begin{align*}
\pi^+(X)
&=
\inf\{m\in\mathbb{R} \,; \ \mathbb{E}[DX]-m\leq\gamma_{\pi,\mathcal{M}}(D), \ \forall D\in\mathcal{D}_{str}\} \\
&=
\inf\{m\in\mathbb{R} \,; \ m\geq\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D), \ \forall D\in\mathcal{D}_{str}\} \\
&=
\sup\{\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D) \,; \ D\in\mathcal{D}_{str}\}.
\end{align*}
This establishes (ii) and concludes the proof.
\end{proof}
\subsection{Dual characterization of market-consistent prices}
The Fundamental Theorem also allows us to derive our desired dual characterization of market-consistent prices with acceptable risk, which extends the classical characterization of arbitrage-free prices in terms of strictly positive price deflators. We complement this by showing that, contrary to the standard frictionless setting, for an attainable payoff with market-consistent superreplication price the supremum in the dual representation of the corresponding superreplication price need not be attained. Interestingly enough, this implies that a dual characterization of market-consistent prices for replicable payoffs in terms of strictly consistent price deflators is not always possible. The accompanying proposition shows a situation where the dual characterization holds also for replicable payoffs.
\begin{proposition}[{\bf Dual characterization of market-consistent prices}]
\label{theo: dual MCP}
If there exists no scalable good deal and if either $\mathcal{A}=L^0_+$ or $\mathcal{A}$ is a pointed cone and the norm predual of ${\mathcal{X}}'$ is separable with respect to its norm topology, then the following statements hold:
\begin{enumerate}
\item[(i)] If either $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ and the supremum in \eqref{eq: dual representation superreplication} is attained, or $\pi^+(X)\notin\mathop {\rm MCP}\nolimits(X)$, then
\begin{equation}
\label{eq: dual representation MCP}
\mathop {\rm MCP}\nolimits(X) = \{p\in\mathbb{R} \,; \ \exists D\in\mathcal{D}_{str} \,:\, p\leq\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)\}.
\end{equation}
\item[(ii)] If $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ and the supremum in \eqref{eq: dual representation superreplication} is not attained, then the strict inclusion ``$\supset$'' holds in \eqref{eq: dual representation MCP}. This can occur even if both $\pi$ and $\mathcal{M}$ are conic and there exists no good deal.
\end{enumerate}
\end{proposition}
\begin{proof}
It follows from Theorem~\ref{theo: dual ftap A convex} that $\mathcal{D}_{str}$ is nonempty. First, we show the inclusion ``$\supset$'' in \eqref{eq: dual representation MCP}. Let $D\in\mathcal{D}_{str}$. Note that for every attainable payoff $Z\in\mathcal{M}$ such that $Z-X\in\mathcal{A}\setminus\{0\}$ we have
\[
\pi(Z)
\geq
\mathbb{E}[DZ]-\gamma_{\pi,\mathcal{M}}(D)
=
\mathbb{E}[D(Z-X)]+\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)
>
\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)
\]
by strict consistency. Note also that $\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)\leq\pi(X)$ in the case that $X\in\mathcal{M}$. This shows that $\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)$ is a market-consistent price for $X$ and yields the desired inclusion. Now, recall from Proposition~\ref{prop: interval MCP} that $\pi^+(X)$ is the supremum of the set $\mathop {\rm MCP}\nolimits(X)$. If $\pi^+(X)$ belongs to $\mathop {\rm MCP}\nolimits(X)$, then the inclusion ``$\supset$'' in \eqref{eq: dual representation MCP} is an equality if and only if the supremum in \eqref{eq: dual representation superreplication} is attained. We refer to Example~\ref{ex: MCP are not represented via pricing densities} for a concrete situation where the latter condition fails even if both $\pi$ and $\mathcal{M}$ are conic and the market admits no good deals. Finally, assume that $\pi^+(X)$ does not belong to $\mathop {\rm MCP}\nolimits(X)$. To complete the proof we only have to show the inclusion ``$\subset$'' in \eqref{eq: dual representation MCP}. To this effect, take an arbitrary market-consistent price $p\in\mathop {\rm MCP}\nolimits(X)$ and note that we must have $p<\pi^+(X)$. Hence, it follows from the representation \eqref{eq: dual representation superreplication} that $p<\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)$ for a suitable $D\in\mathcal{D}_{str}$. This concludes the proof.
\end{proof}
\begin{example}
\label{ex: MCP are not represented via pricing densities}
Let $\Omega=\{\omega_1,\omega_2\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}$ is specified by $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$. In this simple setting, we take ${\mathcal{X}}={\mathcal{X}}'=L^0$ and identify every element of $L^0$ with a vector of $\mathbb{R}^2$. Take $\mathcal{A}=\mathbb{R}^2_+$, ${\mathcal{S}}=\mathbb{R}^2$ and $\mathcal{M}=\{(x,y)\in\mathbb{R}^2 \, ; \ 0\leq y\leq-x\}$. Define
\[
\pi(x,y)=
\begin{cases}
-\sqrt{x^2+xy} & \mbox{if} \ (x,y)\in\mathcal{M}\\
\infty & \mbox{otherwise}
\end{cases},
\]
which is convex because it is continuous on $\mathcal{M}$ and its Hessian matrix in the interior of $\mathcal{M}$
has nonnegative eigenvalues, namely $0$ and $\frac{1}{4}(x^2+y^2)(x^2+xy)^{-3/2}$. Both $\mathcal{A}$ and $\mathcal{M}$ are cones and $\pi$ is conic. Moreover, there exists no good deal. A direct inspection shows that strictly consistent price deflators $D\in{\mathcal{X}}'$ exist (for instance, take $D=(2,1)$) and satisfy $\gamma_{\pi,\mathcal{M}}(D)=0$ by conicity. Now, set $X=(-1,1)\in\mathcal{M}$. We have that $\pi^+(X)=\pi(X)=0$ since $(\mathcal{A}+X)\cap\mathcal{M}=\{X\}$. This also yields $0\in\mathop {\rm MCP}\nolimits(X)$ by Proposition \ref{prop: characterization mcp superreplication}. We show that there is no $D=(d_1,d_2)\in\mathcal{D}_{str}$ such that $\mathbb{E}[DX]=0$. Indeed, we would otherwise have $d_1=d_2$ and taking $Z_\lambda=(-1,\lambda)\in\mathcal{M}$ for $\lambda\in(0,1)$ would deliver
\[
\sup_{0<\lambda<1}\{\mathbb{E}[DZ_\lambda]-\pi(Z_\lambda)\}\leq0 \ \implies \ d_1\geq\sup_{0<\lambda<1}\frac{2}{\sqrt{1-\lambda}}=\infty.
\]
As a result, the supremum in~\eqref{eq: dual representation superreplication} is not attained.
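For completeness, the blow-up of the candidate deflators can be checked numerically. The snippet below is an informal Python sanity check (not part of the formal argument) that, along $Z_\lambda=(-1,\lambda)$, consistency forces $d\geq2/\sqrt{1-\lambda}$, which explodes as $\lambda\uparrow1$:

```python
import math

def pi(x, y):
    # Pricing functional on M = {(x, y) : 0 <= y <= -x}.
    assert 0 <= y <= -x
    return -math.sqrt(x*x + x*y)

# A deflator D = (d, d) with E[DX] = 0 at X = (-1, 1) must satisfy
# E[D Z_lam] - pi(Z_lam) <= gamma_{pi,M}(D) = 0 for Z_lam = (-1, lam).
for lam in [0.9, 0.99, 0.999, 0.9999]:
    Z = (-1.0, lam)
    bound = 2 / math.sqrt(1 - lam)               # smallest admissible d along Z_lam
    ok = 0.5 * bound * (Z[0] + Z[1]) - pi(*Z)    # vanishes at d = bound
    bad = 0.5 * (0.99 * bound) * (Z[0] + Z[1]) - pi(*Z)
    assert ok <= 1e-9 and bad > 0                # any smaller d violates consistency
    print(f"lam = {lam}: d >= {bound:.1f}")
```

The required $d$ grows without bound as $\lambda\uparrow1$, so no single strictly consistent deflator can attain the supremum.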
\end{example}
\begin{proposition}
If $\mathcal{A}$ is a cone and there exists a strictly consistent price deflator $D\in{\mathcal{X}}'$ such that $\gamma_{\pi,\mathcal{M}}(D)=0$, then for every $X\in{\mathcal{X}}$ such that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ and such that $X\in\mathcal{M}^\infty\cap(-\mathcal{M}^\infty)$ and $\pi$ is linear on $\mathop{\rm span}\nolimits(X)$ we have
\[
\mathop {\rm MCP}\nolimits(X) = \{p\in\mathbb{R} \,; \ \exists D\in\mathcal{D}_{str} \,:\, p\leq\mathbb{E}[DX]\}.
\]
\end{proposition}
\begin{proof}
It follows from Proposition \ref{prop: density implies no good deal} that the market has no scalable good deals. Now, take a payoff $X\in{\mathcal{X}}$ such that $\pi^+(X)\in\mathop {\rm MCP}\nolimits(X)$ and assume that $X\in\mathcal{M}^\infty\cap(-\mathcal{M}^\infty)$ and $\pi$ is linear on $\mathop{\rm span}\nolimits(X)$. By Proposition~\ref{theo: characterization mcp superreplication} we have $\pi^+(X)=\pi(X)$. Moreover, by Proposition~\ref{prop: pricing density}, we know that $\pi(X)=\mathbb{E}[DX]$. Hence, the supremum in \eqref{eq: dual representation superreplication} is attained and the claim follows from Proposition \ref{theo: dual MCP}.
\end{proof}
The next example shows that conicity is necessary for both Theorem~\ref{theo: superhedging theorem} and Proposition~\ref{theo: dual MCP} to hold.
\begin{example}
Let $\Omega=\{\omega_1,\omega_2\}$ and assume that ${\mathcal{F}}$ is the power set of $\Omega$ and that $\mathbb{P}$ is specified by $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$. In this simple setting, we take ${\mathcal{X}}={\mathcal{X}}'=L^0$ and identify every element of $L^0$ with a vector of $\mathbb{R}^2$. Define $\pi(x,y)=\max\{x,x+y\}$ for every $(x,y)\in\mathbb{R}^2$ and set
\[
\mathcal{M}=\{(x,y)\in\mathbb{R}^2 \, ; \ y\geq0\}, \ \ \ \ \mathcal{A} = \{(x,y)\in\mathbb{R}^2 \,; \ y\geq\max\{-2x,0\}, \ x\geq-1\}.
\]
Note that $\pi$ and $\mathcal{M}$ are both conic while $\mathcal{A}$ is not. Note also that there exists no good deal. It is not difficult to verify that strictly consistent price deflators exist. Indeed, for a strictly positive $D=(d_1,d_2)$ we have
\[
\begin{cases}
\sup\{\mathbb{E}[DX]-\pi(X) \,; \ X\in\mathcal{M}\}<\infty \\
\mbox{$\mathbb{E}[DX]>0$ for every nonzero $X\in\mathcal{A}$}
\end{cases}
\ \iff \
\begin{cases}
d_1=2 \\
1<d_2\leq 2
\end{cases}
.
\]
Set $X=(2,-4)\in{\mathcal{X}}$. Since $(\mathcal{A}+X)\cap\mathcal{M}=\{(x,y)\in\mathbb{R}^2 \,; \ x\geq1, \ y\geq0\}$, we see that $\pi^+(X)=\pi(1,0)=1$. As $X$ does not belong to $\mathcal{M}$, we have $\mathop {\rm MCP}\nolimits(X)=(-\infty,1)$ by Proposition~\ref{theo: characterization mcp superreplication}. Both \eqref{eq: dual representation superreplication} and \eqref{eq: dual representation MCP} fail, since for every strictly consistent price deflator $D=(d_1,d_2)$ we have $\gamma_{\pi,\mathcal{M}}(D)=0$ by conicity and
\[
\sup_{D\in\mathcal{D}_{str}}\{\mathbb{E}[DX]-\gamma_{\pi,\mathcal{M}}(D)\} = \sup_{1<d_2\leq 2}\{2-2d_2\} = 0.
\]
\end{example}
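The failure of the dual representation in the preceding example can also be checked numerically. The following Python sketch (an illustration added for intuition, not part of the formal argument) scans the strictly consistent deflators $D=(2,d_2)$ with $1<d_2\leq2$ and confirms that the dual value $\mathbb{E}[DX]=2-2d_2$ approaches $0$ without ever attaining it, while remaining strictly below the superreplication price $\pi^+(X)=1$:

```python
import numpy as np

# Example data: Omega = {w1, w2}, P(w1) = P(w2) = 1/2, payoff X = (2, -4).
# Strictly consistent price deflators have the form D = (2, d2), 1 < d2 <= 2.
x = np.array([2.0, -4.0])
d2 = np.linspace(1.0 + 1e-9, 2.0, 100_001)
dual_values = 0.5 * (2.0 * x[0] + d2 * x[1])   # E[DX] = 2 - 2*d2

superreplication_price = 1.0                   # pi^+(X) computed in the example
assert dual_values.max() < 0                   # the supremum 0 is never attained...
assert dual_values.max() > -1e-4               # ...but is approached as d2 -> 1
assert dual_values.max() < superreplication_price  # strict duality gap
```

The gap between the dual supremum $0$ and $\pi^+(X)=1$ is exactly the failure of the dual representation when $\mathcal{A}$ is not conic.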
\begin{comment}
\section{Examples of acceptance sets}
\label{sect: applications}
In this section we present some concrete examples of acceptance sets. For each example we verify the closedness requirement stipulated in Assumption \ref{standing assumption} and focus on the key properties in the formulation of the Fundamental Theorems of Asset Pricing. More precisely, we provide an explicit description of the corresponding recession and generated cones, which play a role in the absence of (strong) scalable acceptable deals in
Theorems~\ref{theo: direct FTAP} and~\ref{theo: dual ftap A convex}, and verify the pointedness requirement in Theorem~\ref{theo: dual ftap A convex}.
\smallskip
The payoff spaces considered below belong to the broad class of Orlicz spaces, which is flexible enough to accommodate most of the spaces considered in the literature. On the one hand, it covers the case of $L^p$ spaces for $1\leq p\leq\infty$. On the other hand, by going beyond Orlicz functions of power type, it allows one to tailor the space to, e.g., general utility functions as carefully explained in (Biagini and Frittelli, 2008) and exploited in a pricing setting in (Arai, 2011) and (Arai and Fukasawa, 2014). Recall that $\Phi:[0,\infty)\to[0,\infty]$ is called an Orlicz function if it is convex, left-continuous, increasing, finite on a right neighborhood of zero, and satisfies $\Phi(0)=0$. The conjugate of $\Phi$ is the Orlicz function defined by
\[
\Phi^\ast(u) := \sup_{t\in[0,\infty)}\{tu-\Phi(t)\}.
\]
For every $X\in L^0(\mathbb{P})$ define the Luxemburg norm by
\[
\|X\|_\Phi := \inf\left\{\lambda\in(0,\infty) \,; \ \mathbb{E}_\mathbb{P}\left[\Phi\left(\frac{|X|}{\lambda}\right)\right]\leq1\right\}.
\]
The corresponding Orlicz space is given by
\[
L^\Phi(\mathbb{P}) := \{X\in L^0(\mathbb{P}) \,; \ \|X\|_\Phi<\infty\}.
\]
The heart of $L^\Phi(\mathbb{P})$ is the space
\[
H^\Phi(\mathbb{P}) := \left\{X\in L^\Phi(\mathbb{P}) \,; \ \mathbb{E}_\mathbb{P}\left[\Phi\left(\frac{|X|}{\lambda}\right)\right]<\infty, \ \forall \lambda\in(0,\infty)\right\}.
\]
The classical Lebesgue spaces are special examples of Orlicz spaces. Indeed, if $\Phi(t)=t^p$ for $p\in[1,\infty)$ and $t\in[0,\infty)$, then $L^\Phi(\mathbb{P})=H^\Phi(\mathbb{P})=L^p(\mathbb{P})$ and the Luxemburg norm coincides with the usual $p$ norm. Moreover, if we set $\Phi(t)=0$ for $t\in[0,1]$ and $\Phi(t)=\infty$ otherwise, then we have $L^\Phi(\mathbb{P})=L^\infty(\mathbb{P})$ and the Luxemburg norm coincides with the usual esssup norm. Note that, in this case, $H^\Phi(\mathbb{P})=\{0\}$. In a nonatomic setting one has $L^\Phi(\mathbb{P})=H^\Phi(\mathbb{P})$ if and only if $\Phi$ satisfies the $\Delta_2$ condition, i.e.\ there exist $s\in(0,\infty)$ and $k\in(0,\infty)$ such that $\Phi(2t)<k\Phi(t)$ for every $t\in[s,\infty)$. A well-known example of a nontrivial $H^\Phi(\mathbb{P})$ that is strictly contained in $L^\Phi(\mathbb{P})$ is obtained by setting $\Phi(t)=\exp(t)-1$ for $t\in[0,\infty)$. The norm dual of $L^\Phi(\mathbb{P})$ cannot be identified with a subspace of $L^0(\mathbb{P})$ in general. However, if $\Phi$ is finite valued (otherwise $H^\Phi(\mathbb{P})=\{0\}$), the norm dual of $H^\Phi(\mathbb{P})$ can always be identified with $L^{\Phi^\ast}(\mathbb{P})$. For the case $L^p(\mathbb{P})$, for $p\in[1,\infty)$, this is simply the well-known identification of the norm dual of $L^p(\mathbb{P})$ with $L^q(\mathbb{P})$ and $q=\frac{p}{p-1}$ (with the usual convention $\frac10:=\infty$). For more details on Orlicz spaces we refer, e.g., to (Rao and Ren, 1991).
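For intuition, the Luxemburg norm can be evaluated numerically on a finite probability space by bisection in $\lambda$, using the fact that $\lambda\mapsto\mathbb{E}_\mathbb{P}[\Phi(|X|/\lambda)]$ is decreasing. The Python sketch below (illustrative only, for a uniform empirical measure) verifies that for $\Phi(t)=t^p$ the Luxemburg norm recovers the usual $p$-norm:

```python
import numpy as np

def luxemburg_norm(x, phi, tol=1e-8):
    """Luxemburg norm under the uniform empirical measure:
    inf{lam > 0 : E[phi(|X|/lam)] <= 1}, found by bisection since the
    expectation is decreasing in lam."""
    x = np.abs(np.asarray(x, dtype=float))
    lo, hi = tol, max(x.max(), tol) * 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(x / mid).mean() <= 1.0:
            hi = mid        # constraint holds: the infimum is below mid
        else:
            lo = mid
    return hi

x = np.array([1.0, -2.0, 3.0])
p = 2.0
lp = np.mean(np.abs(x) ** p) ** (1.0 / p)   # the usual p-norm (p = 2)
assert abs(luxemburg_norm(x, lambda t: t ** p) - lp) < 1e-5
```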
\medskip
Throughout the remainder of this section we work under the following standing assumption.
\begin{assumption}
We assume that $(\Omega,{\mathcal{F}},\mathbb{P})$ is nonatomic. The reference payoff space is taken to be ${\mathcal{X}}=L^\Phi(\mathbb{P})$ for a fixed Orlicz function $\Phi$ and its companion dual space is ${\mathcal{X}}'=L^\infty(\mathbb{P})$.
\end{assumption}
\smallskip
We start by highlighting a number of sufficient conditions for the weak closedness required in Assumption \ref{standing assumption} to hold. These conditions are easy to check and fulfilled by virtually all acceptance sets of interest. As a preliminary step, we recall the notion of law invariance and surplus invariance. For every $X\in L^0(\mathbb{P})$ we denote by $\mathbb{P}_X$ the probability law of $X$ under $\mathbb{P}$.
\begin{definition}
We say that $\mathcal{A}$ is {\em law invariant} under $\mathbb{P}$ if for all $X,Y\in L^0(\mathbb{P})$ such that $\mathbb{P}_X=\mathbb{P}_Y$ we have $X\in\mathcal{A}$ if and only if $Y\in\mathcal{A}$. We say that $\mathcal{A}$ is {\em surplus invariant} if for all $X,Y\in L^0(\mathbb{P})$ such that $X^-=Y^-$ we have $X\in\mathcal{A}$ if and only if $Y\in\mathcal{A}$.
\end{definition}
\smallskip
\begin{proposition}
\label{prop: law invariant A}
Assume that one of the following conditions holds:
\begin{enumerate}
\item[(i)] $\mathcal{A}\cap L^1(\mathbb{P})$ is closed with respect to the norm topology of $L^1(\mathbb{P})$.
\item[(ii)] $\mathcal{A}$ is law invariant under $\mathbb{P}$ and for every sequence $(X_n)\subset\mathcal{A}\cap L^\Phi(\mathbb{P})$ and every $X\in L^\Phi(\mathbb{P})$
\[
\mbox{$X_n\to X$ $\mathbb{P}$-almost surely}, \ \sup_{n\in\mathbb{N}}|X_n|\in L^\Phi(\mathbb{P}) \ \implies \ X\in \mathcal{A}.
\]
\item[(iii)] $\mathcal{A}$ is surplus invariant and for every sequence $(X_n)\subset\mathcal{A}\cap L^\Phi(\mathbb{P})$ and every $X\in L^\Phi(\mathbb{P})$
\[
\mbox{$X_n\to X$ $\mathbb{P}$-almost surely}, \ \sup_{n\in\mathbb{N}}|X_n|\in L^\Phi(\mathbb{P}) \ \implies \ X\in\mathcal{A}.
\]
\end{enumerate}
Then, $\mathcal{A}\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\end{proposition}
\begin{proof}
If (i) holds, then $\mathcal{A}\cap L^1(\mathbb{P})$ is $\sigma(L^1(\mathbb{P}),L^\infty(\mathbb{P}))$-closed by Theorem 5.98 in (Aliprantis and Border, 2006). Since $L^\Phi(\mathbb{P})$ is contained in $L^1(\mathbb{P})$, the desired closedness follows. Next, assume that (ii) holds. In this case, the set $\mathcal{A}\cap L^\Phi(\mathbb{P})$ is norm closed. This is because every sequence in $L^\Phi(\mathbb{P})$ that converges in norm admits a dominated subsequence that converges $\mathbb{P}$-almost surely. This follows from a straightforward extension of Theorem 13.6 in (Aliprantis and Border, 2006) to the Orlicz setting. As a result, the desired closedness follows again from Theorem 5.98 in (Aliprantis and Border, 2006) when ${\mathcal{X}}=L^1(\mathbb{P})$ and from Proposition 1.1 in (Svindland, 2010) when ${\mathcal{X}}=L^\infty(\mathbb{P})$. In all other cases it follows from Theorem 1.1 in (Gao et al., 2018). Finally, if (iii) holds, the desired closedness follows from Theorem 1 in (Gao and Munari, 2020).
\end{proof}
\smallskip
\begin{remark}
(i) Law invariance is a standard property in risk measure theory and stipulates that acceptability is only driven by the probability distribution of a payoff. Surplus invariance was introduced in (Koch-Medina et al., 2015) and thoroughly studied in (Koch-Medina et al., 2017) and (Gao and Munari, 2020) and stipulates that acceptability is only driven by the downside profile of a payoff.
\smallskip
(ii) The closedness under dominated $\mathbb{P}$-almost sure convergence stated in point (ii) and (iii) above is sometimes referred to as Fatou closedness. We refer to (Tantrawan and Leung, 2020) and (Gao and Munari, 2020) for a number of results linking Fatou closedness and topological closedness beyond the Orlicz space setting.
\end{remark}
\smallskip
Next, we highlight a sufficient condition for a conic acceptance set to be pointed, which is a crucial assumption in Theorem~\ref{theo: dual ftap A convex}. In fact, it is sufficient to focus on pointedness of the restricted acceptance set as observed in Remark~\ref{rem: assumptions dual ftap}.
\begin{proposition}
\label{prop: pointedness}
Assume that $\mathcal{A}$ is a law-invariant cone such that $\mathcal{A}\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$ and $-1\notin\mathcal{A}$. Then, one of the following alternatives holds:
\begin{enumerate}
\item[(i)] $\mathcal{A}\cap L^\Phi(\mathbb{P})$ is pointed.
\item[(ii)] $\mathcal{A}\cap L^\Phi(\mathbb{P})=\{X\in L^\Phi(\mathbb{P}) \,; \ \mathbb{E}_\mathbb{P}[X]\geq0\}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Consider the map $\rho:L^\Phi(\mathbb{P})\to[-\infty,\infty]$ defined by $\rho(X)=\inf\{m\in\mathbb{R} \,; \ X+m\in\mathcal{A}\}$. It is clear that $\rho$ is sublinear and satisfies $\rho(X)=\rho(Y)$ for all $X,Y\in L^\Phi(\mathbb{P})$ such that $\mathbb{P}_X=\mathbb{P}_Y$. It is also clear that $\rho$ is $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$-lower semicontinuous. Since $-1\notin\mathcal{A}$, we must have $\rho(0)=0$ and, hence, $\rho$ cannot take the value $-\infty$ for otherwise $\rho(0)=-\infty$ by convexity and lower semicontinuity. Note also that
\[
\mathcal{A}\cap L^\Phi(\mathbb{P}) = \{X\in L^\Phi(\mathbb{P}) \,; \ \rho(X)\leq0\}.
\]
It follows from Proposition 5.9 in (Bellini et al., 2020) that two situations are possible. In the first case, for every nonzero $X\in L^\Phi(\mathbb{P})$ such that $\rho(X)\leq0$ we have $\rho(-X)>0$. This implies (i). In the second case, we have $\rho(X)=-\mathbb{E}_\mathbb{P}[X]$ for every $X\in L^\Phi(\mathbb{P})$, which yields (ii).
\end{proof}
\subsection{Expected Shortfall}
A prominent example of acceptance set defined in terms of a risk measure is the one based on Expected Shortfall at some level $\alpha\in(0,1)$. For a given random variable $X\in L^0(\mathbb{P})$ we define the Value at Risk of $X$ at level $\alpha$ as the negative of the upper $\alpha$-quantile of $X$, i.e.
\[
\mathop {\rm VaR}\nolimits_\alpha(X) := \inf\{x\in\mathbb{R} \,; \ \mathbb{P}(X+x<0)\leq\alpha\} = -\inf\{x\in\mathbb{R} \,; \ \mathbb{P}(X\leq x)>\alpha\}.
\]
The Expected Shortfall of $X$ at level $\alpha$ is defined by
\[
\mathop {\rm ES}\nolimits_\alpha(X) := \frac{1}{\alpha}\int_0^\alpha \mathop {\rm VaR}\nolimits_p(X)dp.
\]
Intuitively speaking, $\mathop {\rm ES}\nolimits_\alpha(X)$ coincides with the expectation of $-X$ conditional on the left tail beyond the upper $\alpha$-quantile. This interpretation is formally correct when, e.g., the distribution function of $X$ is continuous, in which case we can equivalently write
\[
\mathop {\rm ES}\nolimits_\alpha(X) = -\mathbb{E}_\mathbb{P}[X\vert X\leq-\mathop {\rm VaR}\nolimits_\alpha(X)].
\]
Note that we always have $\mathop {\rm ES}\nolimits_\alpha(X)\geq\mathop {\rm VaR}\nolimits_\alpha(X)>-\infty$. It follows that the quantity $\mathop {\rm ES}\nolimits_\alpha(X)$ is finite if and only if the negative part of $X$ is integrable under $\mathbb{P}$. Next, set
\[
\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha) := \{X\in L^0(\mathbb{P}) \,; \ \mathop {\rm ES}\nolimits_\alpha(X)\leq0\}.
\]
In line with the above interpretation, the set $\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha)$ consists of all the payoffs that are positive on average on the left tail beyond their upper $\alpha$-quantile. The next result follows from Propositions~\ref{prop: law invariant A} and~\ref{prop: pointedness}.
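On a finite sample with uniform weights, $\mathop {\rm VaR}\nolimits_\alpha$ and $\mathop {\rm ES}\nolimits_\alpha$ reduce to order statistics, which gives a quick way to illustrate both the inequality $\mathop {\rm ES}\nolimits_\alpha\geq\mathop {\rm VaR}\nolimits_\alpha$ and the acceptability criterion $\mathop {\rm ES}\nolimits_\alpha(X)\leq0$. The following Python sketch uses a simple empirical discretization of the definitions above (an illustration, not part of the paper):

```python
import numpy as np

def var_alpha(x, alpha):
    """Empirical VaR: the negative of the upper alpha-quantile of the sample."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.floor(alpha * len(x)))
    return -x[min(k, len(x) - 1)]

def es_alpha(x, alpha):
    """Empirical ES: (1/alpha) * integral of VaR_p over (0, alpha], which for a
    uniform sample is approximately minus the average of the worst outcomes."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(int(np.floor(alpha * len(x))), 1)
    return -x[:k].mean()

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=100_000)

alpha = 0.05
assert es_alpha(sample, alpha) >= var_alpha(sample, alpha)  # ES dominates VaR
assert es_alpha(sample, alpha) > 0   # this payoff is not in A_ES(alpha)
```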
\begin{proposition}
The set $\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha)$ is a conic acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha)\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_{\mathop {\rm ES}\nolimits}(\alpha)\cap L^\Phi(\mathbb{P})$ is pointed.
\end{enumerate}
\end{proposition}
\subsection{Gain-loss ratios}
Another prominent example of acceptance set defined in terms of a risk measure is the one based on the expectile at some level $\alpha\in\big(0,\frac{1}{2}\big]$. For a given random variable $X\in L^0(\mathbb{P})$ we define the expectile of $X$ at level $\alpha$ as the unique solution $e_\alpha(X)\in[-\infty,\infty]$ of the equation
\[
\alpha\mathbb{E}_\mathbb{P}[(X-e_\alpha(X))^+]=(1-\alpha)\mathbb{E}_\mathbb{P}[(e_\alpha(X)-X)^+]
\]
provided that either $X^+$ or $X^-$ belongs to $L^1(\mathbb{P})$ and $e_\alpha(X)=-\infty$ otherwise. Note that $e_\alpha(X)$ is finite if and only if $X$ belongs to $L^1(\mathbb{P})$. Moreover, $e_\alpha(X)=\mathbb{E}_\mathbb{P}[X]$ for every $X\in L^0(\mathbb{P})$ if $\alpha=\frac{1}{2}$. Now, set
\[
\mathcal{A}_e(\alpha) := \{X\in L^0(\mathbb{P}) \,; \ e_\alpha(X)\geq0\}.
\]
This set can be equivalently expressed as
\[
\mathcal{A}_e(\alpha) = \left\{X\in L^0(\mathbb{P}) \, ; \ \frac{\mathbb{E}_\mathbb{P}[X^+]}{\mathbb{E}_\mathbb{P}[X^-]}\geq\frac{1-\alpha}{\alpha}\right\},
\]
with the convention $\frac{\infty}{\infty}=-\infty$ and $\frac{0}{0}=\infty$. The set $\mathcal{A}_e(\alpha)$ thus consists of all the payoffs such that the ratio between the expected inflow of money (gains) and the expected outflow of money (losses) is sufficiently large. In particular, note that $\frac{1-\alpha}{\alpha}\geq1$ by assumption on $\alpha$, which implies that the expected gain must be at least as large as the expected loss. This type of acceptability criterion has been investigated in a pricing context by (Bernardo and Ledoit, 2000), even though the link with expectiles was not discussed there. The next result follows from Propositions~\ref{prop: law invariant A} and~\ref{prop: pointedness}.
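Since the difference $(1-\alpha)\mathbb{E}_\mathbb{P}[(e-X)^+]-\alpha\mathbb{E}_\mathbb{P}[(X-e)^+]$ is increasing in $e$, expectiles of a finite sample can be computed by bisection, and membership in $\mathcal{A}_e(\alpha)$ can be cross-checked against the gain-loss ratio characterization. A small Python sketch (illustrative only):

```python
import numpy as np

def expectile(x, alpha, tol=1e-10):
    """Compute e_alpha(X) for a uniform sample by bisection, using that
    g(e) = (1-alpha)*E[(e-X)^+] - alpha*E[(X-e)^+] is increasing in e."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()

    def g(e):
        return (1 - alpha) * np.maximum(e - x, 0).mean() \
             - alpha * np.maximum(x - e, 0).mean()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

x = np.array([-1.0, 0.5, 2.0, 3.0])
alpha = 0.25

# Membership in A_e(alpha) via the gain-loss ratio characterization
gain_loss = np.maximum(x, 0).mean() / np.maximum(-x, 0).mean()
assert (expectile(x, alpha) >= 0) == (gain_loss >= (1 - alpha) / alpha)
# For alpha = 1/2 the expectile reduces to the expectation
assert abs(expectile(x, 0.5) - x.mean()) < 1e-6
```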
\begin{proposition}
The set $\mathcal{A}_e(\alpha)$ is a conic acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_e(\alpha)\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_e(\alpha)\cap L^\Phi(\mathbb{P})$ is pointed for every $\alpha\in\big(0,\frac{1}{2}\big)$.
\end{enumerate}
\end{proposition}
\subsection{Test scenarios}
Consider an event $E\in{\mathcal{F}}$ such that $\mathbb{P}(E)>0$ and define the set
\[
\mathcal{A}_E := \{X\in L^0(\mathbb{P}) \,; \ X\mathbbm 1_E\geq0\}.
\]
The set $\mathcal{A}_E$ consists of all the payoffs that are positive on the event $E$. In this case, the elements of $E$ can be seen as pre-specified test or control scenarios and the acceptability criterion boils down to requiring a positive payment in each of these scenarios. Clearly, the set $\mathcal{A}_E$ corresponds to the standard positive cone provided that we take $E=\Omega$ or more generally $\mathbb{P}(E)=1$. The next result follows from Proposition~\ref{prop: law invariant A}.
\begin{proposition}
The set $\mathcal{A}_E$ is a conic acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_E\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_E\cap(-\mathcal{A}_E)=\{X\in L^0(\mathbb{P}) \,; \ X\mathbbm 1_E=0\}$.
\end{enumerate}
\end{proposition}
\subsection{Expected utility}
Let $u:\mathbb{R}\to[-\infty,\infty)$ be a nonconstant, increasing, concave, right-continuous function satisfying $u(0)=0$ and $\frac{u(x)}{x}\to\infty$ for $x\to-\infty$. We interpret $u$ as a classical von Neumann-Morgenstern utility function. The last condition requires that a rational agent with utility $u$ does not asymptotically behave like a risk-neutral agent for large losses. The case of a risk-neutral agent is covered in Section \ref{sect: polyhedral A}. For a fixed level $\alpha\in(-\infty,0]$ define
\[
\mathcal{A}_u(\alpha) := \{X\in L^0(\mathbb{P}) \, ; \ \mathbb{E}_\mathbb{P}[u(X)]\geq\alpha\}.
\]
This set consists of all the payoffs that yield a sufficiently large expected utility. In particular, the level $\alpha$ could coincide with some utility level, in which case $\mathcal{A}_u(\alpha)$ would consist of all the payoffs that are preferable, from the perspective of the utility function $u$, to a pre-specified deterministic monetary loss. This type of acceptability criterion has been considered in a pricing context by (\v{C}ern\'{y} and Hodges, 2002), (\v{C}ern\'{y}, 2003), (Kl\"{o}ppel and Schweizer, 2007), and (Arai, 2011). We also refer to (F\"{o}llmer and Leukert, 2000) for related hedging problems.
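A simple numerical illustration (not part of the formal development) shows why $\mathcal{A}_u(\alpha)$ is in general not conic: a payoff can be acceptable while a large positive multiple of it is not. The sketch below uses the exponential utility $u(x)=1-e^{-x}$, which satisfies the assumptions above, including $\frac{u(x)}{x}\to\infty$ as $x\to-\infty$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.2, scale=1.0, size=200_000)   # a payoff with positive mean

def u(t):
    # exponential utility: u(0) = 0 and u(t)/t -> infinity as t -> -infinity
    return 1.0 - np.exp(-t)

alpha = -0.5
assert u(x).mean() >= alpha        # X lies in A_u(alpha)...
assert u(10.0 * x).mean() < alpha  # ...but 10*X does not: A_u(alpha) is no cone
```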
\begin{proposition}
The set $\mathcal{A}_u(\alpha)$ is an acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_u(\alpha)\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_u(\alpha)^\infty\cap L^\Phi(\mathbb{P})=L^\Phi(\mathbb{P})_+$.
\item[(iii)] $L^{\Phi_u}(\mathbb{P})\subset\mathop{\rm cone}\nolimits(\mathcal{A}_u(\alpha))$ where $\Phi_u(t)=-u(-|t|)$ for $t\in[0,\infty)$.
\end{enumerate}
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop: law invariant A} that (i) holds. To show (ii), it suffices to prove the inclusion ``$\subset$''. To this effect, take $X\in\mathcal{A}_u(\alpha)^\infty\cap L^\Phi(\mathbb{P})$ and assume that $\mathbb{P}(X<0)>0$. In this case, we find $\varepsilon>0$ such that $\mathbb{P}(X\leq-\varepsilon)>0$. Set $E=\{X\leq-\varepsilon\}$ and take $a,b\in\mathbb{R}$ such that $u(x)\leq ax+b$ for every $x\in\mathbb{R}$, which exist by concavity. Then, for every $\lambda>0$ we have
\[
\alpha \leq \mathbb{E}_\mathbb{P}[u(\lambda X)] \leq \mathbb{E}_\mathbb{P}[u(\lambda X)\mathbbm 1_E]+\mathbb{E}_\mathbb{P}[u(\lambda X)\mathbbm 1_{\{X\geq0\}}] \leq \mathbb{P}(E)u(-\lambda\varepsilon)+a\lambda\mathbb{E}_\mathbb{P}[X^+]+b.
\]
However, this is not possible because the right-hand side above diverges to $-\infty$ as $\lambda$ goes to $\infty$ due to our assumption on $u$ and to the fact that $X^+$ belongs to $L^1(\mathbb{P})$. As a consequence, $\mathbb{P}(X<0)=0$ must hold. Finally, to show (iii) take an arbitrary $X\in L^{\Phi_u}(\mathbb{P})$ and note that there exists $\lambda\in(0,\infty)$ such that $\mathbb{E}_\mathbb{P}[\Phi_u(\lambda X)]\leq-\alpha$ or equivalently $\mathbb{E}_\mathbb{P}[u(-\lambda|X|)]\geq\alpha$. This is because the two Orlicz functions $\Phi_u$ and $-\frac{1}{\alpha}\Phi_u$ induce the same space. Then, $\mathbb{E}_\mathbb{P}[u(\lambda X)]\geq\mathbb{E}_\mathbb{P}[u(-\lambda|X|)]\geq\alpha$, showing that $X$ belongs to $\mathop{\rm cone}\nolimits(\mathcal{A}_u(\alpha))$.
\end{proof}
\subsection{Test probabilities}
\label{sect: polyhedral A}
Consider a vector $\mathbb{Q}=(\mathbb{Q}_1,\dots,\mathbb{Q}_n)$ of probability measures on $(\Omega,{\mathcal{F}})$ that are absolutely continuous and have bounded Radon-Nikodym derivative with respect to $\mathbb{P}$. For a given vector $\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb{R}^n$ with nonpositive components define the set
\[
\mathcal{A}_\mathbb{Q}(\alpha) := \{X\in L^0(\mathbb{P}) \,; \ \mathbb{E}_{\mathbb{Q}_i}[X]\geq\alpha_i, \ \forall \ i\in\{1,\dots,n\}\}.
\]
The set $\mathcal{A}_\mathbb{Q}(\alpha)$ consists of all the payoffs whose expected value under each of the pre-specified test probabilities is above the corresponding floor. For instance, the test probabilities may be designed based on expert opinions or may correspond to appropriate distortions of the underlying probability measure $\mathbb{P}$. This type of acceptability criterion has been investigated in a pricing context by (Carr et al., 2001). In that paper, the probability measures used to define the acceptance set are called valuation test measures or stress test measures depending on whether the associated floor is zero or not.
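On a finite state space, $\mathcal{A}_\mathbb{Q}(\alpha)$ is a polyhedron cut out by finitely many linear inequalities, so membership reduces to a matrix-vector check. A minimal Python sketch with hypothetical test probabilities and floors (illustrative only; the numbers are not taken from the text):

```python
import numpy as np

# Two hypothetical test probabilities on a 3-state space, with floors alpha
Q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.2, 0.6]])
alpha = np.array([0.0, -1.0])

def acceptable(x):
    """X in A_Q(alpha) iff E_{Q_i}[X] >= alpha_i for every i."""
    return bool(np.all(Q @ x >= alpha))

x = np.array([1.0, -1.0, 0.5])
# E_{Q1}[X] = 0.5 - 0.3 + 0.1 = 0.3 >= 0; E_{Q2}[X] = 0.2 - 0.2 + 0.3 = 0.3 >= -1
assert acceptable(x)
# Scaling keeps this payoff acceptable since both expectations are nonnegative,
# consistent with the description of the recession cone in (ii) below
assert acceptable(100.0 * x)
```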
\begin{proposition}
The set $\mathcal{A}_\mathbb{Q}(\alpha)$ is an acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_\mathbb{Q}(\alpha)\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_\mathbb{Q}(\alpha)^\infty=\{X\in L^0(\mathbb{P}) \,; \ \mathbb{E}_{\mathbb{Q}_i}[X]\geq0, \ \forall \ i\in\{1,\dots,n\}\}$.
\item[(iii)] $\mathop{\rm cone}\nolimits(\mathcal{A}_\mathbb{Q}(\alpha))\cap L^\Phi(\mathbb{P})=\{X\in L^\Phi(\mathbb{P}) \,; \ \mathbb{E}_{\mathbb{Q}_i}[X]\geq0, \ \forall \ i\in\{1,\dots,n\} \,:\, \alpha_i=0\}$.
\end{enumerate}
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop: law invariant A} that (i) holds. To show that (ii) holds, it suffices to prove the inclusion ``$\subset$''. To this end, take an arbitrary $X\in\mathcal{A}_\mathbb{Q}(\alpha)^\infty$. For every $i\in\{1,\dots,n\}$ we must have $\mathbb{E}_{\mathbb{Q}_i}[\lambda X]\geq\alpha_i$ for every $\lambda\in(0,\infty)$. Clearly, this is only possible if $\mathbb{E}_{\mathbb{Q}_i}[X]\geq0$, proving the claim. To establish (iii), it is enough to show the inclusion ``$\supset$''. Hence, take any $X\in L^\Phi(\mathbb{P})$ satisfying $\mathbb{E}_{\mathbb{Q}_i}[X]\geq0$ for every $i\in\{1,\dots,n\}$ with $\alpha_i=0$. We can always find $\lambda\in(0,\infty)$ such that $\mathbb{E}_{\mathbb{Q}_j}[\lambda X]\geq\alpha_j$ for every $j\in\{1,\dots,n\}$ with $\alpha_j\neq0$. This is because $X$ belongs to $L^1(\mathbb{P})$ and thus to $L^1(\mathbb{Q}_j)$ for every $j\in\{1,\dots,n\}$ (with $\alpha_j\neq0$) by our standing assumption on bounded Radon-Nikodym derivatives. This shows that $X$ belongs to $\mathop{\rm cone}\nolimits(\mathcal{A}_\mathbb{Q}(\alpha))$ as well.
\end{proof}
\subsection{Stochastic dominance}
Recall that a random variable $X\in L^0(\mathbb{P})$ with cumulative distribution function $F_X$ dominates a random variable $Y\in L^0(\mathbb{P})$ with cumulative distribution function $F_Y$ in the sense of second-order stochastic dominance whenever for every $t\in\mathbb{R}$ we have
\[
\int_{-\infty}^t F_X(x)dx \leq \int_{-\infty}^t F_Y(y)dy.
\]
In this case, we write $X\succeq_{SSD}Y$. Now, fix $Z\in L^0(\mathbb{P})$ such that $Z^-\in L^1(\mathbb{P})$ and $0\succeq_{SSD}Z$ and define
\[
\mathcal{A}_{SSD}(Z):=\{X\in L^0(\mathbb{P}) \,; \ X\succeq_{SSD}Z\}.
\]
According to this set, a payoff is acceptable precisely when it dominates the reference payoff $Z$ in the sense of second-order stochastic dominance. For instance, $Z$ may represent the terminal value of a pre-specified benchmark portfolio. Note that, by definition, we have $\mathbb{E}_\mathbb{P}[Z]\leq0$. The use of stochastic dominance rules in pricing problems dates back at least to (Levy, 1985).
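For two equally weighted samples of the same size, second-order stochastic dominance reduces to comparing lower partial sums of the sorted outcomes, the discrete counterpart of the Expected Shortfall characterization $\mathop {\rm ES}\nolimits_\alpha(X)\leq\mathop {\rm ES}\nolimits_\alpha(Z)$ for all $\alpha$. A Python sketch (illustrative only):

```python
import numpy as np

def ssd_dominates(x, y):
    """X >=_SSD Y for two equally weighted samples of the same size: every
    lower partial sum of the sorted outcomes of X must dominate that of Y."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return bool(np.all(np.cumsum(x) >= np.cumsum(y)))

x = np.array([-1.0, 0.0, 1.0, 2.0])
z = np.array([-2.0, -1.0, 0.0, 1.0])   # benchmark payoff Z

assert ssd_dominates(np.zeros(4), z)   # Z satisfies the requirement 0 >=_SSD Z
assert ssd_dominates(x, z)             # X belongs to A_SSD(Z)
assert not ssd_dominates(z, x)
```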
\begin{proposition}
The set $\mathcal{A}_{SSD}(Z)$ is an acceptance set such that:
\begin{enumerate}
\item[(i)] $\mathcal{A}_{SSD}(Z)\cap L^\Phi(\mathbb{P})$ is closed with respect to $\sigma(L^\Phi(\mathbb{P}),L^\infty(\mathbb{P}))$.
\item[(ii)] $\mathcal{A}_{SSD}(Z)^\infty\cap L^\Phi(\mathbb{P})=L^\Phi(\mathbb{P})_+$.
\item[(iii)] $L^\infty(\mathbb{P})\subset\mathop{\rm cone}\nolimits(\mathcal{A}_{SSD}(Z))\cap L^\Phi(\mathbb{P})$ whenever $\mathbb{E}_\mathbb{P}[Z]<0$.
\end{enumerate}
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop: law invariant A} that (i) holds. To establish (ii), we only need to show the inclusion ``$\subset$''. To this effect, take an arbitrary $X\in\mathcal{A}_{SSD}(Z)^\infty\cap L^\Phi(\mathbb{P})$ so that $\lambda X\succeq_{SSD}Z$ for every $\lambda\in(0,\infty)$. It is well known that second-order stochastic dominance can be equivalently formulated in terms of Expected Shortfalls. In particular, for every $\alpha\in(0,1)$ we obtain $\lambda\mathop {\rm ES}\nolimits_\alpha(X) = \mathop {\rm ES}\nolimits_\alpha(\lambda X) \leq \mathop {\rm ES}\nolimits_\alpha(Z)$ for every $\lambda\in(0,\infty)$. Then, we must have $\mathbb{P}(X<0)=0$ for otherwise we would find $\alpha\in(0,1)$ such that $\mathop {\rm ES}\nolimits_\alpha(X)>0$, which is impossible due to the above bound. To prove (iii), assume that $\mathbb{E}_\mathbb{P}[Z]<0$ and take an arbitrary $X\in L^\infty(\mathbb{P})$. Let $m\in(0,\infty)$ satisfy $X\geq-m$ and take $\lambda\in(0,\infty)$ such that $\lambda m\leq-\mathbb{E}_\mathbb{P}[Z]$. Then, we have $\mathop {\rm ES}\nolimits_\alpha(\lambda X) \leq \lambda m \leq -\mathbb{E}_\mathbb{P}[Z] \leq \mathop {\rm ES}\nolimits_\alpha(Z)$ for every $\alpha\in(0,1)$. This shows that $X$ belongs to $\mathop{\rm cone}\nolimits(\mathcal{A}_{SSD}(Z))$.
\end{proof}
\end{comment}
\begin{comment}
\section{The Kreps-Yan Theorem}
\label{sect: kreps yan}
In this section we record an abstract formulation of the classical Kreps-Yan Theorem established in Kreps \cite{Kreps1981} and Yan \cite{Yan1980}. This formulation is a minor extension of the general statement obtained by Jouini et al.~\cite{JouiniNappSchachermayer2005}. We refer to Cassese \cite{Cassese2007}, Gao and Xanthos \cite{GaoXanthos2017}, and Rokhlin \cite{Rokhlin2005, Rokhlin2009} for other special formulations of the Kreps-Yan theorem.
\begin{theorem}
\label{theo: kreps yan}
Let ${\mathcal{X}}$ and ${\mathcal{Y}}$ be nonzero real vector spaces equipped with a bilinear map $\langle\cdot,\cdot\rangle:{\mathcal{X}}\times{\mathcal{Y}}\to\mathbb{R}$. Moreover, let $\mathcal{C}\subset{\mathcal{X}}$ and $\mathcal{D}\subset{\mathcal{Y}}$ and assume that the following properties hold:
\begin{enumerate}
\item[(1)] Completeness: For every sequence $(Y_n)\subset\mathcal{D}$ there exist a sequence $(\lambda_n)\subset(0,\infty)$ and $Y\in\mathcal{D}$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$ with respect to $\sigma({\mathcal{Y}},{\mathcal{X}})$.
\item[(2)] Countable separation: There exists a sequence $(Y_n)\subset\mathcal{D}\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{C})))$ such that for every nonzero $X\in\mathcal{C}$ we have $\langle X,Y_n\rangle>0$ for some $n\in\mathbb{N}$.
\end{enumerate}
Then, there exists $Y\in\mathcal{D}$ such that $\langle X,Y\rangle>0$ for every nonzero $X\in\mathcal{C}$.
\end{theorem}
\begin{proof}
By the countable separation property, there exists a sequence $(Y_n)\subset\mathcal{D}\cap(-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{C})))$ such that for every nonzero $X\in\mathcal{C}$ we have $\langle X,Y_n\rangle>0$ for some $n\in\mathbb{N}$. In particular, note that $\langle X,Y_n\rangle\geq0$ for all $X\in\mathcal{C}$ and $n\in\mathbb{N}$ because $(Y_n)\subset-\mathop{\rm bar}\nolimits(\mathop{\rm cone}\nolimits(\mathcal{C}))$. Moreover, by the completeness property, there exist a sequence $(\lambda_n)\subset(0,\infty)$ and $Y\in\mathcal{D}$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$ with respect to the topology $\sigma({\mathcal{Y}},{\mathcal{X}})$. It is immediate to see that $\langle X,Y\rangle>0$ for every nonzero $X\in\mathcal{C}$.
\end{proof}
\smallskip
\begin{remark}
In the general formulation of the Kreps-Yan Theorem obtained by Jouini et al.~\cite{JouiniNappSchachermayer2005} the set $\mathcal{C}$ was assumed to be a convex cone satisfying $\mathcal{C}\cap(-\mathcal{C})=\{0\}$ and $\mathcal{C}-\mathcal{C}={\mathcal{X}}$ and
\[
\mathcal{D}=-\mathop{\rm bar}\nolimits(\mathcal{C})=\{Y\in{\mathcal{Y}} \,; \ \langle X,Y\rangle\geq0, \ \forall X\in\mathcal{C}\}.
\]
Note that the pointedness condition $\mathcal{C}\cap(-\mathcal{C})=\{0\}$ is automatically implied by the countable separation property (regardless of the special choice of $\mathcal{C}$).
\end{remark}
\end{comment}
\begin{comment}
\begin{theorem}
\label{theo: kreps yan}
Let $\mathcal{C}\subset{\mathcal{X}}$ satisfy $-{\mathcal{X}}_+\subset\mathcal{C}$ and assume that the following properties hold:
\begin{enumerate}
\item[(1)] Completeness: For every sequence $(Y_n)\subset\mathop{\rm bar}\nolimits(\mathcal{C})$ there exist a sequence $(\lambda_n)\subset\mathbb{R}_{++}$ and $Y\in\mathop{\rm bar}\nolimits(\mathcal{C})$ such that $\sum_{k=1}^n\lambda_kY_k\to Y$ with respect to $\sigma({\mathcal{Y}},{\mathcal{X}})$.
\item[(2)] Countable separation: There exists a sequence $(Y_n)\subset\mathop{\rm bar}\nolimits(\mathcal{C})$ such that for every nonzero $X\in{\mathcal{X}}_+$ we have $\langle X,Y_n\rangle>0$ for some $n\in\mathbb{N}$.
\end{enumerate}
Then, there exists $Y\in\mathop{\rm bar}\nolimits(\mathcal{C})$ such that $\langle X,Y\rangle>0$ for every nonzero $X\in{\mathcal{X}}_+$.
\end{theorem}
\begin{proof}
Set ${\mathcal{Y}}_+=\{Y\in{\mathcal{Y}} \,; \ \langle X,Y\rangle\geq0, \ \forall X\in{\mathcal{X}}_+\}$ and note that $\mathop{\rm bar}\nolimits(\mathcal{C})\subset{\mathcal{Y}}_+$. Indeed, recall that $-{\mathcal{X}}_+\subset\mathcal{C}$ and take arbitrary $Y\in\mathop{\rm bar}\nolimits(\mathcal{C})$ and $X\in{\mathcal{X}}_+$. Then, $\sup\{\langle -\lambda X,Y\rangle \,; \ \lambda>0\}\leq\sigma_\mathcal{C}(Y)<\infty$, which is only possible if $\langle X,Y\rangle\geq0$. By the countable separation property, there exists a sequence $(Y_n)\subset\mathop{\rm bar}\nolimits(\mathcal{C})$ such that for every nonzero $X\in{\mathcal{X}}_+$ we have $\langle X,Y_n\rangle>0$ for some $n\in\mathbb{N}$. Moreover, by the completeness property, there exist a sequence $(\lambda_n)\subset\mathbb{R}_{++}$ and $Z\in\mathop{\rm bar}\nolimits(\mathcal{C})$ such that $\sum_{k=1}^n\lambda_kY_k\to Z$ with respect to the topology $\sigma({\mathcal{Y}},{\mathcal{X}})$. It is immediate to see that $\langle X,Z\rangle>0$ for every nonzero $X\in{\mathcal{X}}_+$.
\end{proof}
\smallskip
\begin{remark}
Let $\mathcal{C}\subset{\mathcal{X}}$ be a $\sigma({\mathcal{X}},{\mathcal{Y}})$-closed and convex set containing $0$ and take a nonzero $X\in{\mathcal{X}}_+$. A necessary and sufficient condition for the existence of $Y\in\mathop{\rm bar}\nolimits(\mathcal{C})$ such that $\langle X,Y\rangle>0$ is that $\mathop{\rm cone}\nolimits(X)\not\subset\mathcal{C}$. The necessity is clear and the sufficiency follows at once from Lemma~\ref{lem: external representation}.
\end{remark}
\end{comment}
\begin{comment}
\section*{Acknowledgments}
\begin{remark}
The above theorem is stated under the conditions stipulated in Assumption \ref{standing assumption}. In particular, we require that ${\mathcal{S}}\subset{\mathcal{X}}$. It is not difficult to derive a formulation of the Fundamental Theorem without imposing any condition on ${\mathcal{S}}$. In this case, assume that there exists no scalable arbitrage opportunity. We can always find a probability measure $\mathbb{Q}$ that is equivalent to $\mathbb{P}$ and satisfies $\frac{d\mathbb{Q}}{d\mathbb{P}}\in L^\infty(\mathbb{P})$ and ${\mathcal{S}}\subset L^1(\mathbb{Q})$. Let ${\mathcal{X}}=L^1(\mathbb{Q})$ and ${\mathcal{X}}'=L^\infty(\mathbb{Q})$. A direct application of the above theorem (with $\mathbb{Q}$ replacing $\mathbb{P}$) yields the existence of $D_\mathbb{Q}\in L^\infty(\mathbb{Q})$ such that $D_\mathbb{Q}$ is strictly positive with respect to $\mathbb{Q}$ and
\[
\sup_{X\in\mathcal{M}}\{\mathbb{E}_\mathbb{Q}[D_\mathbb{Q} X]-\pi(X)\} < \infty.
\]
As a result, the random variable $D_\mathbb{P}=\frac{d\mathbb{Q}}{d\mathbb{P}}D_\mathbb{Q}$ satisfies the following properties:
\begin{enumerate}
\item[(1)] $D_\mathbb{P} X\in L^1(\mathbb{P})$ for every $X\in{\mathcal{S}}$,
\item[(2)] $D_\mathbb{P}$ is strictly positive with respect to $\mathbb{P}$,
\item[(3)] $\sup\{\mathbb{E}_\mathbb{P}[D_\mathbb{P} X]-\pi(X) \,; \ X\in\mathcal{M}\}<\infty$.
\end{enumerate}
In other words, $D_\mathbb{P}$ is a pricing density with respect to $L^0(\mathbb{P})$ that is strictly consistent with $L^0(\mathbb{P})_+$.
\end{remark}
\end{comment}
\section{Conclusions}
We established a version of the Fundamental Theorem of Asset Pricing in incomplete markets with frictions where agents use general acceptance sets to define good deals based on their individual preferences. The basic result states that the absence of scalable good deals is equivalent to the existence of strictly consistent price deflators. This extends and sharpens the existing versions of the Fundamental Theorem in the good deal pricing literature and allows us to derive the appropriate version of superreplication duality. Even though our focus is on one-period models, we had to cope with technical challenges as the standard techniques used in arbitrage pricing (changes of numeraire, exhaustion arguments) break down in the presence of general acceptance sets. The new concepts and strategies developed in the paper are meant to be the building blocks for the construction of a complete multi-period theory of good deal pricing.
"Loyal Like Sid & Nancy" is a song by American indie pop band Foster the People from their third studio album, Sacred Hearts Club (2017). It was released on June 30, 2017, as the album's second single.
Background and release
Mark Foster revealed this was the song from Sacred Hearts Club that took them the longest to finish: "The music went through a lot of iterations before we finally settled on a three act play sort of format. But I went particularly insane writing the lyrics to that one. Also, it was important to me that the vocal delivery was right. The vocal needed to be sensual to offset the aggression of the lyrical message and the beat." He also mentioned the political message of the song's lyrics as another struggle in getting it done: "The political message posed another challenge. Especially when it came to touching on issues like the murder of Eric Garner and Black Lives Matter. And the new US policy on accepting refugees. It was a delicate dance to get these points across in the right way." The title of the song refers to the tragic relationship between the Sex Pistols' Sid Vicious and his girlfriend, Nancy Spungen.
The song was released as the second single from the album on June 30, 2017 through many streaming services, including Spotify. It was also made available as a pre-order track for whoever purchased the album on iTunes.
Composition
"Loyal Like Sid & Nancy" is an electronic song with hip hop influences. It was composed by the band's members Mark Foster and Isom Innis, the latter of whom also produced it. Innis said the song was originally an "atonal dance track" and was later rearranged by Foster: "Mark took it in the studio, added a chord progression, arranged a song that was really meant to be in the dance world. And that's when it started to transform." The last bridge of the song was inspired by Gene Wilder's memorable quotes from Willy Wonka & the Chocolate Factory. Foster revealed that the spoken-word piece had different lyrical variations, including samples from "Pure Imagination" that were later scrapped.
Vote November 6, 2018…Because You Can, Because It Matters
Vote November 6, 2018. This election is being called the most important in a generation. The future of the balance of power in Washington, D.C. will be decided in the House of Representatives and the United States Senate. But for many Americans, it will be the local and Statewide issues that will impact them directly. So it is with Los Angeles County and Long Beach voters.
There are five ballot measures on the Vote November 6 slate:
City Auditor's Authority – Measure AAA
Three-term limit on Mayoral and City Council service – Measure BBB
Ethics Commission – Measure CCC
Citizen Redistricting Commission – Measure DDD
Hotel Workplace Requirements and Restrictions – Measure WW
The Long Beach City Clerk has partnered with the Los Angeles County Registrar-Recorder/County Clerk's office to ensure that the Vote November 6 elections go off just fine. PalacioMagazine.com spoke by phone with Long Beach Assistant City Clerk Allison Bunma about this election, deadlines, and the work that is going into ensuring that every vote counts.
Allison Bunma, Assistant City Clerk, is a Certified Municipal Clerk. Ms. Bunma's career has led her to serve the City of Long Beach for over 27 years; the last 21 with the City Clerk's office. Bunma oversees the legislative business of the City which includes working with 9 city council offices, the Mayor, City Auditor, City Prosecutor, and City Attorney. She also oversees the City of Long Beach's thirty-five plus Boards and Commissions.
More for Vote November 6
If you can't make it to your polling place on Tuesday, November 6th, don't worry: there are ten locations open the two weekends before the November 6th General Election (October 27-28 and November 3-4). You don't need to bring anything with you, but the County Clerk does recommend having your Sample Ballot booklet. Additionally, there is no restriction on where to go; you can visit any Weekend Early Voting site. To find these locations, visit HERE
Important Things to Know Before Arriving:
These locations are also drop-off locations. If you already have your Vote by Mail ballot you do not need to wait in line.
You will not be using the ink-a-vote system used at a polling place. You will fill in your selections on a Vote by Mail ballot.
If you are in line before 4 pm you will be able to vote.
If you missed the registration deadline for this election you will still be able to vote. Under California Election Law, Conditional Voter Registration (CVR) allows a prospective voter to conditionally register and cast a provisional ballot.
Wait, There's More for Vote November 6
New Citizen Eligibility to Register and Vote: October 23-November 6, 8:00 P.M. Election Day- A new citizen is eligible to register and vote at the office of, or at another location designated by, the county elections official at any time beginning on the 14th day before an election and ending at the close of polls on election day
Certified List of Write-In Candidates: October 26- Suggested last date for Secretary of State to prepare and send to affected county elections officials a certified list of write-in candidates showing the name of every write-in candidate eligible to receive votes within the county at the General Election, their address and the offices to which they seek election. This list will be mailed to each person in the affected offices.
Polling Places – Publication: October 30- Not later than this date, a list of polling places for each precinct shall be published once in a newspaper of general circulation within the county.
Emergency Vote by Mail: October 31 (W)-November 6 (Tu) Election Day- Between these dates, any voter may apply for a Vote By Mail Ballot if conditions require his or her absence from the precinct on election day. The voter may designate an authorized representative to pick up and return the ballot.
Election Day: November 6, 2018- Polls open 7:00 a.m., close 8:00 p.m. At 8:00 pm, the staff will announce outside the polling place that the poll is now closed and allow voters in line by 8:00 pm to vote, but will make sure to identify who the last voter will be. There are many steps to closing a poll, but essentially poll workers need to repack all supplies, count signatures, seal ballots, turn off and pack up the PRB machine, and complete the official ballot statement to ensure that all ballots are accounted for.
Election Day: November 6, 2018- Vote by Mail Ballots Returned – 8:00 P. M. Last day for Vote By Mail ballots to be received or turned in personally by the voter to the county elections official's office or at any polling place in the county. An authorized representative may return the voted ballot under specified conditions. Any Vote By Mail ballot cast under this division shall be timely cast if it is received by the voter's elections official via the United States Postal Service or a bona fide private mail delivery company no later than three days after election day in addition to the provisions set forth in E. C. 3020, Sections 1 and 2.
Provisional ballots: Provisional ballots are issued at the polls when a voter's name is not listed on the poll roster. Provisional ballots are sealed in special envelopes at the polls and must be individually researched and verified at the LA County Registrar-Recorder before ballots are counted or rejected, in accordance with election laws. Once a provisional voter's eligibility to vote is verified, the ballot is then counted.
Drop off Vote by Mail Ballots
If you find yourself wanting to drop off your Vote November 6 by Mail ballot instead of mailing it, here are the seven locations for the City of Long Beach:
Bay Shore Neighborhood Library located at 195 Bayshore Ave, Long Beach
Bret Harte Neighborhood Library, 1595 W. Willow St., Long Beach
Burnett Neighborhood Library at 560 E. Hill St, Long Beach
El Dorado Neighborhood Library at 2900 Studebaker Rd, Long Beach
Michelle Obama Neighborhood Library at 5870 Atlantic Ave, Long Beach
Cal State Long Beach at 1212 N Bellflower Blvd., Long Beach
Long Beach City Hall at 333 W. Ocean Blvd., Long Beach
Information Resources for Vote November 6, 2018
Official government information regarding all the various ballot measures along with candidates can be found at the following websites:
City of Long Beach City Clerk HERE
Los Angeles County Registrar-Recorder/County Clerk LAVote.net
California Secretary of State Election Information HERE
Then, there are the media resources. In the past week, we've come across the following:
From KPCC Southern California Public Radio: The Voter Game Plan
On the front page of the LBPost.com, you can find another great guide: CalMatters 2018 Election Guide
Great analysis of Long Beach Ballot Measure BBB- Three-term limit on Mayoral and City Council Service by Greggory Moore
Photo by alphabunny_photos
Photo by C x 2
"Oh, I have seen the David," sings Guy Clark. "Seen the Mona Lisa, too. And I have heard Doc Watson sing 'Columbus Stockade Blues.' " My own favorite Doc Watson moment is more modest. It came the last time he appeared at the Ann Arbor Folk Festival, as he got ready to sing a song by the Blue Yodeler, Jimmie Rodgers. "This isn't really an old-time song," he said. Then, a trifle disdainfully: "Unless you consider the twenties old-time."
Much can be said (and probably will be, in advance of Watson's January 26 appearance at the Ann Arbor Folk Festival) of Watson's tremendous influence as a guitarist and banjoist — never flashy, he has a hypnotic way of exploring small musical spaces. For me, though, Watson as singer of songs is most compelling. His performances open up a view of American music in which the 1920s do seem fairly close at hand.
The point is not that Watson's repertoire skews toward an older, purer layer of folk music than those of the other southeastern performers who came to prominence during the 1960s folk revival. A working country musician for many years before he ever heard of folk music, Watson is adept at incorporating new pieces into his repertoire. Not long ago he recorded an entire album of rockabilly songs, and, as with all the very greatest white country musicians, the blues have touched nearly every number he does.
Rather, a Watson performance seems to capture whole a very old way of making music and being a musician. Blind since childhood in Deep Gap, North Carolina, Watson learned to play music on a homemade banjo given him by his father. He played for tips on street corners for a time. On stage Watson is a griot-like figure — a storyteller who carries centuries of cultural memory. His songbag ranges from medieval England to the present day, and his concerts, never the same twice, carry wisdom on top of beauty.
Watson is an icon of the 1960s rediscovery of folk music, and his popularity has never waned since he appeared at the Newport Folk Festival in 1963. I wish the State Department had sent him out on one of those goodwill tours meant to show the world the best of American culture. True, there have been previous warnings of a "last chance" to see the seemingly indestructible Doc Watson. But the man's nearly eighty. Don't miss this one. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,254 |
Q: websocket server minimum requirements I know websocket is a protocol, so it shouldn't depend on any operating system requirement, just as with communication over FTP or HTTP. I found different libraries that use this protocol, but they seem to have requirements for .Net 4+ (no Windows XP), Visual Studio 2015 at least (so at least Windows 7), and often there's no stated requirement... until I try to compile or run the example of a certain library (after installing whatever it takes to compile it) and I get runtime errors about invalid parameters, something not supported, wrong this and wrong that.
After days of these trials and errors, I decided to ask this strange question: is there a way to use this protocol with an older operating system, such as Windows XP or an old version of Linux? I found every detail about browser support, but obviously that only covers the client. Can Node.js for the server, written in JS, be used even with old computers and operating systems?
I cannot find anywhere what are the minimum requirements to write a server listening on a websocket (it's a protocol so it is not too hard to believe I found nothing :/ ) and I need to make deduction by languages or framework requirements (eg. .net 4+, so not Windows XP).
I'd like to do RPC with websocket but I cannot run a WAMP server (https://stackoverflow.com/a/10882808/1315873) on Windows XP with Visual Studio 2010 (it runs perfectly on Windows 7 under Visual Studio 2015).
I can use other languages to accept connections on websocket... the other part of my software is written in .net 3.5 so I need to find something not too hard to call from and to .net 3.5.
Thank you for any help or explanation.
A: The only system requirement for writing a webSocket server is that you have TCP and can set up a TCP server, listening for incoming connections. webSocket is just a protocol on top of TCP.
webSocket connections are technically initiated with an HTTP request, but the part of HTTP that you need is extremely simple (just parse a few incoming headers to identify a security credential and the request to upgrade to the webSocket protocol) and once both sides agree on the upgrade, then the protocol is switched to webSocket and HTTP is not used any more on that connection.
What you are likely discovering is that the webSocket libraries you are looking at are themselves built on top of other libraries (such as .NET) which creates a dependency on those other libraries. That is purely a by-product of their implementation, not a requirement of the protocol in any way.
So, yes it is certainly possible to write a webSocket server that has no external dependencies other than a TCP library and that could easily run on Windows XP.
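To illustrate how simple the HTTP part really is: the only non-trivial step of the webSocket handshake is computing the Sec-WebSocket-Accept header from the client's Sec-WebSocket-Key (SHA-1 plus Base64 with a fixed GUID, as specified in RFC 6455). Here's a minimal Python sketch of just that step (not a full server; you'd still need to parse the request headers and do the framing over TCP):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value for a client's key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The worked example from RFC 6455 section 1.3:
# accept_key("dGhlIHNhbXBsZSBub25jZQ==") == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

Nothing here needs anything newer than SHA-1 and Base64, which is why the protocol itself puts no real constraint on the operating system.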
\section{Introduction}
Astroparticle physics is currently experimentally driven and involves
many different existing or planned projects ranging from UHECR
observatories such as the Pierre Auger Observatory~\cite{auger}, to neutrino
telescopes~\cite{nu_review}, as well as ground and space based $\gamma-$ray
detectors operating at TeV and GeV energies,
respectively~\cite{gammarev}. It is clear that GeV-TeV $\gamma-$ray
and neutrino astronomy will prove an invaluable tool to unveil the
sources, and probe into the mechanism, of UHECRs. Even if a putative
source were to produce exclusively UHECRs, photo-pion~\cite{gzk} and
pair production by protons on the cosmic microwave background (CMB)
would lead to guaranteed secondary photon and neutrino fluxes that
could be detectable. Furthermore, spectra, power and sky distributions
of both primary UHECRs and secondary $\gamma-$rays and neutrinos
depend on the poorly known large scale cosmic magnetic fields.
It is, therefore, desirable to have a numerical tool that can treat
the interface between UHECR, $\gamma-$ray and neutrino astrophysics, and
large scale magnetic fields. To this end,
we have recently merged our Monte Carlo code for simulating
three dimensional propagation of UHECRs in a structured, magnetized
Universe~\cite{Sigl:2004yk}
with a one-dimensional transport code that solves electromagnetic (EM)
cascades and neutrino propagation~\cite{Lee:1996fp}.
We discuss the limitations due to the one-dimensional approximation
and implement a procedure to test the resulting uncertainty of the EM
cascade predictions for the observable fluxes.
With the present paper, we release a public version of this code which
we hope will be useful for the cosmic ray, $\gamma-$ray and neutrino
communities.
In the following, we present the relevant interactions and propagation
phenomena taken into account, and the propagation algorithms
applied in CRPropa. We also present a few examples of how to use
the code in practice. The numerical package
and its detailed documentation are available for downloading
on the CRPropa website, {\tt http://apcauger.in2p3.fr/CRPropa}.
We use natural units, $c=\hbar=1$ throughout this paper.
\section{Propagation algorithms}
UHECRs are injected at specified sources, and propagated
step-by-step in either a one- or a three-dimensional environment. The
trajectories are regularly sampled, or recorded only at specific
locations (e.g. at a given distance from a source, or at an
``observer'' point). Each propagation step consists of integrating the
Lorentz equations, and computing the interactions and possibly the
secondaries generated by those interactions.
In the three-dimensional case, a ``simulation box'' is defined and
periodic boundary conditions are assumed.
When deflections are taken into account, cosmological redshifts cannot
be computed,
because the propagation time until the particle reaches the observer
is not known beforehand. Therefore, redshift evolution is only
accounted for in the 1D version of the package. The concordance
cosmology is used for which,
assuming a flat Universe,
the Hubble rate $H(z)$ at redshift $z$
in the matter dominated regime, $z\mathrel{\mathpalette\fun <} 10^3$, is given by
\begin{equation}\label{cosmo}
H(z)= H_0
\left[\Omega_{\rm m}(1+z)^3+\Omega_{\Lambda}\right]^{1/2}\,.
\end{equation}
The parameters $\Omega_{\rm m}$ and $\Omega_{\Lambda}$ can be freely
chosen, their standard values being $\Omega_{\rm m}=0.3$,
$\Omega_{\Lambda}=0.7$, and $H_0=h_0\,100~{\rm km}~{\rm s}^{-1}~{\rm
Mpc}^{-1}$ with $h_0=0.72$.
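As a numerical illustration of Eq.~(\ref{cosmo}) with the default parameters, consider the following minimal sketch (ours, for orientation only; it is not part of the CRPropa package):

```python
import math

def hubble_rate(z, h0=0.72, omega_m=0.3, omega_lambda=0.7):
    """Hubble rate H(z) in km/s/Mpc for a flat universe,
    valid in the matter dominated regime z <~ 1e3."""
    H0 = 100.0 * h0  # km/s/Mpc
    return H0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

# At z = 0 this reduces to H0 = 72 km/s/Mpc for the standard parameters.
```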
The general principle of the simulations is shown in
Fig.~\ref{crp_graph}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{crp_graph.eps}
\caption{\label{crp_graph} Principle of the propagation
algorithm. This scheme applies to all configurations.}
\end{center}
\end{figure}
\subsection{Nucleon Interactions}
The most famous interaction of nucleons with the low-energy photon
backgrounds is pion production, which generates the GZK feature. In
order to handle pion production, we use the event generator
SOPHIA~\cite{sophia}, which has been explicitly designed to study
this phenomenon and uses the particle production
cross-sections measured in accelerators. We have also augmented the SOPHIA
package for interactions with a low energy extragalactic background
light (EBL) with a general energy distribution. SOPHIA allows one to
determine the distribution of the stable particles generated by an
interaction with a low-energy photon.
Pair production by protons (PPP) on the CMB, also known as
Bethe-Heitler process, is taken into account as a continuous energy loss whose
rate we evaluate following the expressions in
Refs.~\cite{Blumenthal:1970nn,chodorowski}.
For the spectrum of the pairs we exploit the fact that Bethe-Heitler
and triplet pair production, $e\gamma_b\to ee^+e^-$, are analogous
electromagnetic processes, their cross sections and inelasticities
converging for relativistic pairs. Fig.~2 of Ref.~\cite{mastichiadis}
then shows that the spectrum of electron-positron pairs (hereafter
simply referred to as electrons) generated by a proton of energy $E$
can be approximated by a power-law energy distribution $dn/dE_e\propto
E_e^{-7/4}$. Kinematics implies that this power law holds for
$E_{\rm min}\leq E_e\leq E_{\rm PPP}$, where the minimal
and maximal energies are given by~\cite{Lee:1996fp}
\begin{eqnarray}
E_{\rm PPP}&\simeq&\frac{4E^2\varepsilon}{4E\varepsilon+m_p^2}
\simeq\frac{4.5\times10^{15}\left(\frac{E}{{\rm EeV}}\right)^2
\left(\frac{\varepsilon}{{\rm meV}}\right)\,{\rm eV}}
{4.6\times10^{-3}\left(\frac{E}{{\rm EeV}}\right)
\left(\frac{\varepsilon}{{\rm meV}}\right)+1}\nonumber\\
E_{\rm min}&\simeq&\frac{m_e^2}{8\varepsilon}\simeq3.3\times10^{13}\,
\left(\frac{\varepsilon}{{\rm meV}}\right)^{-1}\,{\rm eV}
\,.\label{E_ppp}
\end{eqnarray}
In Eq.~(\ref{E_ppp}), $m_p$ and $m_e$ are the proton and electron masses,
respectively, $\varepsilon$ is the low energy target photon energy, and
the approximation for $E_{\rm min}$ holds for
$m_e m_p\mathrel{\mathpalette\fun <} 4E\varepsilon\mathrel{\mathpalette\fun <} m_p^2$.
The average electron energy is then $\overline{E_e}=
\int^{E_{\rm PPP}}_{E_{\rm min}}dE_e E_e E_e^{-7/4} /
\int^{E_{\rm PPP}}_{E_{\rm min}}dE_e E_e^{-7/4}
\simeq3\,E_{\rm min}^{3/4}E_{\rm PPP}^{1/4}$ which is indeed much smaller
than the primary proton energy $E$. From Eq.~(\ref{E_ppp}), the
inelasticity $K\equiv\overline{E_e}/E$,
whose precise energy dependence can be found in Ref.~\cite{chodorowski}, for
$m_e m_p\la4E\varepsilon\mathrel{\mathpalette\fun <} m_p^2$ can thus be approximated by
\begin{eqnarray}
K(E\varepsilon)&\sim&\frac{3}{2^{7/4}}
\frac{m_e^{3/2}}{\left(E\varepsilon\,m_p\right)^{1/2}}\label{K}\\
&\simeq&3.4\times10^{-4}\left(\frac{E}{{\rm EeV}}\right)^{-1/2}
\left(\frac{\varepsilon}{{\rm meV}}\right)^{-1/2}\,,\nonumber
\end{eqnarray}
This is consistent with Figs.~1 and~2 in Ref.~\cite{Mastichiadis:2005nj}.
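For orientation, the kinematic quantities in Eqs.~(\ref{E_ppp}) and~(\ref{K}) can be checked numerically with a short sketch (ours, not part of the CRPropa code); it reproduces the quoted benchmark values for a $10^{18}\,$eV proton on a meV background photon:

```python
M_E = 0.511e6  # electron mass [eV]
M_P = 0.938e9  # proton mass [eV]

def e_ppp_max(E, eps):
    """Maximal pair energy E_PPP; all energies in eV."""
    return 4.0 * E**2 * eps / (4.0 * E * eps + M_P**2)

def e_ppp_min(eps):
    """Minimal pair energy, valid for m_e m_p <~ 4 E eps <~ m_p^2."""
    return M_E**2 / (8.0 * eps)

def inelasticity(E, eps):
    """Approximate PPP inelasticity K = E_e_mean / E."""
    return 3.0 / 2.0**1.75 * M_E**1.5 / (E * eps * M_P) ** 0.5

# For E = 1 EeV on a 1 meV photon this gives
# e_ppp_max ~ 4.5e15 eV, e_ppp_min ~ 3.3e13 eV and K ~ 3.4e-4,
# matching the numbers quoted in the text.
```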
For our purposes,
we are not sensitive to the lower kinematic limit since the total energy
produced $\propto\int^{E_{\rm PPP}}_{E_{\rm min}}dE_e E_e E_e^{-7/4}
\simeq4E_{\rm PPP}^{1/4}$ is insensitive to $E_{\rm min}$ as long as
$E_{\rm min}\ll E_{\rm PPP}$, but rather is dominated by the highest energies.
As a consequence, the total proton energy loss rate due to
pair production is dominated by the highest energy electrons close to
$E_{PPP}$. However, because the production cross section of these
highest energy electrons is much smaller than the one
for the more numerous lower energy electrons, the average inelasticity
Eq.~(\ref{K}) is nevertheless small, below $10^{-3}$ everywhere above
the pair production threshold. The spectrum
and maximal energy of the pairs will be important for
the synchrotron spectrum emitted by these electrons in an EGMF of strength $B$
which peaks at
$\simeq6.8\times10^{11}\,(E_e/10^{19}\,{\rm eV})^2(B/0.1\,\mu{\rm G})\,$eV.
Nucleons can be followed down to $10^{17}\,$eV with CRPropa, below which
interactions become negligible.
\subsection{Secondary Electromagnetic Cascades and Neutrinos}
The secondary neutrinos from pion production of nucleons are propagated
in straight lines assuming no energy losses except redshift effects.
All the EM products of these interactions are evolved using
an EM cascade code based on Ref.~\cite{Lee:1996fp}. The photons and
pairs are followed until either their energy drops below 100 MeV
or they reach an observer. All relevant interactions
with a background photon $\gamma_b$ are taken into account, namely single pair
production (PP), $\gamma\gamma_b\to e^+e^-$, double pair production (DPP),
$\gamma\gamma_b\to e^+e^-e^+e^-$, inverse Compton scattering (ICS),
$e\gamma_b\to e\gamma$, and triplet pair production (TPP), $e\gamma_b\to ee^+e^-$
(see also Ref.~\cite{bs} for a detailed discussion of implemented interactions).
In addition, synchrotron losses of electrons in the (in
general) inhomogeneous EGMF are
taken into account and the resulting lower energy synchrotron photons are
also followed in the subsequent EM cascade.
This module has been applied to EM cascades from discrete magnetized
proton sources in galaxy clusters in Ref.~\cite{Armengaud:2005cr}.
The EM cascades that are followed with the current version of CRPropa
are propagated in straight lines, even in the case of three-dimensional
simulations for UHECRs:
Every time a primary hadron interacts and initiates an EM
cascade, it is assumed that the secondaries propagate
along straight lines and it is checked whether the line of sight crosses
the observer. If this is the case, the EM cascade module is called with the
corresponding propagation distance and the projected magnetic field profile.
Electrons in the EM cascade can of course be deflected in the
EGMF, and we discuss here the validity of this one-dimensional approximation.
In a magnetic field of strength $B$ the synchrotron cooling time
for an electron of energy $E_e$ is given by
\begin{eqnarray}
t_{\rm synch}&=&\frac{E_e}{dE_e/dt}=\frac{6\pi m_e^2}{\sigma_T E_e B^2}
\label{synchro}\\
&\simeq&3.84\,{\rm kpc}\,\left(\frac{E_e}{10^{15}\,{\rm eV}}\right)^{-1}
\left(\frac{B}{\mu\,{\rm G}}\right)^{-2}\,,\nonumber
\end{eqnarray}
where $\sigma_T = 8 \pi \alpha^2 / 3 m_e^2$ is the Thomson cross
section, with $\alpha$ the fine structure constant.
At high energies, in the Klein-Nishina regime the inverse Compton energy loss length is
roughly~\cite{bs}
\begin{equation}
t_{\rm IC}\mathrel{\mathpalette\fun <} 400\,{\rm pc}\,\left(\frac{E_e}{10^{15}\,{\rm eV}}\right)
\quad\mbox{for}\,E_e\mathrel{\mathpalette\fun >} 10^{15}\,{\rm eV}\,.\label{ic}
\end{equation}
At energies $E_e\mathrel{\mathpalette\fun >} 10^{18}\,$eV in Eq.~(\ref{ic}) the
energy loss length is between a factor $\sim30$ and a few hundred smaller
than the numerical value in Eq.~(\ref{ic}) due to contributions from the
universal radio background. For a conservatively large $t_{\rm IC}$ at these energies we use an interpolation of Fig.~12 in~\cite{bs} for the
conservatively low radio background estimate.
For $E_e\mathrel{\mathpalette\fun <} 10^{15}\,$eV,
ICS on the CMB is in the Thomson regime, with an interaction length $\lambda_{\rm IC} \sim
1/(\sigma_T n_{\rm CMB}) \sim 1.2$ kpc, with $n_{\rm CMB}$ the CMB
photon density. The energy lost by an electron
at each interaction is $\delta E_e \sim 4 \epsilon E_e^2/ 3 m_e^2$,
where $\epsilon$ is a typical CMB photon energy. As a consequence, the
energy loss length at energies below $\sim 10^{15}$ eV is:
\begin{equation}
t_{\rm IC} \simeq \frac{3 \lambda_{\rm IC} m_e^2}{4 \epsilon E_e} \sim
400\,{\rm pc}\,\left( \frac{10^{15}\,\rm{eV}}{E_e}\right)\,. \label{ict}
\end{equation}
These length scales as well as the maximal propagation distance must
be compared with the Larmor radius
\begin{equation}
r_L=\frac{E_e}{eB}\simeq1.08\,{\rm pc}\,
\left(\frac{E_e}{10^{15}\,{\rm eV}}\right)
\left(\frac{B}{\mu\,{\rm G}}\right)^{-1}\,.\label{Larmor}
\end{equation}
In order for a one-dimensional treatment of EM cascades
to be a good approximation, the Larmor radius has to be much larger than either
the total propagation length, the IC or the synchrotron
loss lengths. For a given magnetic field, the condition $r_L >
A\times\rm{min}(t_{\rm synch}, t_{\rm IC})$ results in a condition $E_e >
E_c(B,A)$, corresponding to deflections of the pairs by
$\mathrel{\mathpalette\fun <}(10/A)\times 6^{\circ}$. This estimate of the deflection angle is
however conservatively large: in realistic situations, magnetic fields
are inhomogeneous with many reversals along the line of sight,
and the actual deflection angle will be smaller provided the magnetic
field coherence length is smaller than the energy loss length.
The dependence of the ``critical''
energy $E_c$ on $B$ and $A$ is shown in Fig.~\ref{fig:ecrit}:
For $A=10$, corresponding to deflection by less than $\sim6^\circ$,
$E_c$ is determined by the competition between deflections and ICS
for $B\mathrel{\mathpalette\fun <} 300$ pG, or between deflection and synchrotron emission
for $B\mathrel{\mathpalette\fun >} 300$ pG. For $A=100$, corresponding to deflection by less than
$\sim0.6^\circ$, the transition between ICS and synchrotron
emission as dominant losses to be compared with deflection occurs
at $B\simeq20$ pG.
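The comparison of the Larmor radius with the loss lengths of Eqs.~(\ref{synchro})--(\ref{Larmor}) can be made explicit with the following numerical sketch (our rough parameterization for illustration, not code from the CRPropa package; the Klein-Nishina branch uses the conservatively large estimate of Eq.~(\ref{ic})):

```python
def t_synch_pc(E_e, B):
    """Synchrotron cooling length in pc (E_e in eV, B in Gauss)."""
    return 3.84e3 * (1e15 / E_e) * (1e-6 / B) ** 2

def t_ic_pc(E_e):
    """Rough inverse Compton loss length in pc on the CMB."""
    if E_e > 1e15:  # Klein-Nishina regime: conservatively large estimate
        return 400.0 * (E_e / 1e15)
    return 400.0 * (1e15 / E_e)  # Thomson regime

def r_larmor_pc(E_e, B):
    """Larmor radius in pc (E_e in eV, B in Gauss)."""
    return 1.08 * (E_e / 1e15) * (1e-6 / B)

def is_quasi_rectilinear(E_e, B, A=10.0):
    """1D treatment acceptable if r_L > A * min(t_synch, t_IC),
    i.e. pair deflections below roughly (10/A) x 6 degrees."""
    return r_larmor_pc(E_e, B) > A * min(t_synch_pc(E_e, B), t_ic_pc(E_e))
```

Scanning the last function in energy at fixed $B$ and $A$ reproduces the qualitative behaviour of the critical energy $E_c(B,A)$ of Fig.~\ref{fig:ecrit}.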
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{e_crit.ps}
\caption{\label{fig:ecrit} The critical energy $E_c$, below which the $e^{+/-}$
are deflected before cascading to lower energies, as a function of the
order of magnitude of the magnetic field. $E_c$ is obtained with the
parameterization of various timescales given in the text, for $A=10$ (solid
line) and $A=100$ (dashed line). This corresponds to cutting all pairs
being deflected by more than $\sim6^\circ$ or $0.6^\circ$, respectively.
Note that the jump around $\simeq3\times10^{-10}\,$G and $\simeq2\times10^{-11}\,$G, respectively, is due to the transition
from ICS to synchrotron emission (at large fields) as dominant energy
loss.}
\end{center}
\end{figure}
It turns out that whenever $E_c(B,A)\la10^{15}\,$eV, the $\gamma-$ray
flux from deflected pairs is sub-dominant. This is simply due to the
fact that the pair flux at energies $E_e\la10^{15}\,$eV is suppressed
compared to the $\gamma-$ray flux which has a much larger interaction length
and piles up below the pair production threshold. The $\gamma-$ray
flux from deflected pairs can only be important if $E_c(B,A)\gg10^{15}\,$eV
which, from Fig.~\ref{fig:ecrit}, requires that synchrotron emission dominates
the losses of the deflected pairs. In this case, a significant fraction
of the energy flux going into pairs is deflected more than
$\sim(10/A)\times 6^{\circ}$, thus modifying
the $\gamma-$ray point flux at energies
\begin{equation}\label{eq:e_synch}
E_{\gamma} \mathrel{\mathpalette\fun <} 2.2\times 10^{8} \,{\rm eV} \left(\frac{E_c(B,A)}{\rm
EeV} \right)^2 \left( \frac{B}{\rm nG}\right)\,.
\end{equation}
In the following we will confirm these expectations with numerical
simulations.
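For convenience, the scaling of Eq.~(\ref{eq:e_synch}) can be written as a one-line sketch (ours, illustrative only):

```python
def e_gamma_modified_eV(E_c_EeV, B_nG):
    """Photon energy (eV) below which the point flux may be modified:
    E_gamma <~ 2.2e8 eV * (E_c / EeV)^2 * (B / nG)."""
    return 2.2e8 * E_c_EeV**2 * B_nG

# For instance, E_c = 1 EeV in a 1 nG field gives E_gamma <~ 2.2e8 eV.
```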
Within CRPropa, the parameter $A$ can be chosen by the user,
and the local contribution of electrons with energy $E_e
< E_c(B,A)$ to the $\gamma$-ray flux can be switched on or off: This
allows to estimate the uncertainty in the $\gamma$-ray flux arriving
within a certain angle $\sim(10/A)\times 6^{\circ}$ from a point
source due to the 1D approximation. An example is shown in
Fig.~\ref{fig:cascade_cut}, where the computed $\gamma$-ray fluxes
from a single proton source located at 100 Mpc from the Earth in
a uniform magnetic field of amplitude 100 pG are compared with and
without cutting the charged component of the EM cascade deflected by more
than $6^\circ$ and $0.6^\circ$, respectively.
For the flux arriving within $6^\circ$ in Fig.~\ref{fig:cascade_cut},
$E_c\simeq3\times10^{14}\,$eV,
see Fig.~\ref{fig:ecrit}, and indeed a discernible but still modest,
$\sim30\%$, modification appears only for $E_{\gamma} \mathrel{\mathpalette\fun <} 0.1$ TeV, where
the photon energy flux becomes comparable to the pair energy
flux around $E_c$.
For the flux arriving within $0.6^\circ$ in Fig.~\ref{fig:cascade_cut},
$E_c\simeq2\times10^{19}\,$eV, see Fig.~\ref{fig:ecrit}, and by the above
argument and Eq.~(\ref{eq:e_synch}) we expect the photon flux to be significantly
modified below $\sim10\,$TeV. Indeed, at these energies the flux is
reduced by a factor $\sim5$.
In the case of comparatively strong magnetic fields of order $\mu$G,
typical of galaxy clusters, $E_c\la10^{18}\,$eV, and $\gamma-$ray
point fluxes arriving within $\sim0.6^\circ$ should only be modified
significantly for $E_\gamma\la100\,$GeV. Also
note that for the production of secondaries inside a magnetized
region where the parent UHECR particles are isotropically
distributed, the full three dimensional treatment of the EM
cascade is not necessary
because for any $e^-$ that is deflected away from the line of sight
there is always another $e^-$ that is deflected into the line of sight.
In realistic situations, the magnetic fields are highly
structured with typical amplitudes of $\sim \mu G$ in the clusters,
and $\mathrel{\mathpalette\fun <} 10$ pG in the voids. The above discussion shows that in
all these cases CRPropa can estimate the minimal $\gamma-$ray flux
arriving within an angle $(10/A)\times 6^{\circ}$ from the source.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{cascade_cut.ps}
\caption{\label{fig:cascade_cut} Flux of secondary neutrinos (all
flavors are added) and $\gamma$-rays from a single source of UHE
protons with injection spectrum $\propto E^{-2}$ up to
$5\times 10^{20}$ eV, computed assuming a straight line propagation
for the protons but taking into
account the influence of a 100 pG magnetic field on the EM
cascades. The red line is the flux computed by ``cutting'' the
local $e^{+/-}$ flux below $E_c(B,A)$ (see text), for $A=10$
(continuous line) and $A=100$ (dashed line). This corresponds to
pair deflections of $6^{\circ}$ or $0.6^{\circ}$, respectively.}
\end{center}
\end{figure}
\subsection{Background Photon Spectra and their Evolution}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig1.ps}
\caption{\label{fig1}Models implemented for the low energy photon background at
zero redshift. The IRB consists basically of a peak in the far infrared around
$100\,\mu$m dominated by dust and a peak in the near infrared dominated by stars.}
\end{center}
\end{figure}
Fig.~\ref{fig1} shows the EBL energy distributions that have been
implemented. The most important is the CMB.
For the infrared background (IRB) we implemented three
distributions, a low and a high version of Franceschini
et al.~\cite{Franceschini:2001yg}
which differ roughly by a factor 5, as well as the one by Primack
et al.~\cite{Primack:2005rf}. The low Franceschini et al. and the
Primack et al. backgrounds are consistent with recent upper limits
from blazar observations in TeV $\gamma-$rays by HESS~\cite{Aharonian:2005gh}.
For a recent review of the IRB see for example Ref.~\cite{Lagache:2005sw}.
The IRB has a significant influence on EM cascades only
around the threshold for pair production, i.e. between a few Tev and
$\simeq100\,$TeV. At higher energies, the $\gamma-$ray flux is suppressed by
interactions with the CMB and, above $\simeq10^{19}\,$eV, by interactions with
with the radio background. At energies below
$\sim\,$TeV, the Universe acts as a calorimeter and the total photon
flux is proportional to the total EM energy injected above $\sim\,$PeV
with a rather universal shape~\cite{Coppi:1996ze}.
Although its photon number
density $\simeq2\,{\rm cm}^{-3}$ is a factor $\simeq200$ smaller than for
the CMB, below the GZK-cutoff and above $\sim10^{17}\,$eV the IRB
can significantly reduce the nucleon mean free path for pion production.
This can be important
for secondary photon and neutrino~\cite{Stanev:2004kz,Bugaev:2004xt}
production, especially for a steep primary injection spectrum and/or
strong redshift evolution.
For the universal radio background (URB) we use a weak and a strong version
based on Ref.~\cite{Protheroe:1996si} and on observations~\cite{obs_radio}.
The URB is mostly important for EM cascades above $\sim10^{18}\,$eV where
it can inhibit cascade development due to the resulting small pair production
lengths, especially for fast synchrotron losses of electrons in the
presence of strong magnetic fields.
Since URB photons can give rise to pion
production only above a few times $10^{22}\,$eV, where the interaction rate
is essentially proportional to the total EBL photon density which is dominated
by the CMB by a factor $\sim10^3$, see Fig.~\ref{fig1}, the URB is negligible
for pion production. The same applies to pair production by protons.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{proton_rate.ps}
\caption{\label{fig2} Proton energy loss length for pair production on
the CMB (continuous line), interaction length for pion production on
the CMB (dashed line) and on the Primack et al. IRB (dotted line) at
$z=0$. The irregularities in the dotted curve are due to the
piecewise power law fits of the Primack et al. IRB.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{photon_rate.ps}
\caption{\label{fig3}Photon interaction length at $z=0$ on the EBL
consisting of the CMB, the Primack
et al. IRB, and the strong URB version. Dotted line: Interaction
length in the CMB only at $z=0$.}
\end{center}
\end{figure}
Figs.~\ref{fig2} and~\ref{fig3} show interaction and energy loss lengths
for protons and interaction lengths of photons, respectively, and their
dependence on EBL models at zero redshift. This demonstrates that the
IRB becomes important
for pion production by protons below the GZK cutoff and for pair production
by photons below the threshold in the CMB at $\sim10^{14}\,$eV. It also shows
that the URB tends to dominate pair production by photons above $\sim10^{19}\,$eV.
The redshift evolution of the CMB is
trivial. The redshift evolution of the radio and infrared
distributions is more complicated: Ultra-relativistic particles of energy
$E$ injected at redshift $z^\prime$ with a rate per energy and comoving volume
$\Phi(E,z^\prime)$ result in a {\it physical} number density per energy
at redshift $z$ given by
\begin{eqnarray}
n(E,z)&=&(1+z)^3\int_z^\infty
dz^\prime\frac{4\pi\Phi\left[E_i(E,z,z^\prime),z^\prime\right]}
{(1+z^\prime)H(z^\prime)}\nonumber\\
&&\hskip2cm\times\frac{dE_i}{dE}(E,z,z^\prime)\,,\label{bkg1}
\end{eqnarray}
where it is assumed that the particle loses energy continuously such that
its injection energy can be computed analytically, $E_i(E,z,z^\prime)$.
Interactions of the low energy EBL photons, whose differential number densities
we will denote by $n_b(\varepsilon,z)$ in the following to distinguish
from the high energy particles, can safely be neglected after
recombination, $z\la10^3$, such that $E_i(E,z,z^\prime)=(1+z^\prime)E/(1+z)$.
Eq.~(\ref{bkg1}) then simplifies to
\begin{equation}
n_b(\varepsilon,z)=(1+z)^2\int_z^\infty
dz^\prime\frac{4\pi\Phi\left[(1+z^\prime)\varepsilon/(1+z),z^\prime\right]}
{H(z^\prime)}\,.\label{bkg2}
\end{equation}
By using $|dt/dz|=[(1+z)H(z)]^{-1}$, one can easily see that the total energy
density per comoving volume redshifts as $\int d\varepsilon\, \varepsilon\,
n_b(\varepsilon,z)/(1+z)^3=(1+z)
\int dt\,d\varepsilon_i\,\Phi(\varepsilon_i,z^\prime)/(1+z^\prime)$, as it should be.
For the URB we implemented a nontrivial redshift evolution in the cascade module,
as this can be relevant for EM cascade development. We assume that
$\Phi_{\rm URB}(\varepsilon,z)=\phi_{\rm URB}(\varepsilon)g_{\rm URB}(z)$
factorizes into an energy dependence $\phi_{\rm URB}(\varepsilon)$ motivated
by the observations~\cite{obs_radio} and theoretical estimates~\cite{Protheroe:1996si}
and a redshift dependence given by
\begin{equation}
g_{\rm URB}(z)=10^{1.18z-0.28z^2}\,,
\end{equation}
as in Ref.~\cite{Lee:1996fp}.
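As a minimal sketch (our own helper, not CRPropa code), this factorized evolution reads:

```python
# Redshift factor of the URB evolution assumed in the cascade module:
# Phi_URB(eps, z) = phi_URB(eps) * g_URB(z), with
# g_URB(z) = 10**(1.18 z - 0.28 z**2).

def g_urb(z):
    return 10.0 ** (1.18 * z - 0.28 * z * z)

# g_urb(0) = 1 by construction; the exponent peaks at z = 1.18/(2*0.28) ~ 2.1,
# so the assumed radio activity rises out to z ~ 2 and declines before that.
```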
For the Primack et al. IRB~\cite{Primack:2005rf} we use for simplicity
the differential photon energy distribution evolution
\begin{equation}
n_b(\varepsilon,z)=
\left\{\begin{array}{ll}
(1+z)^2n_b\left(\frac{\varepsilon}{1+z},z=0\right)\,
& \mbox{for $z\leq z_b$}\,,\\
0 & \mbox{otherwise}
\end{array}\right\}\label{trivial_evol}
\end{equation}
which corresponds to instantaneous creation of the background at redshift
$z_b$ with $\Phi(\varepsilon,z^\prime)=H(z_b)n_b[\varepsilon/(1+z_b),z=0]
\delta(z^\prime-z_b)/(4\pi)$ in Eq.~(\ref{bkg2}). It strictly applies to
the CMB which was effectively produced at decoupling, $z_b\sim1100$.
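A minimal sketch of this prescription (the placeholder function stands in for the tabulated $z=0$ spectrum):

```python
# Sketch of Eq. (trivial_evol): a background created instantaneously at z_b
# redshifts adiabatically afterwards and vanishes at earlier times.
# n_b0 is a placeholder for the tabulated z = 0 differential spectrum.

def n_b(n_b0, eps, z, z_b):
    if z > z_b:
        return 0.0
    return (1.0 + z) ** 2 * n_b0(eps / (1.0 + z))
```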
For the IRB we assume $z_b=5$. Interaction
lengths $l(E,z)$ and, in case of continuous energy loss processes such as PPP,
energy loss rates $b(E,z)\equiv dE/dt$ then follow simple scaling relations
in redshift~\cite{Bugaev:2004xt},
\begin{eqnarray}
l(E,z)^{-1}&=&(1+z)^3l\left[(1+z)E,z=0\right]^{-1}\nonumber\\
b(E,z)&=&(1+z)^2b\left[(1+z)E,z=0\right]\,.\label{scaling}
\end{eqnarray}
This simplifies implementation in SOPHIA.
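These relations mean that a single tabulation at $z=0$ suffices; the following is an illustrative sketch with placeholder functions, not CRPropa internals:

```python
# Sketch of Eq. (scaling): interaction lengths l and continuous loss rates b
# at redshift z follow from their z = 0 tabulations without re-computation.
# l0 and b0 are placeholders for the z = 0 functions of energy.

def l_at_z(l0, E, z):
    # l(E, z)^-1 = (1+z)^3 * l((1+z) E, z=0)^-1
    return l0((1.0 + z) * E) / (1.0 + z) ** 3

def b_at_z(b0, E, z):
    # b(E, z) = (1+z)^2 * b((1+z) E, z=0)
    return (1.0 + z) ** 2 * b0((1.0 + z) * E)
```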
\subsection{Distributions and Properties of Sources}
Both single sources and realizations of discrete or continuous source
distributions can be used in CRPropa. In the latter case, the distributions can
be selected, for example, to follow the baryon density from a large
scale structure simulation box, and are periodically repeated.
The UHECR particles are injected isotropically around the sources with
a monochromatic or a power-law energy distribution between
a minimal and a maximal energy,
$E_{\rm min}$ and $E_{\rm max}$, respectively:
$$ \frac{dN}{dE_{\rm inj}} \propto E_{\rm inj}^{-\alpha}
\qquad E_{\rm min} \leq E_{\rm inj} \leq E_{\rm max}\,.$$
For each trajectory reaching the observer and being registered, the
source identity $i$ is also registered. This allows one to apply a re-weighting procedure
on the recorded ``events'', in order to vary individual source properties such
as their injection power law index $\alpha_i$
or luminosity $Q_i$. For example, it is most efficient in terms
of CPU time
to inject the UHECRs with a spectral index $\alpha_0 = 1$ at the
sources, that is with a uniform distribution in the logarithm of the
energy. By re-weighting each recorded event by a factor
$ w \propto Q_i E_{\rm inj}^{1-\alpha_i} $, the source $i$ would
contribute with a power $Q_i$ and an effective injection power law index
$\alpha_i$ in all observables constructed from the weighted trajectory
sample.
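The re-weighting described above can be sketched as follows (variable names and the energy range are ours; since $\alpha_0=1$ injection is uniform in $\log E$, i.e.\ $dN/dE\propto E^{-1}$, the compensating weight for a target index $\alpha_i$ is $\propto E^{1-\alpha_i}$):

```python
import numpy as np

# Sketch of the re-weighting trick: inject with alpha_0 = 1 (log-uniform in
# energy), then weight each recorded event so that the weighted sample
# follows dN/dE proportional to E**(-alpha_i), with total power set by Q_i.

rng = np.random.default_rng(0)
E_min, E_max = 1e18, 1e21  # eV, illustrative range

# alpha_0 = 1 injection: uniform in log10(E)
E_inj = 10.0 ** rng.uniform(np.log10(E_min), np.log10(E_max), size=100_000)

def weights(E_inj, alpha_i, Q_i=1.0):
    # log-uniform injection has dN/dE ~ 1/E, so w ~ E**(1 - alpha_i)
    # steepens the weighted sample to dN/dE ~ E**(-alpha_i)
    return Q_i * E_inj ** (1.0 - alpha_i)

w = weights(E_inj, alpha_i=2.6)
```

The same trajectory sample can then be re-used for any $(\alpha_i, Q_i)$ without re-running the propagation.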
\section{Large Scale Structure and Magnetic Fields}
The strength and distribution of the EGMF are currently poorly known, and their
impact on UHECRs is hard to quantify, as demonstrated by the different results
in Refs.~\cite{Sigl:2004yk,dolag}. See also Ref.~\cite{Sigl:2004gi}
for a discussion of
these differences and Ref.~\cite{bo_review} for a review on EGMF. We note that
there are recent observational hints of EGMF as strong as $\sim0.1\,\mu$G on
scales as extended as superclusters~\cite{Xu:2005rb}, as well as
theoretical motivations for such fields~\cite{Medvedev:2005ep}.
Enhanced magnetic fields around large scale structures such as
galaxy clusters together with associated larger EBL densities can lead
to increased production of $\gamma-$rays and neutrinos.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig4a.ps}
\includegraphics[width=0.6\textwidth]{fig4b.ps}
\caption{\label{fig4}A 2D cross section through the relative size and
polarization of the EGMF in linear scaling (top panel) and the relative
baryon density in logarithmic scaling (bottom panel)
in the environment of a galaxy cluster from the simulations of
Refs.~\cite{ryu,miniati}.}
\end{center}
\end{figure}
The EGMF from the large scale structure simulation of Refs.~\cite{ryu,miniati}
has so far been implemented in CRPropa, but any magnetic field model
can be used. Within the public package CRPropa, only a small subgrid
of the simulations from~\cite{ryu,miniati} is provided in order to
allow simple tests. Fig.~\ref{fig4} shows a 2D cross section through
the environment of a galaxy cluster from this simulation. In
this simulation, the magnetic fields follow the baryon density, and in
particular the regions that are filled with sub-$\mu$G fields are
quite extended around the large-scale structures (with a typical extension of
a few Mpc). This is due, in particular, to the fact that magnetic
fields are generated at the LSS shocks within that model. Of course,
the properties of $\gamma$-ray sources associated with UHECR sources as
well as the feasibility of ``charged particle astronomy'' depend
strongly on the magnetic field model~\cite{Sigl:2004gi,dolag}.
Large scale structure simulations usually cover only a small fraction of
today's Universe, typically of order 100 Mpc in linear scale. Since sources
at much larger, cosmological distances can contribute to the fluxes of UHECR
below the GZK cutoff, of photons below $\sim\,$TeV and of neutrinos, the EGMF
and source distributions are periodically continued in the 3D version of the
code.
EGMF with homogeneous statistical properties and power law spectra in
Fourier space (e.g.\ a Kolmogorov spectrum) have also been implemented
in the package.
\section{Simple Applications}
We present here applications of CRPropa that are obtained with
very simple configurations requiring little CPU time. The results
can easily be compared with previous results from the literature.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{traj1d.ps}
\caption{\label{traj1d}Evolution of the energy of nucleons as a
function of propagation distance, for initial energies of 5, 50 or
500 EeV. The thin lines indicate the dispersion induced by the
stochasticity of pion production.}
\end{center}
\end{figure}
Fig.~\ref{traj1d} shows the averages and dispersions of the energy of
nucleons in a one-dimensional simulation, as a function of propagated
distance for various initial energies.
Using SOPHIA automatically enables us to reproduce the
stochasticity of pion production.
Fig.~\ref{secondaries} shows the spectra of secondaries generated
during the one-dimensional propagation of UHECRs from a source
located at 20 Mpc or 100 Mpc from the observer. Note that the
neutrino flux increases with distance to the source, whereas the
photon flux above $\sim10^{14}\,$eV decreases, but the photon flux
below this energy increases. This is because more secondary neutrinos
and EM particles are produced for larger propagation distances, but
EM particles above $\sim10^{14}\,$eV are quickly degraded and cascade
down to sub-PeV energies. A more detailed analysis of
the fluxes of secondaries from a single UHECR source (e.g. the
relative contribution of pair production and pion production on the
$\gamma$-ray flux) can be found in Ref.~\cite{Armengaud:2005cr}. The
study of secondary photons from UHECR sources has also been carried
out in various situations in
Refs~\cite{Gabici:2005gd,Ferrigno:2004am,Rordorf:2004jp,Inoue:2005vz,aharonian}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{secondaries.ps}
\caption{\label{secondaries}Spectrum of secondary photons and
neutrinos (all flavors added) generated by pion and pair production
from a single UHECR source at a given distance. We consider here a
one-dimensional model, with an injection
spectral index $\alpha = 2$ for the UHECRs. A uniform magnetic field
of 0.1 nG is assumed. Note that below $\sim$TeV the $\gamma-$ray
flux would be spread over several degrees and that, as shown in
Fig.~\ref{fig:cascade_cut}, the 1D approximation of the EM cascade
does not significantly affect the accuracy of the $\gamma-$ray flux
arriving within such angles.
The fluctuations at the highest energies are statistical.}
\end{center}
\end{figure}
Fig.~\ref{sourcenu} shows the spectra of secondary neutrinos from a
source located at 20 Mpc from an observer, depending in particular on
the magnetic field effects. It is remarkable that, for a given source
luminosity, the flux of secondary neutrinos is increased by a factor
of more than two due to the enhancement of the UHECR propagation
distance generated by the $\mu$G-level magnetic fields that surround
this source.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{sourcenu_20_magfield.ps}
\caption{\label{sourcenu} Secondary neutrinos (all flavors added)
from a nearby source of
UHECRs with a given luminosity. The flux increases at high energies
both with maximum UHECR acceleration energy and with the strength of
magnetic fields surrounding the source. The
fluctuations at low energy are statistical. The $y$-axis is in
arbitrary units.}
\end{center}
\end{figure}
Fig.~\ref{source_3d} compares the spectral shape of UHECRs
from a source located at 100 Mpc from an observer, depending on the
presence of magnetic fields around the source. If magnetic fields of amplitude
$\sim \mu G$ surround the source over a few Mpc, the observed spectrum
is clearly modified: 1) there is a dispersion in the true propagation
distance, compared to a fixed propagation distance of 100 Mpc.
This reduces the amplitude of the ``bump''; 2) the mean propagation
distance is increased compared to 100 Mpc. This leads to a GZK
cut-off at slightly lower energies.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{source_3d.ps}
\caption{\label{source_3d}UHECR spectrum from a source located at 100
Mpc from an observer, injecting protons with a spectrum $\propto E^{-2}$
up to $E_{\rm max}=10^{21}$ eV. The red curve
is obtained from a full 3-dimensional simulation, where the source is
embedded in a region with $\mu$G fields over a few Mpc.}
\end{center}
\end{figure}
Fig.~\ref{test_pairprod_dip} compares the spectra obtained with CRPropa to the one shown as the red curve of Fig.~14 in~\cite{Berezinsky:2002nc} for a model of cosmologically distributed proton sources with spectral index $\gamma_s=2.6$ and a source evolution parameter $m=2.4$. We see that, for a given model, the spectrum estimates obtained with our Monte-Carlo method and with a direct integration of the transport equations (for~\cite{Berezinsky:2002nc}) agree within a few \%.
The blue and red curves of the lower panel in Fig.~\ref{test_pairprod_dip} show the influence of two numerical parameters on the accuracy of the derived spectrum at the highest energies. The maximum injection energy allowed in the Monte-Carlo has an influence at energies above $10^{20}$ eV, in agreement with results shown, for example, in Fig.~5 of~\cite{Berezinsky:2002nc}. The use of a propagation stepsize of 0.3 Mpc instead of 1 Mpc does not lead to a significant change in the simulated spectrum. Other tests showed that using a propagation stepsize of 5 Mpc instead of 1 Mpc results in a $\sim 10\%$ overestimation of the spectrum in the specific energy range $100 < E < 160$ EeV, and an underestimation of the spectrum at higher energies. A 1 Mpc stepsize is therefore reasonable to reach the typical accuracies required for comparison with current and forthcoming experimental data.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{test_pairprod_dip.ps}
\caption{\label{test_pairprod_dip} Top: Comparison of the spectra obtained with CRPropa to the one found in~\cite{Berezinsky:2002nc} for a model of cosmologically distributed sources. The specific parameters of the model correspond to the red curve of Fig.~14 in~\cite{Berezinsky:2002nc}. We use a propagation stepsize of 1 Mpc and a maximum injection energy of $10^{21}$ eV. Bottom: Relative difference with respect to the curve of~\cite{Berezinsky:2002nc} (black). The error bars are the statistical uncertainties due to the finite number of propagated nucleons. Red: same, for a simulation using a stepsize of 0.3 Mpc. Blue: same, for a simulation using a maximal injection energy of $10^{22}$ eV.}
\end{center}
\end{figure}
\section{Conclusions}
We have presented the first public package to study systematically the
properties of the propagation of UHECRs and their secondaries in a
structured magnetized Universe. We have
detailed the interactions that are already implemented, and presented a
few simple examples obtained directly by running the CRPropa code.
A major advantage of CRPropa is its large modularity, which should
allow various users to implement their own modules, adapted to
specific UHECR propagation models. Many possible upgrades of the
CRPropa package can be considered: This includes the implementation
of non-uniform grids for magnetic field models, of UHE nuclei and secondary
neutrinos and EM particles from their interactions, of inhomogeneous
low energy target photon backgrounds for the UHE nuclei and EM cascade
interactions, and of hadronic interactions with the baryon gas in
dense parts of the large scale structure. Finally, interactions of UHE
neutrinos with relic neutrinos of arbitrary mass and clustering properties
could also be implemented, including the resulting secondary particles.
\ack
FM acknowledges partial support by the Swiss Institute of Technology through a Zwicky
Prize Fellowship. We thank all the people who built the previous
codes from which the development of CRPropa has greatly benefited,
in particular
Martin Lemoine, Gianfranco Bertone, Claudia Isola, and Sangiin Lee. We also thank S\'ebastien Renaux-P\'etel for useful tests.
CRPropa makes use of the public code SOPHIA~\cite{sophia}, and the
TinyXML~\cite{tinyxml}, CFITSIO~\cite{cfitsio} and
CLHEP~\cite{clhep} libraries.
\section*{References}
\section{Introduction}
A fundamental question of stellar evolution theory is which stars end
their lives as supernovae. Current theory for isolated massive stars
makes two basic predictions for SNe. For one, the
zero-age-main-sequence mass (M$_{\rm ZAMS}$) and mass-loss history control
whether a SN occurs, and secondly, for single stars there is a clear
mapping between M$_{\rm ZAMS}$\ and the type of SN
\citep{woosley02,heger03a,dessart11}. In particular, the lower masses
explode with their hydrogen envelopes intact (e.g. II-P, II-L, IIn),
and the most massive stars lose much of their envelopes and explode as
hydrogen deficient SNe (e.g. IIb, Ib/c). However, given the
complexity of the underlying physics, especially binary evolution,
winds, and episodic eruptions, it is unclear whether nature obeys the
same well-delineated mass-dependence.
In fact, the relatively high observed rates of H-deficient SNe
\citep{smith11c} and low upper limits on progenitor masses of Type Ibc
SNe \citep{yoon2012,eldridge2013} imply that binary evolution may
figure prominently in producing the H-deficient SNe. Furthermore,
theory predicts that binary evolution can significantly affect the
mapping between initial stellar mass and SN type
\citep[e.g.,][]{podsiadlowski1992,tutukov1992,nomoto1995,dedonder1998,yoon2010bin,claeys11,dessart11,eldridge2011}.
It is clear that progress in understanding both mass loss and SNe
requires observational constraints linking the progenitor mass with
the eventual SN.
Unfortunately, of the dozens of SNe that have progenitor mass limits,
only 17 have masses measured from a directly detected progenitor (6 of
these 17 overlap this work). \citet{smartt09b} reviewed the mass
distribution of 30 core-collapse SNe progenitors (20 type IIP and 10
others), but only eight of these had measurements beyond an upper
limit. At that point, there were also 4 other nearby SNe progenitors
with full mass constraints
\citep{woosley1988,aldering1994,crockett2008,fraser2010}, bringing the
total to 12. Since 2009, 5 additional SNe have been measured
\citep{maund11,murphy11a,fraser2012,vandyk2012b,maund2013,fraser2013,vandyk2014b},
and some measurements improved \citep{vandyk2012a,maund2014} making
the total 17 measurements.
These mass estimates are based on serendipitous direct imaging of the
progenitor. First, one searches through the HST archive for bright
evolved stars at the SN position. Then, if a star is found, the
endpoints of stellar evolution models which pass through the color and
magnitude of the likely progenitor are used to estimate the star's
initial main sequence mass. If a star is not found, an upper limit on
the progenitor luminosity is measured.
Even with the limited number of measurements available, there is a
hint of a minimum M$_{\rm ZAMS}$\ for explosion and that the least massive
stars explode as SN II-P \citep{smartt09a}. Interestingly, these
measurements also suggested that the maximum mass for SN II-P may be
lower than expected, with some perhaps having been merged binaries
\citep{smartt09a}, although circumstellar dust may also explain the
observations \citep{walmswell2012}. Counter to expectations, some
H-rich SNe (in particular IIn) have been associated with very massive
stars \citep{galyam09,smith11b,smith11c}. While tantalizing, these
initial results are poorly constrained, and even the simple
$\sim$8~M$_{\odot}$ lower mass limit requires more observational
constraints.
While direct imaging of progenitors is the standard method for
progenitor mass estimation, it suffers from a number of limitations.
First among these is the requirement that the precursor imaging
actually exist. The majority of past SNe have neither pre-existing HST
imaging, nor sufficiently accurate astrometry. Consequently, of the
$\sim\!40$ historic SNe within $\sim\!10$ Mpc, only a handful have
identified progenitors. Some future nearby SNe may also lack
precursor imaging due to the limited observations in the HST archive.
The second major limitation is that even when precursor imaging is
available, interpretation of that imaging depends on modeling of the
most uncertain stages of stellar evolution
\citep{gallart05,smartt09a,yoon2010,langer2012,eldridge2013}.
Existing studies estimate the mass of a precursor by fitting endpoints
of stellar evolution models to its color and magnitude; however, an
evolved star's appearance is not well constrained during the final
evolutionary stages. Binarity, mass loss, pulsation, internal mixing,
the formation of dust in stellar winds, and convective instabilities
in shell-burning layers all contribute to systematic and random
uncertainties in such model endpoints. Matching individual endpoints
of stellar evolution models to a single highly-evolved star on the
brink of explosion therefore places weak constraints on the stellar
mass once systematic uncertainties are taken into account.
In this paper, we take a complementary approach that obviates both of
these limitations. We build on a technique developed by several
investigators over the past two decades, starting with measuring ages
for host star clusters
\citep{efremov1991,walborn1993,panagia2000,barth1996,vandyk1999,maiz2004,wang2005,vinko2009,crockett2008},
moving into finding coeval field populations around both SNe and
supernova remnants
\citep[SNRs;][]{badenes2009,gogarten2009,murphy11a,jennings2012}. We
note that, while we have applied this approach to many SNR locations,
historic SNe are less numerous, but more reliable, targets because
they have well-established locations and types, ensuring that every
progenitor mass corresponds to a {\it bona~fide} core-collapse event.
The technique finds the masses of SNe precursors by analyzing the
stellar populations of stars surrounding the SNe. Using
well-established techniques of stellar population modeling, we can
age-date the star formation (SF) event that led to each SN. The
resulting age places strong constraints on the mass of the precursor,
using the well-understood properties of main-sequence stars. As a
result, in cases where the specific stellar population can be
identified, our technique provides a more reliable progenitor age than
direct imaging, as it does not depend on whether the progenitor was a
single star or a binary. Furthermore, the method works even when
there is no imaging prior to the SN, or when the SN position is only
localized to within a few arcseconds. Hence our technique can be
applied to the location of any historic SN that has sufficiently
high-resolution and deep imaging to measure resolved stellar
photometry of the upper main sequence and/or He-burning sequence.
In \S~2, we discuss our sample, the data analyzed in our study, detail
our analysis technique, and demonstrate its efficacy in test cases.
In \S3, we provide our results in the form of age distributions, a
table of masses, and a table of probability distributions for each
progenitor. Finally, \S~4 gives a summary of our findings.
\section{Data}
\subsection{Sample}
We selected all positively-typed historic core-collapse SNe within $8
{\rm{\,Mpc}}$ that also have $\sim$arcsecond accuracy in their positions from
the Asiago Supernova Catalog \citep{barbon1999}; higher positional
accuracy is not needed, due to the large size of the aperture within
which the CMD is constructed.\footnote{One arcsec corresponds to 5 pc
for every Mpc of distance.} We have shown in \citet{murphy11a} that
our method provides results consistent with direct progenitor
detections \citep[as confirmed by][]{vandyk2013} in galaxies as
distant as 8~Mpc. Beyond this limit, our method is not tested, and
therefore we have confined our sample to be within this distance.
We cross-referenced the SNe catalog with the HST archive and
identified SNe that have HST imaging in at least 2 broadband filters
in ACS, WFPC2, or UVIS. Even relatively shallow data can provide
constraints given that, for the most massive $\sim$50~M$_{\odot}$
progenitors, the surrounding populations are likely to have other very
massive stars with M$_{\rm V}{<}{-}$5. We found 22 SNe that match
these requirements. Another SN, 2011dh, has already been analyzed by
our technique in \citet{murphy11a}. Of our sample there are two that
are possibly SN impostors. These are SN1954J \citep{smith2001} and
SN2002kg \citep{weis2005}. We still include these objects because
their classification as SN impostors is not definitive, and it is likely
that these transients are associated with the last stages of stellar
evolution, in which our proposed method to derive progenitor masses is
still of interest. Table~\ref{sample} shows these SNe for which we can
attempt to derive the SFH and progenitor mass, along with the proposal
ID of the dataset used for our photometry.
There were five SNe which we attempted to analyze, but were not able to
constrain with confidence. These had relatively shallow data for
quite distant events (SN~1980K, SN1985F, SN2002ap, SN2002bu, and
SN2003gd) with fewer than 5 stars detected within a 50 pc physical
radius. We do not consider these due to the sparsity of data, although
it is likely that deeper imaging of these locations would yield
photometry that would result in reliable mass estimates. However, it is
also possible these progenitors were runaway stars exploding some
distance from their co-eval population \citep[c.f.,][]{eldridge2011}.
\subsection{Analysis Method}
Our method has been described in several other publications, including
its application to SN 2011dh \citep{murphy11a}, 121 SN remnants (SNRs)
in M31 and M33 \citep{jennings2012,zach2}, and an unusual transient in
NGC~300 \citep{gogarten09a}. We provide a description of the method
here as well for convenience.
In brief, we fit SFHs to CMDs of the population surrounding the site
of the SN to determine the age, and thus mass, of the SN progenitor.
The measurements are anchored by the main-sequence stars surrounding
the event; thus, our age estimates do not depend on whether a binary
or single-star progenitor is assumed. Furthermore, the measurements
are not sensitive to any circumstellar dust present around the
progenitor itself.
In the following subsections we first summarize how well the method
works and provide a proof of concept. Then we detail how our
photometry was performed, how the stellar samples were generated, and
how their age distributions were derived.
\subsubsection{Overview}
The method takes advantage of the fact that most stars form in stellar
clusters \citep{lada03} with a common age ($\Delta
t\!\lesssim\!1-4{\rm{\,Myr}}$) and metallicity. Indeed, over 90\% of stars
form in rich clusters containing more than 100 members with
$M\!>\!50$M$_{\odot}$ \citep{lada03}. The stars that formed in a
common event remain spatially correlated on physical scales up to
$\sim\!100{\rm{\,pc}}$ during the $100{\rm{\,Myr}}$ lifetimes of $4$M$_{\odot}$ stars,
even if the cluster is not gravitationally bound \citep{bastian06}; we
have confirmed this expectation empirically in several test cases
\citep{gogarten09a,murphy11a}. Thus, it is reasonable to assume that
most young stars within 50~pc of a given SN are coeval. However,
we note that our assumption breaks down for SNe from runaway stars
\citep{eldridge2011}, which would not be coeval with their surrounding
population.
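The $\sim$100~pc coherence scale can be motivated with a simple estimate (the $\sim$1~km~s$^{-1}$ velocity dispersion adopted here is an illustrative typical value for dispersing young clusters, not a fitted quantity): since 1~km~s$^{-1} \approx 1$~pc~Myr$^{-1}$, stars drifting for the $\sim$100~Myr lifetime of a 4~M$_{\odot}$ star spread over
\begin{equation}
d \sim v\,t \approx (1{\rm\,pc\,Myr^{-1}})(100{\rm\,Myr}) = 100{\rm\,pc},
\end{equation}
comparable to the scale over which we assume the population remains spatially correlated.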
The age of a SN's host stellar population can be recovered from its
color-magnitude diagram (CMD). In the simplest method, one can fit a
single isochrone to an observed CMD and estimate the turnoff mass of
the youngest stars. However, due to the small numbers of massive
stars, one can easily underestimate the mass, since CMDs can show an
apparent turnoff that is fainter than the true turnoff luminosity
simply because of poor Poisson sampling of the upper end of the IMF.
Instead, we adopt more sophisticated methods that take advantage of
the entire CMD. These methods fit superpositions of stellar
populations to reproduce the observed CMD, using the recovery of
artificial stars to generate realistic distributions of stars from
theoretical isochrones. The recovered recent SFH therefore fits not
just the turnoff luminosity, but the full luminosity function of the main
sequence and the blue and red core Helium-burning sequences as well.
Including the well-populated lower end of the main sequence adds
significant statistical weight when interpreting the sparsely sampled
population of massive upper main sequence stars.
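To illustrate how Poisson sampling can bias a single-isochrone turnoff estimate (the value of $\lambda$ here is an assumed example, not a measurement): if the IMF predicts an expectation of $\lambda$ stars above the true turnoff luminosity, the probability that none are observed is
\begin{equation}
P(0) = e^{-\lambda},
\end{equation}
so that even with $\lambda = 3$ expected stars, $\sim$5\% of realizations would show an apparent turnoff fainter than the true one.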
The method allows dust extinction to be reliably taken into account.
The main sequence has a well defined color, such that any shift
towards redder colors must be produced by foreground reddening,
allowing the dust extinction to be inferred from the CMD itself.
Differential extinction can be constrained as well, using the observed
widening of the main sequence over what is expected from photometric
errors. The resulting reddening constraints are dominated by the
young stars in which we are most interested.
\subsubsection{Method Validation}
An example of the efficacy of the method is in its application to SN
1987A. We have run our model fits on deep WFPC2 photometry measured
from the archival data of proposal ID 7434 (PI: Kirshner). We fit the
F555W-F814W CMD in the range 12$<$F814W$<$25 as shown in
Figure~\ref{87deep}, and obtain a well-constrained median mass
(22.0$^{+2.3}_{-5.8}$ M$_{\odot}$), which is consistent with the mass
(19$\pm 3$ M$_{\odot}$) derived in direct imaging studies
\citep{woosley88}, and with the combined mass of the binary merger
scenario
\citep[16$+$3$\rightarrow$19;][]{podsiadlowski1990,podsiadlowski1992}.
This comparison between techniques provides strong verification of our
proposed method.
While this test is encouraging, the data for SN 1987A are significantly
deeper than that of our more distant objects. Two more tests,
however, suggest that our technique works even with much shallower
data. First, we ran our model fits on 1987A only including the
photometry for stars brighter than apparent magnitude of 18.5
(absolute magnitude of 0), comparable to the depth of most of our more
distant targets. The resulting median mass was more poorly
constrained (22.8$^{+2.5}_{-14.4}$ M$_{\odot}$), but was still
consistent with the known mass. Thus, we may lose precision with
shallower data, but we can still obtain useful constraints.
In addition to our tests on SN 1987A, we have verified our technique
out to $\sim$8 Mpc by applying it to SN 2011dh in M51
\citep{murphy11a}, for which we found a progenitor mass of
13$^{+2}_{-1}$ M$_{\odot}$. \citet{maund11} identified a progenitor
in archival HST images, which has since vanished \citep{vandyk2013},
and fit the bolometric luminosity to stellar-evolution models and
derived a progenitor mass of 13$\pm 3$ M$_{\odot}$, consistent with
our constraints.
\subsection{Resolved Stellar Photometry}
To generate the CMD, we measure resolved stellar photometry of the
{\it HST} field containing the location of the historic SN. This
photometry was performed using the packages {\tt HSTPHOT} (for WFPC2
data) or {\tt DOLPHOT} \citep[for ACS data;][]{dolphin2000}. These
packages perform point spread function fitting optimized for the
undersampled flat-fielded images that come from {\it HST}. All of the
photometry we use has been publicly released to the High-Level Science
Products in the HST archive through the ANGST and ANGRRR programs
(GO-10915 and AR-10945; PI: Dalcanton). The details of the fitting
and culling parameters used are provided in \citet{dalcanton2009} and
the ANGRRR public data archive\footnote{https://archive.stsci.edu/prepds/angrrr/}. As part of these
programs, hundreds of thousands of artificial star tests were also
performed to assess completeness and photometric accuracy. These
tests consist of inserting a single star into the data, rerunning the
data reduction, and assessing whether the fake star was recovered,
and if so, how close its measured brightness was to the input
brightness.
\subsubsection{CMD Sample Selection}
To isolate the subset of stars from our catalogs that were co-spatial
with the historic supernovae, we used the coordinates for the SNe from
the Asiago Supernova Catalog \citep{barbon1999}, and galaxy distances
from \citet{dalcanton2009}. We corrected the astrometry in our
catalogs by cross-correlating 2MASS positions for the bright stars in
our catalogs with our positions. Our catalog astrometry is then
corrected such that the star positions agree with those of 2MASS as
precisely as our centroiding for these bright stars will allow
(typically $\sim$0.1$''$). This correction to our catalog astrometry
made sure that our positions were at least as precise as those in the
SNe catalog (within a few tenths of an arcsecond).
With the location and the distance well-measured, we were able to pull
stars measured within a projected radius of 50~pc of each SN. In the
most distant cases, this radius is only a bit more than 1 arcsecond,
so that the required astrometric precision is only $\sim$1$''$.
To provide fake star statistics for the photometric completeness and
precision appropriate to our sample, we required a minimum of ten
thousand fake stars in our images. To reach this number, we
included fake stars from a region up to a factor of 7 larger than
that of the real stars. In fields where the quality of the data varies
quickly with position, such as near the center of M82, we applied
additional computing resources to obtain more artificial star tests
within the same radius as the stellar sample. However, for almost all
of our fields, changes in stellar density, and therefore photometric
quality, were small over the field, making it possible to use a large
suite of artificial star tests to improve statistics on our CMD
fitting.
\subsubsection{CMD Fitting}
Our CMD fitting process was very similar to that performed in
\citet{jennings2012}. We used the CMD-fitting package MATCH
\citep{dolphin2002,dolphin2013} to fit each CMD with the stellar
evolution models of \citet{girardi2002} with updates in
\citet{marigo2008} and \citet{girardi2010}. The package allows models
to be shifted in temperature and luminosity space to mimic systematic
uncertainties, and it allows differential extinction to be applied to
the models during fitting.
First, we determined the best-fitting amount of differential
extinction to apply when fitting each SN. We fitted the data with a
grid of values for the differential extinction ($dA_{\rm V}$) and
foreground extinction A$_{\rm V}$. We chose the $dA_{\rm V}$ value
that provided the best fit to the data without requiring an $A_{\rm
V}$ value below the known foreground extinction from
\citet{schlegel1998}. We show an example plot summarizing this
extinction determination method in Figure~\ref{dav}.
With the distance, extinction, and differential extinction values
fixed, we fitted the CMD to find the most likely age distribution,
allowing metallicities for the young population in the range of
$-0.6\leq$[Fe/H]$\leq$0.1. Examples for objects with data quality
typical of most of our sample are shown in
Figures~\ref{93J}--\ref{02hh}, where we show an image of our extraction
region, a plot of the CMD of the 50 pc region, and a final cumulative
star formation history from the fitting routine. In
Figure~\ref{correlations1} we plot our final derived masses against
distance and $A_{\rm V}$. The lack of correlations in the derived
masses as a function of these parameters suggests they do not
introduce any significant bias into our measurements.
To assess systematic uncertainties (due to any model deficiencies), we
reran the fitting with several changes to the models, following
\citet{dolphin2012}. We allowed the effective temperature of the
models to vary by $\Delta$log(T$_{eff}$)=0.02. We allowed the
bolometric luminosity of the models to vary by
$\Delta$log(L$_{bol}$)=0.17. Furthermore, we allowed the differential
extinction applied to the model to vary by ${\Delta}dA_{\rm V}$=0.2 in
cases of high $dA_{\rm V}$ ($>$0.4). We ran 100 fits to 100
realizations of the model, varying all of these parameters to account
for systematic uncertainties resulting from our stellar evolution
models and our treatment of the extinction.
Then, to measure the random uncertainties due to number of stars and
depth of the photometry, we use the {\tt hybridMC} task within the
MATCH package as described in \citet{dolphin2013}. This task
determines the star formation rate that would allow an acceptable fit
to the data for each age bin, thus providing robust upper-limits for
bins where the best-fit star formation rates were 0. With both
uncertainty determinations complete, we combine the random and
systematic uncertainties in quadrature using the MATCH routine {\it
zcmerge} to calculate our final uncertainties on the star formation
rates in each age bin.
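Explicitly, the combined uncertainty on the star formation rate in each age bin is
\begin{equation}
\sigma_{\rm tot} = \sqrt{\sigma_{\rm rand}^{2} + \sigma_{\rm sys}^{2}},
\end{equation}
where $\sigma_{\rm rand}$ is the random uncertainty from {\tt hybridMC} and $\sigma_{\rm sys}$ is the systematic uncertainty from the refits to the perturbed models.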
Next, we use our total uncertainties on the star formation rates in
each age bin to determine the uncertainty on the fraction of stellar
mass present in each age bin back to 50~Myr. We perform 1000 Monte
Carlo realizations of the measured SFH from 50 Myr to the present. We
then calculate the 16\% and 84\% ranges in stellar mass fraction in
each age bin from these tests. We adopt these percentiles as the
uncertainties on fraction of stellar mass formed in each age bin
relative to the total stellar mass produced in the past 50 Myr.
\section{Progenitor Masses}
Once we have the mass fraction (and associated mass fraction
uncertainty) in each age bin, we calculate our first estimate of the
progenitor mass by determining the median age of the best fit. We
then use our uncertainties to determine the age bins consistent with
containing the median to assign uncertainties on that median age.
Finally, we convert these ages to masses by taking the most massive
star remaining in the model isochrone corresponding to each age
\citep[see][for more details]{jennings2012}. These values provide
the nominal progenitor mass and associated uncertainties for each SN.
These median masses and associated uncertainties ($\sigma_{med}$) are
provided in Table~\ref{median}.
Although the assignment of a single progenitor age is of interest,
many of our SFHs contain multiple coeval populations, making a more
complex distribution of the mass probability desirable for some
purposes. We therefore have also tabulated the uncertainty for each
progenitor due to the spread in the recovered age distribution
($\sigma_{pop}$). These uncertainties encompass 68\% of the total
population mass (about the median value of the best fit) with ages
$<$50 Myr including uncertainties and account for the full
distribution ages present at the SN location, similar to the technique
adopted in earlier work \citep{murphy11a,jennings2012}. In most cases,
the stellar mass is relatively well confined to a small age range, but
including this second set of uncertainties shows where there are
multiple ages present. For example, the median age of the young
population surrounding SN1994I is well-determined, providing a
high-precision mass measurement of 10.2$\pm$0.7 M$_{\odot}$; however,
there is also a younger population present that represents a
significant fraction of the stellar mass. If the presence of this
population is taken into account, the uncertainties on the progenitor
mass increase substantially to 10.2$^{+59.2}_{-1.8}$ M$_{\odot}$. Thus, in this
case, only under the assumption that progenitor was a member of the
dominant young population is the mass of the progenitor
well-constrained. Otherwise, it is only a lower limit.
Looking at these uncertainties, one can determine which SNe would
benefit most from improved photometry data. Large spreads in the 68\%
population mass accompanied by small errors on the median seem to
occur for SNe with few stars in the CMD. For example, SN1951H has
only 11 stars for fitting, a 10\% error on the median, but
$\sigma_{pop}$ values that encompass the full mass range of the
models. Thus, the well-constrained median suggests that the mass
should also be well constrained, but the small number of stars results
in large uncertainties for other age bins which would likely be
reduced with deeper data and a larger number of detected stars.
Finally, to provide detailed probability distributions for all SN, we
tabulate the probability that the progenitor was in each age/mass bin,
given the SFH and associated uncertainties. These probability
distributions are given in Table~\ref{pdfs}, where each mass bin is
assigned a probability which comes from the most likely SFH, and an
associated uncertainty on the probability, which comes from applying
the uncertainties in the SFH to the mass probability distribution.
Thus, in order to account for both sources of uncertainty on the
progenitor mass (the fitting uncertainty and the uncertainty
associated with the intrinsic range of ages), it is necessary to
assign uncertainties to our probabilities. However, in many cases,
the median mass is relatively well defined ($\sigma{<}20$\%), and
provides a simple, though less thorough, constraint on the progenitor
mass.
Six of the events in our sample have previously-measured progenitor
masses from direct imaging (SN1987A, SN1993J, SN2004dj, SN2004et,
SN2005cs, SN2008bk, see Table~\ref{median}), and while our
measurements are less precise in some cases, they are consistent with
the previous measurements in all cases, as shown in Figure~\ref{comp}.
Indeed, in all cases the previous measurements are consistent with our
most optimistic uncertainties---the uncertainty on the median age of
the young population.
\subsection{Extreme Cases}
A few SNe in our sample stand out as extreme challenges to our
ability to measure star formation histories. For example, in
Figure~\ref{04am} we show the image, photometry, and fitting results
for our most heavily extincted location, SN2004am. In this case, even
though there is a very high amount of differential extinction
($dA_{\rm V}$=2.5), the relatively large number of stars provides a
good constraint on the age distribution. Unfortunately, with this
much dust, there is clearly the possibility of a significant number of
more massive stars being completely hidden from the sample, which
would not be accounted for by our method. We cannot account for stars
that are extincted out of our photometry sample. Thus, this amount of
dust may make this result less reliable than many of the others. Such
examples are unlikely to improve without much deeper data to probe to
very high extinctions.
Another extreme case is SN2004et, where we only have 6 stars in the
CMD due to shallow imaging and a large distance (meaning a small
extraction region on the sky). We show our results for this SN in
Figure~\ref{04et}, where the lack of stars results in very large
uncertainties. Although the uncertainties on the progenitor mass are
large, the full range of masses is not allowed by our uncertainties,
which suggest the mass is $>$16 M$_{\odot}$. These uncertainties are
reliable, as the best-fit mass is well away from (but within the large
errors of) the mass measured from direct imaging. Interestingly, even
with the large uncertainties, the mass constraint is useful since it
rules out masses lower than the best-fit mass from direct imaging.
This example confirms that our uncertainty estimates are reliable, but
also demonstrates that attempting this technique with any fewer stars
is of little value.
Finally, our results in this work add further validation to our
method. For the 6 SNe with we measure here that have literature
measurements, we plot our measurements against those from the
literature in Figure~\ref{comp}. In all cases our measurements are
consistent with previous measurements within the uncertainties, and no
systematic bias is seen.
\subsection{Progenitor Mass Distribution}
We note that our results are consistent with no SN progenitors
$>$20~M$_{\odot}$, as are all of the progenitor mass measurements
currently available in the literature (see references in Section 1).
While we do have some best estimates that are higher mass, their
uncertainties all extend below 20~M$_{\odot}$. Our most massive
central values are for SN2004et and SN1962M, but these only have 75\%
and 82\% probability of being $>$20~M$_{\odot}$. Furthermore, the
direct imaging mass for SN2004et has an upper limit of 20~M$_{\odot}$,
suggesting that the correct mass is indeed at the low end of our
uncertainties. Figure~\ref{mass} plots the masses in ranked order,
along with the expected distribution of masses for a
\citet{salpeter1955} IMF with different upper-mass cutoffs. The large
uncertainties on the high progenitor masses severely limit our ability
to determine the existence of such a cutoff. Thus, our current sample
and data quality does not provide any conclusive evidence that
high-mass stars produce core-collapse supernovae. This lack of
conclusive $>$20~M$_{\odot}$ progenitors is consistent with findings
of several other studies \citep{smartt09a,jennings2012}, hinting that
there could be a ceiling to SN production or a mass range that
under-produces SNe. However, if we can measure a single progenitor
mass $>$20~M$_{\odot}$ with even 20\% precision, constraints on the
progenitor mass distribution would be greatly improved.
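As a rough illustration of the expected numbers (the progenitor mass limits of 8 and 100~M$_{\odot}$ adopted here are assumptions for the example, not fitted values): for a \citet{salpeter1955} IMF, $dN/dm \propto m^{-2.35}$, the fraction of progenitors above 20~M$_{\odot}$ is
\begin{equation}
\frac{N(>20)}{N(8\!-\!100)} = \frac{20^{-1.35} - 100^{-1.35}}{8^{-1.35} - 100^{-1.35}} \approx 0.27,
\end{equation}
so a sample of 28 progenitors would be expected to contain roughly 7--8 stars above 20~M$_{\odot}$ if there were no upper-mass cutoff.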
\section{Conclusions}
We have constrained the progenitor masses of 17 historic SNe using CMD
fitting of stellar populations measured from HST archival data.
Eleven of these are new constraints, making the total number of
historic SN progenitor masses 28. Even with this dramatic increase in
mass measurements, there is still not a single high-precision
measurement of a progenitor $>$20~M$_{\odot}$, making characterization
of the progenitor mass distribution difficult.
This work represents all that is possible with the current state of
the HST archive. The power of the technique is clear, and we hope
that future studies will be made possible by more and deeper HST
imaging of nearby galaxies containing historic SNe.
Support for this work was provided by NASA through grants AR-13277,
GO-10915, and Hubble Fellowship grant 51273.01 from the Space
Telescope Science Institute, which is operated by the Association of
Universities for Research in Astronomy, Inc., for NASA, under contract
NAS 5-26555. Z.G.J. is supported in part by a National Science
Foundation Graduate Research Fellowship.
Q: Orbeon calculated fields in a repeat break after first iteration In my Orbeon form, I am using a repeat with calculated fields within the repeat. For example, my repeat includes two integer fields, 1) total number of crayons and 2) number of blue crayons. The third field is a calculated field showing the percentage of blue crayons, using the following XPath expression:
if ($LMI-Bene ne 0)
then $LMI-Bene div $Total-Bene * 100 else 0
I am able to calculate the percentage. The problem comes when I add a new iteration to my repeat and even the first line stops working. I think this could be because the control names of each iteration are the same, but I'm not sure how to account for that. Any ideas?
A: Use the relative XPath to the value rather than the binding variable.
For example, try
if (../LMI-Bene ne 0)
then ../LMI-Bene div ../Total-Bene * 100 else 0
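A minimal sketch (plain Python with stdlib XML parsing, not Orbeon itself; the element names mirror the question's controls and the data values are made up) of why the relative path works: each iteration supplies its own context node, so `../LMI-Bene` resolves to that iteration's sibling value rather than to a single repeat-wide variable.

```python
# Simulate a two-iteration repeat: the percentage for each iteration must be
# computed from the sibling values of that same iteration.
import xml.etree.ElementTree as ET

data = ET.fromstring("""
<crayon-repeat>
  <iteration><Total-Bene>10</Total-Bene><LMI-Bene>4</LMI-Bene></iteration>
  <iteration><Total-Bene>20</Total-Bene><LMI-Bene>5</LMI-Bene></iteration>
</crayon-repeat>""")

percentages = []
for it in data.findall("iteration"):          # one context node per iteration
    total = float(it.findtext("Total-Bene"))  # plays the role of ../Total-Bene
    lmi = float(it.findtext("LMI-Bene"))      # plays the role of ../LMI-Bene
    percentages.append(lmi / total * 100 if total != 0 else 0)

print(percentages)   # -> [40.0, 25.0]
```

Note that the accepted expression tests `../LMI-Bene` but divides by `../Total-Bene`; guarding the divisor, as in the sketch above, may be what you actually want to avoid a division by zero.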
Eileen Bull talks about setting up her new home.
I called it marvellous, it was very good, we never had nothing, we didn't have a lot of furniture to bring here, cos we never had none. We gradually got on, got everything we wanted, took a long time but we did get everything you know. We bought some in Maulden, like a put-you-up, a dressing table and a dining room suite, that was it until we got going. It didn't worry us that we didn't have anything, but that was alright.
# Eight step procedures

Author: Eljona Pushaj, Diana Bogdanowich, Stephanie Keomany
Steward: Fengqi You

# Introduction

The eight-step procedure is an approach in dynamic programming used to determine optimal solutions in mathematical optimization. Dynamic programming is applied to problems that require maximization or minimization of an objective function and that can be solved by enumerating all possible solutions and selecting the best one.

In the eight-step procedure, a problem is broken down into subproblems. Using the solutions of the subproblems recursively, the overall solution is determined once all of the subproblem solutions have been calculated. This standard framework lets dynamic programming store the values of the subproblems to avoid recomputation, reducing the time needed to solve the problem.

# Theory, Methodology, and/or Algorithmic Discussion

### Methodology

To solve a problem using the eight-step procedure, one must use the following steps:

Step 1: Specify the stages of the problem. The stages of a dynamic programming problem are the points where decisions are made, often denoted $n$.

Step 2: Specify the states for each stage. The states are the knowledge necessary to make a decision, denoted $s$. We set $C$ equal to the maximum value of $s$.

Step 3: Specify the allowable actions for each state in each stage. These can be written
$U_n(s):\; j = 0, 1, \ldots, \min\{a[n], \lfloor s/w[n] \rfloor\}$

Step 4: Describe the optimization function $f_n^*(s)$, for each state $s$ and stage $n$, using an English-language description.

Step 5: Define the boundary conditions, which give a starting point for the solution: set $f_{n+1}^*(s) = 0$ for all $s = 0, \ldots, C$, where $n$ is the final stage.

Step 6: Define the recurrence relation. At stage $n$, make an allowable decision of $j$ items for the remaining capacity $s$:
$f_n^*(s) = \max_{j = 0, 1, \ldots, \min\{a[n], \lfloor s/w[n] \rfloor\}} \left\{ b[n,j] + f_{n+1}^*(s - j \cdot w[n]) \right\}$

Step 7: Compute the optimal value from the bottom up. Build a table containing $s$, $f_n^*(s)$, and the optimal action for all stages $n$; this step can be done manually or by a program.

Step 8: Arrive at the optimal solution. Start from the optimal decision that corresponds to the table entry at the first stage, compute the remaining capacity $s$, and repeat to read off the optimal decision at every stage.

# Numerical Example

Weight capacity $C = 5$ and $N = 2$ item types (stages $n = 1, 2$); remaining capacity $s = 0, 1, \ldots, 5$.

Boundary conditions: $f_3^*(s) = 0$ for $s = 0, 1, \ldots, 5$.

Allowable actions at stage 2 with full capacity: $U_2(5) = \{0, 1, \ldots, \min\{a[2], \lfloor 5/w[2] \rfloor\}\} = \{0, 1, 2\}$, with $f_2^*(5) = \max_j \{ b[2,j] + f_3^*(5 - j \cdot w[2]) \}$.

| Unused capacity $s$ | $f_1^*(s)$ | Type 1 opt $U_1^*(s)$ | $f_2^*(s)$ | Type 2 opt $U_2^*(s)$ | $f_3^*(s)$ |
|---|---|---|---|---|---|
| 5 | 9 | 0 | 9 | 2 | 0 |
| 4 | 9 | 0 | 9 | 2 | 0 |
| 3 | 4 | 0 | 4 | 1 | 0 |
| 2 | 4 | 0 | 4 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |

# Applications

The following are some applications where dynamic programming is used. Dynamic programming applies when the objective function involves maximization, minimization, or counting, and the optimal solution is determined from the solutions of all the subproblems.

Shortest/longest path problem. In the shortest path problem, the path with the least cost must be found in a graph with multiple nodes between a beginning node $s$ and a final node $e$. Travelling from one node to another incurs a cost $c(p, q)$, and the objective is to reach $e$ with the smallest total cost. The eight-step procedure can be used to enumerate the candidate solutions from which the optimal one is determined. Likewise, in a maximization setting, the longest path is the route from $s$ to $e$ with the highest total cost.

Knapsack problem. The knapsack problem is an example of the distribution of effort: limited resources are shared among competing entities, and the goal is to maximize the total benefit of the distribution. Dynamic programming is often used when the benefit does not grow linearly with the quantity of resources assigned.

Inventory planning problem.
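The eight steps can be sketched directly in code. The item data below (weights `w`, copy limits `a`, and benefit tables `b`) are not listed explicitly in the article; they are hypothetical values chosen so that the computed table matches the one in the Numerical Example.

```python
# Bounded-knapsack instance of the eight-step procedure.
C = 5                      # Step 2: states s = 0..C (remaining capacity)
N = 2                      # Step 1: stages n = 1..N (item types)
w = {1: 3, 2: 2}           # weight of one copy of each item type (assumed)
a = {1: 1, 2: 2}           # maximum number of copies of each type (assumed)
b = {1: [0, 2], 2: [0, 4, 9]}   # b[n][j]: benefit of j copies of type n (assumed)

# Step 5: boundary condition f_{N+1}(s) = 0 for all s.
f = {N + 1: {s: 0 for s in range(C + 1)}}
opt = {}                   # optimal action j for each (n, s)

# Steps 6-7: evaluate the recurrence bottom-up (n = N down to 1).
for n in range(N, 0, -1):
    f[n], opt[n] = {}, {}
    for s in range(C + 1):
        # Step 3: allowable actions j = 0..min(a[n], floor(s / w[n]))
        best_j = max(range(min(a[n], s // w[n]) + 1),
                     key=lambda j: b[n][j] + f[n + 1][s - j * w[n]])
        opt[n][s] = best_j
        f[n][s] = b[n][best_j] + f[n + 1][s - best_j * w[n]]

# Step 8: read off the optimal solution for full capacity.
print(f[1][C], opt[1][C], opt[2][C - opt[1][C] * w[1]])   # -> 9 0 2
```

Reading the table from stage 1 down: the best value is 9, achieved with 0 copies of type 1 and 2 copies of type 2, matching the top row of the example table.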
# Solve the inequality $x-5\leq 3x+10$

$x-5\leq 3x+10$

## Step by step solution

1. Group the $x$ terms on one side and the constants on the other: $x-3x\leq 10+5$
2. Combine like terms: $-2x\leq 15$
3. Divide both sides by $-2$, reversing the inequality sign: $x\geq -\frac{15}{2}$

Final answer: $x\geq -\frac{15}{2}$
\section{Introduction}
Let $R$ be a finite-dimensional algebra over a field $k$. The $k$-dual $\Rd := \Hom{k}(R,k)$ has a natural structure as an \bimod. We say $R$ is a {\em Frobenius algebra} if $R \isom \Rd$ as \leftmods, and $R$ is a {\em symmetric $k$-algebra} if $R \isom \Rd$ as \bimods. It is well-known that $\Rd$ is isomorphic to the injective hull of $\Rrad$ as \leftmods, so $R$ is Frobenius iff $R \isom E(\R(\Rrad))$ as \leftmods. This purely ring-theoretic criterion shows that the property of $R$ being Frobenius is independent of the field $k$ over which we are considering $R$ as an algebra. Motivated by this property, an arbitrary artinian ring $S$ is defined to be a {\em Frobenius ring} if \(S \isom E(_S(S/\mbox{rad }S))\) as left $S$-modules, and this definition has led to a rich theory of Frobenius rings (see, for example, Section 16 in \lomar) that is not dependent on the framework of linear algebra.
The facts above naturally raise several questions. Is the property of $R$ being a symmetric $k$-algebra independent of $k$? If $R$ is symmetric, we know by Brauer's Equivalence Theorem (16.70 in \lomar) that the $k$-dual functor $\Hom{k}(-,k)$ from \leftmods\ to \rightmods\ is independent of $k$, i.e. the two functors defined by different fields are naturally equivalent. On the level of modules, this means that the \rightmod\ isomorphism type of the $k$-dual of any left module $\R X$ is independent of $k$. Do these facts remain true if $R$ is only Frobenius? The result above shows only that the isomorphism type of the dual of the left regular module $\R R$ is independent of $k$.
The key to all of these questions is the Nakayama automorphism, a distinguished $k$-algebra automorphism of a Frobenius algebra $R$ that measures how far $R$ is from being a symmetric algebra. (The automorphism is the identity iff $R$ is symmetric.) We will show that the Nakayama automorphism is independent of $k$ and derive affirmative answers to the questions above as corollaries. We will give a purely ring-theoretic condition that is equivalent to the property of $R$ being symmetric at least in the case when $k$ is infinite. We hope that this will promote a ring-theoretic development of properties of symmetric algebras that parallels the theory of Frobenius rings.
F. G. Frobenius himself pioneered the idea of comparing an algebra with its dual in \cite{frobenius}. The main properties of Frobenius algebras and symmetric algebras were developed by Nakayama in \cite{nak1}, \cite{nak2}, and \cite{nak3}. They have been the subject of continued interest because of connections to such diverse areas as group representations, topological quantum field theories, Gorenstein rings in commutative algebra, Hopf algebras, coding theory, and the Yang-Baxter Equation. For an excellent reference on the subject, see \lomar.
\section{The Nakayama automorphism}
In this section we show that the \nakaut\ of a Frobenius algebra is independent of the ground field. As a corollary to the proof, we derive a simple ring-theoretic characterization of local symmetric algebras.
Let $R$ be a finite-dimensional algebra over a field $k$. In \lomar, Theorem 3.15, we have:
\begin{theorem} \label{Frobdef}The following are equivalent:
\begin{description}
\item[1.] $R$ is a Frobenius algebra, i.e.\ \(R \isom \Rd\) as \leftmods.
\item[2.] There exists a linear functional \(\lambda: R \rightarrow k\) whose kernel contains no nonzero
left ideals.
\item[3.] There exists a hyperplane $H \subset R$ (i.e. a subspace of codimension 1) containing no nonzero left ideals.
\item[4.] There exists a nondegenerate associative bilinear form \(B: R \times R \rightarrow k\). (``Associative'' means \(B(rs,t) = B(r,st)\).)
\end{description}
\end{theorem}
The equivalence of the first two conditions follows from taking \(\lambda\) to be the image of 1 under the module isomorphism and vice versa. The equivalence of the second and fourth conditions follows from defining \(B(r,s):=\lambda(rs)\) and \(\lambda(r):= B(r,1)\). Since the last condition is right-left symmetric, we could also include the right-handed analogues of the other conditions above.
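As a standard illustration (an aside, not needed in the sequel), take \(R = \mathbb M_n(k)\) and \(\lambda = \mbox{tr}\), the trace functional. The associated form
\[
B(r,s) = \mbox{tr}(rs)
\]
is clearly associative, and it is nondegenerate because \(B(E_{ji}, r) = \mbox{tr}(E_{ji}r) = r_{ij}\), where the \(E_{ij}\) denote the matrix units. Since \(\mbox{tr}(rs) = \mbox{tr}(sr)\), the form is moreover symmetric, so \(\mathbb M_n(k)\) is not only a Frobenius algebra but a symmetric one.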
Given one isomorphism \(\hi:R \isommap \Rd\), any other isomorphism $\hi'$ is obtained by composition with an automorphism of the left regular module $_R R$, which corresponds to right multiplication by a unit $u \in U(R)$. This affects the other conditions above as follows: the new functional is \(\lambda' = u\lambda: r \mapsto \lambda(ru)\); the new hyperplane is \(H' = \ker \lambda' = Hu\inv\); and the new form is \(B'(r,s) = B(r,su)\).
A similar theorem (\lomar, Theorem 16.54) applies to symmetric $k$-algebras:
\begin{theorem} \label{symdef}The following are equivalent:
\begin{description}
\item[1.] $R$ is a symmetric algebra, i.e.\ \(R \isom \Rd\) as \bimods.
\item[2.] There exists a functional \(\lambda: R \rightarrow k\) such that \(\ker \lambda\) contains no nonzero left ideals and \(\lambda(rs) = \lambda(sr) \; \forall r,s \in R\).
\item[3.] There exists a hyperplane $H \subset R$ containing the commutators \([R,R] = \{\sum_i (r_is_i - s_ir_i):r_i, s_i \in R\}\) and containing no nonzero left ideals.
\item[4.] There exists a nondegenerate associative symmetric bilinear form \(B: R \times R \rightarrow k\).
\end{description}
\end{theorem}
If the conditions of Theorem~\ref{Frobdef} hold, the nondegeneracy of the form $B$ implies that there is a unique $k$-linear map \(\sigma: R \rightarrow R\) defined by \(B(r,s)=B(s,\sigma(r)) \; \forall r,s \in R\). It is easy to check that $\sigma$ is actually a $k$-algebra automorphism of $R$; we call it the {\em \nakaut} of $R$. Replacing $B$ with a new form $B'$ defined by the unit $u$ gives us the new automorphism \(\sigma':r \mapsto u\sigma(r)u\inv\). So the \nakaut\ is determined up to composition with inner automorphisms; equivalently, it is a well-defined element of the group of outer automorphisms of $R$. The algebra is symmetric iff \sig\ can be taken to be the identity, iff the \nakaut\ determined by an arbitrary nondegenerate associative bilinear form is an inner automorphism.
If we use the linear functional $\lambda$ to define \sig\ instead of the form $B$, then \sig\ is defined by the equation
\[
\lambda(rs) = \lambda(s\sigma(r)) \all r,s \in R).
\]
We are now ready to prove that the \nakaut\ is independent of the base field. We warm up with the local case. The argument is similar to that for the general case but much easier, and it gives us a criterion for a local algebra to be symmetric.
\begin{theorem}\label{theorem-local}
If $R$ is a local Frobenius $k$-algebra, then $\sigma$ is independent of $k$.
\end{theorem}
\textit{Proof}.
Let $k_1$ and $k_2$ be two fields over which $R$ is a finite-dimensional algebra, and suppose $\sigma_1$ is a Nakayama automorphism of $R$ as a $k_1$-algebra. Then $\sigma_1$ arises from a $k_1$-linear functional \(\lambda_1:R \rightarrow k_1\) via the equation
\[
\lambda_1(rs) = \lambda_1(s \sigma_1(r)) \all r,s \in R).
\]
Thus \(C:= \{\sum(r_is_i - s_i\sigma_1(r_i)) :\; r_i, s_i \in R\} \subseteq \ker \lambda_1\). Note that $C$ is closed under multiplication by any element from the center $Z(R)$, and in particular that $C$ is a subspace with respect to both $k_1$ and $k_2$.
Now since $R$ is local Frobenius, $\R R$ is the only principal indecomposable \leftmod, and so $\R R$ has a simple socle $S$ by Theorem 16.4 in \lomar. Then \(S \not\subset \ker \lambda_1\), so
\(
S \not\subset C.
\)
Since $S$ and $C$ are both $k_2$-subspaces, we can define a $k_2$-linear functional \(\lambda_2:R \rightarrow k_2\) that is $0$ on $C$ but not on $S$. Then since \(S \not\subset \ker\lambda_2\), \(\ker\lambda_2\) contains no nonzero left ideals, and the Nakayama automorphism $\sigma_2$ of $R$ as a Frobenius $k_2$-algebra is defined by
\[
\lambda_2(rs) = \lambda_2(s \sigma_2(r)) \all r,s \in R).
\]
In other words, $\sigma_2(r)$ is uniquely defined by
\[
rs - s \sigma_2(r) \in \ker\lambda_2 \all s \in R).
\]
But \( rs - s \sigma_1(r) \in C \subseteq \mbox{ ker}\lambda_2 \; (\forall s)\), so \(\sigma_2(r) = \sigma_1(r) \; \forall r \in R\), as desired.
\qed
The proof above gives us the promised ring-theoretic characterization of local symmetric algebras. Recall that the property of $R$ being Frobenius over $k$ is independent of $k$, and in fact, is equivalent to a ring-theoretic property.
\begin{corollary}\label{corollary-localsym}
Let $R$ be a local $k$-algebra. Then $R$ is a symmetric $k$-algebra iff $R$ is a Frobenius $k$-algebra and \( \lsoc \not\subset [R,R]\). In particular, the property of $R$ being a symmetric $k$-algebra is independent of $k$.
\end{corollary}
\textit{Proof}.
This follows from the proof of the theorem above. If $R$ is symmetric, then we can take $\sigma_1$ to be the identity, so \(S \not\subset C = [R,R]\). Conversely, if \(S \not\subset [R,R]\), then we can define $\lambda_2$ as we did above to be $0$ on $[R,R]$ but not on $S$. The resulting $\sigma_2$ will be the identity, proving that $R$ is a symmetric algebra.
\qed
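For a quick illustration of the criterion, take \(R = k[x]/(x^n)\), a local algebra with radical \((x)\). The functional picking out the coefficient of \(x^{n-1}\) has kernel containing no nonzero ideals (every nonzero ideal of $R$ contains \(x^{n-1}\)), so $R$ is Frobenius; and since $R$ is commutative, \([R,R] = 0\), so certainly \(\lsoc = (x^{n-1}) \not\subset [R,R]\). The corollary thus recovers the familiar fact that \(k[x]/(x^n)\) is a symmetric algebra.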
We now pass to the general case and show that the \nakaut\ with respect to the two fields remains the same. This turns out to be easy if the fields are both finite-dimensional over their intersection (necessarily a field). The case in which there is no convenient intersection is harder and uses the assumption that the fields be infinite, so we do not have a single proof to cover both cases.
Let $R$ be a Frobenius ring with Jacobson radical $J$ and \(\Rbar = R/J\). Suppose, as above, that $R$ can be considered as a finite-dimensional algebra over two different fields $k_1$ and $k_2$, with respective \nakauts\ $\sigma_1$ and $\sigma_2$.
\begin{theorem} \label{theorem-nakind}The Nakayama automorphism of $R$ is independent of the ground field.
\end{theorem}
\textit{Proof of Part I}.
Assume that the two fields are both finite-dimensional over some common ground field. This case can be handled by a transfer-type argument, as suggested to me by T.Y. Lam. By passing down to the common ground field and then up again, we can reduce to the case in which \(k_2 \subseteq k_1\).
Let \(\mbox{Tr}:k_1 \rightarrow k_2\) be any nonzero $k_2$-linear map. Considering $R$ as a Frobenius $k_1$-algebra, we have a $k_1$-linear functional \(\lambda_1:R \rightarrow k_1\) whose kernel contains no nonzero left ideals. Then \(\lambda_2 := \Tr \circ \lambda_1: R \rightarrow k_2\) is a $k_2$-linear map, and we claim that \(\ker \lambda_2\) also contains no nonzero left ideals. Indeed, if $r \in R\setminus\{0\}$, then \(\exists s \in R\) such that \(\lambda_1 (sr) \neq 0\), so \(\exists \alpha \in k_1\) such that \(0 \neq \Tr (\alpha \lambda_1(sr)) = \Tr (\lambda_1 (\alpha sr)) = \lambda_2 (\alpha sr).\)
Now $\sigma_i(r)$ is defined (\(\forall r \in R\)) by the equation
\[
rs - s \sigma_i(r) \in \ker \lambda_i \all s \in R).
\]
But \(\ker \lambda_1 \subseteq \ker \lambda_2\), so (\(\forall r \in R\)),
\[
rs - s \sigma_1(r) \in \ker \lambda_1 \subseteq \ker \lambda_2 \all s \in R),
\]
showing that $\sigma_2(r)$ must be equal to $\sigma_1(r)$. This finishes Part I\@.
\qed
For Part II we first need two facts from linear algebra.
\begin{lemma}\label{hyperplane}
Let $U \subsetneq V$ be finite-dimensional vector spaces over a field $k$ and suppose $V$ decomposes into subspaces \(V = V_1 \oplus V_2 \oplus \cdots \oplus V_n\) with each \(V_i \not\subset U\). Suppose that $|k| \geq n$. Then $U$ can be enlarged to a hyperplane $U'$ such that \(V_i \not\subset U'\) for \(i = 1,2,\dots, n\).
\end{lemma}
\textit{Proof}.
By enlarging $U$ one dimension at a time, we may assume that $U$ is maximal with respect to the property that no \(V_i \subseteq U\). We claim that now \(\dim_k V/U = 1\). If not, there exist
at least $|k| + 1$ one-dimensional subspaces of $V/U$, corresponding to one-dimensional extensions $U_i \supset U$. By the maximality of $U$ and the Pigeonhole Principle, some $V_i$ is contained in two different extensions, say $U_1$ and $U_2$. But this implies \(V_i \subseteq U_1 \cap U_2 = U\), a contradiction.
\qed
(The assumption that $|k| \geq n$ cannot be omitted. A three-dimensional vector space over the field of two elements contains the subspace \(U = \{0, (1,1,1)\}\), which cannot be extended to a hyperplane without including one of the three coordinate axes.)
\begin{lemma}\label{lemma-commutators}
Let $D$ be a division ring, $n$ a positive integer, and $S = \mathbb M_n(D)$. If $I \subseteq S$ is any nonzero left ideal, then $I + [S,S] = S$.
\end{lemma}
\textit{Proof}.
Let $U = I + [S,S]$ and let $E_{ij}$ denote the matrix units in $S$. Using a nonzero element of $I$, we can obtain a matrix in $U$ that is nonzero in the $(i,i)$ position and $0$ off the $i$-th row. For all $d \in D$ and $i \neq j$, \(dE_{ij} = (dE_{ii})(E_{ij}) - (E_{ij})(dE_{ii}) \in U\), and \(d(E_{ii} - E_{jj}) = (dE_{ij})(E_{ji}) - (E_{ji})(dE_{ij}) \in U\). Repeated use of these identities shows that an arbitrary matrix in $S$ is a sum of matrices in $U$.
\qed
\textit{Proof of Theorem~\ref{theorem-nakind}, Part II}. We assume that there is no common ground field over which $k_1$ and $k_2$ are both finite-dimensional. We use this assumption only to guarantee that both fields are infinite, which is what we need in order to apply Lemma~\ref{hyperplane}.
Fix a $k_1$-linear functional \(\lambda_1: R \rightarrow k_1\) with kernel $H_1$ containing no nonzero left ideals. Then the \nakaut\ of $R$ as a $k_1$-algebra is defined (\(\forall r \in R\)) by
\[
rs - s \sigma_1(r) \in H_1 \all s \in R).
\]
As in the proof of Theorem~\ref{theorem-local}, we set
\[
C:= \left\{\sum_i(r_is_i - s_i\sigma_1(r_i)) :\; r_i, s_i \in R\right\} \subseteq H_1
\]
and note that $C$ is closed under multiplication from the center $Z(R)$. In particular, $C$ is a subspace over both $k_1$ and $k_2$. Let \(S:= \lsoc\) and note that since \(C \subseteq H_1\), $S \cap C$ contains no nonzero left ideals. Also \(S \cap C \subseteq S \cap H_1\), which is a $k_1$-subspace of $S$ of codimension 1. (This is because \(\dim_{k_1} R/H_1 = 1\) and \(S \cap H_1 \neq S\) because $H_1$ contains no nonzero left ideals.)
By Theorem 16.14 in \lomar, we have an isomorphism \(\hi: \R S \isommap \R \Rbar\), which is also an isomorphism of left \Rbar-modules. Now \(S \cap H_1 \subset S\) contains no nonzero left ideals of $R$, hence no minimal left ideals, hence no nonzero \Rbar-submodules. So \(\hi(S \cap H_1)\) is a $k_1$-hyperplane in \Rbar\ containing no nonzero left ideals.
Since $R$ is a finite-dimensional algebra (over either field), \Rbar\ is semisimple (by Theorem 4.14 in \cite{fc}), hence a symmetric algebra by Example 16.59 in \lomar. We consider \Rbar\ now as a symmetric $k_1$-algebra. By Theorem~\ref{symdef}, \Rbar\ contains another $k_1$-hyperplane $H$ that contains no nonzero left ideals and contains the commutator subspace \([\Rbar, \Rbar]\). Now by the discussion following Theorem~\ref{Frobdef}, we know that \(H = (\hi(S \cap H_1))u\) for some \(u \in U(\Rbar).\)
Now \((\hi(S \cap C))u \subseteq (\hi(S \cap H_1))u = H\), so \(U:=(\hi(S \cap C))u + [\Rbar, \Rbar] \subseteq H\). Since $H$ contains no nonzero left ideals in \Rbar, $U$ also contains no nonzero left ideals. But since both \((\hi(S \cap C))u\) and \([\Rbar, \Rbar]\) are $k_2$-subspaces of $\Rbar$, $U$ is a $k_2$-subspace of \Rbar. Our goal is to enlarge $U$ to a $k_2$-hyperplane containing no nonzero left ideals.
Let \Rbar\ have Artin-Wedderburn decomposition
\(
\mathbb M_{n_1} (D_1) \times \cdots \times \mathbb
M_{n_r} (D_r),
\)
where the $D_i$'s are division rings. We decompose each \(R_i:= \mathbb M_{n_i} (D_i)\) into a sum of simple left ideals $V_{i,j}$, where $V_{i,j}$ consists of matrices that are 0 except in the $j$-th column. This gives a decomposition of \Rbar\ into simple left ideals:
\[
\Rbar = V_{1,1} \oplus \cdots \oplus V_{1,n_1} \oplus \cdots \oplus V_{r,1} \oplus \cdots \oplus V_{r,n_r}.
\]
Now we know that for all $i,j$, \(V_{i,j} \not\subset U\). So by Lemma~\ref{hyperplane} we can enlarge $U$ to a $k_2$-hyperplane \(U' \subset \Rbar\) while preserving \(V_{i,j} \not\subset U'\) for all $i,j$. We claim that $U'$ still contains no nonzero left ideal of \Rbar. Indeed, assume that $U'$ does contain a nonzero left ideal of \Rbar; then it contains a minimal left ideal of one of the $R_i$'s, say $R_1$. But $U'$ also contains the commutators $[R_1,R_1]$ since \([R_1,R_1] \subseteq [\Rbar, \Rbar] \subseteq U \subseteq U'\). Then by Lemma~\ref{lemma-commutators}, $U'$ contains all of $R_1$, hence all the $V_{1,j}$'s, a contradiction. So $U'$ is indeed a $k_2$-hyperplane of \Rbar\ containing no nonzero left ideals.
We now consider the $k_2$-hyperplane \(U'u\inv \subset \Rbar\), which also contains no nonzero left ideals of \Rbar. Moreover, since \((\hi(S \cap C))u \subseteq U \subseteq U'\), we have \(\hi(S \cap C) \subseteq U'u\inv\). We now pull \(U'u\inv\) back through the isomorphism \(\hi: \R S \isommap \R \Rbar\) to get a $k_2$-hyperplane \(H_2':= \hi\inv(U'u\inv) \subset S\) containing no nonzero left \Rbar-submodules of $S$, hence no nonzero left ideals of $R$. Also, since \(\hi(S \cap C) \subseteq U'u\inv\), \(H_2'\) contains $S \cap C$.
To finish the proof, we will extend $H_2'$ to a $k_2$-hyperplane $H_2 \subset R$ that contains $C$ and still contains no nonzero left ideals. We can then use $H_2$ to define the \nakaut\ with respect to $k_2$.
\setlength{\unitlength}{.1in}
\begin{figure}
\begin{center}
\begin{picture}(23,18)(0,0)
\put(0,0){\framebox(23,18)[tl]{}}
\put(5,5){\oval(6,6)[bl]}
\put(6,5){\oval(6,6)[br]}
\put(6,13){\oval(6,6)[tr]}
\put(5,13){\oval(6,6)[tl]}
\put(5,2){\line(1,0){1}}
\put(5,16){\line(1,0){1}}
\put(2,5){\line(0,1){8}}
\put(9,5){\line(0,1){8}}
\put(5,5){\oval(4,4)[bl]}
\put(7,5){\oval(4,4)[br]}
\put(7,13){\oval(4,4)[tr]}
\put(5,13){\oval(4,4)[tl]}
\put(5,3){\line(1,0){2}}
\put(5,15){\line(1,0){2}}
\put(3,5){\line(0,1){8}}
\put(9,5){\line(0,1){8}}
\put(10,9){\oval(10,8)}
\put(12,9){\oval(6,4)}
\put(15,9){\oval(12,12)}
\put(2,15.5){$S$}
\put(3.2,13){$H_2'$}
\put(5.2,10){\small $S \cap C$}
\put(12,9.5){$C'$}
\put(12,11.5){$C$}
\put(17,13.5){$S'$}
\put(.2,16.7){$R$}
\end{picture}
\end{center}
\caption{$k_2$-subspaces of $R$.}\label{figure-subspaces}
\end{figure}
To extend $H_2'$, consider $S$, $C$, $H_2'$, and $R$ just as $k_2$-vector spaces, as in Fig.~\ref{figure-subspaces}. As in the figure, decompose $C$ as a $k_2$-vector space into \(C = (S \cap C) \oplus C'\). Then since \(C' \cap S = 0\), we can extend $C'$ to a $k_2$-subspace \(S' \supseteq C'\) such that \(R = S \oplus S'\). Define \(H_2 := H_2' \oplus S'\), a $k_2$-hyperplane of $R$ since \(\dim_{k_2}(S/H_2') = 1\). Moreover, $H_2$ contains no nonzero left ideals, since any nonzero left ideal \({_R} L \subseteq H_2\) would contain a minimal left ideal \({_R} L' \subseteq H_2 \cap S = H_2'\). Most importantly, $H_2$ contains $C$.
We now define a $k_2$-functional \(\lambda_2: R \rightarrow k_2\) with \(\ker \lambda_2 = H_2\). Then the \nakaut\ $\sigma_2$ of $R$ as a $k_2$-algebra is defined (\(\forall r \in R\)) by
\[
rs - s \sigma_2(r) \in \ker \lambda_2 = H_2 \all s \in R).
\]
But since \(rs - s\sigma_1(r) \in C \subseteq H_2\), we have \(\sigma_2(r) = \sigma_1(r)\) for all $r \in R$. This concludes the proof of Part II\@.
\qed
\section{Corollaries}
We can now answer the questions posed in the introduction. We begin with a theorem that does not require the Frobenius assumption. Let $R$ be a ring that is a finite-dimensional algebra over two fields $k_1$ and $k_2$. We denote by \lcat\ and \rcat\ the categories of \leftmods\ and \rightmods\ respectively.
Let $\F_i: \lcat \rightarrow \rcat$ be the $k_i$-dual functor: \(\F_i(\R X) = \rXdi := \Hom{k_i}(X, k_i)\). Let \biRdi\ be the bimodule \(\Hom{k_i}(R, k_i)\).
\begin{theorem}\label{theorem-bifun}
\(\biRda \simeq \biRdb\) as bimodules iff the functors $\F_1$ and $\F_2$ are naturally equivalent.
\end{theorem}
\textit{Proof}.
By Brauer's Equivalence Theorem (16.70 in \lomar), the functor $\F_i$ is naturally equivalent to the functor \(\G_i:= \Hom{R}(-,\lRdi)\) on \leftmods, proving the forward direction. The converse is essentially identical to Theorem 16.71 in \lomar. We apply the equivalence \(\G_1 \simeq \G_2\)
to the \leftmod\ homomorphism \(\rho_r: \R R \rightarrow \R R\), where \(\rho_r\) is right multiplication by some fixed $r \in R$, as in Fig.~\ref{figure-brauer}. Then the map
\[
\G_i(\rho_r) : \Hom{R}(\R R,\lRdi) \rightarrow \Hom{R}(\R R,\lRdi)
\]
takes $\alpha$ to the map \( (s \mapsto \alpha (sr)) = r\alpha\), so \(\G_i (\rho_r)\) is {\em left} multiplication by $r$ on \(\Hom{R}(\R R,\lRdi)\). This gives us a commutative diagram of \rightmods\ as in Fig.~\ref{figure-brauer}.
\setlength{\unitlength}{1in}
\begin{figure}
\begin{center}
\begin{picture}(4.0,1.0)(0,0)
\put(0,0){\(\G_1(\R R)=\Hom{R}(\R R,\lRda)\)}
\put(0,.85){\(\G_1(\R R)=\Hom{R}(\R R,\lRda)\)}
\put(2.2,0){\(\G_2(\R R)=\Hom{R}(\R R,\lRdb)\)}
\put(2.2,.85){\(\G_2(\R R)=\Hom{R}(\R R,\lRdb)\)}
\put(1.8,.05){\vector(1,0){.3}}
\put(1.9,.05){$\sim$}
\put(1.8,.9){\vector(1,0){.3}}
\put(1.9,.9){$\sim$}
\put(.6,.75){\vector(0,-1){.5}}
\put(.7,.45){\(\G_1(\rho_r) = r \cdot\)}
\put(2.8,.75){\vector(0,-1){.5}}
\put(2.9,.45){\(\G_2(\rho_r) = r \cdot\)}
\end{picture}
\end{center}
\caption{The equivalence \(\G_1 \simeq \G_2\) applied to \(\rho_r: \R R \rightarrow \R R\).}\label{figure-brauer}
\end{figure}
However, \(\Hom{R}(\R R,\lRdi) \simeq \rRdi\) as \rightmods\ under the isomorphism \(\alpha \mapsto \alpha(1) \), so the isomorphism on the top and bottom rows is \(\rRda \simeq \rRdb\). The commutativity of the diagram shows that this isomorphism respects the left $R$-action as well, so we have \(\biRda \simeq \biRdb\) as bimodules, as desired.
\qed
To apply this theorem, let \sig\ be any automorphism of $R$ and let $M_R$ be a \rightmod. We define the twisted \rightmod\ $M_{R^\sigma}$ to be the same abelian group as $M$ with the $R$-action defined by
\[
m*r := m\sigma(r) \all r \in R, m \in M).
\]
(Thanks to Mark Davis for suggesting this
definition.) Now let $\R X$ be a \leftmod\ with $k$-dual \(\rXd := \Hom{k} (X,k)\). Let $(X^*)_R$ denote the $R$-dual $\Hom{R}(\R X,\R R),$ the isomorphism type of which is, of course, independent of $k$.
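One sanity check on the definition (a standard observation, not needed below): if \sig\ is an {\em inner} automorphism, say \(\sigma(r) = uru\inv\) for a unit \(u \in U(R)\), then twisting does not change the isomorphism type of $M_R$, since \(m \mapsto mu\) is a \rightmod\ isomorphism \(M_{R^\sigma} \isommap M_R\):
\[
(m * r)u = m\sigma(r)u = (muru\inv)u = (mu)r \all r \in R, m \in M).
\]
This is consistent with the fact that the \nakaut\ is only well-defined up to inner automorphisms.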
\begin{theorem}\label{theorem-twistmodule} Let $R$ be a Frobenius $k$-algebra with Nakayama automorphism \sig. Then there is a natural \rightmod\ isomorphism \(\rXd \isom (X^*)_{R^\sigma}\).
\end{theorem}
\textit{Proof}.
We have an isomorphism \(\R R \simeq \lRd\), say given by \(1 \mapsto \lambda\).
Then \(\forall r \in R\),
\[
(\lambda r)(s) = \lambda(rs) = \lambda(s \sigma(r)) = (\sigma(r)\lambda)(s)\all s \in R),
\]
so \(\lambda r = \sigma(r) \lambda\) in \Rd. Now by Brauer's Theorem, we have a natural isomorphism \(\rXd \simeq \Hom{R}(\R X, \lRd) \) of \rightmods.
The isomorphism \(\R R \simeq \lRd\) of \leftmods\ then gives us an abelian group isomorphism
\(
\Hom{R}(\R X, \R R) \simeq \Hom{R}(\R X, \lRd),
\)
which we denote by \(\alpha \mapsto \widehat{\alpha}\). Then \(\widehat{\alpha}\) is given by
\begin{equation}\label{alphahat}
\widehat{\alpha}(x) = (\alpha(x))\lambda \in \Rd.
\end{equation}
We claim that although `` $\widehat{}$ '' is not in general an isomorphism of \rightmods, it satisfies \(\widehat{\alpha r} = \widehat{\alpha}\sigma\inv(r)\). The theorem then follows by identifying \Xd\ with \(\Hom{R}(\R X, \lRd)\) and taking \(f: \Xd \rightarrow \Hom{R}(\R X, \R R)\) to be the inverse of `` $\widehat{}$ ''.
To prove the claim, let \(x \in X, r \in R\). Then in \Rd, we have
\begin{eqnarray*}
\widehat{\alpha r}(x) & = & ((\alpha r) (x)) \lambda \mbox{ by Eq. \ref{alphahat}} \\
& = & (\alpha(x) r) \lambda \mbox{ by the $R$-action on } \Hom{R}(\R X, \R R) \\
& = & \alpha (x) (r \lambda) \mbox{ by the associativity of the $R$-action on } \lRd \\
& = & \alpha (x) (\lambda \sigma\inv(r)) \mbox{ as shown above} \\
& = & (\alpha (x) \lambda) \sigma\inv(r) \mbox{ by associativity again} \\
& = & (\widehat{\alpha}(x)) \sigma\inv(r) \mbox{ by Eq. \ref{alphahat}} \\
& = & (\widehat{\alpha} \sigma\inv(r))(x) \mbox{ by the $R$-action on }\Hom{R}(\R X, \lRd).
\end{eqnarray*}
So
\(
\widehat{\alpha r} = \widehat{\alpha}\sigma\inv(r),
\)
proving our claim and the theorem.
\qed
\begin{corollary}\label{corollary-funind}
If $R$ is a Frobenius $k$-algebra, then the $k$-dual functor \(\F:=\Hom{k}(-,k): \lcat \rightarrow \rcat\) is independent of $k$.
\end{corollary}
\textit{Proof}.
Apply Theorems~\ref{theorem-nakind} and~\ref{theorem-twistmodule}.
\qed
\begin{corollary}\label{corollary-biind}
If $R$ is a Frobenius $k$-algebra, then the bimodule isomorphism type of \biRd\ is independent of $k$.
\end{corollary}
\textit{Proof}.
Apply Corollary~\ref{corollary-funind} and Theorem~\ref{theorem-bifun}.
\qed
Corollary~\ref{corollary-biind} suggests that there should be a ring-theoretic characterization of \biRd\ as a bimodule analogous to the fact that \(\rRd \isom E((\Rrad)_R)\) as \rightmods. We do not yet have such a characterization.
\begin{corollary}\label{corollary-symind}
If $R$ is any finite-dimensional $k$-algebra, then the property of $R$ being a symmetric $k$-algebra is independent of $k$.
\end{corollary}
\textit{Proof}.
We have seen that the question of whether $R$ is a Frobenius $k$-algebra is independent of $k$. Now apply Corollary~\ref{corollary-biind}.
\qed
\section {Ring-theoretic characterization of symmetric algebras}
We have seen in Corollary~\ref{corollary-symind} that the property of a $k$-algebra being symmetric is independent of $k$, suggesting that it should be equivalent to a ring-theoretic property. In the local case, we saw in Corollary~\ref{corollary-localsym} that an algebra is symmetric iff its left socle is not contained in the commutators. In the general case, we have ring-theoretic conditions for symmetry if we assume that the ground field $k$ is infinite.
We continue to assume that $R$ is a finite-dimensional algebra over a field $k$. As before, let \(J = \mbox{rad } R\) be the Jacobson radical and $\Rbar = R/J$. The following theorem is similar to Theorem 16.14 in \lomar, which states that $R$ is Frobenius iff \(\rsoc \isom \Rbar_R\) and \(\lsoc \isom \R \Rbar\). We use $S$ to denote $\lsoc$.
\begin{theorem}\label{theorem-symring}
Suppose $k$ is infinite. Then $R$ is a symmetric $k$-algebra iff \(\R S_R \simeq \R \Rbar_R\) as $(R,R)$-bimodules and $[R,R]$ contains no nonzero left ideals of $R$.
\end{theorem}
\textit{Proof}.
If $R$ is symmetric, then we have a bimodule isomorphism \(\hi: \R R_R \isommap \biRd\). Considering $\hi$ as an isomorphism of \leftmods\ and restricting it to $S$, we have an isomorphism \(\hi: S \isommap \mbox{soc}(\lRd)\). Note, however, that $\hi$ still respects the right action of $R$ on $S$ and $\mbox{soc}(\lRd)$. By Example 3.41 in \lomar, we have \(\mbox{soc}(\lRd) = \{f \in \Rd: f(J) = 0\}\), which is isomorphic as an $(R,R)$-bimodule to \(\Hom{k}(\Rbar, k)\). But since \Rbar\ is a semisimple $k$-algebra, hence symmetric, \(\Hom{k}(\Rbar, k) \isom \Rbar\) as $(\Rbar, \Rbar)$-bimodules and hence also as $(R,R)$-bimodules. Composing all these, we have an $(R,R)$-bimodule isomorphism $S \simeq \Rbar$.
The condition on $[R,R]$ follows from Theorem~\ref{symdef}, which gives us a $k$-linear functional \(\lambda: R \rightarrow k\) such that \([R,R] \subseteq \ker \lambda\), yet $\ker \lambda$ contains no nonzero left ideals of $R$.
Conversely, suppose \(\hi:\R S_R \isommap \R \Rbar_R\) as $(R,R)$-bimodules and $[R,R]$ contains no nonzero left ideals of $R$. We consider \(\hi ([R,R] \cap S) \subset \Rbar\), which contains no nonzero left ideals of \Rbar\ since $[R,R]$ contains no nonzero left ideals of $R$. Moreover we claim that \([\Rbar, \Rbar] \subseteq \hi ([R,R] \cap S) \). Indeed, let \(\bar x, \bar y \in \Rbar\) (where $x,y \in R$), and suppose that \(\bar{y} = \hi(b)\) for some \(b \in S\). Then using the fact that $\hi$ is a bimodule isomorphism, we have
\[
\bar x \bar y - \bar y \bar x = \bar x \hi(b) - \hi(b) \bar x = \hi (xb - bx) \in \hi([R,S]) \subseteq \hi ([R,R] \cap S).
\]
So \(\hi ([R,R] \cap S) \) is a $k$-subspace of \Rbar\ containing no nonzero left ideals and containing the commutators in \Rbar. By the same argument used in the proof of Theorem~\ref{theorem-nakind}, Part II, we can enlarge \(\hi ([R,R] \cap S) \) to a $k$-hyperplane $U' \subset \Rbar$ containing no nonzero left ideals. (Here we use the fact that $k$ is infinite.) Then we can pull back to \(H':= \hi^{-1} (U')\), a $k$-hyperplane of $S$ containing \([R,R] \cap S\) but containing no nonzero left ideals of $R$. Then, again by the same argument used in Theorem~\ref{theorem-nakind} (using $[R,R]$ in place of the $C$ that was used there), we can extend $H'$ to $H$, a $k$-hyperplane of $R$ containing $[R,R]$ but containing no nonzero left ideals. Then by Theorem~\ref{symdef}, $R$ is a symmetric algebra.
\qed
I do not know if Theorem~\ref{theorem-symring} holds without the assumption that $k$ is infinite. The proof of the forward implication did not use this assumption, so that half certainly remains true. Conversely, an old result by Nakayama (\cite{nak4}) states that for a finite-dimensional algebra $R$ over a field, \(\lsoc \isom \R\Rbar\) iff \(\rsoc \isom \Rbar_R\). So if \(\R S_R \simeq \R \Rbar_R\) as $(R,R)$-bimodules, then $R$ is certainly Frobenius, but it does not seem obvious whether $R$ must be symmetric.
\small
\noindent {\bf Acknowledgements} I would like to thank T.Y.\ Lam, Greg Marks, and Florence Newberger for valuable conversations about this work.
\normalsize
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,883 |
Q: Programmatically get the first row from a GridView after changing pages Can anyone tell me how I would get the data from a row in a GridView after a page changes? I am currently trying the following:
protected void MissionariesGrid_PageIndexChanged(object sender, EventArgs e)
{
string missionaryID = MissionariesGrid.Rows[0].Cells[0].Text;
TestLabel.Text = missionaryID;
}
The problem is, I do not get the value for the first row in the new page. I am getting the value for the first row in the old page.
A: Your logic for that should be in the RowCreated or RowDataBound event handler instead. The PageIndexChanged event happens too soon, I think — the grid has not yet been re-bound with the new page's data at that point, so Rows[0] still belongs to the old page.
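For example, something like this (an untested sketch — it assumes you bind the grid manually in a BindGrid() helper on your page; adjust the names to your setup):

protected void MissionariesGrid_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
    // Move to the requested page and re-bind so the grid holds the new rows.
    MissionariesGrid.PageIndex = e.NewPageIndex;
    BindGrid();
}

protected void MissionariesGrid_DataBound(object sender, EventArgs e)
{
    // DataBound fires after binding completes, so Rows[0] is the new page's first row.
    if (MissionariesGrid.Rows.Count > 0)
    {
        TestLabel.Text = MissionariesGrid.Rows[0].Cells[0].Text;
    }
}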
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,937 |
\section{Introduction}
\label{sec:intro}
The goal of keyword spotting is to detect a relatively small set of
predefined keywords in a stream of user utterances, usually in the
context of an intelligent agent on
a mobile phone or a consumer ``smart home'' device. Such a
capability complements full automatic speech recognition, which is
typically performed in the cloud. Because cloud-based interpretation
of speech input requires transferring audio recordings from the user's
device, there are significant privacy implications. Therefore,
on-device keyword spotting has two main uses:\ First, recognition of
common commands such as ``on'' and ``off'' as well as other frequent
words such as ``yes'' and ``no'' can be accomplished directly on the
user's device, thereby sidestepping any potential privacy
concerns. Second, keyword spotting can be used to detect ``command
triggers'' such as ``hey Siri'', which provide explicit cues for
interactions directed at the device.
It is additionally desirable that such models have a small footprint
(for example, measured in the number of model parameters) so
they can be deployed on low power and performance-limited
devices.
In recent years, neural networks have been shown to provide effective
solutions to the small-footprint keyword spotting problem. Research
typically focuses on a tradeoff between achieving high detection
accuracy and having a small footprint. Compact models are usually
variants derived from a full model that sacrifice accuracy for
a smaller model footprint, often via some form of sparsification.
In this work, we focus on convolutional neural networks (CNNs), a
class of models that has been successfully applied to small-footprint
keyword spotting in recent years. In particular, we explore the use of
residual learning techniques and dilated convolutions. On the
recently-released Google Speech Commands Dataset, which provides a
common benchmark for keyword spotting, our full residual network model
outperforms Google's previously-best CNN~\cite{keywordcnn}
(95.8\% vs.\ 91.7\% in accuracy). We can tune the depth
and width of our networks to target a desired tradeoff between model
footprint and accuracy:\ one variant is able to achieve accuracy only
slightly below Google's best CNN with a 50$\times$ reduction in model
parameters and an 18$\times$ reduction in the number of multiplies in
a feedforward inference pass. This model far outperforms previous compact CNN
variants.
\section{Related Work}
\label{sec:rel_work}
Deep residual networks (ResNets)~\cite{resnet} represent a groundbreaking
advance in deep learning that has allowed researchers to successfully
train deeper networks. They were first applied to image
recognition, where they contributed to a significant
jump in state-of-the-art performance~\cite{resnet}. ResNets have subsequently been
applied to speaker identification~\cite{resnetsv} and automatic speech
recognition~\cite{resnetasr, resnetasr2}. This paper explores the
application of deep residual learning techniques to the keyword spotting task.
The application of neural networks to keyword spotting, of
course, is not new. Chen et al.~\cite{keyworddnn} applied a standard
multi-layer perceptron to achieve significant improvements over
previous HMM-based approaches. Sainath and Parada~\cite{keywordcnn} built on
that work and achieved better results using convolutional neural
networks (CNNs). They specifically cited reduced model footprints (for
low-power applications) as a major motivation in moving to CNNs.
Despite more recent work in applying recurrent neural networks to the
keyword spotting task~\cite{keywordrnn,SunMing_etal_2017}, we
focus on the family of CNN models for several reasons. CNNs
today remain the standard baseline for small-footprint keyword
spotting---they have a straightforward architecture, are relatively
easy to tune, and have implementations in multiple deep learning
frameworks (at least TensorFlow~\cite{dataset} and
PyTorch~\cite{honk}). We are not aware of any publicly-available
implementations of recurrent architectures to compare against. We
believe that residual learning techniques are a hitherto unexplored
direction for the keyword spotting task, and that our use of dilated
convolutions achieves the same goal that proponents of recurrent
architectures tout:\ the ability to capture long(er)-range
dependencies.
\section{Model Implementation}
\label{sec:impl}
This section describes our base model and its variants. All code
necessary to replicate our experiments has been made open source in
our GitHub repository.\footnote{https://github.com/castorini/honk/}
\subsection{Feature Extraction and Input Preprocessing}
For feature extraction, we first apply a band-pass filter with a
20~Hz--4~kHz passband to the input audio to reduce noise.
Forty-dimensional Mel-frequency cepstral coefficient (MFCC) frames are
then constructed using a 30~ms window and a 10~ms frame shift. All
frames from a one-second interval are stacked to form the
two-dimensional input to our models.
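As a concrete illustration of the framing arithmetic, the NumPy sketch below slices a one-second waveform into 30~ms frames with a 10~ms shift; the 16~kHz sampling rate is an assumption (it matches the dataset we use), and the MFCC computation itself is omitted:

```python
import numpy as np

def frame_signal(audio, sample_rate=16000, window_ms=30, shift_ms=10):
    """Slice a 1-D waveform into overlapping frames (30 ms window, 10 ms shift)."""
    win = int(sample_rate * window_ms / 1000)   # 480 samples at 16 kHz
    hop = int(sample_rate * shift_ms / 1000)    # 160 samples at 16 kHz
    n_frames = 1 + (len(audio) - win) // hop
    idx = np.arange(win)[None, :] + hop * np.arange(n_frames)[:, None]
    return audio[idx]                           # shape: (n_frames, win)

one_second = np.zeros(16000)
frames = frame_signal(one_second)               # 98 frames of 480 samples
```

Computing 40 MFCCs per frame then yields a $98 \times 40$ two-dimensional input.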
\subsection{Model Architecture}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{full_figure_res.pdf}
\vspace{-0.5cm}
\caption{Our full architecture, with a magnified residual block.}
\label{fig:full_arch_res}
\end{figure}
Our architecture is similar to that of He et al.~\cite{resnet}, who postulated that it may be
easier to learn residuals than to learn the original mapping for deep
convolutional neural networks. They found that additional
layers in deep networks cannot be merely ``tacked on'' to shallower nets.
Specifically, He et al.~proposed to let stacked layers fit the residual
mapping $F(\mathbf x) = H(\mathbf x) - \mathbf x$ rather than the desired
mapping $H(\mathbf x)$ directly, since it is empirically difficult for a
network with unnecessary depth to learn an identity mapping through its
extra layers; driving the residual $F$ toward zero is easier. In residual
networks (ResNets), this is realized via shortcut connections between
layers (see Figure \ref{fig:full_arch_res}), where an
input $\mathbf x$ to layer $i$ is added to the output of some downstream layer $i + k$,
enforcing the residual definition $H(\mathbf x) = F(\mathbf x) + \mathbf x$.
Following standard ResNet architectures, our residual block begins with a
bias-free convolution layer with weights $\mathbf{W} \in \mathbb{R}^{(m \times
r) \times n}$, where $m$ and $r$ are the width and height, respectively, and
$n$ the number of feature maps. After the convolution layer, there are ReLU
activation units and---instead of dropout---a batch normalization~\cite{bn}
layer. In addition to using residual blocks, we also use a $(d_w, d_h)$
convolution dilation~\cite{dilated_conv} to increase the receptive field of the
network, which allows us to consider the one-second input in its entirety using
a smaller number of layers. To expand our input for the residual blocks, which
requires inputs and outputs of equal size throughout, our entire architecture
starts with a convolution layer with weights $\mathbf{W} \in \mathbb{R}^{(m
\times r) \times n}$. A separate non-residual convolution layer and batch
normalization layer are further appended to the chain of residual blocks, as
shown in Figure~\ref{fig:full_arch_res} and Table~\ref{table:full_arch}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{dilated_conv.pdf}
\vspace{-0.75cm}
\caption{Exponentially increasing dilated convolutions; in this case, $k =
1$.}
\label{fig:dilated_conv}
\vspace{0.25cm}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{ r | c c c c c | c c}
\hline
type & $m$ & $r$ & $n$ & $d_w$ & $d_h$ & Par. & Mult.\\
\hline
conv & 3 & 3 & 45 & - & - & 405 & 1.52M\\
res $\times$ 6 & 3 & 3 & 45 & $2^{\lfloor\frac{i}{3}\rfloor}$ &
$2^{\lfloor\frac{i}{3}\rfloor}$ & 219K & 824M\\
conv & 3 & 3 & 45 & 16 & 16 & 18.2K & 68.6M\\
bn & - & - & 45 & - & - & - & 169K\\
avg-pool & - & - & 45 & - & - & - & 45\\
softmax & - & - & 12 & - & - & 540 & 540\\
\hline
\hline
Total & - & - & - & - & - & 238K & 894M\\
\hline
\end{tabular}
\end{center}
\vspace{-0.45cm}
\caption{Parameters used for \texttt{res15}, along with the number of
parameters and multiplies.}
\label{table:full_arch}
\vspace{0.1cm}
\end{table}
Our base model, which we refer to as \texttt{res15},
comprises six such residual blocks and $n = 45$ feature maps (see Figure
\ref{fig:full_arch_res}).
For dilation, as illustrated in Figure \ref{fig:dilated_conv}, an exponential sizing
schedule~\cite{dilated_conv} is used:\ at layer $i$, the dilation is $d_w = d_h
= 2^{\lfloor\frac{i}{3}\rfloor}$, resulting in a total receptive field of $125
\times 125$. As is standard in ResNet architectures, all output is zero-padded
at each layer and finally
average-pooled and fed into a fully-connected softmax layer.
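The receptive field can be verified from the dilation schedule in Table~\ref{table:full_arch}: each stacked 3$\times$3 convolution with dilation $d$ widens the receptive field by $2d$. One consistent reading of the schedule counts the initial convolution, the twelve convolutions inside the six residual blocks, and the final dilation-16 convolution:

```python
# Dilations in res15: the initial conv (dilation 1), twelve convolutions
# in the six residual blocks with d = 2**(i // 3), and the final conv
# with dilation 16 (see Table 1).
dilations = [1] + [2 ** (i // 3) for i in range(12)] + [16]

# A stack of 3x3 convolutions: each layer with dilation d contributes
# (3 - 1) * d to the receptive field.
receptive_field = 1 + sum(2 * d for d in dilations)
print(receptive_field)  # 125
```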
Following previous work, we measure the ``footprint'' of a model in terms of
two quantities:\ the number of parameters in the model and the number
of multiplies that are required for a full feedforward inference pass.
Our architecture uses roughly 238K parameters and 894M multiplies
(see Table~\ref{table:full_arch} for the exact breakdown).
\begin{table}
\begin{center}
\begin{tabular}{ r | c c c | c c}
\hline
type & $m$ & $r$ & $n$ & Par. & Mult.\\
\hline
conv & 3 & 3 & 19 & 171 & 643K\\
avg-pool & 4 & 3 & 19 & - & 6.18K\\
res $\times$ 3 & 3 & 3 & 19 & 19.5K & 5.0M\\
avg-pool & - & - & 19 & - & 19\\
softmax & - & - & 12 & 228 & 228\\
\hline
\hline
Total & - & - & - & 19.9K & 5.65M\\
\hline
\end{tabular}
\end{center}
\vspace{-0.45cm}
\caption{Parameters used for \texttt{res8-narrow}.}
\label{table:compact_arch}
\vspace{0.1cm}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ r | c c c | c c }
\hline
type & $m$ & $r$ & $n$ & Par. & Mult.\\
\hline
conv & 3 & 3 & 45 & 405 & 1.80M\\
avg-pool & 2 & 2 & 45 & - & 45K\\
res $\times$ 12 & 3 & 3 & 45 & 437K & 378M\\
avg-pool & - & - & 45 & - & 45\\
softmax & - & - & 12 & 540 & 540\\
\hline
\hline
Total & - & - & - & 438K & 380M\\
\hline
\end{tabular}
\end{center}
\vspace{-0.45cm}
\caption{Parameters used for \texttt{res26}.}
\label{table:deep_arch}
\vspace{0.1cm}
\end{table}
To derive a compact small-footprint model, one simple approach is to
reduce the depth of the network. We tried cutting the number of
residual blocks in half to three, yielding a model we call \texttt{res8}.
Because the footprint of \texttt{res15} arises from its
width as well as its depth, the compact model adds a $4 \times 3$ average-pooling layer
after the first convolutional layer, reducing the size of the time and
frequency dimensions by a factor of four and three,
respectively. Since the average pooling layer sufficiently reduces the
input dimension, we did not use dilated convolutions in this variant.
In the opposite direction, we explored the effects of deeper models.
We constructed a model with double the number of residual blocks (12)
with 26 layers, which we refer to as \texttt{res26}. To make
training tractable, we prepend a $2\times 2$ average-pooling layer to
the chain of residual blocks. Dilation is also not used, since the
receptive field of 25 3$\times$3 convolution filters is large enough
to cover our input size.
In addition to depth, we also varied model width. All
models described above used $n = 45$ feature maps, but we also considered
variants with $n = 19$ feature maps, denoted by \texttt{-narrow}
appended to the base model's name. A detailed breakdown of the
footprint of \texttt{res8-narrow}, our best compact model, is shown in
Table~\ref{table:compact_arch}; the same analysis for our deepest and
widest model, \texttt{res26}, is shown in Table~\ref{table:deep_arch}.
\section{Evaluation}
\subsection{Experimental Setup}
We evaluated our models using Google's Speech Commands
Dataset~\cite{dataset}, which was released in August 2017 under a
Creative Commons license.\footnote{\url{https://research.googleblog.com/2017/08/launching-speech-commands-dataset.html}}
The dataset contains 65,000 one-second long
utterances of 30 short words by thousands of different people, as well
as background noise samples such as pink noise, white noise, and human-made
sounds. The blog post announcing the data release also references
Google's TensorFlow implementation of Sainath and Parada's models,
which provide the basis of our comparisons.
Following Google's implementation, our task is to discriminate among 12
classes:\ ``yes,'' ``no,'' ``up,'' ``down,'' ``left,'' ``right,''
``on,'' ``off,'' ``stop,'' ``go,'' unknown, or silence. Our
experiments followed exactly the same procedure as the TensorFlow reference.
The Speech Commands Dataset was split into training,
validation, and test sets, with 80\% training, 10\% validation,
and 10\% test. This results in roughly 22,000 examples for
training and 2,700 each for validation and testing. For consistency
across runs, the SHA1-hashed name of the audio file from the dataset
determines the split.
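A simplified sketch of such hash-based assignment follows; it is a stand-in for the routine in the TensorFlow reference, which additionally normalizes filenames so that recordings from the same speaker land in the same split:

```python
import hashlib

def assign_split(filename, val_pct=10, test_pct=10):
    """Deterministically assign a file to train/validation/test based on
    a hash of its name, so the split is stable across runs and machines."""
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket < val_pct:
        return "validation"
    if bucket < val_pct + test_pct:
        return "testing"
    return "training"

assert assign_split("bed/0a7c2a8d_nohash_0.wav") == \
       assign_split("bed/0a7c2a8d_nohash_0.wav")  # stable across calls
```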
To generate training data, we followed Google's preprocessing
procedure by adding background noise to each sample with a probability
of $0.8$ at every epoch, where the noise is chosen randomly from the
background noises provided in the dataset. Our
implementation also performs a random time-shift of $Y$ milliseconds
before transforming the audio into MFCCs, where
$Y\sim\textsc{Uniform}[-100, 100]$. In order to accelerate the
training process, all preprocessed inputs are cached for reuse across
different training epochs. At each epoch, 30\% of the cache is evicted.
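The augmentation step can be sketched as follows in NumPy; the fixed noise level is illustrative (the reference implementation draws a random noise volume per sample):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(audio, noise_clips, sample_rate=16000,
            noise_prob=0.8, max_shift_ms=100):
    """Random time-shift in [-100, 100] ms, then background noise mixed
    in with probability 0.8; the 0.1 noise scale is illustrative."""
    shift_ms = rng.integers(-max_shift_ms, max_shift_ms + 1)
    shift = int(sample_rate * shift_ms / 1000)
    out = np.roll(audio, shift)
    # zero the wrapped-around region so the shift behaves like padding
    if shift > 0:
        out[:shift] = 0
    elif shift < 0:
        out[shift:] = 0
    if rng.random() < noise_prob:
        noise = noise_clips[rng.integers(len(noise_clips))]
        out = out + 0.1 * noise[:len(out)]
    return out

sample = np.zeros(16000)
noises = [rng.standard_normal(16000)]
augmented = augment(sample, noises)
```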
Accuracy is our main metric of quality, which is simply measured as
the fraction of classification decisions that are correct. For each instance,
the model outputs its most likely prediction, and is not given the option of ``don't know''.
We also plot receiver operating characteristic (ROC) curves, where the $x$
and $y$ axes show false alarm rate (FAR) and false reject
rate (FRR), respectively. For a given sensitivity threshold---defined
as the minimum probability at which a class is considered
positive during evaluation---FAR and FRR represent the probabilities
of obtaining false positives and false negatives, respectively. By
sweeping the sensitivity interval $[0.0, 1.0]$, curves for each of the
keywords are computed and then averaged vertically to produce the
overall curve for a particular model. Curves with less area under the
curve (AUC) are better.
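For a single keyword with binary labels, this threshold sweep can be sketched as follows (the scores and labels are toy illustrations):

```python
import numpy as np

def far_frr_curve(scores, labels, thresholds):
    """Sweep a sensitivity threshold over per-class scores.
    FAR: fraction of negatives scored at or above the threshold;
    FRR: fraction of positives scored below it."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    fars, frrs = [], []
    for t in thresholds:
        accepted = scores >= t
        fars.append(np.mean(accepted[~labels]))   # false alarm rate
        frrs.append(np.mean(~accepted[labels]))   # false reject rate
    return np.array(fars), np.array(frrs)

scores = [0.9, 0.8, 0.3, 0.2]   # toy posterior scores
labels = [1, 1, 0, 0]           # 1 = keyword present
fars, frrs = far_frr_curve(scores, labels, np.linspace(0.0, 1.0, 11))
```

At threshold 0 every instance is accepted (FAR $=1$, FRR $=0$); at threshold 1 every instance is rejected, which traces out the curve as the threshold sweeps the interval.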
\subsection{Model Training}
Mirroring the ResNet paper~\cite{resnet}, we used stochastic gradient
descent with a momentum of 0.9 and a starting learning rate of 0.1, which
is multiplied by 0.1 on plateaus. We also experimented with Nesterov momentum,
but we found slightly decreased learning performance in terms of cross entropy
loss and test accuracy. We used a mini-batch size of 64 and $L_2$ weight decay
of $10^{-5}$. Our models were trained for a total of 26 epochs, resulting in
roughly 9,000 training steps.
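The plateau schedule can be sketched in a few lines of plain Python; the patience of three epochs here is illustrative (PyTorch provides this behavior natively via \texttt{torch.optim.lr\_scheduler.ReduceLROnPlateau}):

```python
def plateau_schedule(val_losses, lr=0.1, factor=0.1, patience=3):
    """Multiply the learning rate by `factor` whenever the validation
    loss has not improved for `patience` consecutive epochs."""
    best, wait = float("inf"), 0
    history = []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr *= factor
                wait = 0
        history.append(lr)
    return history

# loss improves for three epochs, then plateaus -> lr drops to 0.01
lrs = plateau_schedule([1.0, 0.8, 0.7, 0.7, 0.7, 0.7])
```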
\subsection{Results}
\begin{table}[t]
\begin{center}
\begin{tabular}{ l c | c c}
\hline
Model & Test accuracy & Par. & Mult.\\
\hline
\texttt{trad-fpool3} & 90.5\% $\pm$ 0.297 & 1.37M & 125M\\
\texttt{tpool2} & 91.7\% $\pm$ 0.344 & 1.09M & 103M\\
\texttt{one-stride1} & 77.9\% $\pm$ 0.715 & 954K & 5.76M\\
\hline
\texttt{res15} & 95.8\% $\pm$ 0.484 & 238K & 894M\\
\texttt{res15-narrow} & 94.0\% $\pm$ 0.516 & 42.6K & 160M\\
\hline
\texttt{res26} & 95.2\% $\pm$ 0.184 & 438K & 380M\\
\texttt{res26-narrow} & 93.3\% $\pm$ 0.377 & 78.4K & 68.5M\\
\hline
\texttt{res8} & 94.1\% $\pm$ 0.351 & 110K & 30M\\
\texttt{res8-narrow} & 90.1\% $\pm$ 0.976 & 19.9K & 5.65M\\
\hline
\end{tabular}
\end{center}
\vspace{-0.45cm}
\caption{Test accuracy of each model with 95\% confidence
intervals (across five trials), as well as footprint size in
terms of number of parameters and multiplies.}
\label{table:results}
\vspace{-0.2cm}
\end{table}
Since our own networks are implemented in PyTorch, we used our PyTorch
reimplementations of Sainath and Parada's models as a point of
comparison. We have previously confirmed that our PyTorch
implementation achieves the same accuracy as the original TensorFlow
reference~\cite{honk}. Our ResNet models are compared against three CNN variants
proposed by Sainath and Parada:\ \texttt{trad-fpool3}, which is their
base model; \texttt{tpool2}, the most accurate variant of those they
explored; and \texttt{one-stride1}, their best compact variant. The
accuracies of these models are shown in
Table~\ref{table:results}, which also shows the 95\% confidence intervals
from five different optimization trials with different random
seeds. The table provides the number of model parameters as well as
the number of multiplies in an inference pass. We see that
\texttt{tpool2} is indeed the best performing model, slightly better
than \texttt{trad-fpool3}. The \texttt{one-stride1} model
substantially reduces the model footprint, but this comes at a steep price
in terms of accuracy.
The performance of our ResNet variants is also shown in
Table~\ref{table:results}. Our base \texttt{res15} model achieves
significantly better accuracy than any of the previous Google CNNs
(the confidence intervals do not overlap). However, this model requires
fewer parameters but more multiplies. The ``narrow'' variant of
\texttt{res15} with fewer feature maps sacrifices accuracy, but
remains significantly better than the Google CNNs (although it still
uses $\sim$30\% more multiplies).
Looking at our compact \texttt{res8} architecture, we see that the ``wide''
version strictly dominates all the Google models---it achieves
significantly better accuracy with a smaller footprint. The ``narrow''
variant reduces the footprint even further:\ it gives up a small amount
of accuracy relative to \texttt{tpool2}, but requires 50$\times$ fewer
model parameters and 18$\times$ fewer multiplies. Both models are
far superior to Google's compact variant, \texttt{one-stride1}.
\begin{figure}
\vspace{-0.4cm}
\centering
\includegraphics[width=0.49\textwidth]{roc.pdf}
\vspace{-0.7cm}
\caption{ROC curves for different models.}
\label{fig:roc}
\vspace{-0.2cm}
\end{figure}
Turning our attention to the deeper variants, we see that
\texttt{res26} has lower accuracy than \texttt{res15}, suggesting that
we have overstepped the network depth for which we can properly
optimize model parameters. Comparing the narrow vs.\ wide variants
overall, it appears that width (the number of feature maps)
has a larger impact on accuracy than depth.
We plot the ROC curves of selected models in Figure~\ref{fig:roc},
comparing the two competitive baselines to \texttt{res8},
\texttt{res8-narrow}, and \texttt{res15}. The remaining models were
less interesting and thus omitted for clarity. These curves are
consistent with the accuracy results presented in
Table~\ref{table:results}, and we see that \texttt{res15} dominates
the other models in performance at all operating points.
\section{Conclusions and Future Work}
This paper describes the application of deep residual learning
and dilated convolutions to the keyword spotting problem. Our work is
enabled by the recent release of Google's Speech Commands
Dataset, which provides a common benchmark for this task. Previously,
related work was mostly incomparable because papers relied on
private datasets. Our work establishes new,
state-of-the-art, open-source reference models on this dataset that we encourage others to
build on.
For future work, we plan to compare our CNN-based
approaches with an emerging family of models based on recurrent
architectures. We have not undertaken such a study because there do
not appear to be publicly-available reference implementations of such
models, and the lack of a common benchmark makes comparisons
difficult. The latter problem has now been addressed by the release of
the Speech Commands Dataset, and it would be interesting to see how recurrent neural
networks stack up against our approach.
Furthermore, there are a number of exceptions to thestrict relation between luminosity class and absolute magnitude. Weanalyse similarly the system of standards defined by Garrison & Gray(1994) separating low and high rotational velocity standards. We findsimilar effects as in the original MK system. We propose a revision ofthe MK standards, to eliminate the most deviant cases. Based on datafrom the ESA Hipparcos astrometry satellite The Tokyo PMC catalog 90-93: Catalog of positions of 6649 stars observed in 1990 through 1993 with Tokyo photoelectric meridian circleThe sixth annual catalog of the Tokyo Photoelectric Meridian Circle(PMC) is presented for 6649 stars which were observed at least two timesin January 1990 through March 1993. The mean positions of the starsobserved are given in the catalog at the corresponding mean epochs ofobservations of individual stars. The coordinates of the catalog arebased on the FK5 system, and referred to the equinox and equator ofJ2000.0. The mean local deviations of the observed positions from theFK5 catalog positions are constructed for the basic FK5 stars to comparewith those of the Tokyo PMC Catalog 89 and preliminary Hipparcos resultsof H30. The Angular Momentum of Main Sequence Stars and Its Relation to Stellar ActivityRotational velocities are reported for intermediate-mass main sequencestars it the field. The measurements are based on new, high S\/N CCDspectra from the Coud\u00e9 Feed Telescope of the Kitt Peak NationalObservatory. We analyze these rotation rates for a dependence on bothmass and age. We compare the average rotation speeds of the field starswith mean velocities for young stars in Orion, the Alpha Persei cluster,the Pleiades, and the Hyades. The average rotation speeds of stars moremassive than $\\sim1.6$ \\msun\\experience little or no change during theevolutionary lifetimes of these stars on the zero age main sequence orwithin the main sequence band. 
Less massive stars in the range betwee n1.6\\msun\\ and 1.3\\msun\\ also show little decline in mean rotation ratewhile they are on the main sequence, and at most a factor of 2 decreasein velocity as they evolve off the main sequence. The {\\it e}-foldingtime for the loss of angular momentum b y the latter group of stars isat least 1--2 billion years. This inferred characteristic time scale forspindown is far longer than the established rotational braking time forsolar-type stars with masses below $\\sim1.3$ \\msun. We conclude from acomparison of the trends in rotation with trends in chromospheric andcoronal activity that the overall decline in mean rotation speed alongthe main sequence, from $\\sim2$ \\msun\\ down to $\\sim1.3$ \\msun, isimposed during the pre-main sequence phase of evolution, and that thispattern changes little thereafter while the star resides on the mainsequence. The magnetic activity implicated in the rotational spindown ofthe Sun and of similar stars during their main sequence lifetimes mus ttherefore play only a minor role in determining the rotation rates ofthe intermediate mass stars, either because a solar-like dynamo is weakor absent, or else the geometry of the magnetic field is appreciablyless effective in removing angular momentu m from these stars. (SECTION:Stars) Notes on the convection in the ATLAS9 model atmospheres.The mixing-length theory for the convection, as it is used in the ATLAS9code (Kurucz, 1993a), is summarized and discussed. We investigated theeffect of the modification called approximate overshooting'' on themodel structure of the Sun and of stars with T_eff_ included between4000K and 8500K, logg included between 2.5 and 4.5, and metallicities[M\/H]=0.0 and [M\/H]=-3.0. We found that the Kurucz solar model (SUNK94)with the overshooting'' option switched on reproduces moreobservations than that without overshooting''. 
In theHgamma_ and Hbeta_ regions no solar model is ableto reproduce the level of the true continuum deduced fromhigh-resolution observations absolutely calibrated. At 486 nm thecomputed continuum is about 6.6% higher than that inferred from theobserved spectrum. We found that the largest effect of the approximateovershooting'' on the model structure occurs for models withT_eff_>6250K and it decreases with decreasing gravity. Thedifferences in (b-y), (B-V), and (V-K) indices computed from models withthe overshooting'' option switched on and off, correspond to T_eff_differences which may amount up to 180K, 100K, 60K respectively. Thedifferences in T_eff_ from Balmer profiles may amount up to 340K andthey occur also for T_eff_<6250K down to about 5000K. The c_1_ indexyields gravity differences {DELTA}logg as a function of logg which, foreach T_eff_, grow to a maximum value. The maximum {DELTA}logg decreaseswith increasing temperatures and ranges, for solar metallicity, from 0.7dex at logg=0.5 and T_eff_=5500K to 0.2dex at logg=4.5 and T_eff_=8000K.This behaviour does not change for [M\/H]=-3.0. Comparisons with theobservations indicate that model parameters derived with differentmethods are more consistent when the overshooting'' option is switchedoff (NOVER models), except for the Sun. In particular for Procyon,T_eff_ and logg from NOVER models are closer to the parameters derivedfrom model independent methods than are T_eff_ and logg derived from theKurucz (1995) grids. However, no model is able to explain the wholeobserved spectrum of either the Sun or Procyon with a unique T_eff_,regardless of whether the overshooting'' option is switched on or off.Independently of the convection option, the largest differences inT_eff_ derived with different methods are of the order of 200K forProcyon and 150K for the Sun. 
Synthetic Color Indices of Spectrophotometric StandardsSynthetic B--V color indices in the {\\it WBVR photometric system for 11stars of 3 -- 4 mag, proposed as spectrophotometric standards, arecalculated for the mean energy distribution data from the Moscow andAlma-Ata spectrophotometric catalogs. Also, synthetic B--V color indicesin the same photometric system are obtained for 16 stars of 6 -- 7 magfrom the set of 60 spectrophotometric standards observed at theSternberg Institute Crimean Station. Both sets of spectrophotometricstandards demonstrate a good agreement between the synthetic andobserved color indices. The energy distribution of Vega is compared withthe mean energy distribution for A0 V-type stars. A pecularity of theenergy distribution of Vega in the ultraviolet range is discussed. Transformations from Theoretical Hertzsprung-Russell Diagrams to Color-Magnitude Diagrams: Effective Temperatures, B-V Colors, and Bolometric CorrectionsAbstract image available at:http:\/\/adsabs.harvard.edu\/cgi-bin\/nph-bib_query?1996ApJ...469..355F&db_key=AST The Relation between Rotational Velocities and Spectral Peculiarities among A-Type StarsAbstract image available at:http:\/\/adsabs.harvard.edu\/cgi-bin\/nph-bib_query?1995ApJS...99..135A&db_key=AST Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtittle: Radial velocities: The Wilson-Evans-Batten catalogue.We give a common version of the two catalogues of Mean Radial Velocitiesby Wilson (1963) and Evans (1978) to which we have added the catalogueof spectroscopic binary systems (Batten et al. 1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). 
For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. SANTIAGO 91, a right ascension catalogue of 3387 stars (equinox J2000).The positions in right ascension of 3387 stars belonging to the Santiago67 Catalogue, observed with the Repsold Meridian Circle at Cerro Calan,National Astronomical Observatory, during the period 1989 to 1994, aregiven. The average mean square error of a position, for the wholeCatalogue, is +\/-0.009 s. The mean epoch of the catalogue is 1991.84. Radio continuum emission from stars: a catalogue update.An updated version of my catalogue of radio stars is presented. Somestatistics and availability are discussed. Stellar effective temperatures and angular diameters determined by the infrared flux method (IRFM): Revisions using improved Kurucz LTE stellar atmospheresInfrared flux method (IRFM) determinations of stellar effectivetemperatures and angular diameters are revised using new Kurucz localthermodynamic equilibrium (LTE) line-blanketed model atmospheres, whichmore accurately predict the emergent stellar radiation flux than modelsused previously. An improved method for deriving integrated stellarfluxes is described, together with polynomial coefficients forevaluating them from V and V-K. Tables were given for making smallcorrections appropiate to changes in log g, metallicity, fluxcalibration in J, K and L and interstellar extinction, to avoid the needfor additional tabular material. The determined temperatures areexpressed in terms of V and V-K, giving a mean absolute deviation of0.53%. A comparison of the derived angular diameters for three starswith determinations using Michelson interferometer shows an averageagreement to better than 1%. 
Corrections to the right ascension to be applied to the apparent places of 1217 stars given in \"The Chinese Astronomical Almanach\" for the year 1984 to 1992.Not Available Santiago Fundamental Catalogue - A catalogue of 1105 FK5 stars (equinox J2000.0)The positions in right ascension and declination of 1105 FK5 stars,observed with a Meridian Circle during the period 1979 to 1991, aregiven. The average mean square error of a position, for the wholecatalog, is +\/- 0.009 s in right ascension and +\/- 0.10 arcsec indeclination. The mean epoch of the catalog is 1983.148. Secondary spectrophotometric standardsEnergy distribution data on 238 secondary standard stars are presentedin the range 3200-7600 A with 50 A step. These stars are common to theCatalog of the Sternberg State Astronomical Institute and the FessenkovAstrophysical Institute. For these stars, the differences betweenspectral energy distribution data of the two catalogs do not exceed 5percent, while the mean internal accuracy of both catalogs data in thisrange are about 3.5 percent. For 99 stars energy distribution data inthe near infrared (6000-10,800 A) obtained at the Sternberg StateAstronomical Institute are also presented. 
The correction in right ascension of 508 stars determinated with PMO photoelectric transit instrument.Not Available The surface-brightness method and the dependence of the bolometric correction on star effective temperature.Not Available\nSubmit a new article\n\n\u2022 - No Links Found -\n\n### Member of following groups:\n\n#### Observation and Astrometry data\n\n Constellation: Gemini Right ascension: 06h52m47.30s Declination: +33\u00c2\u00b057'40.0\" Apparent magnitude: 3.6 Distance: 60.277\u00a0parsecs Proper motion RA: -1.9 Proper motion Dec: -47.6 B-T magnitude: 3.731 V-T magnitude: 3.605\n\nCatalogs and designations:\n Proper Names Nageba \u00a0 (Edit) Bayer \u03b8 Gem Flamsteed 34 Gem HD 1989 HD 50019 TYCHO-2 2000 TYC 2444-1113-1 USNO-A2.0 USNO-A2 1200-05216537 BSC 1991 HR 2540 HIP HIP 33018 \u2192 Request more catalogs and designations from VizieR","date":"2019-02-23 03:48:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.645091712474823, \"perplexity\": 7064.347956637722}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-09\/segments\/1550249434065.81\/warc\/CC-MAIN-20190223021219-20190223043219-00182.warc.gz\"}"} | null | null |
\section{Introduction}
A group $G$ is said to have restricted centralizers if for each $g \in G$ the centralizer $C_G(g)$ either is finite or has finite index in $G$. This notion was introduced by Shalev in \cite{shalev} where he showed that a profinite group with restricted centralizers is virtually abelian. As usual, we say that a profinite group has a property virtually if it has an open subgroup with that property. Recently, profinite groups with restricted centralizers of some specific elements were considered in \cite{as, DMSrestricted}.
The article \cite{DMSrestricted} handles profinite groups with restricted centralizers of $w$-values for a multilinear commutator word $w$. The theorem proved in \cite{DMSrestricted} says that if $w$ is a multilinear commutator word and $G$ is a profinite group in which the centralizer of any $w$-value is either finite or open, then the verbal subgroup $w(G)$ generated by all $w$-values is virtually abelian.
Recall that the lower central words $\gamma_k$ are recursively defined by $$\gamma_1=x_1,\qquad \gamma_k=[\gamma_{k-1}(x_1,\ldots,x_{k-1}),x_k].$$ Of course $\gamma_k(G)$ is the $k$-th term of the lower central series of $G$. Thus, if the $\gamma_k$-values have restricted centralizers in a profinite group $G$, then $\gamma_k(G)$ is virtually abelian.
In this paper we will show that if the $\gamma_k$-values have restricted centralizers, then the group $G$ is virtually nilpotent. In fact, we will establish a stronger result.
If $G$ is a profinite group, then $|G|$ denotes its order, which is a Steinitz number, and $\pi(G)$ denotes the set of prime divisors of $|G|$. Similarly, if $g$ is an element of $G$, then $|g|$ and $\pi(g)$ respectively denote the order of the procyclic subgroup generated by $g$ and the set of prime divisors of $|g|$.
We will say that an element $g$ of a profinite group $G$ is a {\it uniform $k$-step commutator } (u$_k$-commutator for short) if there are elements $x_1,x_2,\ldots,x_k\in G$ such that $\pi(x_1)=\dots=\pi(x_k)$ and $g=[x_1,x_2,\ldots,x_k]$.
When $k=2$, the element $g$ will be referred to simply as a {\it uniform commutator} (such elements were called anti-coprime commutators in \cite{dms2,dms}).
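To illustrate the definition, suppose that $G$ is a pro-$p$ group. Then $\pi(x)=\{p\}$ for every nontrivial $x\in G$, while $\pi(1)=\emptyset$, so every $\gamma_k$-value $[x_1,\ldots,x_k]$ whose entries are all nontrivial (or all trivial) is a u$_k$-commutator.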
\begin{theorem}\label{main} Let $G$ be a profinite group in which the centralizers of uniform $k$-step commutators are either finite or open. Then $G$ is virtually nilpotent and $\gamma_k(G)$ is virtually abelian.
\end{theorem}
We do not know whether there exists a constant $C$, depending only on $k$, such that any profinite group $G$
satisfying the hypothesis of Theorem \ref{main} has an open nilpotent subgroup of class at most $C$. In the case $k=2$ an affirmative answer is furnished by the next result. A key tool employed in its proof was established in \cite{ES} using probabilistic arguments.
\begin{theorem}\label{main2} Let $G$ be a profinite group in which the centralizers of uniform commutators are either finite or open. Then $G$ has an open subgroup which is nilpotent of class at most $3$.
\end{theorem}
A somewhat unexpected by-product of the proof of Theorem \ref{main} is related to the concept of strong conciseness in profinite groups introduced in \cite{dks}. A group-word $w$ is strongly concise if the verbal subgroup $w(G)$ is finite in any profinite group $G$ in which $w$ takes less than $2^{\aleph_0}$ values. A number of recent results on strong conciseness of group-words can be found in \cite{dks, joao, KS-strong}. The concept of strong conciseness can be applied in a wider context: suppose $\varphi(G)$ is a subset that can be naturally defined in every profinite group $G$, then one can ask whether the subgroup generated by $\varphi(G)$ is finite whenever $|\varphi(G)| < 2^{\aleph_0}$.
It was shown in \cite{dms} that this is the case if $\varphi(G)$ is the set of u$_2$-commutators; that is, a profinite group $G$ has finite commutator subgroup $G'$ if and only if the cardinality of the set of u$_2$-commutators in $G$ is less than $2^{\aleph_0}$. In view of this, it was natural to ask whether finite-by-nilpotent profinite groups
admit a similar characterisation. Now we are able to answer the question in the affirmative.
\begin{theorem}\label{strong} Let $k\geq1$, and let $G$ be a profinite group. Then $\gamma_k(G)$ is finite if and only if the cardinality of the set of uniform $k$-step commutators in $G$ is less than $2^{\aleph_0}$.
\end{theorem}
Note that in Theorem \ref{strong}
the order of $\gamma_k(G)$ is bounded in terms of the number of u$_k$-commutators (see Proposition \ref{boundedly}).
It would be interesting to see whether profinite groups $G$ in which the $k$-th term of the derived series is finite admit a characterisation in the same spirit. For now this remains an open problem.
\section{Preliminaries}
Results on finite groups often admit a
natural interpretation for profinite groups (see, for example, \cite{rib-zal} or \cite{wilson}). Throughout the paper we use certain profinite versions of facts on finite groups without explaining in detail how these can be deduced from the corresponding finite cases. On all such occasions the deduction can be performed via the routine inverse limit argument. Every homomorphism of profinite groups considered in this paper is continuous, and every subgroup of a profinite group is closed, unless otherwise specified.
If $x$ is an element of a group $G$, we write $x^G$ for the conjugacy class of $x$ in $G$. On the other hand, if $K$ is a subgroup of $G$, then $K^G$ denotes the normal closure of $K$ in $G$, that is, the subgroup generated by all conjugates of $K$ in $G$, with the usual convention that if $G$ is a topological group then $K^G$ is a closed subgroup.
We will denote by $\Delta(G)$ the set of $FC$-elements of $G$, i.e.
$$\Delta(G)=\{ x\in G \mid |x^G| < \infty\}.$$
Obviously $\Delta(G)$ is a normal abstract subgroup of $G$. Note that if $G$ is a profinite group, $\Delta(G)$ need not be closed.
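A standard example showing that $\Delta(G)$ can fail to be closed is the Cartesian product $G=\prod_{i\in\mathbb{N}} S_3$: an element $g=(g_i)_i$ satisfies $|g^G|<\infty$ if and only if $g_i=1$ for all but finitely many $i$, so $\Delta(G)$ is the restricted direct product, a proper dense (hence non-closed) subgroup of $G$.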
Recall that a pro-$p$ group is an inverse limit of finite $p$-groups, a pro-$\pi$ group is an inverse limit of finite $\pi$-groups, a pronilpotent group is an inverse limit of finite nilpotent groups, and a prosoluble group is an inverse limit of finite soluble groups.
It is well-known that a finite-by-prosoluble group is virtually prosoluble, and a profinite group $G$ that is an extension of a prosoluble group $N$ by
a prosoluble group $G/N$ is prosoluble (see e.g. \cite[Lemma 2.2]{KS}).
If $G$ is a profinite group and $\pi$ a set of primes, $O_{\pi} (G)$ stands for the maximal normal pro-$\pi$ subgroup of $G$.
Recall that $\gamma_\infty(G)$ stands for the intersection of the terms of the lower central series of a group $G$. A profinite group $G$ is pronilpotent if and only if $\gamma_\infty(G)=1$. Each profinite group $G$ has a maximal pronilpotent normal subgroup, its Fitting subgroup $F(G)$. Set $F_0(G)=1$ and $F_{i+1}(G)/F_i(G)=F(G/F_i(G))$ for every integer $i\ge 0$. We say that the profinite group $G$ has (finite) Fitting height $h=h(G)$ if $G=F_h(G)$ and $h$ is the least integer with this property. Obviously, $G$ has finite Fitting height at most $h$ if, and only if, $G$ is an inverse limit of finite soluble groups of Fitting height at most $h$. A profinite group $G$
is metapronilpotent if and only if it has Fitting height at most $2$ or, equivalently, if and only if $\gamma_\infty (G)$ is pronilpotent. As usual, $Z(G)$ denotes the centre of the group $G$.
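By way of example, the symmetric group $S_3$, viewed as a finite profinite group, has $F(S_3)=A_3$ and $F_2(S_3)=S_3$, so $h(S_3)=2$; accordingly $S_3$ is metanilpotent, and $\gamma_\infty(S_3)=A_3$ is abelian, in agreement with the characterisation above.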
We mention the following result about nilpotent profinite groups.
\begin{lemma}\label{center}
Let $G$ be an infinite nilpotent profinite group. Then $Z(G)$ is infinite.
\end{lemma}
\begin{proof} Assume that $Z(G)$ is finite. Then there exists an open normal subgroup $K$ of $G$ such that $K \cap Z(G)=1$. Since every nontrivial normal subgroup of a nilpotent group meets its centre nontrivially, this implies that $K=1$, and so $G$ is finite, a contradiction.
\end{proof}
Profinite groups have Sylow $p$-subgroups and satisfy analogues of the Sylow theorems.
Any prosoluble group $G$ has a Sylow basis (a family of pairwise permutable
Sylow $p_i$-subgroups $P_i$ of $G$, exactly one for each prime $p_i \in \pi(G)$), and any two Sylow bases are conjugate (see \cite[Proposition 2.3.9]{rib-zal}). The basis normalizer (also known as the system normalizer) of such a Sylow basis in $G$ is $T =\bigcap_{i} N_G(P_i).$ If $G$ is a prosoluble group and $T$ is a basis normalizer in $G,$ then $T$ is pronilpotent and $G = \gamma_\infty (G) T$ (see \cite[Lemma 5.6]{reid}).
The set of uniform $k$-step commutators of $G$ will be denoted by $\mathcal{U}_k(G)$, and we write $\mathcal{U}_k=\mathcal{U}_k(G)$ when it is clear which group we are referring to. Remark that $\mathcal{U}_k$ is symmetric, that is, closed under taking inverses: since $[x,y]^{-1}=[x^y, y^{-1}]$, we have $[x_1,\ldots,x_k]^{-1}=[x_1^{x_k},\ldots,x_{k-1}^{x_k},x_k^{-1}]$, and conjugation and inversion do not change the set $\pi(\cdot)$. Moreover, $\mathcal{U}_k(G/N)=\mathcal{U}_k(G)N/N$ whenever $N$ is a normal subgroup of $G$.
If $x,y$ are elements of a group $G$ and $k$ is a positive integer, $[x,{}_k y]$ is recursively defined by $[x,{}_1y]=[x,y]$ and $[x,{}_{k+1}y]=
[[x,{}_k y],y]$ for every $k\ge 1$. We will often use the fact that if $G$ is profinite, then the element $[x,y,y]=[y^{-xy},y]$ is a uniform commutator.
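The identity is verified by direct expansion: both sides equal
$$[x,y,y]=y^{-1}x^{-1}yx\,y^{-1}x^{-1}y^{-1}xy^{2}=[y^{-xy},y].$$
Moreover, $[y^{-xy},y]$ is indeed a uniform commutator: conjugation and inversion preserve the order of an element, so $\pi(y^{-xy})=\pi(y)$.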
Note that if the profinite group $G$ is pronilpotent, then it is a direct product of its Sylow subgroups and so every $\gamma_k$-value in $G$ is a u$_k$-commutator.
\begin{lemma}\label{gen1}
Let $G$ be a profinite group. Then the set $\mathcal{U}_k$ generates $\gamma_k(G)$.
\end{lemma}
\begin{proof} If $k=1$, then $\mathcal{U}_k=G$ and we have nothing to prove. Therefore we assume that $k\geq2$. Let $N$ be the subgroup generated by $\mathcal{U}_k$. Obviously, $N\le \gamma_k(G)$ so we only need to show that $\gamma_k(G)\leq N$. Recall that $[x,y,y]=[y^{-xy},y]$ is a uniform commutator for any $x,y\in G$. It follows that if $\bar x=Nx$ and $\bar y=Ny$ are elements of $G/N$, then $[\bar x,_{k} \bar y]=1$, as $[y^{-xy},{}_{k-1}y]\in \mathcal{U}_k$. Since finite Engel groups are nilpotent (see \cite[12.3.4]{rob}), we deduce that $G/N$ is pronilpotent. We mentioned earlier that in a pronilpotent group every $\gamma_k$-value is a u$_k$-commutator. Therefore $G/N$
is nilpotent of class at most $k-1$, whence $\gamma_k(G)\le N$.
\end{proof}
If $\phi$ is an automorphism of a finite group $H$ of coprime order, that is, such
that $(|\phi |, |H|) = 1$, then we say for brevity that $\phi$ is a coprime automorphism
of $H.$ This definition is extended to profinite groups as follows. We say that $\phi$ is a
coprime automorphism of a profinite group $H$ meaning that the procyclic group $\langle\phi \rangle$
faithfully acts on $H$ by continuous automorphisms and $\pi(\langle\phi \rangle ) \cap \pi(H)=\emptyset$. Since the
semidirect product $H \langle\phi \rangle$ is also a profinite group, $\phi$ is a coprime automorphism of $H$
if and only if for every open $\phi$-invariant normal subgroup $N$ of $H$ the automorphism
(of finite order) induced by $\phi$ on $H/N$ is a coprime automorphism.
We will need some profinite equivalent of well-known results about coprime actions in finite groups (see e.g. \cite{KS}).
\begin{lemma} \label{bul-invariant}
If $\phi$ is a coprime automorphism of a profinite group $G,$ then for every
prime $q \in \pi(G)$ there is a $\phi$-invariant Sylow $q$-subgroup of $G.$
\end{lemma}
\begin{lemma}\cite[Lemma 4.6]{KS} \label{bul} Let $\phi$ be a coprime automorphism of a finite nilpotent group $G$. Then the set of elements of the form $[g,\phi]$, where $g\in G$, coincides with the set of elements of the form $[g,\phi,\phi]$.
\end{lemma}
A repeated application of the previous lemma yields the following result for profinite groups:
\begin{lemma}\label{bul2} Let $\phi$ be a coprime automorphism of a pronilpotent group $G$ and let $k$ be a positive integer. Then the set of elements of the form $[g,\phi]$, where $g\in G$, coincides with the set of elements of the form $[g,_{k}\phi]$.
\end{lemma}
\begin{proof}
For every positive integer $n$, let $\alpha_n$ be the continuous map defined on $G$ by $x \mapsto [x,_n \phi]$. We can repeatedly apply Lemma \ref{bul} to every finite image $\bar G$ of $G$, which is nilpotent, to get
$\alpha_1(\bar G)=\alpha_2(\bar G)$, and so also $\alpha_1(\bar G)=\alpha_k(\bar G)$. Since this holds for every finite image of $G$ and the maps are continuous, we deduce that $\alpha_1( G)=\alpha_k( G)$.
\end{proof}
\section{On groups in which centralizers of word values are either finite or open}
In this section we handle the important particular case of Theorem \ref{main} where the centralizers of all $\gamma_k$-values in $G$ are either finite or open. We will use some results from \cite{DMSrestricted}, which were proved for general multilinear commutator words.
Multilinear commutator words are words which are obtained by nesting commutators, but using always different variables. More formally, the word $w(x) = x$ in one variable is a multilinear commutator; if $u$ and $v$ are multilinear commutators on disjoint sets of variables then the word $w=[u,v]$ is a multilinear commutator, and all multilinear commutators are obtained in this way.
Clearly, the lower central words $\gamma_k$ are particular instances of multilinear commutator words.
Throughout the article, if $w$ is a word and $G$ is a group, $G_w$ will denote the set of all $w$-values in $G$.
We will need a combinatorial lemma proved in \cite{DMSrestricted}.
\begin{lemma}\cite[Lemma 3.4]{DMSrestricted}\label{comb1}
Let $w=w(x_1, \dots, x_n)$ be a multilinear commutator word.
Assume that $T$ is a normal subgroup of a group $G$ and
$a_1, \dots , a_n$ are elements of $G$ such that every element in the set
$\{w(a_1t_1,\dots,a_nt_n) \mid t_1,\dots,t_n\in T\}$ has at most $m$ conjugates in $G$.
Then every element in $T_w$ has at most $m^{2^n}$ conjugates in $G$.
\end{lemma}
We will first consider profinite groups where every $\gamma_k$-value is an $FC$-element. The following result was recently obtained in \cite{S-BFC}. Here $(k,m)$-bounded means bounded by a function of the parameters $k$ and $m$.
\begin{theorem}\label{dadada}
Let $k\geq1$ and $G$ be a group in which $|x^G|\leq m$ for any $\gamma_k$-value $x\in G$. Then $G$ has a nilpotent subgroup of $(k,m)$-bounded index and $(k,m)$-bounded class.
\end{theorem}
To deal with the case $k=2$ we require a more complicated result from \cite{ES} (see Theorem 1.2 and the preceding comments).
\begin{theorem}\label{ESgamma} If $G$ is a group in which $|x^G|\leq m$ for any commutator $x\in G$, then
$G$ has a subgroup $H$ of nilpotency class at most $4$ such that $[G : H]$ and $|\gamma_4(H)|$ are both finite and $m$-bounded.
\end{theorem}
\begin{proposition}\label{profinite-FC}
Let $k$ be a positive integer and $G$ a profinite group
in which every $\gamma_k$-value is an $FC$-element. Then $G$ is virtually nilpotent. If $k=2$ then $G$ has
an open subgroup $H$ of nilpotency class at most $3$.
\end{proposition}
\begin{proof}
For each positive integer $j$ consider the set $\Delta_j$ of elements $g\in G$ such that $|g^G|\le j$. Note that the sets $\Delta_j$ are closed (see for instance \cite[Lemma 5]{LP}). Set:
$$C_j=\{(g_1,\dots,g_k) \mid g_i\in G {\textrm{ and }} [g_1,\dots,g_k] \in \Delta_j\}.$$
Each set $C_j$ is closed in $G \times \cdots \times G$, being the inverse image of the closed set $\Delta_j$ under the continuous map $(g_1,\dots, g_k)\mapsto [g_1, \dots , g_k]$.
Moreover the union of the sets $C_j$ is the whole group $G \times \cdots \times G$. By the Baire category theorem (cf. \cite[p.\ 200]{Ke}) at least one of the sets $C_j$ has nonempty interior. Hence, there exists a positive integer $m$, elements $z_1,\dots,z_k\in G$, and an open normal subgroup $T$ of $G$ such that
$$[z_1 T,\dots,z_kT]\subseteq \Delta_{m}.$$
By Lemma \ref{comb1}, every $\gamma_k$-value of $T$ has at most $m^{2^k}$ conjugates in $G$. It follows from Theorem \ref{dadada} that
$T$ has a nilpotent abstract subgroup $B$ of $(k,m)$-bounded index and $(k,m)$-bounded class.
Then the topological closure of $B$ is an open nilpotent subgroup of $G$. Thus $G$ is virtually nilpotent, as claimed.
If $k=2$, by Theorem \ref{ESgamma} the subgroup $T$ contains a nilpotent abstract subgroup $B$ of $m$-bounded index and such that $\gamma_4(B)$ has $m$-bounded order. Then the topological closure $\overline B$ of $B$ is an open subgroup of $G$ such that $\gamma_4(\overline B)$ has finite order. Choose an open subgroup $K\le G$ such that $K\cap \gamma_4(\overline B)=1$. Observe that $K\cap \overline B$ is open in $G$ and nilpotent of class at most $3$, as required.
\end{proof}
We will require the following corollary of the main result in \cite{DMSrestricted}.
\begin{lemma}\label{openT} \cite[Corollary 1.2]{DMSrestricted}
Let $w$ be a multilinear commutator word and $G$ a profinite group in which the centralizers of $w$-values
are either finite or open. Then $G$ has an open subgroup $T$ such that $w(T)$ is abelian.
\end{lemma}
\begin{proposition}\label{maingamma}
Let $k$ be a positive integer and $G$ a profinite group in which the centralizers of $\gamma_k$-values
are either finite or open. Then $G$ is virtually nilpotent. If $k=2$, then $G$ has an open subgroup which is nilpotent of class at most $3$.
\end{proposition}
\begin{proof} By Lemma \ref{openT}, $G$ has an open subgroup $T$ such that $\gamma_k(T)$ is abelian. Without loss of generality, we can assume that $G=T$.
If all $\gamma_k$-values of $G$ are $FC$-elements, then by Proposition \ref{profinite-FC} we conclude that $G$ is virtually nilpotent. In particular, if $k=2$, then $G$ has an open subgroup which is nilpotent of class at most $3$.
So assume that there exists a $\gamma_k$-value whose centralizer in $G$ is finite. As $\gamma_k(G)$ is abelian, we conclude that $\gamma_k(G)$ is finite.
Therefore there exists an open normal subgroup $N$ of $G$ such that $N \cap \gamma_k(G)=1$. In particular, $\gamma_k(N)=1$ and $N$ is an open nilpotent subgroup of $G$. If $k=2$ then $N$ is abelian and $G$ is virtually abelian. This concludes the proof.
\end{proof}
\section{On groups in which centralizers of uniform commutators are finite}
In this section we will prove
Theorem \ref{main} in the special case where the elements of $\mathcal{U}_k$ have finite centralizers (see Proposition \ref{mainFinite}).
Throughout, it will be assumed that $k\geq2$. Note that if $G$ is a profinite group where all $\gamma_k$-values have finite centralizers, then $G$ is either finite or nilpotent of class at most $k-1$ \cite[Corollary 1.3]{DMSrestricted}.
We will repeatedly use the following observation:
\begin{lemma}\label{finite} Let $G$ be a profinite group in which the centralizers of {\rm u}$_k$-commutators are finite.
If $H$ is a pronilpotent subgroup of $G$, then either $H$ is finite or $H \cap \mathcal{U}_k =1$. In the latter case, $H$ is nilpotent of class at most $k-1$.
\end{lemma}
\begin{proof}
Since $H$ is a direct product of its Sylow subgroups, it follows that any $\gamma_k$-value of $H$ lies in $\mathcal{U}_k$ and, by \cite[Corollary 1.3]{DMSrestricted}, $H$ is either finite or nilpotent of class at most $k-1$. Assume $H \cap \mathcal{U}_k \neq 1$ and let $y$ be a nontrivial element of $H \cap \mathcal{U}_k $. Since $Z(H) \le C_G(y)$, it follows that $Z(H)$ is finite. In view of Lemma \ref{center} this implies that $H$ is finite.
\end{proof}
\begin{proposition}\label{mainFinite} Let $G$ be a profinite group in which the centralizers of {\rm u}$_k$-commutators are finite. Then $G$ is either finite or nilpotent of class at most $k-1$.
\end{proposition}
\begin{proof} If $G$ is pronilpotent, then the result is immediate from the previous lemma. Assume that $G$ is not pronilpotent. We want to prove that $G$ is finite. We will use the fact that the Sylow subgroups of $G$ are either finite or nilpotent of class at most $k-1$.
The profinite version of Burnside's theorem \cite[Theorem 3.3]{gilotti-ribes-serena} says that if $N_G(P)/C_G(P)$ is a pro-$p$ group for every nontrivial pro-$p$ subgroup $P$ of $G$, then $G$ has a normal $p$-complement. Note that a group is pronilpotent whenever it has normal $p$-complement for every prime $p$. As our group $G$ is not pronilpotent, there exists a prime $p$ and a pro-$p$ subgroup $P\leq G$ such that $N_G(P)/C_G(P)$ contains a nontrivial $p'$-element. So $N_G(P)\setminus C_G(P)$ contains a $p'$-element $a$ that induces a nontrivial coprime automorphism on $P$. We deduce from Lemma \ref{bul2} that there exists a nontrivial $p$-element $y=[x,_k a]\in\mathcal{U}_k \cap P$. It follows from Lemma \ref{finite} that the Sylow $p$-subgroups of $G$ are finite.
Let $N$ be an open normal pro-$p'$ subgroup of $G$ intersecting $C_G(y)$ trivially. Then $C_N(y)=1$ and $y$ acts coprimely on $N$.
Let $q$ be a prime in $\pi(N )$. By Lemma \ref{bul-invariant}, there is a $y$-invariant Sylow $q$-subgroup $Q$ of $N$.
As $C_N(y)=1$, the map $x\mapsto [x,y]$ is injective on $Q$. Hence, by Lemma \ref{bul2}, $\mathcal{U}_k \cap Q \neq 1$. Thus $Q$ is a finite $q$-group by Lemma \ref{finite}, and the map $x \mapsto [x,y]$ is also surjective.
Therefore $Q\subseteq \mathcal{U}_k$ by Lemma \ref{bul2}.
Choose an element $z\in Q$ of prime order $q$.
Since $C_G(z)$ is finite, there exists an open normal $q'$-subgroup $K$ of $G$ that has trivial intersection with $C_G(z)$. In particular, $z$ acts coprimely and fixed-point-freely on $K$. As $z$ has prime order, combining the well-known results of Thompson and Higman \cite{hi,tho} (see also \cite[Theorem 2.6.2]{wilson}) we conclude that $K$ is nilpotent. Again, by Lemma \ref{bul2}, we deduce that $K\subseteq \mathcal{U}_k$, whence $K$ is finite by Lemma \ref{finite}. It follows that $G$ is finite.
\end{proof}
\section{ On groups in which uniform commutators are FC-elements}
In this section we will prove the following proposition.
\begin{proposition}\label{mainFC} Let $G$ be a profinite group such that
$\mathcal{U}_k \subseteq \Delta(G)$. Then $G$ is virtually nilpotent.
\end{proposition}
We will need a technical result about commutators.
\begin{lemma}\label{lem2}
Let $x_1, \dots , x_n $ be elements of a group $G$ and let $y\in\{x_1, \dots , x_n\}$. Then $[x_1, \dots, x_n]$ is a product of $2^{n-1}$ conjugates of $y^{\pm 1}$.
\end{lemma}
\begin{proof}
Using basic commutator identities it can be easily seen that if $v$ is a product of $t$ conjugates of $y^{\pm 1}$, then both
$[v, x]= v^{-1} v^x$ and $[x,v] =v^{-x} v$ are products of $2t$ conjugates of $y^{\pm 1}$. Then, by induction on $n$, it follows that $[x_1, \dots, x_n]$ is a product of $2^{n-1} $ conjugates of $y^{\pm 1}$.
\end{proof}
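To illustrate the base case, take $n=2$ and $y=x_2$: the identity
$$[x_1,x_2]=x_1^{-1}x_2^{-1}x_1x_2=\left(x_2^{-1}\right)^{x_1}x_2$$
writes $[x_1,x_2]$ as a product of $2=2^{n-1}$ conjugates of $x_2^{\pm 1}$.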
Suppose that a set $\Sigma$ is a union of finitely many subsets $\Sigma_1,\dots,\Sigma_m$. Then $\Sigma$ admits a partition with blocks $\Omega_1,\dots,\Omega_s$ such that $s \le 2^m$ and each $\Sigma_i $ is a (disjoint) union of some of the blocks $\Omega_1, \dots,\Omega_s$. Formally, we can take $\Omega_1,\dots,\Omega_s$ to be all the nonempty sets of the form
\[
\Gamma_1\cap \Gamma_2\cap \dots\cap \Gamma_m, \quad \textrm{ where }
\Gamma_i \textrm{ is either } \Sigma_i \textrm{ or } \Sigma\setminus\Sigma_i \textrm{ for all } i.\]
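For instance, when $m=2$ the blocks are the nonempty sets among $\Sigma_1\cap \Sigma_2$, $\Sigma_1\setminus \Sigma_2$ and $\Sigma_2\setminus \Sigma_1$ (the fourth intersection $(\Sigma\setminus\Sigma_1)\cap(\Sigma\setminus\Sigma_2)$ is empty because $\Sigma=\Sigma_1\cup\Sigma_2$), so indeed $s\le 2^2$.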
This will be used in the next lemma.
\begin{lemma}\label{idea} Let $N$ be a normal pronilpotent subgroup of a profinite group $G$. Let $X\subseteq G$ be the set of commutators $[g_1,\dots,g_k]$ where at least one of the entries $g_1,\dots,g_k$ belongs to $N$. Then every element of $X$ is a product of $k$-boundedly many elements in $\mathcal{U}_k\cap N$.
\end{lemma}
\begin{proof}
Let us fix an element $[g_1,\dots,g_k]\in X$, where $g_i \in N$ for some fixed index $i$.
The set
\[\pi (g_1) \cup \dots \cup \pi(g_k) \]
admits a finite partition with blocks $\pi_1,\dots, \pi_s$ with the property that each $\pi(g_t)$ is a union of some of these blocks. Note that $s\le 2^k$.
So each element $g_t$ is a product
\[ g_t=\prod_{j=1}^s y_{tj} \textrm{ where }y_{tj}\in\langle g_t\rangle,\,\, \pi(y_{tj})=\pi_j \textrm{ whenever } y_{tj}\ne 1. \]
Repeatedly using the commutator identity $[ab,c]=[a^b,c^b]\,[b,c]$ we obtain that $[g_1,\dots,g_k]$ is a product of $k$-boundedly many elements of the form: $y=[u_1,\dots ,u_k]$, where, for each $t$, $u_t$ is a conjugate of some $y_{tj}$. In particular $u_i\in N$, and for each $t,j=1,\dots, k$ the sets $\pi(u_t)$ and $\pi(u_j)$ are either equal or disjoint.
So, taking into account that $N$ is normal, we just need to show that every such element $y= [u_1,\dots,u_k]$ is a product of $k$-boundedly many elements in $\mathcal{U}_k \cap N$.
Let $\tilde \pi=\pi(u_i)$. As $N$ is pronilpotent and normal in $G$, the subgroup $O_{\tilde \pi}(N)$ is normal in $G$. In the sequel, we will use the following fact: If $a\in O_{\tilde \pi}(N)$
and $b$ induces a coprime automorphism on $O_{\tilde \pi}(N)$, then $[a,b]\in \mathcal{U}_k$. Indeed, by Lemma \ref{bul2}, there exists $d\in O_{\tilde \pi} (N)$ such that $[a, b]=[d,{}_{k}b] = [b^{-db},{}_{k-1}b]$.
If $\tilde \pi=\pi(u_1)=\dots=\pi(u_k)$ then $y\in \mathcal{U}_k \cap N$. Otherwise there exists $j$ such that
\[ \pi(u_j)\ne \tilde\pi. \]
Assume that $j>i$.
We have that $x=[u_1,\dots,u_{j-1}]\in O_{\tilde \pi}(N)$, and $u_j$ induces a coprime automorphism on $O_{\tilde \pi}(N)$. Therefore, $[u_1,\dots,u_j]=[x,u_j] \in \mathcal{U}_k \cap N$ and it follows from Lemma \ref{lem2}, that $y=[u_1,\dots,u_k]$ is a product of $k$-boundedly many elements in $\mathcal{U}_k \cap N$, which are all conjugates of $[u_1,\dots,u_j]$ or $[u_1,\dots,u_j]^{-1}$.
So we can assume that $j<i$. By Lemma \ref{lem2} the element $x=[u_1,\dots,u_{i-1}]$ is the product of $k$-boundedly many conjugates of $u_j$ or $u_j^{-1}$, so that $[x,u_i]^{-1}=[u_i,x]$ is the product of $k$-boundedly many elements of the form
$[a,b]$, where $a\in O_{\tilde \pi}(N)$ and $b$ is a conjugate of $u_j$, so it acts
coprimely on $O_{\tilde \pi}(N)$. All elements $[a,b]$ belong to $\mathcal{U}_k\cap N$, thus $[x,u_i]^{-1}=[u_1,\dots,u_i]^{-1}$ is the product of $k$-boundedly many elements in $\mathcal{U}_k\cap N $. Since $\mathcal{U}_k$ is symmetric, the result follows.
\end{proof}
\begin{lemma}\label{conj}
Let $G$ be a metapronilpotent group.
Then every ${\gamma_k}$-value in $G$ is a product of $k$-boundedly many elements in $\mathcal{U}_k$.
\end{lemma}
\begin{proof}
Write $G =N T$, where $T$ is a system normalizer and $N= \gamma_\infty (G)$. Since $G$ is metapronilpotent, it follows that $N$ and $T$ are pronilpotent.
Let $x$ be a $\gamma_k$-value of $G$. Then we can write $x$ in the form
\[x=[n_1t_1,\dots,n_kt_k],\]
where $n_i\in N$ and $t_i\in T$ for all $i=1,\dots,k$.
Using the basic commutator identities $[ab,c]=[a,c]^b[b,c]$, $[a,bc]=[a,c][a,b]^c$ and $ab=b^{a^{-1}}a=b a^{b}$ we can write $x$ as
\[x=[t_1,\dots,t_k]\prod_{i=1}^{2^k-1}[x_{i1},\dots,x_{ik}]\]
where all $x_{ij}$ are conjugates of elements in $N\cup T$ and at least one entry in each $\gamma_k$-value $[x_{i1},\dots,x_{ik}]$ belongs to $N$.
As $T$ is pronilpotent, $[t_1,\dots,t_k]\in \mathcal{U}_k$. Moreover,
by Lemma \ref{idea}, all $[x_{i1},\dots,x_{ik}]$ are products of $k$-boundedly many elements in $\mathcal{U}_k\cap N$. This concludes the proof.
\end{proof}
Any finite group $H$ has a normal series each of whose factors either is soluble or is a direct product of nonabelian simple groups. The nonsoluble length of $H$, denoted by $\lambda(H)$, is defined as the minimum number of nonsoluble factors in a series of this kind. It is easy to see that the nonsoluble length $\lambda(H)$ is equal to the least positive integer $l$ such that there is a series of characteristic subgroups
$$1=L_0\le R_0 < L_1 \le R_1 < \dots\le R_{l}=H$$
in which each quotient $L_i/R_{i-1}$ is a (nontrivial) direct product of nonabelian simple groups, and each quotient $R_i/L_i$ is soluble (possibly trivial).
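For example, $\lambda(H)=0$ if and only if $H$ is soluble, and $\lambda(S)=1$ for every nonabelian finite simple group $S$.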
It is natural to say that a profinite group $G$ has finite nonprosoluble length at most $l$ if $G$ has a normal series
$$1=L_0\le R_0 < L_1 \le R_1 < \dots\le R_{l}=G$$
in which each quotient $L_i/R_{i-1}$ is a (nontrivial) Cartesian product of nonabelian
finite simple groups, and each quotient $R_i/L_i$ is prosoluble (possibly trivial).
In particular if, for some positive integer $m,$ all continuous
finite quotients of a profinite group $G$ have nonsoluble length at most $m,$ then $G$
has finite nonprosoluble length at most $m$ (see e.g. \cite[Lemma 2]{wilson-compact}).
In the rest
of this section $G$ will be a profinite group such that $\mathcal{U}_k\subseteq \Delta (G)$. Of course, in every quotient $G/N$ of $G$ the elements of $\mathcal{U}_k(G/N)$ are $FC$-elements.
\begin{lemma}\label{simple} Suppose $G$ is a direct product of nonabelian finite simple groups. Then $G$ is finite.
\end{lemma}
\begin{proof} Let $G=\prod_{i\in I} S_i$, where each $S_i$ is a nonabelian finite simple group. In every factor $S_i$ choose a nontrivial element $a_i\in \mathcal{U}_{k}(S_i)$. Note that $a=\prod_{i\in I}a_i\in \mathcal{U}_k$. Further, observe that $C_G(a)=\prod_{i\in I}C_{S_i}(a_i)$ has finite index in $G$ if and only if $I$ is finite. This proves the result.
\end{proof}
\begin{lemma}\label{prosol} The group $G$ is virtually prosoluble.
\end{lemma}
\begin{proof} Suppose that $G$ is not prosoluble. Let $P$ be a Sylow $2$-subgroup of $G$. Since $\mathcal{U}_k(P)=P_{\gamma_k}$, it follows from Proposition \ref{maingamma} that $P$ is virtually nilpotent. Thus $P$ is soluble, say of derived length $d$. By \cite[Theorem 1.4]{KS2} every finite image of $G$ has nonsoluble length at most $d$ and so also $G$ has nonprosoluble length at most $d$.
Let
$$1=L_0\le R_0 < L_1 \le R_1 < \dots\le R_{s}=G$$
be a normal series of finite length in which each section $L_i/R_{i-1}$ is a (nontrivial) direct product of nonabelian finite simple groups, and each section $R_i/L_i$ is prosoluble (possibly trivial). By Lemma \ref{simple}, every section $L_i/R_{i-1}$ is finite. Let $$H=\{x\in G \mid \ [L_i,x]\leq R_{i-1}\text{ for all } i\}$$ be the centralizer in $G$ of the non-prosoluble sections of the series. Since each section $L_i/R_{i-1}$ is finite, $H$ is open in $G$. Moreover, for $x,y\in H\cap L_i$ we have $[x,y]\in R_{i-1}$, so the given series induces on $H$ a normal series with prosoluble factors and $H$ is prosoluble. Hence $G$ is virtually prosoluble.
\end{proof}
\begin{lemma}\label{F(G)} If $G$ is prosoluble, then $F(G)\ne 1$.
\end{lemma}
\begin{proof}
Assume by contradiction that $F(G)=1$ and let $x$ be a nontrivial element in $\mathcal{U}_k$. Then $K=\langle x^G\rangle$ is generated by finitely many conjugates of $x$ and so its centralizer has finite index in $G$. It follows that the centre $Z(K)$ of $K$ has finite index in $K$. Moreover $Z(K)\le F(G)=1$. Therefore $Z(K)=1$ and consequently $K$ is finite. As $G$ is prosoluble, $K$ is soluble and $F(K)\ne 1$. As $F(K)\le F(G)$, this contradicts the assumption that $F(G)=1$.
\end{proof}
Now we are ready to prove Proposition \ref{mainFC}.
\begin{proof}[Proof of Proposition \ref{mainFC}]
By Lemma \ref{prosol} we may assume that $G$ is prosoluble and so $F(G)\ne 1$ by Lemma \ref{F(G)}. If $F(G)=G$, then all $\gamma_k$-values of $G$ are contained in $\mathcal{U}_k$ and the result is immediate from Proposition \ref{maingamma}.
Assume that $G$ is not pronilpotent and suppose first that $G=F_2(G)$. Then $G=NT$ with $N=\gamma_{\infty}(G)$ and $T$ a system normalizer. In this situation $N$ and $T$ are pronilpotent and in view of Lemma \ref{conj} every $\gamma_k$-value in $G$ is a product of finitely many elements from $\mathcal{U}_k$. Thus all $\gamma_k$-values belong to the $FC$-centre $\Delta(G)$ of $G$ and it follows from Proposition \ref{maingamma} that $G$ is virtually nilpotent.
This proves that $F_2(G)$ is virtually nilpotent, whence $F_2(G)/F(G)$ is finite. Let $\bar G=G/F(G)$, and let $\bar K$ be an open normal subgroup of $\bar G$ such that $\bar K\cap F(\bar G)=1$. Then $F(\bar K)=1$ which, because of Lemma \ref{F(G)}, implies that $\bar K=1$. Hence $\bar G$ is finite and $G$ is virtually nilpotent. This concludes the proof.
\end{proof}
\section{Proof of Theorem 1}
We start with the case where $G$ is a metapronilpotent group.
\begin{lemma}\label{General case}
Let $G$ be a metapronilpotent group in which the centralizers of {\rm u}$_k$-commutators are either finite or open.
Then $G$ is virtually nilpotent.
\end{lemma}
\begin{proof}
Write $G =N T$, where $T$ is a system normalizer and $N= \gamma_\infty (G)$. Since $G$ is metapronilpotent, it follows that $N$ and $T$ are pronilpotent. Hence, by Proposition \ref{maingamma} both $N$ and $T$ are virtually nilpotent. Let $N_0\leq N$ and $T_0\leq T$ be nilpotent open subgroups of $N$ and $T$ respectively. Remark that $N_0$ can be chosen characteristic in $G$.
Notice that $L=N_0T_0$ is open in $G$ and it is sufficient to show that $L$ is virtually nilpotent. Obviously, $L$ is soluble. Arguing by induction on the derived length of $L$ we can assume that $L'$ is virtually nilpotent. Then $L$ has an open subgroup whose commutator subgroup is nilpotent and so without loss of generality we can assume that $L'$ is nilpotent.
If $L'$ is finite, then $L$ is virtually abelian. Assume that $L'$ is infinite. Lemma \ref{center} says that $Z(L')$ is infinite, too. Observe that every element of $\mathcal{U}_k(L)$ centralizes $Z(L')$. Hence, every element of $\mathcal{U}_k(L)$ is an $FC$-element, and therefore $L$ is virtually nilpotent by Proposition \ref{mainFC}.
\end{proof}
Recall that a group is locally finite if every finite subset is contained in a finite subgroup.
\begin{lemma}\label{centralizer} Let $G$ be a profinite group and $H$ a locally finite normal abstract subgroup of $G$. Let $\hat H$ be the closure of $H$. Suppose that
$C_H(a) = 1$ for some torsion element $a \in G.$ Then $C_{G/ \hat H} (a)$ is the image of $C_G(a)$ in $G/ \hat H.$
\end{lemma}
\begin{proof}
Let $h \in H$. Then $J=\langle h \rangle^{\langle a \rangle}$ is a finite $a$-invariant subgroup of $H$. Consider the map $f : J \to J$ defined by $x\mapsto [x,a]$. Since $C_H(a) = 1$, the map is injective and hence surjective. So $h = [x,a]$ for some $x \in J$. This holds for every $h\in H$ and therefore the natural extension of $f$ to
$\hat H$ is also surjective.
Clearly, $C_{G/ \hat H} (a)$ contains the image of $C_G(a)$ in $G/\hat H$. Conversely, if $x\hat H \in C_{G/ \hat H} (a)$, then $[x,a] \in \hat H$, whence
$[x,a]=[h,a]$ for some $h \in \hat H$. We deduce that $x h^{-1} \in C_G(a)$ and $x \in C_G(a)\hat H$. Thus $C_{G/ \hat H} (a)=C_G (a)\hat H /\hat H $, as required.
\end{proof}
Now we are ready to prove the result on virtual nilpotency.
\begin{proposition}\label{mainfirst} Let $G$ be a profinite group in which the centralizers of uniform $k$-step commutators are either finite or open. Then $G$ is virtually nilpotent.
\end{proposition}
\begin{proof}
Suppose that an element $x \in \mathcal{U}_k$ has infinite order. Then the centralizer $C=C_G(x)$ is infinite and therefore open. Every {\rm u}$_k$-commutator lying in $C$ is centralized by $x$, which has infinite order. Therefore every {\rm u}$_k$-commutator in $C$ is an $FC$-element. Apply Proposition \ref{mainFC} to deduce that $C$ is virtually nilpotent, and the same holds for $G$, as $C$ is open in $G$. Hence, without loss of generality, we will assume that every element of $\mathcal{U}_k$ has finite order.
Let $Y\subseteq \mathcal{U}_k$ be the set of elements of $\mathcal{U}_k$ that have open centralizers, and let $H$ be the abstract subgroup generated by $Y$. Note that $H$ is contained in the $FC$-centre of $G$. In particular, $H$ is locally finite. If $Y=\mathcal{U}_k$, the result is immediate from Proposition \ref{mainFC} so assume that $Y\neq \mathcal{U}_k$.
Let $a \in \mathcal{U}_k \setminus Y$. As the centralizer $C_G(a)$ is finite and $H$ is residually finite, there exists a normal subgroup $K$ of finite index in $H$ such that $C_K(a)=1$. Let $\hat H$ and $\hat K$ be the topological closures of $H$ and $K$, respectively. It follows from Lemma \ref{centralizer} that $C_{G/ \hat K} (a)=C_G (a)\hat K /\hat K $ and therefore $C_{G/ \hat K} (a)$ is finite. Since $\hat H/\hat K$ is finite, also $C_{G/ \hat H} (a)$ is finite (see e.g. \cite[Lemma 2.1]{shalev}).
Thus every nontrivial element of $\mathcal{U}_k$ has finite centralizer in $G/ \hat H$. In view of Proposition \ref{mainFinite} we conclude that $G/ \hat H$ is virtually nilpotent.
Let us now examine the action of $a$ on $H$. Again, $K$ is a finite index $a$-invariant subgroup of $H$ such that $C_K(a)=1$. A well-known corollary of the classification of finite simple groups is that a finite group admitting a fixed-point-free automorphism is soluble (see e.g. \cite{rowley}). Thus $K$ is locally soluble.
Recall that a Carter subgroup is a self-normalizing nilpotent subgroup and note
that $\langle a \rangle$ is a Carter subgroup in every finite subgroup $T$ of $K\langle a \rangle$ such that $a\in T$.
The main result in Dade's paper \cite{Dade} implies that the Fitting height $h(T)$ of $T$ is bounded by a function depending only on the order of $a$.
We deduce that $K$ has a characteristic series of finite length all of whose factors are locally nilpotent. Therefore $\hat K$ has a finite characteristic series with pronilpotent factors, that is, $\hat K$ has finite Fitting height. As $\hat K$ is open in $\hat H$ and $G/ \hat H$ is virtually nilpotent, we conclude that also $G$ has an open prosoluble subgroup whose Fitting height is finite. So, without loss of generality we can assume that $G$ is prosoluble and $h(G)$ is finite.
If $h(G) \le 2$, Lemma \ref{General case} implies that $G$ is virtually nilpotent. Assume that $h(G) > 2$ and argue by induction on $h(G)$. It follows that $G/F(G)$ is virtually nilpotent. Hence, $G$ has an open subgroup $M$ such that $h(M)\leq2$. In view of Lemma \ref{General case}, $M$ (and therefore $G$) is virtually nilpotent. The proof is now complete.
\end{proof}
Now Theorem \ref{main2} follows.
\begin{proof}[Proof of Theorem \ref{main2}] Let $G$ be a profinite group in which the centralizers of elements of $\mathcal{U}_2$ are either finite or open. It follows from Proposition \ref{mainfirst} that $G$ is virtually nilpotent.
So we can assume that $G$ is nilpotent. In that case every commutator in $G$ is a uniform commutator. In view of Proposition \ref{maingamma} we conclude that $G$ has an open subgroup which is nilpotent of class at most $3$.
\end{proof}
It remains to prove the part of Theorem \ref{main} which states that $\gamma_k(G)$ is virtually abelian.
Let $G$ be a group and $w=w(x_1,\dots,x_n)$ a word. The marginal subgroup $w^*(G)$ of
$G$ corresponding to the word $w$ is defined as the set of all $x \in G$ such that
$$w(g_1,\dots, x g_i,\dots,g_n)= w(g_1,\dots, g_i x ,\dots,g_n)=w(g_1,\dots,g_i,\dots,g_n)$$
for all $g_1,\dots,g_n \in G$ and $1 \le i \le n$.
It is well known that $w^*(G)$ is a characteristic subgroup of $G$ and $[w^*(G), w(G)]=1$.
Note that marginal subgroups in profinite groups are closed.
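For instance, for the word $w=[x_1,x_2]$ the marginal subgroup $w^*(G)$ is precisely the centre $Z(G)$: if $x\in w^*(G)$, then taking $g_1=1$ in $[xg_1,g_2]=[g_1,g_2]$ gives $[x,g_2]=1$ for all $g_2\in G$, and conversely every central element is clearly marginal.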
Let $S$ be a subset of a group $G$. Following \cite{DMSrestricted}
define the $w^*$-residual of $S$ in $G$ to be the intersection of all normal subgroups $N$ such that $SN/N$ is contained in the marginal subgroup $w^*(G/N)$.
For multilinear commutator words the $w^*$-residual of a normal subgroup has the following properties.
\begin{lemma}\label{ts}\cite[Lemma 4.1]{DMSrestricted}
Let $w$ be a multilinear
commutator word, $G$ a group and $N$ a normal subgroup of $G$.
Then the $w^*$-residual of $N$ in $G$ is the subgroup generated by the elements $w(g_1, \dots, g_n)$ where at least one of $g_1, \dots, g_n$ belongs to $N$. \end{lemma}
\begin{lemma}\label{concise}\cite[Lemma 4.2]{DMSrestricted}
Let $w$ be a multilinear commutator word, $G$ a profinite group and $N$ an open normal subgroup of $G$. Then the $w^*$-residual of $N$ is open in $w(G)$.
\end{lemma}
The following result is a particular case of \cite[Proposition 4.5]{DMSrestricted}:
\begin{proposition}\label{X}
Let $G$ be a profinite group and $N$ a normal subgroup of $G$. Let $H$ be the topological closure of $\Delta(G)$ in $G$.
Fix $i\in \{1,\dots,k\}$ and consider the set
$X_i=\{[g_1,\dots,g_k] \mid g_i\in N,\ g_j\in G \text{ for } j\ne i\}$.
If
\[ X_i \subseteq \Delta(G), \]
then $[H , \langle X_i\rangle ]$ is finite.
\end{proposition}
Following the lines of \cite[Theorem 4.3]{DMSrestricted}, we have:
\begin{theorem}\label{genN}
Assume that $G$ is a profinite group in which the centralizers of u$_k$-commutators are either finite or open and $N$ is an infinite normal nilpotent subgroup of $G$.
Then the $\gamma_k^*$-residual of $N$ has finite commutator subgroup.
\end{theorem}
\begin{proof}
For $i=1, \dots, k$, let $X_i$ be the set of $\gamma_k$-values $[g_1,\dots,g_k]$ such that $g_i$ belongs to $N$.
It follows from Lemma \ref{idea} that every element in $X_i$ is a product of $k$-boundedly many elements in $\mathcal{U}_k\cap N$.
As $N$ is infinite, the center $Z(N)$ of $N$ is infinite as well, thus every element in $\mathcal{U}_k\cap N$ is an $FC$-element.
Therefore also
the set $X_i$ consists of $FC$-elements.
It follows from Proposition \ref{X} that $[H, \langle X_i \rangle]$ is finite for every $i$.
By Lemma \ref{ts}, the $\gamma_k^*$-residual of $N$ is the subgroup $R$ generated by the set $X= X_1 \cup \dots \cup X_k $.
Thus $[H, R]= \prod_{i=1}^{k}[H,\langle X_i \rangle]$ is finite. Finally, note that $R\le H$ and so $R'\le [H,R]$ is also finite.
\end{proof}
Now Theorem \ref{main} follows.
\begin{proof} [Proof of Theorem \ref{main}]
Assume that $G$ is a profinite group in which the centralizers of uniform $k$-step commutators are either finite or open.
We proved in Proposition \ref{mainfirst} that $G$ is virtually nilpotent. Now we will show that $\gamma_k(G)$ is abelian-by-finite.
Let $N$ be an open nilpotent subgroup of $G$. Of course we can assume that $N$ is infinite. By Theorem \ref{genN}, the $\gamma_k^*$-residual $R$ of $N$ has finite commutator subgroup, thus it is virtually abelian. Moreover, by Lemma \ref{concise} $R$ is open in $\gamma_k(G)$. Thus $\gamma_k(G)$ is virtually abelian and the proof is complete.
\end{proof}
\section{Strong conciseness of uniform commutators}
A word $w$ is said to be concise in a class of groups $\mathcal X$ if $w(G)$ is finite whenever the set of $w$-values in $G$ is finite for a group $G\in\mathcal X$. In the sixties Hall raised the problem of whether all words are concise,
but in 1989 S. Ivanov \cite{ivanov} solved the problem in the negative (see also \cite[p.\ 439]{ols}). On the other hand, the problem for residually finite groups remains open (cf. Segal \cite[p.\ 15]{Segal} or Jaikin-Zapirain \cite{jaikin}). In recent years several positive results with respect to this problem were obtained (see \cite{AS1, gushu, fealcobershu, dms-conciseness, dms-bounded, DMS-engel-II}).
A word $w$ is called boundedly concise in a class of groups $\mathcal X$ if whenever the set of its values is finite of size at most $m$ in a group $G\in \mathcal X$, it always follows that the
subgroup $w(G)$ is finite of order bounded by a function of $m$ and $w$.
In \cite{fernandez-morigi} it is shown that every word which is concise in the class of all groups is actually boundedly concise.
There is a conjecture that every word which is concise in residually finite groups is boundedly concise (cf. \cite{fealcobershu}) but this probably will remain open for some time.
On the other hand, the multilinear commutator words and words implying virtual nilpotency are known to have this property \cite{dms-conciseness}. Recall that a word $w$ is said to imply virtual nilpotency if every finitely generated metabelian group $G$ where $w$ is a law
has a nilpotent subgroup of finite index.
It follows from Gruenberg's result \cite{gruen} that the Engel words imply virtual nilpotency, so they are boundedly concise.
The main result in \cite{fernandez-morigi} states that if $w$ is a multilinear commutator word and the set of $w$-values in a group $G$ has size at most $m$, then the verbal subgroup $w(G)$ is finite of order bounded by a function of $m$, independently of $w$. We will show that in the class of profinite groups the set $\mathcal{U}_k$ has the same property.
Recall that the $i$-th centre $Z_i(G)$ of a group $G$ is defined inductively by
$Z_0(G)=1$ and $Z_i(G)/Z_{i-1}(G)=Z(G/Z_{i-1}(G))$ for $i\ge 1$. The last term of the upper central series of a finite group $G$ will be denoted by $Z_\infty(G)$. A classical result, due to Baer, states that if $Z_{i}(G)$ has finite index $t$ in $G$, then $\gamma_{i+1}(G)$ is finite, and its order is bounded by a function of $i$ and $t$ (see the proof of \cite[14.5.1]{rob}). Similarly, if $G$ is a finite group such that $[G:Z_\infty(G)]=t$, then the order of $\gamma_\infty(G)$ is $t$-bounded (see \cite{kos}).
\begin{proposition}\label{boundedly}
Let $G$ be a profinite group in which $|\mathcal{U}_k(G)|\le m$ for some positive integer $m$. Then $|\gamma_k(G)|$ is $m$-bounded.
\end{proposition}
\begin{proof} As $\gamma_k(G)$ is generated by the set $\mathcal{U}_k(G)$ on which $G$ acts by conjugation, it follows that $[G:C_G(\gamma_k(G))]\le m!$. Thus $Z(\gamma_k(G))$ has $m$-bounded index in $\gamma_k(G)$ and, by Schur's theorem, $\gamma_k(G)'$ has $m$-bounded order (see \cite[4.12]{robinson2}). We may pass to the quotient $G/\gamma_k(G)'$ and, without loss of generality, assume that $\gamma_k(G)$ is abelian. Now we will prove that $|\gamma_k(G)|$ is finite and $m$-bounded by induction on $m$, the case $m=1$ being trivial. If $G$ is pronilpotent, then every $\gamma_k$-value is contained in $\mathcal{U}_k(G)$, so we can conclude by \cite[Theorem A]{fernandez-morigi}.
Suppose that $G$ is not pronilpotent. Then there exists a Sylow $p$-subgroup $P$ of $\gamma_k(G)$, for some prime $p$, and a $p'$-element $a$ such that $[P,a]\ne 1$.
Since $P$ is abelian, $[zy,a]=[y,a][z,a]$ for all $y,z\in P$, thus $[P,a]=\{[y,a]\mid y\in P\}$. It follows from Lemma \ref{bul2} that $[P,a]\subseteq \mathcal{U}_k(G)$ and so it has order at most $m$. Choose a nontrivial element $x\in [P,a]$. Then $x$ has order at most $m$ and $[G:C_G(x)]\le m$. Hence $\langle x^G\rangle$ has order at most $m^m$. Now we can pass to the quotient $G/\langle x^G\rangle$ and conclude by induction on $m$.
\end{proof}
As mentioned in the introduction
Theorem \ref{main} enables us to prove that if $G$ is a profinite group such that the cardinality of the set $\mathcal{U}_k(G)$ is less than $2^{\aleph_0}$, then $\gamma_k(G)$ is finite.
Our proof relies on the fact that multilinear commutator words are strongly concise \cite{dks}.
The next lemma will be useful.
\begin{lemma}\cite[Lemma 2.2]{dks}\label{conjfinite} Let $G$ be a profinite group and let $x\in G$. If the conjugacy
class $x^G$ contains less than $2^{\aleph_0}$ elements, then it is finite.\end{lemma}
\begin{proof}[Proof of Theorem \ref{strong}]
It is enough to prove that if $G$ is a profinite group such that the cardinality of the set of {\rm u}$_k$-commutators in $G$ is less than $2^{\aleph_0}$ then $\gamma_k(G)$ is finite. Under this assumption, the conjugacy class $x^G$ of every {\rm u}$_k$-commutator $x$ is finite, by Lemma \ref{conjfinite}. Thus all {\rm u}$_k$-commutators are $FC$-elements and $G$ is virtually nilpotent, by Proposition \ref{mainFC}.
Let $N$ be an open nilpotent normal subgroup of $G$.
If $g\in G$, Lemma \ref{conj} implies that the cardinality of the set of $\gamma_k$-values in $N\langle g\rangle$ is less than $2^{\aleph_0}$.
Taking into account that multilinear commutator words are strongly concise we conclude that $\gamma_k(N\langle g\rangle)$ is finite.
Choose a transversal $g_1,\dots,g_s$ of $N$ in $G$.
As each $\gamma_k(N\langle g_i\rangle)$ is normalized by $N$, its normal closure $N_i$ is finite. Thus we can pass to the quotient over the finite normal subgroup $N_1\cdots N_s$ and assume that $\gamma_k(N\langle g_i\rangle)=1$ for $i=1,\dots,s$. Now, if $x\in N$, the commutator $[x,{}_{k-1} g]$ is trivial for each $g\in G$, that is, $x$ is a right Engel element. Therefore for every finite quotient $\bar G$ of $G$ the image $\bar N$ of $N$ is contained in $Z_{\infty}(\bar G)$ (see for instance \cite[12.3.7]{rob}). It follows that $\gamma_\infty(\bar G)$ has $s$-bounded order. As this happens for every finite quotient of $G$, $\gamma_\infty(G)$ is finite. Without loss of generality, we can pass to the quotient over $\gamma_{\infty}(G)$ and assume that $G$ is pronilpotent. In this case, every $\gamma_k$-value is a {\rm u}$_k$-commutator. Since multilinear commutator words are strongly concise, it follows
that $\gamma_k(G)$ is finite. This concludes the proof.
\end{proof}
The World Is Our Field of Practice
This prophetic conversation, which Rev. angel Kyodo williams had with Krista in 2018, is an invitation to imagine and nourish the transformative potential of this moment — toward human wholeness. Rev. angel is an esteemed Zen priest and the second Black woman recognized as a teacher in the Japanese Zen lineage. She is one of our wisest voices on social evolution and the spiritual aspect of social healing.
Science of Mindlessness and Mindfulness
Her unconventional studies have long suggested what neuroscience is now revealing: Our experiences are formed by the words and ideas we attach to them. Naming something play rather than work — or exercise rather than labor — can mean the difference between delight and drudgery, fatigue or weight loss. What makes a vacation a vacation is not only a change of scenery, but the fact that we let go of the mindless everyday illusion that we are in control. Ellen Langer says mindfulness is achievable without meditation or yoga. She defines it as "the simple act of actively noticing things."
Rev. Otis Moss III
The Sound of the Genuine: Traversing 2020 with 'the Mystic of the Movement' Howard Thurman
An hour to sit with, and be filled. Two voices — one from the last century, one from ours — who inspire inward contemplation as an essential part of meeting the challenges in the world. Howard Thurman's book Jesus and the Disinherited, it was said, was carried by Martin Luther King Jr. alongside the Bible and the U.S. Constitution. Thurman is remembered as a philosopher and theologian, a moral anchor, a contemplative, a prophet, and pastor to the civil rights leaders. Rev. Otis Moss III, himself the son of one of those leaders, is a bridge to Thurman's resonance in the present day, and between the Black freedom movements then and now.
'When Things Fall Apart'
In this "spiritual book club" edition of the show, Krista and musician/artist Devendra Banhart read favorite passages and discuss When Things Fall Apart, a small book of great beauty by the Tibetan Buddhist teacher Pema Chödrön. It's a work — like all works of spiritual genius — that speaks from the nooks and crannies and depths of a particular tradition, while conveying truths about humanity writ large. Their conversation speaks with special force to what it means to be alive and looking for meaning right now.
Finding Ease in Aloneness
One of the great challenges of life is to learn to be alone peaceably, at home in oneself. And now, by way of a virus, we have been sent inside physically and emotionally, even if we're not home on our own. We're forced to work out the difference between isolation and loneliness or solitude. With teachers across the ages and drawing on his life from monasticism to marriage, Buddhist writer and scholar Stephen Batchelor teaches how to approach solitude as a graceful and life-giving practice.
Jerry Colonna
Can You Really Bring Your Whole Self to Work?
We still work with the old idea that we should check the messy parts of ourselves at the door of our professional lives. But Jerry Colonna says doing so cuts us off from the source of our creativity. "The result is that our organizations are actually less productive, less imaginative; not just poor workplaces for individuals to be, but poor places for collaboration … and spontaneity and laughter and humor." Colonna is a former venture capitalist who now coaches CEOs. He says undoing the old model starts with radical self-inquiry and asking ourselves questions like "Who is the person I've been all my life?" — and that it's only after we sort through the material of our personal lives that we can become better leaders.
What We Nurture
Sylvia Boorstein says spirituality doesn't have to look like sitting down and meditating. A Jewish-Buddhist teacher and psychotherapist, Boorstein says spirituality can be as simple as "folding the towels in a sweet way and talking kindly to the people in [your] family even though you've had a long day." And she insists that nurturing our inner lives in this way is not a luxury but something we can do in the service of others — from our children to strangers in the checkout line at the grocery store.
Life Beyond the Mind
"There is a place inside me that is far more powerful than the continuous mental noise," says Eckhart Tolle. The spiritual teacher began to gain attention with his 1997 book, The Power of Now. Millions of people around the world have found pragmatic tools in his vision that fundamentally complicates the notion, "I think, therefore I am."
Pico Iyer
The Urgency of Slowing Down
Pico Iyer is one of our most eloquent explorers of what he calls the "inner world" — in himself and in the 21st century world at large. The journalist and novelist travels the globe from Ethiopia to North Korea and lives in Japan. But he also experiences a remote Benedictine hermitage as his second home, retreating there many times each year. In this intimate conversation, we explore the discoveries he's making and his practice of "the art of stillness."
Mirabai Bush
Contemplation, Life, and Work
Mirabai Bush works at an emerging 21st century intersection of industry, social healing, and diverse contemplative practices. Raised Catholic with Joan of Arc as her hero, she is one of the people who brought Buddhism to the West from India in the 1970s. She is called in to work with educators and judges, social activists and soldiers. She helped create Google's popular employee program, Search Inside Yourself. Her life tells a fascinating narrative of our time: the rediscovery of contemplative practices, in many forms and from many traditions, in the secular thick of modern culture.
Finding Buoyancy Amidst Despair
It's easy to despair at all the bad news and horrific pictures that come at us daily. But Roshi Joan Halifax says this is a form of empathy that works against us. There's such a thing as pathological altruism. This zen abbot and medical anthropologist has nourishing wisdom as we face suffering in the world.
Happiness as Human Flourishing
A French-born Tibetan Buddhist monk and a central figure in the Dalai Lama's dialogue with scientists, Matthieu Ricard was dubbed "The Happiest Man in the World" after his brain was imaged. But he resists this label. In his writing and in his life, he explores happiness not as pleasurable feeling but as a way of being that gives you the resources to deal with the ups and downs of life and that encompasses many emotional states, including sadness. We take in Matthieu Ricard's practical teachings for cultivating inner strength, joy, and direction.
A Lovingkindness Meditation
Written by Sylvia Boorstein
The celebrated Jewish-Buddhist teacher and psychotherapist offers a metta, or lovingkindness meditation for ourselves, our loved ones and strangers far and near.
Thich Nhat Hanh, Cheri Maples, and Larry Ward
Being Peace in a World of Trauma
The Vietnamese Zen master, whom Martin Luther King Jr. nominated for a Nobel Peace Prize, is a voice of power and wisdom in this time of tumult in the world. We visited Thich Nhat Hanh at a retreat attended by police officers and other members of the criminal justice system; they offer stark, gentle wisdom for finding buoyancy and "being peace" in a world of conflict, anger, and violence.
Some Things Just Hurt
Working through discomfort doesn't mean denying our suffering. Instead, Sharon Salzberg suggests a better way to move forward: allowing ourselves to feel pain without judgment, and accepting the validity of our own emotions.
Seane Corn
Yoga, Meditation in Action
Yoga has infiltrated law schools and strip malls, churches and hospitals. This 5,000-year-old spiritual technology is converging with 21st-century medical science and with many religious and philosophical perspectives. Seane Corn takes us inside the practicalities and power of yoga. She describes how it helps her face the darkness in herself and the world, and how she's come to see yoga as a form of body prayer.
Arthur Zajonc
Holding Life Consciously
What happens when you bring together science and poetry on something like color or light? Arthur Zajonc is a physicist and contemplative. And he says we can all investigate life as vigorously from the inside as from the outside.
Tami Simon
Inner Life at Work: Business, Meditation, and Technology
You might call Tami Simon a spiritual entrepreneur. She's built a successful multimedia publishing company with a mission to disseminate "spiritual wisdom" by diverse teachers and thinkers like Pema Chödrön and Eckhart Tolle, Daniel Goleman and Brené Brown. She offers compelling lessons on joining inner life with life in the workplace — and advice on spiritual practice with a mobile device.
The Pause
Step away from the week with us.
The Pause is our Saturday morning newsletter, a gathering of threads from the far-flung, ongoing conversation that is The On Being Project. Stay up to date with our latest podcasts, writings, live events, and more.
\section{Introduction}
In a remarkable series of papers spanning four decades, Arveson developed a non-commutative analogue of boundary theory for nonselfadjoint operator algebras \cite{Arv69, Arv72, Arv98, Arv08}, which constitutes one of the most fundamental and fruitful areas of interaction between C*-algebras and nonselfadjoint operator algebras. One specific noncommutative boundary is the noncommutative analogue of the Shilov boundary, called the C*-envelope, which can be thought of as the smallest C*-algebra containing the given operator algebra in a reasonable sense. Computing the C*-envelope in various cases has been of interest and use to many authors over the years \cite{DFK17, DK11, DOM16, DS18, Hum20, KR16, MS98}. C*-envelopes have also had recent applications in classification of nonselfadjoint operator algebras \cite{DEG20}, finite dimensional approximation \cite{CR19}, crossed products \cite{KR16},
group theory \cite{BKKO17,KK17}, noncommutative geometry \cite{CvS+}, and noncommutative convexity \cite{EH19}.
In the seminal work of Pimsner \cite{Pim95}, many operator algebra constructions were generalized and unified by associating them to a single C*-correspondence.
This allowed Pimsner to generalize the work of Cuntz and Krieger \cite{CK80}, as well as many others. Pimsner's work was refined by Katsura~\cite{Kat04}, who removed all conditions on the C*-correspondence to obtain an in-depth study of what we now call Cuntz-Pimsner algebras. A natural context for further generalization, unification and insight was introduced by Fowler in his work on discrete product systems of C*-correspondences over quasi-lattice ordered semigroups \cite{Fow02}.
Fowler's Toeplitz-Nica-Pimsner algebras generalize Nica's Wiener-Hopf algebras from \cite{Nic92}, as well as Pimsner's Toeplitz algebras. Although Fowler provided a Cuntz-type algebra for regular product systems when the semigroup is directed, for many years it was unclear what the right notion of a Cuntz algebra of a product system should be.
Eventually, Sims and Yeend \cite{SY10} were able to give a definition of a Cuntz-Nica-Pimsner algebra ${\mathcal{N}}\O(X)$ for many new product systems. In further work, Carlsen, Larsen, Sims and Vitadello \cite{CLSV11} introduced a co-universal Cuntz-Nica-Pimsner type of algebra, which they denoted as ${\mathcal{N}}\O^r(X)$,
that satisfies an appropriate uniqueness theorem for equivariant homomorphisms. Their co-universal algebra was shown to exist under additional hypotheses on the product system; it generalizes reduced crossed products by quasi-lattice ordered groups,
Crisp-Laca boundary quotients \cite{CL07}, and higher-rank graph C*-algebras \cite{KuP}.
The tensor algebra $\mathcal{T}_{\lambda}(X)^+$ is the canonical nonselfadjoint subalgebra of the reduced Toeplitz C*-algebra, or Fock C*-algebra, $\mathcal{T}_{\lambda}(X)$ generated by the left-creation operators of the C*-correspondences that comprise the product system. It models many of the nonselfadjoint operator algebras that were previously investigated in the multivariable setting \cite{DK11,KK12}.
As with any nonselfadjoint
algebra, a fundamental problem regarding $\mathcal{T}_{\lambda}(X)^+$ is the identification of its C*-envelope $\mathrm{C}^*_{\textup{env}}(\mathcal{T}_{\lambda}(X)^+)$. In the case of a single $\mathrm{C}^*$-correspondence this was done by Katsoulis and Kribs \cite{KatK}, following earlier work of Muhly and Solel \cite{MS98}, who pioneered the study of tensor algebras. In \cite{DFK17}, Davidson, Fuller and Kakariadis identified the C*-envelope for tensor algebras of product systems associated with $\mathbb{Z}^n$-dynamical systems. In that paper dilation theoretic techniques merged with uniqueness theorems for the images of equivariant homomorphisms and gave strong motivating evidence that one can use C*-envelope techniques in order to prove the existence of a co-universal object for more general product systems over abelian orders.
This approach was fully materialized by Dor-On and Katsoulis \cite{DK20} who proved that for a compactly aligned product system $X$ over any abelian lattice ordered semigroup, $\mathrm{C}^*_{\textup{env}}(\mathcal{T}_{\lambda}(X)^+)$ has the co-universal property proposed in \cite{CLSV11}, thus showing in particular that the co-universal algebra ${\mathcal{N}}\O^r(X)$ of \cite{CLSV11} exists without the injectivity assumption when the semigroup is abelian lattice ordered. This result strengthened the important connection between nonselfadjoint and selfadjoint operator algebra theory and raised the tantalising possibility of proving directly the existence of an appropriate notion of C*-envelope that satisfies the desired co-universal property automatically beyond abelian orders. Even though some of the techniques of \cite{DK20} are indeed applicable to more general settings, it soon became clear
that significant progress would require new ideas. The purpose of the present paper is to realize this possibility through the use of an equivariant version of the C*-envelope.
The turning point in our investigation is the realization that if $X$ is a product system over
a subsemigroup $P$ of a group $G$, then the tensor algebra $\mathcal{T}_{\lambda}(X)^+$ comes equipped with a natural (normal) coaction of $G$, forming what we call a cosystem. A cosystem $({\mathcal{A}}, G, \delta)$ consists of an operator algebra ${\mathcal{A}}$ and a coaction $\delta \colon {\mathcal{A}} \to {\mathcal{A}} \otimes \mathrm{C}^*(G)$ by a discrete group $G$. In Section~\ref{S;cosystem} we develop a boundary theory for cosystems that parallels the corresponding theory for operator algebras. In particular, given a cosystem $({\mathcal{A}}, G, \delta)$, we define notions of $\mathrm{C}^*$-cover and C*-envelope for $({\mathcal{A}}, G, \delta)$. Both notions are equivariant analogues of the classical definitions. In Theorem~\ref{T:co-env}, which is the main result of Section~\ref{S;cosystem}, we show that the C*-envelope $\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$ of a cosystem $({\mathcal{A}}, G, \delta)$ always exists. Furthermore we give a picture of $\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$ that connects it with the C*-envelope of ${\mathcal{A}}$. Specifically, $\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$ is the $\mathrm{C}^*$-subalgebra of $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})\otimes \mathrm{C}^*(G)$ generated by $\delta({\mathcal{A}})$, equipped with the coaction ${\operatorname{id}}\otimes \Delta$, where $\Delta$ is the comultiplication on $G$.
Having developed a satisfactory theory of C*-envelopes for cosystems, we then move to applications. In Section~\ref{S;main} we investigate various C*-algebras associated with a product system over a right LCM subsemigroup of a group. Right LCM semigroups that embed in a group include quasi-lattice orders as well as several other important classes of semigroups. In Theorem~\ref{T:co-univ} we show that for every compactly aligned product system $X$ over a right LCM subsemigroup $P$ of a group $G$, the C*-envelope of the cosystem $({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) $ has the co-universal property with respect to injective, gauge equivariant representations of $X$. This resolves the problem of existence of a co-universal C*-algebra,
which is one of the central problems raised by Carlsen, Larsen, Sims and Vitadello in \cite{CLSV11}.
Specifically, our Theorem~\ref{T:co-univ} removes all the injectivity assumptions on $X$ from \cite[Theorem 4.1]{CLSV11} and at the same time generalizes the results for abelian semigroups of \cite{DK20} to the realm of right LCM semigroups that embed in a group. We remark that all the necessary facts from the theory of product systems over these semigroups are developed here from scratch, so, in particular, the proof of Theorem~\ref{T:co-univ} is essentially self-contained as it requires only some additional basic facts regarding the cross sectional C*-algebra of a Fell bundle \cite{Exe97}.
In \cite{Seh18} Sehnem introduced a covariance C*-algebra $A\times_X P$ associated to a product system $X$ over a general subsemigroup $P$ of a group $G$ with coefficients in a C*-algebra $A$. There is a natural coaction of $G$ on $A\times_X P$ giving a grading
$\S{\mathcal{C}} X := \{ [A \times_X P]_g \}_{g \in G}$.
Sehnem's covariance algebra satisfies an important property: any representation of $A\times_X P$ that is injective on $A$ is automatically injective on the fiber $[A \times_X P]_e$ over the identity. In Theorem~\ref{T:co-un is Fell} we show that if the product system $X$ is compactly aligned over a right LCM subsemigroup of a group, then our C*-envelope is naturally isomorphic to the reduced cross-sectional algebra of $\S{\mathcal{C}} X$, while Sehnem's algebra $A \times_X P$ is isomorphic to the full cross-sectional algebra of $\S{\mathcal{C}} X$. We also consider the quotient of ${\mathcal{T}}_\lambda(X) $ by the image of Sehnem's ideal ${\mathcal{I}}_\infty$, and in Corollary~\ref{C:exa Seh} we show that under a mild assumption (which is satisfied by all right LCM subsemigroups of exact groups), our C*-envelope is canonically isomorphic to this quotient: $ \mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) \simeq \T_\la(X)/ q_\la(\I_\infty)$. When combined with our main result, Theorem~\ref{T:co-univ}, these results give a very detailed picture of $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) $.
In the final section of the paper we give an application of our theory to Hao-Ng type isomorphisms, much in the spirit of earlier works \cite{DK20, Kat17, KR16}.
\section{Preliminaries}
If ${\mathcal{X}}$ and ${\mathcal{Y}}$ are subspaces of some ${\mathcal{B}}(H)$ then we write $\overline{{\mathcal{X}} {\mathcal{Y}}}:= \overline{\operatorname{span}}\{x y \mid x \in {\mathcal{X}}, y \in {\mathcal{Y}}\}$.
We denote the spatial tensor product by $\otimes$.
All the semigroups considered in this paper are assumed to embed in a group and to contain the identity.
\subsection{Operator algebras}
We begin by establishing terminology and recalling some fundamental facts in the theory of operator algebras.
For additional details and proofs, the monographs \cite{BL04, Pau02} provide an excellent introduction to the subject.
By an \emph{operator algebra} ${\mathcal{A}}$ we mean a norm-closed subalgebra of some ${\mathcal{B}}(H)$ for a Hilbert space $H$.
By a \emph{representation} of ${\mathcal{A}}$ we mean a completely contractive homomorphism $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$.
When $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$ is a representation, we will always assume it is non-degenerate in the sense that $\phi({\mathcal{A}})H$ is dense in $H$.
Meyer \cite{Mey01} has established the passage from the unital to the non-unital theory, which we now explain.
Suppose that ${\mathcal{A}} \subseteq {\mathcal{B}}(H)$ and $I_H \notin {\mathcal{A}}$.
Meyer shows that if $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(K)$ is a (completely isometric) representation then the extension $\phi^1 \colon {\mathcal{A}}^1 \to {\mathcal{B}}(K)$ given by
\[
\phi^1(a + \lambda I_H) = \phi(a) + \lambda I_K \quad\text{for}\quad {\mathcal{A}}^1 = \operatorname{span}\{{\mathcal{A}}, I_H\}
\]
is also a (resp.\ completely isometric) representation.
Hence ${\mathcal{A}}^1$ is independent of the completely isometric representation of ${\mathcal{A}}$ and thus constitutes \emph{the unique} ``one-point" unitization of ${\mathcal{A}}$.
A \emph{dilation} of a representation $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$ is a representation $\phi' \colon {\mathcal{A}} \to {\mathcal{B}}(H')$ such that $H \subseteq H'$ and $\phi(a) = P_H \phi'(a) |_H$ for all $a \in {\mathcal{A}}$.
A representation $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$ is called \emph{maximal} if every dilation $\phi' \colon {\mathcal{A}} \to {\mathcal{B}}(H')$ of $\phi$ is trivial, in the sense that $H$ is reducing for $\phi({\mathcal{A}})$.
The existence of maximal dilations in the unital case was first established by Dritschel and McCullough \cite{DM05}, and later simplified by Arveson \cite{Arv08}.
Dor-On and Salomon \cite{DS18} have shown that a representation $\phi$ is maximal if and only if its unitization $\phi^1$ is so. Hence, maximal dilations exist for possibly non-unital operator algebras, as arising from maximal dilations of their unitizations.
Now consider ${\mathcal{A}}$ inside the C*-algebra $\mathrm{C}^*({\mathcal{A}})$ it generates.
By passing to the unitization and applying Arveson's Extension Theorem we see that every representation $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$ admits an extension $\widetilde{\phi} \colon \mathrm{C}^*({\mathcal{A}}) \to {\mathcal{B}}(H)$ to a completely contractive and completely positive map (ccp).
A representation $\phi \colon {\mathcal{A}} \to {\mathcal{B}}(H)$ is said to have the \emph{unique extension property (UEP)} if every ccp extension to $\mathrm{C}^*({\mathcal{A}})$ is a $*$-representation, and thus $\phi$ has a unique extension to a $*$-representation of $\mathrm{C}^*({\mathcal{A}})$.
Arveson \cite{Arv08} shows that a representation is maximal if and only if it has the UEP in the unital case.
Dor-On and Salomon \cite{DS18} have extended this to the non-unital case as well.
The existence of maximal dilations leads naturally to the concept of the C*-envelope for an operator algebra, which we now discuss.
We say that $(C, \iota)$ is a \emph{C*-cover} of ${\mathcal{A}}$ if $\iota \colon {\mathcal{A}} \to C$ is a completely isometric representation with $C = \mathrm{C}^*(\iota({\mathcal{A}}))$.
The \emph{C*-envelope} $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ of ${\mathcal{A}}$ is a C*-cover $(\mathrm{C}^*_{\textup{env}}({\mathcal{A}}), \iota)$ with the following universal property:
if $(C', \iota')$ is a C*-cover of ${\mathcal{A}}$ then there exists a (necessarily unique) $*$-epimorphism $\phi \colon C' \to \mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ such that the following diagram
\begin{equation} \label{eq;env}
\xymatrix{
& & C' \ar@{.>}[d]^{\phi} \\
{\mathcal{A}} \ar[rru]^{\iota'} \ar[rr]^{\iota} & & \mathrm{C}^*_{\textup{env}}({\mathcal{A}})
}
\end{equation}
commutes.
Arveson predicted the existence of the C*-envelope, which he computed for a variety of operator algebras \cite{Arv69}, but the existence problem was open for a decade until
Hamana solved it for unital algebras by proving the existence of injective envelopes \cite{Ham79}.
It follows from \cite[Subsection 2.2]{DS18} that the C*-envelope of an operator algebra ${\mathcal{A}}$ is the C*-algebra generated by a maximal completely isometric representation, even when ${\mathcal{A}}$ is non-unital.
\subsection{C*-correspondences}
A \emph{$\mathrm{C}^*$-correspondence} $X$ over $A$ is a right Hilbert module over a C*-algebra $A$ with a left action of $A$
given by a $*$-homomorphism $\varphi_X$ of $ A $ into the adjointable operators on $X$.
We write $\L X$ for the adjointable operators on $X$ and ${\mathcal{K}} X$ for the norm closure of the generalized finite rank operators on $X$.
For two C*-corresponden\-ces $X, Y$ over the same $A$ we write $X \otimes_A Y$ for the balanced tensor product over $A$.
We say that $X$ is unitarily equivalent to $Y$ (symb. $X \simeq Y$) if there is a surjective adjointable operator $U \in \L(X,Y)$ such that $\sca{U \xi, U \eta} = \sca{\xi, \eta}$ and $U (a \xi b) = a U(\xi) b$ for all $\xi, \eta \in X$ and $a,b \in A$.
A \emph{Toeplitz representation} of a C*-correspondence $X$ over $A$ is a pair $(\pi,t)$ such that $\pi : A \rightarrow B(\H)$ is a $*$-representation and $t$ is a left module map implemented by $\pi$ which satisfies $\pi (\langle \xi , \eta \rangle) = t(\xi)^* t(\eta)$. Then $t$ is automatically a bimodule map via $\pi$. When $(\pi,t)$ is a Toeplitz representation of the C*-correspondence $X$, there exists an induced $*$-representation of ${\mathcal{K}} X$ denoted by $t$ as well and determined by $t(\theta_{\xi, \eta}) = t(\xi) t(\eta)^*$ for all rank-one operators $\theta_{\xi, \eta} \in {\mathcal{K}} X$.
\subsection{Product systems and their representations}
Let $P$ be a unital discrete subsemigroup of a discrete group $G$.
We will write $P^* = P\cap P^{-1}$ for the set of invertible elements in $P$.
We write $\mathrm{C}^*_\lambda(P)$ for the C*-algebra generated by the left regular representation of $P$, i.e., the shift operators $V_p$ on $\ell^2(P)$ given by
\[
V_p e_r = e_{pr} \text{ for all } r \in P.
\]
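For orientation, we recall a standard fact (not a result of this paper): when $P = \mathbb{N} \subseteq \mathbb{Z}$, the isometry $V_1$ is the unilateral shift on $\ell^2(\mathbb{N})$ and
\[
\mathrm{C}^*_\lambda(\mathbb{N}) = \mathrm{C}^*(V_1)
\]
is the classical Toeplitz algebra, which by Coburn's theorem contains the compact operators ${\mathcal{K}}(\ell^2(\mathbb{N}))$ and has quotient isomorphic to $C(\mathbb{T})$.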
\begin{definition}
A \emph{product system $X$ over $P$ with coefficients in a C*-algebra $A$} is a family $\{X_p \mid p \in P\}$ of C*-correspondences over $A$ together with multiplication maps
$M_{p,q} : X_p \otimes_A X_q \to X_{pq}$ such that
\begin{enumerate}
\item $X_e$ is the standard bimodule ${}_A A_A$, and $M_{e,e} : A \otimes_A A \xrightarrow{\cong} A $ is simply multiplication on $A$;
\item if $p =e$, then $M_{e, q}: A \otimes_A X_q \xrightarrow{\cong} \overline{A \cdot X_q}$ is the left action of $A$ on $X_q$;
\item if $q = e$ then $M_{p, e}: X_p \otimes_A A \xrightarrow{\cong} X_p$ is the right action of $A$ on $X_p$;
\item if $p, q \in P \setminus \{e\}$, then
$M_{p,q} : X_p \otimes_A X_q \xrightarrow{\cong} X_{pq}$;
\smallskip\item the multiplication maps are associative in the sense that
\[
M_{pq, r} (M_{p,q} \otimes {\operatorname{id}}_{X_r}) = M_{p, qr} ({\operatorname{id}}_{X_p} \otimes M_{q,r}) \text{ for all } p,q,r \in P.
\]
\end{enumerate}
Throughout this work we will also assume that the left action of $A$ on $X_q$ is nondegenerate (or essential) in the sense that $\overline{A \cdot X_q} =X_q$ for every $q\in P$, and hence the multiplication map $M_{e,q}$ in \textup{(ii)} is an isomorphism of $X_e\otimes_A X_q$ onto $X_q$.
\end{definition}
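To fix ideas (this simply recalls the single-correspondence setting), when $P = \mathbb{N}$ a product system is determined by the C*-correspondence $X_1$, since the multiplication maps implement unitary equivalences
\[
X_n \simeq \underbrace{X_1 \otimes_A \cdots \otimes_A X_1}_{n \text{ times}}
\quad\text{for all } n \geq 1,
\]
and so the constructions below recover the Toeplitz and tensor algebras of a single C*-correspondence of \cite{Pim95, Kat04}.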
\begin{remark}
We assume that the left action of $A$ is nondegenerate in order to be able to freely use the results from \cite{Seh18}. Observe that, as pointed out in \cite[Remark~1.3]{KL19b}, this assumption is automatically satisfied when $P$ has a nontrivial unit $u$ because then
\[
X_q = X_u \otimes_A X_{u^{-1}q} = X_u \otimes_A X_{u^{-1}} \otimes_A X_q= X_e\otimes_A X_q.
\]
It is plausible that one could extend the main results from \cite{Seh18} to product systems with degenerate left actions, and that this
would allow us to include such product systems in our results.
\end{remark}
Henceforth we will be suppressing the use of the symbols $M_{p,q}$,
thus writing $\xi_p \xi_q$ for the image of $\xi_p \otimes \xi_q$ under $M_{p,q}$, and so
\[
\varphi_{pq}(a)(\xi_p \xi_q) = (\varphi_p(a) \xi_p) \xi_q \text{ for all } a \in A \text{ and } \xi_p \in X_p, \xi_q \in X_q.
\]
The product system structure gives rise to maps
\[
i_{p}^{pq} \colon \L X_{p} \to \L X_{pq}
\; \textup{ such that } \;
i_{p}^{pq}(S) (\xi_{p} \xi_{q})
=
(S \xi_{p}) \xi_{q}.
\]
If $x \in P^*$ then $i_{r}^{rx} \colon \L X_r \to \L X_{rx}$ is a $*$-isomorphism with inverse $i_{rx}^{rxx^{-1}} \colon \L X_{rx} \to \L X_{r}$.
\begin{definition}
Let $P$ be a subsemigroup of a group $G$ and let $X$ be a product system over $P$. A \emph{Toeplitz representation} $t = \{t_p\}_{p\in P}$ of the product system $\{X_p \mid {p\in P}\}$ is a family of maps $t_p : X_p \rightarrow {\mathcal{B}}(H)$ such that $(t_e,t_p)$ is a Toeplitz representation of $X_p$ and
\[
t_p(\xi_p) t_q(\xi_q) = t_{pq}(\xi_p \xi_q) \text{ for all } \xi_p \in X_p, \xi_q \in X_q.
\]
The representation $t$ is said to be \emph{injective} if the homomorphism $t_e: X_e \to {\mathcal{B}}(H)$ is injective, in which case $t_p$ is isometric for each $p\in P$.
The \emph{Toeplitz algebra ${\mathcal{T}}(X)$ of $X$} is the universal C*-algebra generated by $X$ with respect to the Toeplitz representations of $X$.
The \emph{Toeplitz tensor algebra ${\mathcal{T}}(X)^+$ of $X$} is the norm-closed nonselfadjoint subalgebra of ${\mathcal{T}}(X)$ generated by $X$.
\end{definition}
A Toeplitz representation $t = \{t_p\}_{p\in P}$ induces a representation $t_{r,s}$ of ${\mathcal{K}}(X_s, X_r)$ on the same Hilbert space, determined by
$t_{r,s}(\theta_{\xi_r, \xi_s}) = t_r(\xi_r) t_s(\xi_s)^*$. In the case $s = r$ we slightly abuse the notation, as already indicated above for a single correspondence, and write $t_s$ in place of $t_{s,s}$.
This gives a representation triple $(t_r, t_{r,s}, t_s)$ of the bimodule $({\mathcal{K}} X_r, {\mathcal{K}}(X_s, X_r), {\mathcal{K}} X_s)$.
\begin{proposition}\label{P:star inv LCM}
Let $P$ be a subsemigroup of a group $G$ and let $X$ be a product system over $P$.
Let $t = \{t_p\}_{p\in P}$ be a Toeplitz representation of $X$.
If $r \in P^*$ then
\[
t_{r}(X_r)^* = t_{r^{-1}}(X_{r^{-1}}).
\]
If $w \in P$ and $r \in P^*$ then
\[
i_w^{wr}(k_w) \in {\mathcal{K}} X_{wr}
\text{ and }
t_{wr}(i_{w}^{wr}(k_w)) = t_w(k_w) \text{ for all } k_w \in {\mathcal{K}} X_w.
\]
\end{proposition}
\begin{proof}
Since $r \in P^*$ we have that $X_e \simeq X_r \otimes X_{r^{-1}}$.
Since $r^{-1} \in P^*$ we also have that $X_e \otimes X_{r^{-1}} \simeq X_{r^{-1}}$.
Hence we get that
\[
t_e(X_e) = \overline{t_r(X_r) t_{r^{-1}}(X_{r^{-1}})}
\quad\text{and}\quad
\overline{t_e(X_e) t_{r^{-1}}(X_{r^{-1}})} = t_{r^{-1}}(X_{r^{-1}}).
\]
By multiplying the first equation on the left with $t_{r}(X_r)^*$, and by using the second equation, we get
\[
t_r(X_r)^* \subseteq \overline{t_r(X_r)^* t_r(X_r) t_{r^{-1}}(X_{r^{-1}})} \subseteq \overline{t_e(X_e) t_{r^{-1}}(X_{r^{-1}})} = t_{r^{-1}}(X_{r^{-1}}).
\]
Taking adjoints gives $t_r(X_r) \subseteq t_{r^{-1}}(X_{r^{-1}})^*$.
As this holds for arbitrary $r \in P^*$ it also holds for $r^{-1} \in P^*$ and so
\[
t_{r^{-1}}(X_{r^{-1}})^* \subseteq t_{r}(X_r).
\]
By applying adjoints we get the required reverse inclusion.
Since $r \in P^*$ we have that $i_w^{wr} \colon \L X_w \to \L X_{wr}$ is a $*$-isomorphism and thus it preserves the compact operators.
Therefore $i_w^{wr}(k_w) \in {\mathcal{K}} X_{wr}$ for $k_w \in {\mathcal{K}} X_w$.
By applying both sides to elementary tensors we see that $t_{wr}(i_w^{wr}(k_w))$ coincides with $t_w(k_w)$ when restricted to $\overline{t_{wr}({\mathcal{K}} X_{wr}) H}$, where $H$ is the Hilbert space on which $t = \{t_p\}_{p\in P}$ acts.
On the other hand $t_w(k_w)$ is completely defined by its representation on $\overline{t_w({\mathcal{K}} X_w) H}$.
However by the first part we have that
\[
t_{wr}({\mathcal{K}} X_{wr}) = \overline{t_w(X_w) t_r(X_r) t_r(X_r)^* t_w(X_w)^*} = \overline{t_w(X_w) t_e(X_e) t_w(X_w)^*} = t_w({\mathcal{K}} X_w),
\]
and so $t_{wr}(i_{w}^{wr}(k_w)) = t_w(k_w)$.
\end{proof}
The Fock space representation $\overline{t}$ of Fowler \cite{Fow02} ensures that $X$ embeds isometrically in ${\mathcal{T}}(X)$. It is given as follows. Let ${\mathcal{F}}(X) = \oplus_{r \in P} X_r$ and for $\xi_p \in X_p$ define
\[
\overline{t}_p(\xi_p) \eta_r = \xi_p \eta_r
\quad\text{for all}\
\eta_r \in X_r.
\]
Then each $\overline{t}_p$ defines a representation of $X_p$, so that $\overline{t}:=\{\overline{t}_p\}_{p \in P}$ is a Toeplitz representation of $X$ and induces a representation of ${\mathcal{T}}(X)$. By taking the compression of $\overline{t}_e$ at the $(e, e)$-entry we see that $\overline{t}_e$ is injective, and hence $\overline{t}_p$ is injective for each $p\in P$.
\begin{definition}
Let $P$ be a subsemigroup of a group $G$ and let $X$ be a product system over $P$.
The \emph{Fock algebra} ${\mathcal{T}}_\lambda(X)$ is the C*-algebra generated by the Fock representation.
The \emph{Fock tensor algebra} ${\mathcal{T}}_\lambda(X)^+$ is the subalgebra of ${\mathcal{T}}_\lambda(X)$ generated by $X$.
\end{definition}
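For orientation, consider the standard special case $P = \mathbb{Z}_+$ with $A = \mathbb{C}$ and $X_n = \mathbb{C}$ for all $n \in \mathbb{Z}_+$. Then ${\mathcal{F}}(X) \simeq \ell^2(\mathbb{Z}_+)$ and $\overline{t}_1(1)$ is the unilateral shift, so that ${\mathcal{T}}_\lambda(X)$ is the classical Toeplitz algebra and ${\mathcal{T}}_\lambda(X)^+$ is completely isometrically isomorphic to the disc algebra $A(\mathbb{D})$.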
\subsection{Product systems over right LCM semigroups}
A semigroup $P$ is said to be a \emph{right LCM semigroup} if it is left cancellative and satisfies \emph{Clifford's condition} \cite{Law12, Nor14}:
\begin{center}
for every $p, q \in P$ with $p P \cap q P \neq \emptyset$ there exists a $w \in P$ such that $p P \cap q P = w P$.
\end{center}
In other words, if $p, q \in P$ have a right common multiple then they have a right Least Common Multiple.
We always assume that the semigroup $P$ is contained in a group, and that it contains the identity element. It follows that $P$ is automatically cancellative, and we will refer to $P$ simply as a \emph{right LCM subsemigroup of a group}.
It is clear that {if an element $w\in P$} is a right Least Common Multiple for $p, q \in P$ then so is $w x$ for every unit
$x \in P^*:= P\cap P^{-1}$.
Right LCM semigroups that embed in a group and have no nontrivial units, so that least common multiples are unique whenever they exist,
have been called \emph{weak quasi-lattice ordered} semigroups in \cite{ABCD2019}.
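For illustration: in $P = \mathbb{Z}_+^2 \subseteq \mathbb{Z}^2$ (written multiplicatively) we have
\[
p P \cap q P = (p \vee q) P
\quad\text{for}\quad
p \vee q := (\max\{p_1, q_1\}, \max\{p_2, q_2\}),
\]
so right LCMs always exist and are unique; at the other extreme, if $P = G$ is a group then $p P \cap q P = G = w P$ for \emph{every} $w \in G$, so every element of $G$ is a right LCM of $p$ and $q$.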
\begin{example}
Right LCM subsemigroups of groups include as primary examples the quasi-lattice orders defined in \cite{Nic92}. Several noteworthy examples beyond quasi-lattice orders have been considered recently in the context of isometric representations. We would like to list a few here in order to illustrate the variety. Since we do not require any specialized knowledge of these examples in this paper, we limit ourselves to giving references where details can be found.
The inclusion of an Artin monoid in its corresponding Artin group is always a right LCM subsemigroup of a group
\cite{Brieskorn-Saito}. Artin monoids have trivial unit groups so they actually determine weak quasi-lattice orders in their respective groups. At this point, only the particular cases of Artin monoids of spherical and rectangular type are actually known to be
true quasi-lattice orders; see \cite{Brieskorn-Saito} and \cite{Crisp-Laca2002}.
Another important class of right LCM subsemigroups of groups are the
inclusions of Baumslag-Solitar monoids $B(m,n)^+$ in their corresponding Baumslag-Solitar groups
$B(m,n):= \langle a,b \mid ab^m = b^n a\rangle$. These monoids always have trivial unit groups, and they give quasi-lattice orders in their groups if and only if either $mn>0$ \cite[Theorem 2.11]{Spi12}
or else $mn < 0$ and $|m|=1$ \cite[Lemma 2.12]{Spi12}, see also
\cite[Example~3.5]{ABCD2019}. The remaining case is particularly interesting because
when $mn<0$ and $|m| \neq 1$, no group embedding of $B(m,n)^+$ can be a quasi-lattice order. This is proved directly in \cite[Proposition 3.10]{ABCD2019} under the extra assumption that $m$ does not divide $n$; a proof without this assumption can be derived from the failure of the Toeplitz condition in this case, as shown in \cite[Section~4.2]{Li2020}.
There are also right LCM semigroups such as $\mathbb Z\rtimes \mathbb N^{\times}$,
in which the whole additive part $\mathbb Z \times \{1\}$
consists of units. A sizable general class of examples like this arises from considering the semigroup $R\rtimes R^\times$ of affine
transformations of an integral domain $R$ \cite{Li13}. The hypothesis that $R$ is an integral domain is necessary for
these semigroups to embed in groups, which can then be taken to be the groups of affine
transformations of the corresponding fields of fractions. The semigroup $R\rtimes R^{\times}$ is a right LCM
semigroup if and only if $R$ satisfies the GCD condition \cite[Proposition 2.23]{Nor14}. The best known examples are the $ax+b$ semigroups
of the rings of algebraic integers in number fields of class number $1$. In these, the groups of units are nonabelian and consist
of the semidirect products of the additive group of those rings by the multiplicative action of the units.
Partly inspired by these, more right LCM semigroups that embed in a group have been constructed
as semidirect products associated to certain algebraic actions of $\mathbb N^k$ on an abelian group
that respect the order in the sense of \cite[Definition 8.1]{BLS18}.
\end{example}
It will be convenient for us to work with finite subsets $F$ of $P$
on which a `local' right LCM operation can be defined that reflects the structure of upper bounds in $P$, thus generalizing the notion of
$\vee$-closed sets used in the case of a quasi-lattice order.
The problem is that expressions like $p\vee q$ or $ \operatorname{lcm}(p,q)$ do not have an intrinsic meaning for a general right LCM semigroup
because of the nonuniqueness caused by the presence of nontrivial units in $P$. So we need to impose restrictions on $F$ to ensure uniqueness.
\begin{definition}\label{def:veeclosed}
Let $P$ be a right LCM subsemigroup of a group $G$.
A finite subset $F$ of $P$ is said to be \emph{{$\vee$-closed}} if for every $p,q \in F$ with $p P \cap q P \neq \emptyset$ there exists a unique $w \in F$ such that $p P \cap q P = w P$, equivalently, $F$ contains exactly one right LCM for any two of its elements that have a right LCM in $P$.
\end{definition}
Another way to approach this is to realize that a finite subset $F\subseteq P$ is
\emph{{$\vee$-closed}} iff the restriction to $F$ of the `right ideal map' ${\mathcal{I}}\colon p\mapsto pP$
is injective and its image ${\mathcal{I}}(F) := \{p P \mid p \in F\}$ is closed under nonempty intersections.
It is then easy to see that if $F$ is $\vee$-closed, then the familiar relation
\[
p \leq q \Leftrightarrow p^{-1}q \in P
\]
actually defines a partial order on $F$, and hence, being finite, each {$\vee$-closed} set has maximal and minimal elements.
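For a concrete illustration in $P = \mathbb{Z}_+^2$: the set $\{(1,0), (0,1)\}$ is not {$\vee$-closed}, since the right LCM $(1,1)$ of its two elements does not belong to it, whereas $\{(1,0), (0,1), (1,1)\}$ is {$\vee$-closed}.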
Following Fowler's work \cite{Fow02}, Brownlowe, Larsen and Stammeier \cite{BLS18}, and Kwasniewski and Larsen \cite{KL19a, KL19b} considered product systems over right LCM semigroups.
\begin{definition}
A product system $X$ over a right LCM semigroup $P$ with coefficients from $A$ is called \emph{compactly aligned} if for $p, q \in P$ with $p P \cap q P = w P$ we have that
\[
i_{p}^{w}(k_p) i_{q}^{w}(k_q) \in {\mathcal{K}} X_{w} \text{ whenever } k_p \in {\mathcal{K}} X_{p}, k_q \in {\mathcal{K}} X_{q}.
\]
\end{definition}
A clarifying remark is in order: the definition above is independent of the choice of $w$.
Recall that if $w'$ is a right LCM of $p, q$ then $w' = wx$ for some $x \in P^*$.
Since $\L X_{w} \simeq \L X_{wx}$ we have that $i_{p}^{w}(k_p) i_{q}^{w}(k_q) \in {\mathcal{K}} X_w$ if and only if $i_{p}^{wx}(k_p) i_{q}^{wx}(k_q) = i^{wx}_w (i_{p}^{w}(k_p) i_{q}^{w}(k_q)) \in {\mathcal{K}} X_{wx}$ for all $x \in P^*$.
\begin{definition}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P $ with coefficients in $A$.
A \emph{Nica-covariant representation $t = \{t_p\}_{p\in P}$} is a Toeplitz representation of $X$ that in addition satisfies the \emph{Nica-covariance condition}: for all $k_p \in {\mathcal{K}} X_{p}$ and $k_q \in {\mathcal{K}} X_{q}$,
\[
t_{p}(k_p) t_{q}(k_q) =
\begin{cases}
t_{w} (i_{p}^{w}(k_p) i_{q}^{w}(k_q)) & \text{ if } p P \cap q P = w P, \\
0 & \text{ otherwise}.
\end{cases}
\]
The \emph{Nica-Toeplitz algebra ${\mathcal{N}}{\mathcal{T}}(X)$ of $X$} is the universal C*-algebra generated by $X$ with respect to the Nica-covariant representations of $X$.
The \emph{Nica-Toeplitz tensor algebra ${\mathcal{N}}{\mathcal{T}}(X)^+$ of $X$} is the norm closed nonselfadjoint subalgebra of ${\mathcal{N}}{\mathcal{T}}(X)$ generated by $X$.
\end{definition}
Notice that in the definition of Nica-covariance, the choice of the least common multiple is arbitrary. This is because Proposition \ref{P:star inv LCM} implies that
\[
\psi_{w} (i_{p}^{w}(k_p) i_{q}^{w}(k_q))
=
\psi_{wx} (i_{p}^{wx}(k_p) i_{q}^{wx}(k_q)), \,\,
k_p \in {\mathcal{K}} X_p, k_q \in {\mathcal{K}} X_q,
\]
provided that $pP \cap qP = wP$ and $x \in P^*$.
Under the assumption of compact alignment, one can check that the Fock representation is automatically Nica-covariant.
Thus ${\mathcal{N}}{\mathcal{T}}(X)$ is non-trivial. In the case where $P = \mathbb{Z}_+$ we actually have that ${\mathcal{N}}{\mathcal{T}}(X) = {\mathcal{T}}(X)$. This is not necessarily true for other right LCM semigroups. Indeed, in the case where $P = \mathbb{Z}^n_+$, $n \geq 2$,
Dor-On and Katsoulis provide a counterexample to this effect in \cite[Example 5.2]{DK20}.
The same example further shows that ${\mathcal{T}}(X)^+$ need not be completely isometric to ${\mathcal{N}}{\mathcal{T}}(X)^+$.
Our next goal is to understand the cores of Nica-covariant representations of $X$. So let $t=\{t_p\}_{p \in P}$ be a Nica-covariant representation of $X$.
We compute
\[
t_p(X_p)^* t_p(X_p) \cdot t_p(\xi_p)^* t_q(\xi_q) \cdot t_q(X_q)^* t_q(X_q)
\subseteq
\overline{t_p(X_p)^* t_p({\mathcal{K}} X_p) t_q({\mathcal{K}} X_q) t_q(X_q)}
\]
and then take a limit by an approximate identity in $\overline{t_p(X_p)^* t_p(X_p)}$ and in $\overline{t_q(X_q)^* t_q(X_q)}$, to derive that
\[
t_p(\xi_p)^* t_q(\xi_q) \in \overline{t_{p'}(X_{p'}) t_{q'}(X_{q'})^*} \quad\text{for}\quad w P = pP \cap q P, p' = p^{-1} w, q' = q^{-1} w,
\]
and
\[
t_p(\xi_p)^* t_q(\xi_q) = 0 \quad\text{for}\quad pP \cap qP = \emptyset.
\]
Hence the C*-algebra $\mathrm{C}^*(t)$ generated by $\{t_p(X_p)\}_{p\in P}$ is given by
\[
\mathrm{C}^*(t) = \overline{\operatorname{span}}\{t_p(\xi_p) t_q(\xi_q)^* \mid \xi_p \in X_p, \xi_q \in X_q \textup{ and } p,q \in P\}.
\]
If $F\subseteq P$, then we write
\begin{equation}\label{eq:BFtdef}
B_{F, t}
:= \overline{\operatorname{span}} \{ t_{p}(k_p) \mid k_p \in {\mathcal{K}} X_p, p \in F \}
\end{equation}
By definition, the \emph{core} of the representation $t$ is the set $B_{P, t}$, which is clearly given by
\[
B_{P, t} = \overline{\bigcup \{ B_{F, t} \mid F \subseteq P \text{ finite}\}}.
\]
Notice that when ${\mathcal{I}}(F)$ is closed under intersections, Nica-covariance implies that $B_{F,t}$ is a $\mathrm{C}^*$-subalgebra of $\mathrm{C}^*(t)$.
We wish to show next that if we restrict the above union to $\vee$-closed sets $F$ we still obtain $B_{P, t}$ and that
for $\vee$-closed sets $F$ the linear spans in \eqref{eq:BFtdef} are automatically closed.
We begin with this last claim.
\begin{proposition}
Let $P$ be a right LCM subsemigroup of a group $G$, let $X$ be a compactly aligned product system over $P$ and let $t=\{t_p\}_{p \in P}$ be a Nica-covariant representation of $X$.
If $F \subseteq P$ is a {$\vee$-closed} set, then
\[
B_{F, t} = \operatorname{span}\{t_p(k_{p}) \mid k_p \in {\mathcal{K}} X_p, p \in F\}.
\]
\end{proposition}
\begin{proof}
It suffices to prove the claim in the case where $t = \widehat{t}$ is the universal representation of ${\mathcal{N}}{\mathcal{T}}(X)$.
Let $\overline{t}$ be the Fock representation of $X$ and $\Phi \colon {\mathcal{N}}{\mathcal{T}}(X) \to \mathrm{C}^*(\overline{t})$ be the canonical $*$-epimorphism.
Let $f = \lim_i f_i$ for a net $(f_i)$ with
\[
f_i = \sum_{p \in F} \widehat{t}_p(k_{p,i}) \in \operatorname{span}\{\widehat{t}_p(k_{p}) \mid k_p \in {\mathcal{K}} X_p, p \in F\}.
\]
Then we also have that
\[
\Phi(f) = \lim_i \sum_{p \in F} \overline{t}_p(k_{p,i}).
\]
Recall that $F$ is $\vee$-closed, so that $P$ induces a partial order on $F$, and let $p_0 \in F$ be a minimal element of $F$ in this partial order. Take the compression to the $(p_0, p_0)$-entry by the projection $Q_{p_0} \colon {\mathcal{F}}(X) \to X_{p_0}$.
Then we have that
\[
\lim_i k_{p_0,i} = \lim_i Q_{p_0} \Phi(f_i) Q_{p_0} = Q_{p_0} \Phi(f) Q_{p_0}.
\]
Therefore the net $(k_{p_0,i} )$ is convergent in ${\mathcal{K}} X_{p_0}$, say to some $k_{p_0}$, and so $\lim_i \widehat{t}_{p_0}(k_{p_0,i} ) = \widehat{t}_{p_0}(k_{p_0})$.
We repeat for $f - \widehat{t}_{p_0}(k_{p_0})$ and the net $(f_i - \widehat{t}_{p_0}(k_{p_0,i} ))$, and for the set $F' = F \setminus \{p_0\}$, which is again {$\vee$-closed} because a right LCM of two elements of $F'$ dominates both of them and thus cannot be the minimal element $p_0$.
Proceeding inductively, we see that for every $p \in F$ there exists a $k_p$ such that $\lim_i k_{p, i}= k_p$ and this shows that $f \in
\operatorname{span}\{\widehat{t}_p(k_{p}) \mid k_p \in {\mathcal{K}} X_p, p \in F\}$, which completes the proof.
\end{proof}
Next we see that $\vee$-closed sets suffice to generate the core.
\begin{proposition}\label{P:purified}
Let $P$ be a right LCM subsemigroup of a group $G$. Let $X$ be a compactly aligned product system over $P$ and let $t=\{t_p\}_{p \in P}$ be a Nica-covariant representation of $X$.
Then
\[
B_{P, t}
=
\overline{\bigcup \{ B_{F, t} \mid F \subseteq P \text{ finite and {$\vee$-closed}} \}}.
\]
\end{proposition}
\begin{proof}
Suppose that $F$ is an arbitrary finite subset of $P$. We first saturate ${\mathcal{I}}(F):= \{pP: p\in F\}$ under intersections,
\[
{\mathcal{I}}(F)^\cap:= \big\{ \bigcap_{p\in F'}pP \ : \ \emptyset \neq F' \subseteq F \text{ and } \bigcap_{p\in F'}pP \neq \emptyset \big\},
\]
and then choose a unique generator of each principal ideal in the resulting set ${\mathcal{I}}(F)^\cap$. This gives
a \emph{$\vee$-closed} set $F^\vee$ with the property that $ {\mathcal{I}}(F) \subseteq {\mathcal{I}}(F)^\cap ={\mathcal{I}}(F^\vee)$.
We note in passing that there is no uniqueness in this process because the choice of generators is arbitrary.
The result will follow once we show that
$$B_{F, t} \subseteq B_{F^{\vee}, t}.$$
This is not obvious because $F$ itself may not be contained in $F^\vee$, so we need to verify that we do not lose any part of $B_{F, t}$
when we restrict to unique generators for the ideals in ${\mathcal{I}}(F)^\cap$.
Towards this end, since ${\mathcal{I}}(F)^\cap ={\mathcal{I}}(F^\vee)$, it is enough to show that if $p, q \in P$ satisfy $p P = q P$, then $t_{q}({\mathcal{K}} X_q) = t_p( {\mathcal{K}} X_p)$.
In that case there exists a unit $r \in P^*$ such that $q = pr$.
By Proposition \ref{P:star inv LCM} we have that $t_r(X_r)^* = t_{r^{-1}}(X_{r^{-1}})$ and $\overline{t_r(X_r) t_{r^{-1}}(X_{r^{-1}})} = t_e(X_e)$.
Hence
\[
t_{q}({\mathcal{K}} X_q) = \overline{t_p(X_p) t_r(X_r) t_{r}(X_r)^* t_p(X_p)^*} = t_p({\mathcal{K}} X_p).
\]
This completes the proof.
\end{proof}
\section{Cosystems and their C*-envelopes} \label{S;cosystem}
In what follows $G$ will always denote a fixed discrete group.
We write $u_g$ for the generators of the universal group C*-algebra $\mathrm{C}^*(G)$, and $\lambda_g$ for the generators of the reduced group C*-algebra $\mathrm{C}^*_\lambda(G)$ arising from the left regular representation.
We write $\lambda \colon \mathrm{C}^*(G) \to \mathrm{C}^*_\lambda(G)$ for the canonical $*$-epimorphism.
Recall that $\mathrm{C}^*(G)$ admits a faithful $*$-homomorphism
\[
\Delta \colon \mathrm{C}^*(G) \to \mathrm{C}^*(G) \otimes \mathrm{C}^*(G)
\text{ such that }
\Delta(u_g) = u_g \otimes u_g
\]
given by the universal property of $\mathrm{C}^*(G)$; a left inverse of $\Delta$ is given by ${\operatorname{id}} \otimes \chi$ for the trivial character $\chi$ of $\mathrm{C}^*(G)$, where we identify $ \mathrm{C}^*(G) \otimes \mathbb{C} $ with $\mathrm{C}^*(G) $.
\begin{definition}\label{D:cis coa}
Let ${\mathcal{A}}$ be an operator algebra.
A \emph{coaction of a discrete group $G$ on ${\mathcal{A}}$} is a completely isometric representation $\delta \colon {\mathcal{A}} \to {\mathcal{A}} \otimes \mathrm{C}^*(G)$ such that $\sum_{g \in G} {\mathcal{A}}_g$ is norm-dense in ${\mathcal{A}}$ for the induced subspaces
\[
{\mathcal{A}}_g := \{a \in {\mathcal{A}} \mid \delta(a) = a \otimes u_g\}.
\]
Automatically (e.g. see the proof of Proposition \ref{P:Fell ind}) such a map $\delta$ satisfies the coaction identity
\[
(\delta \otimes {\operatorname{id}}_{\mathrm{C}^*(G)}) \delta = ({\operatorname{id}}_{{\mathcal{A}}} \otimes \Delta) \delta.
\]
If, in addition, the map $({\operatorname{id}} \otimes \lambda) \delta$ is injective then the coaction $\delta$ is called \emph{normal}.
If ${\mathcal{A}}$ is an operator algebra and $\delta \colon {\mathcal{A}} \to {\mathcal{A}} \otimes \mathrm{C}^*(G)$ is a coaction on ${\mathcal{A}}$, then we will refer to the triple $({\mathcal{A}}, G, \delta)$ as a \emph{cosystem}.
A map $\phi \colon {\mathcal{A}} \to {\mathcal{A}}'$ between two cosystems $({\mathcal{A}}, G, \delta)$ and $({\mathcal{A}}', G, \delta')$ is said to be \emph{$G$-equivariant}, or simply \emph{equivariant}, if $\delta' \phi=(\phi\otimes {\operatorname{id}})\delta$.
\end{definition}
It follows from the definition that if $({\mathcal{A}}, G, \delta)$ is a cosystem then
\[
{\mathcal{A}}_g \cdot {\mathcal{A}}_h \subseteq {\mathcal{A}}_{g h} \text{ for all } g, h \in G,
\]
because $\delta$ is a homomorphism.
Conversely, if there are subspaces $\{{\mathcal{A}}_g\}_{g \in G}$ such that $\sum_{g \in G} {\mathcal{A}}_g$ is norm-dense in ${\mathcal{A}}$ and a representation $\delta \colon {\mathcal{A}} \to {\mathcal{A}} \otimes \mathrm{C}^*(G)$ such that
\[
\delta(a_g) = a_g \otimes u_g \text{ for all } a_g \in {\mathcal{A}}_g, g \in G,
\]
then $\delta$ is a coaction of $G$ on ${\mathcal{A}}$.
Indeed $\delta$ satisfies the coaction identity and it is completely isometric since $({\operatorname{id}}_{{\mathcal{A}}} \otimes \chi) \delta = {\operatorname{id}}_{{\mathcal{A}}}$.
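A simple well-known example: take ${\mathcal{A}} = A(\mathbb{D})$, the disc algebra, and $G = \mathbb{Z}$, with ${\mathcal{A}}_n = \mathbb{C} z^n$ for $n \geq 0$ and ${\mathcal{A}}_n = \{0\}$ otherwise. Identifying $\mathrm{C}^*(\mathbb{Z})$ with $C(\mathbb{T})$, the map determined by $\delta(f)(z, w) = f(zw)$ satisfies $\delta(z^n) = z^n \otimes u_n$ and is completely isometric; hence it defines a coaction of $\mathbb{Z}$ on $A(\mathbb{D})$, which is normal since $\mathbb{Z}$ is amenable.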
\begin{remark}\label{R:nd cis}
Let $({\mathcal{A}}, G, \delta)$ be a cosystem and assume that $\delta$ extends to a $*$-homomorphism $\delta \colon \mathrm{C}^*({\mathcal{A}}) \to \mathrm{C}^*({\mathcal{A}}) \otimes \mathrm{C}^*(G)$ that satisfies the coaction identity
\[
(\delta \otimes {\operatorname{id}}) \delta(c) = ({\operatorname{id}} \otimes \Delta) \delta(c) \text{ for all } c \in \mathrm{C}^*({\mathcal{A}}).
\]
Then $\delta$ is automatically non-degenerate on $\mathrm{C}^*({\mathcal{A}})$ in the sense that
\[
\overline{\delta(\mathrm{C}^*({\mathcal{A}})) \left[\mathrm{C}^*({\mathcal{A}}) \otimes \mathrm{C}^*(G)\right]} = \mathrm{C}^*({\mathcal{A}}) \otimes \mathrm{C}^*(G).
\]
Indeed if $(e_i)$ is a contractive approximate identity for $\mathrm{C}^*({\mathcal{A}})$ then we can write
\[
a_{g_1} a_{g_2}^*a_{g_3} \cdots a_{g_{n-1}} a_{g_n}^* \otimes u_h
=
\lim_i (a_{g_1} \otimes u_{g_1}) \cdots (a_{g_n} \otimes u_{g_n})^* \left(e_i \otimes u_{(g_1g_2^{-1} \cdots g_{n-1}^{-1} g_n)^{-1} h}\right) ,
\]
and likewise for all products of the form
\[
a_{g_2}^*a_{g_3} \cdots a_{g_{n-1}} a_{g_n}^*, \
a_{g_2}^* a_{g_3}\cdots a_{g_n}^* a_{g_{n+1}} \
\text{ and } \
a_{g_1} a_{g_2}^* \cdots a_{g_n}^* a_{g_{n+1}}
\]
in $\mathrm{C}^*({\mathcal{A}})$.
By definition of $\delta$ these products generate $\mathrm{C}^*({\mathcal{A}})$.
\end{remark}
\begin{remark}
Definition \ref{D:cis coa} coincides with that of Quigg \cite{Qui96} when ${\mathcal{A}}$ is a C*-algebra.
In this case $\delta$ is a faithful $*$-homomorphism and we have that
\[
({\mathcal{A}}_g)^* = \{a^* \in {\mathcal{A}} \mid \delta(a^*) = a^* \otimes u_{g^{-1}} \} = {\mathcal{A}}_{g^{-1}}.
\]
An argument as in Remark \ref{R:nd cis} gives that the coaction is then non-degenerate, i.e., it is a \emph{full} coaction.
\end{remark}
For our next result, we use Fell's absorption principle to give sufficient conditions for the existence of a compatible normal coaction.
\begin{proposition}\label{P:Fell ind}
Let ${\mathcal{A}}$ be an operator algebra and let $G$ be a group.
Suppose there are subspaces $\{{\mathcal{A}}_g\}_{g \in G}$ such that $\sum_{g \in G} {\mathcal{A}}_g$ is norm-dense in ${\mathcal{A}}$, and there is a completely isometric homomorphism
\[
\delta_\lambda \colon {\mathcal{A}} \longrightarrow {\mathcal{A}} \otimes \mathrm{C}^*_\lambda(G)
\]
such that
\begin{equation}\label{E:reduced}
\delta_\lambda(a) = a \otimes \lambda_g \text{ for all } a \in {\mathcal{A}}_g, g \in G.
\end{equation}
Then ${\mathcal{A}}$ admits a normal coaction $\delta$ of $G$ satisfying $\delta_\lambda = ({\operatorname{id}} \otimes \lambda) \delta$.
\end{proposition}
\begin{proof} That $\delta_\lambda$ satisfies the coaction identity follows easily from \eqref{E:reduced}, and the spectral subspaces
of $\delta_\lambda$ have dense linear span because they contain the corresponding ${\mathcal{A}}_g$ for every $g\in G$.
We need to show that there is a completely isometric homomorphism
\[
\delta \colon {\mathcal{A}} \longrightarrow {\mathcal{A}} \otimes \mathrm{C}^*(G)
\]
such that $\delta_\lambda = ({\operatorname{id}} \otimes \lambda) \delta$.
Let
\[
\phi \colon \mathrm{C}^*_\lambda(G) \longrightarrow \mathrm{C}^*_\lambda(G) \otimes \mathrm{C}^*(G) : \lambda_g \mapsto \lambda_g \otimes u_g
\]
be the faithful $*$-homomorphism given by Fell's absorption principle.
We can then define
\[
\delta := (\delta_\lambda^{-1} \otimes {\operatorname{id}}_{\mathrm{C}^*(G)}) ({\operatorname{id}}_{{\mathcal{A}}} \otimes \phi) \delta_\lambda,
\]
which is the desired completely isometric homomorphism.
Indeed,
\[
\delta(a_g) = (\delta_\lambda^{-1} \otimes {\operatorname{id}}_{\mathrm{C}^*(G)})((a_g \otimes \lambda_g) \otimes u_g) = a_g \otimes u_g
\]
for every $a_g \in {\mathcal{A}}_g$, and thus $\delta$ satisfies the coaction identity and the spectral subspaces of $\delta$ have dense linear span because they contain the subspaces ${\mathcal{A}}_g$.
Since $\delta_\lambda = ({\operatorname{id}}_{{\mathcal{A}}} \otimes \lambda) \delta$ is faithful we have that $\delta$ is a normal coaction on ${\mathcal{A}}$, and the proof is complete.
\end{proof}
\begin{example} \label{E;normalco}
The reduced group C*-algebra $\mathrm{C}^*_\lambda(G)$ admits a faithful $*$-homomorphism
\[
\Delta_\lambda \colon \mathrm{C}^*_\lambda(G) \longrightarrow \mathrm{C}^*_\lambda(G) \otimes \mathrm{C}^*_\lambda(G)
\text{ such that }
\Delta_\lambda(\lambda_g) = \lambda_g \otimes \lambda_g.
\]
Indeed consider the unitary
\[
U \colon \ell^2(G) \otimes \ell^2(G) \longrightarrow \ell^2(G) \otimes \ell^2(G)
\text{ with }
U (e_r \otimes e_s) = e_r \otimes e_{r s},
\]
and verify that
\[
( \lambda_g \otimes \lambda_g ) U = U (\lambda_g \otimes I)
\text{ for all }
g \in G.
\]
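Indeed, on elementary basis vectors both sides give
\[
( \lambda_g \otimes \lambda_g ) U (e_r \otimes e_s)
= e_{gr} \otimes e_{g r s}
= U (e_{gr} \otimes e_s)
= U (\lambda_g \otimes I) (e_r \otimes e_s).
\]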
Therefore $\operatorname{ad}_U$ implements a faithful $*$-homomorphism
\[
\mathrm{C}^*_\lambda(G) \simeq \mathrm{C}^*(\lambda_g \otimes I \mid g \in G) \stackrel{\operatorname{ad}_U}{\longrightarrow} \mathrm{C}^*(\lambda_g \otimes \lambda_g \mid g \in G).
\]
Thus $\mathrm{C}^*_\lambda(G)$ admits a normal coaction of $G$.
\end{example}
\begin{definition}\label{D:coaction}
Let $({\mathcal{A}}, G, \delta)$ be a cosystem.
A triple $(C', \iota', \delta')$ is called a \emph{C*-cover} for $({\mathcal{A}}, G, \delta)$ if $(C', G, \delta')$ forms a cosystem and $(C', \iota')$ forms a C*-cover of ${\mathcal{A}}$ with $\iota' \colon {\mathcal{A}}\rightarrow C'$ being equivariant.
\end{definition}
\begin{definition}
Let $({\mathcal{A}}, G, \delta)$ be a cosystem.
The \emph{C*-envelope of $({\mathcal{A}}, G, \delta)$} is a C*-cover for $({\mathcal{A}}, G, \delta)$, denoted by $( \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta), \iota_{\textup{env}}, \delta_{\textup{env}})$,
that satisfies the following property:
for any other C*-cover $(C', \iota', \delta')$ of $({\mathcal{A}}, G, \delta)$ there exists an equivariant $*$-epimorphism $\phi \colon C' \to \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$ that makes the following diagram
\[
\xymatrix{
& & C' \ar@{.>}[d]^{\phi} \\
{\mathcal{A}} \ar[rru]^{\iota'} \ar[rr]^{\iota_{\textup{env}}} & & \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)
}
\]
commutative. We will often omit the embedding $\iota_{\textup{env}}$ and the coaction $\delta_{\textup{env}}$ and refer to the triple simply as $ \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$.
\end{definition}
As in the case of the C*-envelope for an operator algebra, it is easily seen that if the C*-envelope for a cosystem exists, then it is unique up to a natural notion of isomorphism for cosystems.
The following theorem verifies that every cosystem
has a C*-envelope and gives a concrete picture for it.
\begin{theorem}\label{T:co-env}
Let $({\mathcal{A}}, G, \delta)$ be a cosystem and let $\iota \colon {\mathcal{A}} \to \mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ be the natural inclusion map.
Then the triple
\[
\left( \mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G), \ (\iota \otimes {\operatorname{id}}_{\mathrm{C}^*(G)} ) \delta,\ {\operatorname{id}} \otimes \Delta \right)
\]
is (isomorphic to) the C*-envelope for the cosystem $({\mathcal{A}}, G, \delta)$.
\end{theorem}
\begin{proof}
Let $({\mathcal{A}}, G, \delta)$ be a cosystem and fix the embedding $\iota \colon {\mathcal{A}} \to \mathrm{C}^*_{\textup{env}}({\mathcal{A}})$.
By considering the composition
\[
\xymatrix@C=2cm{
{\mathcal{A}} \ar[r]^{\delta \phantom{ooo} } & {\mathcal{A}} \otimes \mathrm{C}^*(G) \ar[r]^{\iota \otimes {\operatorname{id}}_{\mathrm{C}^*(G)} \phantom{oooo} } & \mathrm{C}^*_{\textup{env}}({\mathcal{A}}) \otimes \mathrm{C}^*(G),
}\]
and recalling that the minimal tensor product of completely isometric representations is completely isometric,
we see that the C*-algebra $$\mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G)$$ is a C*-cover for ${\mathcal{A}}$.
We can then endow it with the coaction ${\operatorname{id}} \otimes \Delta$ so that the triple
\[
(\mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G), (\iota \otimes {\operatorname{id}}_{\mathrm{C}^*(G)}) \delta, {\operatorname{id}} \otimes \Delta)
\]
becomes a C*-cover for $({\mathcal{A}}, G, \delta)$.
Next let $(C', \iota', \delta')$ be a C*-cover and let $\phi \colon C' \to \mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ be as in (\ref{eq;env}).
We see that the following diagram is commutative
\[
\xymatrix@R=1cm@C=1cm{
C' \ar[rr]^{\delta' \phantom{o} } \ar@{.>}[drr] & &
C' \otimes \mathrm{C}^*(G) \ar[rr]^{{\operatorname{id}} \otimes \Delta} \ar[d]^{\phi \otimes {\operatorname{id}}} \ar@{.>}[drr] & &
C' \otimes \mathrm{C}^*(G) \otimes \mathrm{C}^*(G) \ar[d]^{\phi \otimes {\operatorname{id}} \otimes {\operatorname{id}}} \\
& &
\mathrm{C}^*(\iota({\mathcal{A}})) \otimes \mathrm{C}^*(G) \ar[rr]^{{\operatorname{id}} \otimes \Delta \phantom{ooo} } & &
\mathrm{C}^*(\iota({\mathcal{A}})) \otimes \mathrm{C}^*(G) \otimes \mathrm{C}^*(G)
}
\]
as it is a diagram of $*$-epimorphisms that agree on the copies of ${\mathcal{A}}$.
We then obtain the canonical equivariant $*$-epimorphism
\[
(\phi \otimes {\operatorname{id}}_{\mathrm{C}^*(G)}) \delta' \colon (C', \iota', \delta') \longrightarrow (\mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G), (\iota \otimes {\operatorname{id}}_{\mathrm{C}^*(G)}) \delta, {\operatorname{id}} \otimes \Delta)
\]
by following the diagonal arrows and using the coaction identity on $\delta'$.
Indeed a direct computation shows that
\[
({\operatorname{id}} \otimes \Delta) \left( \left( \phi \otimes {\operatorname{id}} \right) \delta' \right)
=
(\phi \otimes {\operatorname{id}} \otimes {\operatorname{id}}) ({\operatorname{id}} \otimes \Delta) \delta'
=
\left( \left( \left(\phi \otimes {\operatorname{id}} \right) \delta' \right) \otimes {\operatorname{id}} \right) \delta',
\]
and the proof is complete.
\end{proof}
Under additional hypotheses, one can obtain a more concrete picture of the C*-envelope.
\begin{corollary} \label{C:normal}
Let $({\mathcal{A}}, G ,\delta)$ be a normal cosystem, and let $\delta_{\lambda}: {\mathcal{A}} \rightarrow {\mathcal{A}}\otimes\mathrm{C}^*_{\lambda}(G)$ be a completely isometric homomorphism satisfying the assumptions of Proposition~\ref{P:Fell ind} with respect to the spectral subspaces of the coaction $\delta$. If $\overline{\Delta_r}: \mathrm{C}^*_{\lambda}(G)\rightarrow \mathrm{C}^*_{\lambda}(G) \otimes \mathrm{C}^*(G)$ denotes the normal coaction implied by Example~\ref{E;normalco}, then
\begin{equation} \label{eq;normalenv}
\Big(\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta) , \iota_{\textup{env}}, \delta_{\textup{env}} \Big) \simeq \left( \mathrm{C}^*(\iota(a_g) \otimes \lambda_g \mid g \in G),\ (\iota \otimes {\operatorname{id}}_{\mathrm{C}^*_{\lambda}(G)} ) \delta_{\lambda},\ {\operatorname{id}} \otimes \overline{\Delta_r}\right).
\end{equation}
In particular, the coaction $\delta_{\textup{env}}$ on $\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)$ is normal.
\begin{proof}
By Theorem~\ref{T:co-env} the C*-envelope of $({\mathcal{A}}, G, \delta)$ is given by
\[
\left( \mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G),\ (\iota \otimes {\operatorname{id}}_{\mathrm{C}^*(G)} ) \delta, \ {\operatorname{id}} \otimes \Delta \right).
\]
Since the right-hand side of (\ref{eq;normalenv}) is a $\mathrm{C}^*$-cover for $({\mathcal{A}}, G ,\delta)$, the defining property of the C*-envelope yields an equivariant $*$-epimorphism
\[
\phi: \mathrm{C}^*(\iota(a_g) \otimes \lambda_g \mid g \in G) \longrightarrow \mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G).
\]
If $q: \mathrm{C}^*(G)\rightarrow \mathrm{C}^*_{\lambda}(G)$ is the natural quotient then $({\operatorname{id}}\otimes q) |_{\mathrm{C}^*(\iota(a_g) \otimes u_g \mid g \in G)}$ provides an inverse for $\phi$ and the conclusion follows.
\end{proof}
\end{corollary}
By duality, a coaction of a discrete abelian group $G$ on an operator algebra ${\mathcal{A}}$ corresponds to a point-norm continuous action $\{\beta_\gamma\}_{\gamma \in \widehat{G}}$ of the dual group $\widehat{G}$ on ${\mathcal{A}}$.
Since each $\beta_\gamma$ is a completely isometric automorphism, it extends to an automorphism $\tilde{\beta}_\gamma$ of the C*-envelope $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$,
yielding a point-norm continuous action $\{\tilde{\beta}_\gamma \}_{\gamma \in \widehat{G}}$ of $\widehat{G}$ on $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$. Again by duality, this corresponds to a coaction of $G$ on $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$.
Hence, for abelian $G$, the C*-envelope for a cosystem coincides with the usual C*-envelope, equipped with the coaction given by (the duals of) the group of extended automorphisms $\{ \tilde\beta_\gamma\mid \gamma\in \hat G\}$.
Equivalently every coaction of a discrete abelian group on an operator algebra lifts to a coaction on its C*-envelope.
It is not known if this is the case for more general classes of groups.
\begin{corollary}\label{C:fpa 1-1}
Let $(\mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta), \iota, \delta_{\operatorname{\textup{env}}})$ be the C*-envelope for a cosystem $({\mathcal{A}}, G, \delta)$.
Suppose that $\psi \colon \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta) \to B$ is a $*$-homomorphism that is completely isometric on ${\mathcal{A}}$.
Then $\psi$ is faithful on the fixed point algebra $\left[ \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta) \right]_e$.
\end{corollary}
\begin{proof}
Without loss of generality assume that $\psi$ is surjective. Let $\iota\colon {\mathcal{A}}\rightarrow\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ be the natural inclusion.
Since $\psi$ is surjective and completely isometric on ${\mathcal{A}}$, the pair $(B, \psi(\iota\otimes {\operatorname{id}}) \delta)$ is a C*-cover for ${\mathcal{A}}$.
By the defining property of $\mathrm{C}^*_{\textup{env}}({\mathcal{A}})$, there exists a surjective $*$-homomorphism $\phi\colon B\rightarrow \mathrm{C}^*_{\textup{env}}({\mathcal{A}})$ so that $\phi \big(\psi(\iota\otimes {\operatorname{id}})\delta \big)= \iota$.
Therefore
\begin{equation} \label{eq;ceident}
(\phi\psi)(\iota(a) \otimes u_g) = \iota(a), \mbox{ for all } a \in {\mathcal{A}}_g, g \in G.
\end{equation}
Now for our purposes, it suffices to show that $\phi\circ \psi$ is injective on $\left[ \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta) \right]_e$.
However
\[
\left[ \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)\right]_e=\overline{\operatorname{span}}\left\{ \prod\iota(a_{g_i})\iota(a_{h_i})^*\otimes1\mid a_{g_i}\in {\mathcal{A}}_{g_i} ,a_{h_i} \in {\mathcal{A}}_{h_i}, \prod g_ih_i^{-1} = e \right\}
\]
and so (\ref{eq;ceident}) implies that $\phi \psi$ is the inverse of the ampliation map on $\left[ \mathrm{C}^*_{\textup{env}}({\mathcal{A}}, G, \delta)\right]_e$.
The conclusion now follows.
\end{proof}
Let us close this section with a discussion on gradings of C*-algebras in the sense of \cite{Exe17}.
\begin{definition}
Let $B$ be a C*-algebra and $G$ a discrete group. A collection of closed linear subspaces $\{B_g\}_{g\in G}$ of $B$ is called a \emph{grading} of $B$ by $G$ if
\begin{enumerate}
\item $B_g B_h \subseteq B_{gh}$ for all $g, h \in G$;
\item $B_g^* = B_{g^{-1}}$ for all $g \in G$;
\item $\sum_{g\in G} B_g$ is dense in $B$.
\end{enumerate}
If in addition there is a conditional expectation $E : B \rightarrow B_e$ which vanishes on $B_g$ for $g\neq e$, we say that the pair $(\{B_g\}_{g \in G}, E)$ is a \emph{topological} grading of $B$.
\end{definition}
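For orientation, a basic example of a topological grading (standard, and independent of the development here) is the Fourier decomposition of $C(\mathbb{T})$ over $\mathbb{Z}$:

```latex
% Z-grading of C(T): B_n is the span of the n-th character z^n.
B_n := \mathbb{C}\, z^n \subseteq C(\mathbb{T}), \qquad
B_m B_n = B_{m+n}, \qquad B_n^* = B_{-n}.
% The trigonometric polynomials \sum_n B_n are dense, and integration
% against normalized Haar measure, E(f) := \int_{\mathbb{T}} f \, dz
% (regarded as a multiple of the unit), is a conditional expectation
% onto B_0 that vanishes on B_n for n \neq 0, so the grading is
% topological.
```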
When $\delta$ is a coaction on a C*-algebra $B$, the spectral subspaces $B_g$ for $g\in G$ comprise a topological grading for $B$ with conditional expectation $E_e = ({\operatorname{id}} \otimes F_e) \circ \delta$, where $F_e : \mathrm{C}^*(G) \rightarrow \mathbb{C}$ is the $e$-th Fourier coefficient. Completely contractive maps $E_g : B \rightarrow B_g$ can be defined similarly by setting $E_g:= ({\operatorname{id}} \otimes F_g) \circ \delta$, where $F_g : \mathrm{C}^*(G) \rightarrow \mathbb{C}$ is the $g$-th Fourier coefficient.
A grading of a C*-algebra by a group constitutes a Fell bundle over the group, and every Fell bundle arises this way, but not uniquely. Indeed, there may be many non-isomorphic graded C*-algebras whose gradings are all equal to a pre-assigned Fell bundle ${\mathcal{B}}$. At one extreme sits the maximal C*-algebra $C^*({\mathcal{B}})$, which is universal for representations of ${\mathcal{B}}$, while at the other extreme is the minimal (reduced) cross sectional algebra $C^*_\lambda({\mathcal{B}})$ which is defined via the left regular representation of ${\mathcal{B}}$ on $\ell^2({\mathcal{B}})$. We refer to \cite{Exe97, Exe17} for the precise definitions and details.
\begin{definition}
Let ${\mathcal{B}} = \{B_g\}_{g \in G}$ be a topological grading for a C*-algebra $B$ over a group $G$. We say that an ideal ${\mathcal{I}} \lhd B$ is \emph{induced} if ${\mathcal{I}} = \sca{{\mathcal{I}} \cap B_e}$.
\end{definition}
If $\delta \colon B \to B \otimes \mathrm{C}^*(G)$ is a coaction on a C*-algebra and $I \lhd B$ is an induced ideal, then $\delta$ induces a faithful coaction on $B/I$; see for example \cite[Proposition A.1]{CLSV11}. Normal coactions also descend through induced ideals when $G$ is exact; see for example \cite[Proposition A.5]{CLSV11}.
\section{C*-envelopes of cosystems as co-universal C*-algebras } \label{S;main}
In this section we consider the cosystem consisting of the Fock tensor algebra ${\mathcal{T}}_\lambda(X)^+$ of a compactly aligned product system $X$ over a right LCM subsemigroup $P$ of a group $G$, together with a natural coaction. We prove that the associated C*-envelope has the co-universal property of \cite[Theorem 4.1]{CLSV11} with respect to $X$.
Let $\widetilde{t} =\{\widetilde{t}_p\}_{p \in P}$ be the \textit{universal Toeplitz representation for $X$}.
By the universal property of ${\mathcal{T}}(X)$ there is a canonical $*$-homomorphism
\[
\widetilde{\delta} \colon {\mathcal{T}}(X) \longrightarrow {\mathcal{T}}(X) \otimes \mathrm{C}^*(G) : \widetilde{t}_p(\xi_p) \longmapsto \widetilde{t}_p(\xi_p) \otimes u_p.
\]
Sehnem \cite[Lemma 2.2]{Seh18} has shown that $\widetilde{\delta}$ is a non-degenerate and faithful coaction of $G$ on ${\mathcal{T}}(X)$, where each spectral subspace ${\mathcal{T}}(X)_g$ with $g \in G$ is the closed linear span of the products
\[
\widetilde{t}_{p_1}(\xi_{p_1}) \widetilde{t}_{p_2}(\xi_{p_2})^* \widetilde{t}_{p_3}(\xi_{p_3}) \cdots \widetilde{t}_{p_n}(\xi_{p_n})^* \quad\text{for}\quad p_1 p_2^{-1}p_3 \cdots p_n^{-1} = g \ \ \text{and} \ \ \xi_{p_i} \in X_{p_i}
\]
Let $\widehat{t}=\{\widehat{t}_p\}_{p \in P}$ be the \textit{universal Nica-Toeplitz representation of $X$}. As ${\mathcal{N}}{\mathcal{T}}(X)$ is a quotient of ${\mathcal{T}}(X)$ by an induced ideal, by \cite[Proposition A.1]{CLSV11} the non-degenerate and faithful coaction of $G$ on ${\mathcal{T}}(X)$ descends canonically to one on ${\mathcal{N}}{\mathcal{T}}(X)$. Therefore, the canonical $*$-homomorphism
\[
\widehat{\delta} \colon {\mathcal{N}}{\mathcal{T}}(X) \longrightarrow {\mathcal{N}}{\mathcal{T}}(X) \otimes \mathrm{C}^*(G) : \widehat{t}_p(\xi_p) \longmapsto \widehat{t}_p(\xi_p) \otimes u_p
\]
defines a coaction on ${\mathcal{N}}{\mathcal{T}}(X)$.
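To keep a concrete model in mind (a standard identification for abelian groups, not needed in the sequel): when $G = \mathbb{Z}$ and $P = \mathbb{N}$, so that $\mathrm{C}^*(G) \simeq C(\mathbb{T})$, the coaction $\widehat{\delta}$ is equivalent to the usual gauge action of $\mathbb{T}$ on ${\mathcal{N}}{\mathcal{T}}(X)$:

```latex
% G = Z, P = N, C*(Z) = C(T): the coaction dualizes to the action
\gamma_z\big(\widehat{t}_n(\xi_n)\big) = z^n\, \widehat{t}_n(\xi_n),
\qquad z \in \mathbb{T},\ \xi_n \in X_n,\ n \in \mathbb{N},
% and the spectral subspace [NT(X)]_k consists of those f with
% \gamma_z(f) = z^k f for all z in T.
```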
The following proposition shows that the Fock algebra, being a reduced type object, admits a normal coaction.
\begin{proposition}\label{P:f coaction}
Let $P$ be a unital subsemigroup of a group $G$ and $X$ a product system over $P$ with coefficients in $A$.
Let $\overline{t}$ be the Fock representation. Then there is a normal coaction
\[
\overline{\delta}\colon {\mathcal{T}}_\lambda(X) \longrightarrow {\mathcal{T}}_{\lambda}(X) \otimes \mathrm{C}^*(G) : \overline{t}_p(\xi_p) \longmapsto \overline{t}_p(\xi_p) \otimes u_{p}.
\]
Moreover, each spectral subspace ${\mathcal{T}}_\lambda(X)_g$ with $g \in G$ is the closed linear span of the products of the form
\[
\overline{t}_{p_1}(\xi_{p_1}) \overline{t}_{p_2}(\xi_{p_2})^*\overline{t}_{p_3}(\xi_{p_3}) \cdots \overline{t}_{p_n}(\xi_{p_n})^*, \quad p_1 p_2^{-1} p_3\cdots p_n^{-1} = g.
\]
\end{proposition}
\begin{proof}
Let the operator $U \colon {\mathcal{F}} X \otimes \ell^2(G) \to {\mathcal{F}} X \otimes \ell^2(G)$ be given by
\[
U (\xi_r \otimes e_g) = \xi_r \otimes e_{r g}
\text{ for all }
r \in P, g \in G.
\]
We have that $U$ is a unitary in $\L( {\mathcal{F}} X \otimes \ell^2(G) )$ for the Hilbert bimodule ${\mathcal{F}} X \otimes \ell^2(G)$ over $A \otimes \mathbb{C} = A$, with
\[
U^*( \xi_r \otimes e_g) = \xi_r \otimes e_{r^{-1} g}.
\]
We can then directly verify that $U(\overline{t}(\xi_p) \otimes I) = (\overline{t}(\xi_p) \otimes \lambda_p) U$ for all $p \in P$.
Thus $\operatorname{ad}_{U}$ implements a faithful $*$-homomorphism $\delta_{\lambda} : {\mathcal{T}}_{\lambda}(X) \rightarrow {\mathcal{T}}_{\lambda}(X) \otimes C^*_{\lambda}(G)$ given by
\[
\xymatrix{
{\mathcal{T}}_\lambda(X) \simeq \mathrm{C}^*(\overline{t}_p(\xi_p) \otimes I \mid p \in P) \ar[rr]^{\phantom{oooooo} \operatorname{ad}_{U}}
& &
\mathrm{C}^*(\overline{t}_p(\xi_p) \otimes \lambda_p \mid p \in P).
}
\]
Let $\overline{t}_* : {\mathcal{T}}(X) \rightarrow {\mathcal{T}}_{\lambda}(X)$ be the canonical surjection induced by $\overline{t}$. Since the spectral subspaces ${\mathcal{T}}(X)_g$, $g \in G$, for the coaction $\widetilde{\delta}$ are the closed linear span of products
\[
\widetilde{t}_{p_1}(\xi_{p_1}) \widetilde{t}_{p_2}(\xi_{p_2})^* \widetilde{t}_{p_3}(\xi_{p_3}) \cdots \widetilde{t}_{p_n}(\xi_{p_n})^* \quad\text{for}\quad p_1 p_2^{-1}p_3 \cdots p_n^{-1} = g,
\]
and $\sum_{g\in G} {\mathcal{T}}(X)_g$ is dense in ${\mathcal{T}}(X)$, we see that the same persists after the application of $\overline{t}_*$. More precisely, we let ${\mathcal{T}}_{\lambda}(X)_g$ for $g \in G$ be the subspaces given by the closed linear span of
\[
\overline{t}_{p_1}(\xi_{p_1}) \overline{t}_{p_2}(\xi_{p_2})^* \overline{t}_{p_3}(\xi_{p_3}) \cdots \overline{t}_{p_n}(\xi_{p_n})^* \quad\text{for}\quad p_1 p_2^{-1}p_3 \cdots p_n^{-1} = g.
\]
As $\overline{t}_*$ is surjective, we get that $\sum_{g\in G} {\mathcal{T}}_{\lambda}(X)_g$ is dense in ${\mathcal{T}}_{\lambda}(X)$, and that $\overline{t}_*({\mathcal{T}}(X)_g) = {\mathcal{T}}_{\lambda}(X)_g$.
Since by definition $\delta_{\lambda}(a) = a \otimes \lambda_g$ for $a\in {\mathcal{T}}_{\lambda}(X)_g$, we get that $\delta_{\lambda}$ satisfies the conditions of Proposition~\ref{P:Fell ind}. Hence, there is a normal coaction $\delta$ of $G$ on ${\mathcal{T}}_{\lambda}(X)$ whose spectral subspaces are ${\mathcal{T}}_{\lambda}(X)_g$.
\end{proof}
The faithful conditional expectation
\[
\overline{E} \colon {\mathcal{T}}_\lambda(X) \longrightarrow B_{P, \overline{t}} ; \overline{t}_p(\xi_p) \overline{t}_q(\xi_q)^* \longmapsto \delta_{p,q} \overline{t}_p(\xi_p) \overline{t}_q(\xi_q)^*
\]
given by the normal coaction of $G$ on ${\mathcal{T}}_\lambda(X)$ coincides with the sum of compressions to the $(r,r)$-entries in $\L({\mathcal{F}} X)$.
That is
\[
\overline{E}(f) = \sum_{r \in P} Q_r f Q_r
\text{ for all }
f \in {\mathcal{T}}_\lambda(X),
\]
for the projections $Q_{r} \colon {\mathcal{F}} X \to X_r$.
We need the following auxiliary lemma. Even though it can be deduced from \cite[Theorem 2.7]{KL19b}, we give a self-contained proof instead.
\begin{lemma} \label{P:1-1 Fock cexp}
Let $P$ be a right LCM subsemigroup of a group $G$ and $X$ be a compactly aligned product system over $P$.
If $\Phi \colon {\mathcal{N}}{\mathcal{T}}(X) \to {\mathcal{T}}_\lambda(X)$ is the canonical $*$-epimorphism, then $\Phi$ is faithful on ${\mathcal{N}}{\mathcal{T}}(X)_e$.
\end{lemma}
\begin{proof}
Let ${\mathcal{N}}{\mathcal{T}}(X) = \mathrm{C}^*(\widehat{t})$ and ${\mathcal{T}}_\lambda(X) = \mathrm{C}^*(\overline{t})$.
We will show that $\Phi$ is faithful on every $B_{F, \widehat{t}}$, where $F$ ranges over all finite and {$\vee$-closed} subsets of $P$.
Towards this end suppose that
\[
0 \neq f := \sum_{p \in F} \widehat{t}_p(k_p) \in \ker \Phi, \quad\text{where } k_p \in {\mathcal{K}} X_p.
\]
Let $p_0$ be minimal in $F$ such that $\widehat{t}_{p_0}(k_{p_0}) \neq 0$; then $k_{p_0} \neq 0$.
However, due to minimality of $p_0$ we have that
\[
k_{p_0} = Q_{p_0} \left(\sum_{p \in F} \overline{t}_p(k_{p}) \right) Q_{p_0} = Q_{p_0} \Phi(f) Q_{p_0} = 0,
\]
which is a contradiction.
\end{proof}
\begin{proposition}\label{T:Fock is Fell}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$.
Consider the Fell bundle
\[
{\mathcal{N}} X := \{ [{\mathcal{N}}{\mathcal{T}}(X)]_g \}_{g \in G}
\]
induced by the canonical coaction $\widehat{\delta}$ of $G$ on ${\mathcal{N}}{\mathcal{T}}(X)$.
Then
\[
\mathrm{C}^*({\mathcal{N}} X) \simeq {\mathcal{N}}{\mathcal{T}}(X)
\quad\text{and}\quad
\mathrm{C}^*_\lambda({\mathcal{N}} X) \simeq {\mathcal{T}}_\lambda(X),
\]
i.e., ${\mathcal{N}}{\mathcal{T}}(X)$ is the full C*-algebra of the bundle ${\mathcal{N}} X$ and ${\mathcal{T}}_\lambda(X)$ is the reduced C*-algebra of the bundle ${\mathcal{N}} X$.
\end{proposition}
\begin{proof}
For the first part, let $t' : X \rightarrow B(\H)$ be a Nica-covariant representation such that the representation
$t'_*: {\mathcal{N}}{\mathcal{T}}(X) \to \mathrm{C}^*(t')$ is injective. Let ${\mathcal{A}}$ be a graded C*-algebra with grading isomorphism $\phi_g : [{\mathcal{N}}{\mathcal{T}}(X)]_g \rightarrow {\mathcal{A}}_g$ for each $g\in G$. Define a map $t : X \rightarrow {\mathcal{A}}$ by setting $t_p(\xi) = \phi_p(t'_p(\xi))$ for $\xi \in X_p$ and $p \in P$.
We claim that $t : X \rightarrow {\mathcal{A}}$ is a Nica-covariant representation.
Indeed, if $\xi, \zeta\in X_p$, $p \in P$, then
\begin{align*}
t_p(\xi )^* t_p( \zeta) &= \phi_{p^{-1}}(t'_p(\xi)^*)\phi_p(t'_p(\zeta)) =\phi_e\big( t'_p(\xi)^*t'_p(\zeta)\big) \\
&=\phi_e\big(t'_e(\langle \xi \mid \zeta\rangle ) \big) \\
&=t_e(\langle \xi \mid \zeta\rangle)
\end{align*}
and so $t : X \rightarrow {\mathcal{A}}$ is a Toeplitz representation. Similar arguments show that $t_p(k_p)=\phi_e(t'_{p}(k_p))$ for any rank-one operator $k_p \in {\mathcal{K}} X_p$, and thus by linearity and continuity for any compact operator $k_p \in {\mathcal{K}} X_p$. From this it is clear that the Nica-covariance of $t'$ implies that of $t$.
Having established the Nica-covariance of $t$, we now have an induced $*$-surjection $t_* : {\mathcal{N}}{\mathcal{T}}(X) \rightarrow {\mathcal{A}}$ such that $\phi:=t_*$ restricts to $\phi_g$ on ${\mathcal{N}}{\mathcal{T}}(X)_g$. This shows that $\mathrm{C}^*({\mathcal{N}} X) \simeq {\mathcal{N}}{\mathcal{T}}(X)$.
For the second part, there exists a canonical $*$-epimorphism $\Phi \colon {\mathcal{N}}{\mathcal{T}}(X) \to {\mathcal{T}}_\lambda(X)$ which by definition intertwines the coactions, and thus the conditional expectations.
By Lemma~\ref{P:1-1 Fock cexp} the map $\Phi$ is faithful on the fixed point algebra of ${\mathcal{N}}{\mathcal{T}}(X)$ and thus induces an isomorphism of the Fell bundle ${\mathcal{N}} X$.
Since the conditional expectation on ${\mathcal{T}}_\lambda(X)$ is faithful it follows by \cite[Theorem 3.3]{Exe97} that ${\mathcal{T}}_\lambda(X)$ is $*$-isomorphic to the reduced C*-algebra of the Fell bundle ${\mathcal{N}} X$.
\end{proof}
Our next result shows that up to a canonical $*$-isomorphism, the tensoring of any injective Nica-covariant representation of the product system $X$ with the left regular representation of $P$ produces the C*-algebra of the Fock representation of $X$.
\begin{proposition}\label{P:P coa B}
Let $P$ be a right LCM subsemigroup of a group $G$, let $X$ be a compactly aligned product system over $P$ and let $t = \{t_p\}_{p \in P}$ be an injective Nica-covariant representation of $X$.
Then there exists a faithful $*$-homomorphism
\[
{\mathcal{T}}_\lambda(X) \longrightarrow \mathrm{C}^* (t) \otimes \mathrm{C}^*_\lambda(P) : \overline{t}_p(\xi_p) \longmapsto t_p(\xi_p)\otimes V_p.
\]
\end{proposition}
\begin{proof}
Consider the Nica-covariant representation
\[
t\otimes V \colon X\longrightarrow C^*(t)\otimes C^*_{\lambda}(P): \xi_p\longmapsto t_p(\xi_p) \otimes V_p, \,\, \xi_p \in X_p, p \in P.
\]
We claim that the induced representation $(t \otimes V)_*: {\mathcal{N}}{\mathcal{T}}(X)\rightarrow \mathrm{C}^*(t)\otimes \mathrm{C}^*_{\lambda}(P)$ is faithful on the fixed point algebra ${\mathcal{N}}{\mathcal{T}}(X)_e$. According to Proposition~\ref{P:purified}, it suffices to verify injectivity on $B_{F, \widehat{t}}\subseteq {\mathcal{N}}{\mathcal{T}}(X)$, where $F$ is an arbitrary finite {$\vee$-closed} subset of $P$.
Towards this end let $k_p \in {\mathcal{K}} X_p$ with $p \in F$ such that
\[
f := \sum_{p \in F} \widehat{t}_p(k_p) \neq 0
\quad\text{and}\quad
(t \otimes V)_*(f)=\sum_{p \in F} t_p(k_p) \otimes V_p V_p^* = 0.
\]
Let $p_0 \in F$ be minimal such that $\widehat{t}_{p_0}(k_{p_0}) \neq 0$; then $k_{p_0} \neq 0$.
We directly compute
\[
t_{p_0}(k_{p_0})
= (I \otimes P_{\mathbb{C} e_{p_0}}) \bigg( \sum_{p \in F} t_p(k_p) \otimes V_p V_p^* \bigg) (I \otimes P_{\mathbb{C} e_{p_0}})
= 0.
\]
Since $t$ is injective, we have that $k_{p_0} = 0$, a contradiction that establishes the desired injectivity for $(t \otimes V)_*$ on ${\mathcal{N}}{\mathcal{T}}(X)_e$.
It follows now that $(t\otimes V)_*$ is injective on each fiber of the bundle ${\mathcal{N}} X$ of Proposition~\ref{T:Fock is Fell} and so $\mathrm{C}^*(t \otimes V)$ becomes a cross sectional algebra of ${\mathcal{N}} X$. According to Proposition~\ref{T:Fock is Fell}, ${\mathcal{T}}_\lambda(X)$ is the reduced cross sectional algebra of ${\mathcal{N}} X$ and so \cite[Theorem 3.3]{Exe97} yields a map
\begin{equation} \label{finally}
\mathrm{C}^* (t) \otimes \mathrm{C}^*_\lambda(P) \longrightarrow {\mathcal{T}}_\lambda(X) : t_p(\xi_p)\otimes V_p \longmapsto \overline{t}_p(\xi_p) .
\end{equation}
The canonical expectation of $\mathrm{C}^*(t \otimes V)$ onto $(t\otimes V)_*({\mathcal{N}}{\mathcal{T}}(X)_e)$ coincides with ${\operatorname{id}} \otimes E_P$, where $E_P$ is the compression to the diagonal, and so it is faithful. By \cite[Proposition 3.7]{Exe97}, the map in (\ref{finally}) is faithful and the conclusion follows.
\end{proof}
As a consequence the injective representations of a product system $X$ that inherit the coaction of $G$ produce C*-covers for $({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$.
\begin{proposition}\label{P:P cover}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$.
Let $(B, G, \delta)$ be a cosystem for which there exists an equivariant epimorphism
\[
\phi \colon {\mathcal{T}}_\lambda(X) \longrightarrow B.
\]
If $\phi|_{\overline{t}(X_e)}$ is faithful, then $\phi|_{{\mathcal{T}}_\lambda(X)^+}$ is completely isometric and therefore $(B, G, \delta)$ forms a C*-cover for $({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$.
\end{proposition}
\begin{proof}
Consider the unital completely positive map
\[
\psi \colon \mathrm{C}^*(G) \longrightarrow \mathrm{C}^*_\lambda(G) \longrightarrow {\mathcal{B}}(\ell^2(P)) : u_g \mapsto \lambda_g \mapsto P_{\ell^2(P)} \lambda_g |_{\ell^2(P)}
\]
which is multiplicative on the subalgebra of $\mathrm{C}^*(G)$ generated by all $u_p$, $p \in P$. Since $\psi(u_p) = V_p$ for all $p\in P$, the following diagram of completely contractive homomorphisms
\[
\xymatrix{
{\mathcal{T}}_\lambda(X)^+ \ar[d]_{\phi} \ar[rr] & & B \otimes \mathrm{C}^*_\lambda(P) \\
B \ar[rr] & & B \otimes \mathrm{C}^*(G) \ar[u]_{{\operatorname{id}} \otimes \psi}
}
\]
commutes.
By Proposition \ref{P:P coa B} the upper horizontal map is a restriction of a faithful $*$-homomor\-phism and thus it is completely isometric.
Hence $\phi$ is completely isometric.
\end{proof}
\begin{definition}
Following \cite[Section 4]{CLSV11}, we say that a representation $t$ of a product system $X$ is \emph{gauge-compatible}, or simply \emph{equivariant}, if $\mathrm{C}^*(t)$ admits a coaction of $G$ that makes the canonical epimorphism ${\mathcal{T}}(X)\rightarrow \mathrm{C}^*(t)$ equivariant with respect to the natural (gauge) coaction of $G$ on ${\mathcal{T}}(X)$.
\end{definition}
Carlsen, Larsen, Sims and Vittadello proposed the idea of a co-universal C*-algebra with respect to gauge-compatible, injective, Nica-covariant representations of $X$. Roughly speaking, such a co-universal C*-algebra is the smallest C*-algebra carrying a coaction of $G$ that is generated by an equivariant, injective, Nica-covariant representation of $X$; see Definition \ref{D:couniversal} below for the precise formulation. They went on to prove that, under various hypotheses on the product system, the reduced cross sectional algebra of the Fell bundle associated with ${\mathcal{N}}\O(X)$ does satisfy the co-universal property \cite[Theorem 4.1]{CLSV11}.
\begin{definition}\label{D:couniversal}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$.
Suppose $(C, G,\gamma)$ is a cosystem and $j:X \to C$ is a Nica-covariant isometric representation, with integrated version denoted by $j_*: {\mathcal{N}}{\mathcal{T}}(X) \to C$.
We say that $(C, G, \gamma, j)$ has the \emph{co-universal property for equivariant, injective, Nica-covariant representations of $X$} if
\begin{enumerate}
\item
$j_e$ is faithful;
\item $j_*: {\mathcal{N}}{\mathcal{T}}(X) \to C$ is $\hat{\delta}$-$\gamma$ equivariant; and
\item for every equivariant, injective, Nica-covariant representation $t: X \rightarrow \mathrm{C}^*(t)$,
there is a surjective $*$-homomorphism $\phi : \mathrm{C}^* (t) \rightarrow C$ such that $$\phi t_p(\xi_p) = j_p(\xi_p), \mbox{ for all } \xi_p \in X_p \mbox{ and } p \in P.$$
\end{enumerate}
Notice that, as observed at the beginning of \cite[Section 4]{CLSV11}, the map $\phi$ is automatically equivariant because $j_*$ and $t_*$ are surjective.
\end{definition}
\begin{remark}
We have eschewed the notation ${\mathcal{N}}\O^r(X)$ used in \cite{CLSV11} because there is a certain degree of ambiguity in relation to its meaning. On the one hand, it is clear from \cite[Introduction]{CLSV11} and the statement of \cite[Theorem 4.1]{CLSV11}
that ${\mathcal{N}}\O^r(X)$ is implicitly intended to mean any C*-algebra that satisfies the co-universal property,
while on the other hand at the start of the proof of \cite[Theorem 4.1]{CLSV11}, ${\mathcal{N}}\O^r(X)$ is explicitly defined to be the reduced cross sectional algebra of the Fell bundle
${\mathcal{N}}\O X:= \{ [{\mathcal{N}}\O(X)]_g \}_{g \in G}$ of
the natural coaction of $G$ on ${\mathcal{N}}\O(X)$. This causes no problem so long as the product system $X$ satisfies the assumptions of \cite[Theorem 4.1]{CLSV11}, but there is a definite clash for some examples that do not satisfy those hypotheses; see e.g. \cite[Remark 4.2]{CLSV11}. We point out that there is no ambiguity in \cite{DK20}, where the notation is used exclusively to denote any C*-algebra that satisfies the co-universal property.
\end{remark}
Our next result shows that the C*-envelope of the tensor algebra ${\mathcal{T}}_\lambda(X)^+$ taken with its natural coaction satisfies the co-universal property, thus establishing the existence of a co-universal object for general compactly aligned product systems over right LCM semigroups. This completes the program initiated in \cite{CLSV11} and continued in \cite{DK20}.
\begin{theorem} \label{T:co-univ}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$.
Let
$\ol{\de}^+ \colon {\mathcal{T}}_\lambda(X)^+ \rightarrow {\mathcal{T}}_\lambda(X)^+ \otimes \mathrm{C}^*(G)$ be
the restriction of the coaction from Proposition \ref{P:f coaction} to ${\mathcal{T}}_\lambda(X)^+$. Then the C*-envelope
$
(\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, {\ol{\de}^+}),\iota_{\textup{env}},\delta_{\textup{env}})
$
of the cosystem $({\mathcal{T}}_\lambda(X)^+, G, {\ol{\de}^+})$
satisfies the co-universal property associated with $X$.
In particular, the canonical coaction on the co-universal object is normal.
\end{theorem}
\begin{proof}
By definition $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G,\ol{\de}^+)$ is generated by an injective Nica-covariant, $G$-compatible representation of $X$.
It remains to show that it has the required co-universal property.
Let $\overline{E}$ be the faithful conditional expectation on ${\mathcal{T}}_\lambda(X)$ and let $\Phi \colon {\mathcal{N}}{\mathcal{T}}(X) \to {\mathcal{T}}_\lambda(X)$ be the canonical $*$-epimorphism.
Then we have that
\[
\ker \Phi = \{f \in {\mathcal{N}}{\mathcal{T}}(X) \mid \overline{E} \Phi(f^*f) = 0\}.
\]
In particular, since $\Phi$ is faithful on the fixed point algebra by Lemma~\ref{P:1-1 Fock cexp}, we get that
$$\widehat{E} := (\Phi|_{[{\mathcal{N}}{\mathcal{T}}(X)]_e})^{-1} \overline{E} \Phi$$
is the conditional expectation on ${\mathcal{N}}{\mathcal{T}}(X)$.
Let $t$ be an injective, Nica-covariant, equivariant representation of $X$.
Then $\mathrm{C}^*(t)$ admits a $G$-grading and let us write ${\mathcal{B}} = \{B_g\}_{g \in G}$ for this grading of $\mathrm{C}^*(t)$.
Due to the existence of the conditional expectation on $\mathrm{C}^*(t)$, by \cite[Theorem 3.3]{Exe97} there exists a canonical equivariant $*$-epimorphism
\[
\phi \colon {\mathcal{N}}{\mathcal{T}}(X) \longrightarrow \mathrm{C}^*(t) \longrightarrow \mathrm{C}^*_\lambda({\mathcal{B}})
\]
where $\mathrm{C}^*_\lambda({\mathcal{B}})$ is the reduced cross sectional C*-algebra of the Fell bundle ${\mathcal{B}}$. Let us write $E'$ for the associated faithful conditional expectation on $\mathrm{C}^*_\lambda({\mathcal{B}})$.
For $f \in \ker \Phi$ we have that
$$\widehat{E}(f^*f) = (\Phi|_{[{\mathcal{N}}{\mathcal{T}}(X)]_e})^{-1} \overline{E} \Phi(f^*f) = 0.$$
As $\phi$ intertwines the conditional expectations $\widehat{E}$ and $E'$, we derive that $E'(\phi(f^*f)) = 0$ and so $\phi(f) = 0$, because $E'$ is faithful.
Since $f$ was arbitrary in $\ker \Phi$ we get that $\phi(\ker \Phi) = \{0\}$.
Hence there is an induced $*$-homomorphism $\phi'$ that makes the following diagram
\[
\xymatrix{
{\mathcal{N}}{\mathcal{T}}(X) \ar[rr]^{\phi} \ar[rd]^{\Phi} & & \mathrm{C}^*_\lambda({\mathcal{B}}) \\
& {\mathcal{T}}_\lambda(X) \ar@{.>}[ur]^{\phi'} &
}
\]
commutative.
By construction $\phi'$ is equivariant.
Since $t_e$ is faithful, $A$ embeds faithfully in $B_e \subseteq \mathrm{C}^*_\lambda({\mathcal{B}})$.
Then, by Proposition \ref{P:P cover} we have that $\mathrm{C}^*_\lambda({\mathcal{B}})$ is a C*-cover of $({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$.
Therefore we have the following $*$-epimorphisms
\[
\mathrm{C}^*(t) \longrightarrow \mathrm{C}^*_\lambda({\mathcal{B}}) \longrightarrow \mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+),
\]
which establishes that $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ satisfies the co-universal property for $X$.
The last sentence in the statement of the theorem follows from Corollary~\ref{C:normal}.
\end{proof}
\begin{remark}\label{R:aplug4env}
Theorem \ref{T:co-univ} shows that every compactly aligned product system
over a right LCM subsemigroup of a group
has an associated co-universal C*-algebra. This generalizes \cite[Theorem 4.1]{CLSV11}
by removing the assumption that $X$ is injective or that $P$ is directed and the augmented left actions are injective,
and it also generalizes \cite[Theorem 3.3]{DK20} by removing the assumption that $P$ is abelian.
We have been able to do this through the use of nonselfadjoint techniques adapted to the setting of operator algebras with a coaction,
which ultimately relies on the existence of the usual C*-envelope, through our Theorem \ref{T:co-env}.
\end{remark}
\section{Co-universality and Sehnem's covariance algebras} \label{S;Sehnem}
In \cite{CLSV11}, Carlsen, Larsen, Sims and Vittadello show that under certain hypotheses on a product system $X$, the reduced cross sectional algebra of the Fell bundle ${\mathcal{N}}\O X:= \{ [{\mathcal{N}}\O(X)]_g \}_{g \in G}$ of the natural coaction of $G$ on the C*-algebra ${\mathcal{N}}\O(X)$ satisfies the co-universal property. But examples such as \cite[Remark~4.2]{CLSV11} indicate that outside the framework of \cite{CLSV11}, the same bundle may fail to produce a co-universal $\mathrm{C}^*$-algebra for $X$. This raises the question of whether a different bundle might do the job. We settle this question in the present section by considering the Fell bundle determined by the natural coaction on Sehnem's covariance algebra \cite{Seh18}.
We begin by establishing the notation and reviewing the basic details of Sehnem's construction.
Let $P$ be a unital subsemigroup of a group $G$ and let $X = \{X_p\}_{p\in P}$ be a product system over $P$ with coefficients in $A:=X_e$.
For a finite set $F \subseteq G$ let
\[
K_F := \bigcap_{g \in F} gP.
\]
For $r \in P$ and $g \in F$ define the ideal of $A$ given by
\[
I_{r^{-1} K_{\{r,g\}}} :=
\begin{cases}
\bigcap_{t \in K_{\{r,g\}}} \ker \varphi_{r^{-1}t} & \text{if } K_{\{r,g\}} \neq \emptyset \text{ and } r \notin K_{\{r,g\}},\\
A & \text{otherwise}.
\end{cases}
\]
Then let
\[
I_{r^{-1} (r \vee F)} := \bigcap_{g \in F} I_{r^{-1} K_{\{r,g\}}},
\]
and let the C*-correspondences
\[
X_F := \oplus_{r \in P} X_r I_{r^{-1} (r \vee F)}
\quad\text{and}\quad
X_F^+ := \oplus_{g \in G} X_{gF}.
\]
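As a sanity check on the notation, here is an illustrative computation (an assumption-free special case, with $G = \mathbb{Z}$ and $P = \mathbb{N}$ written additively):

```latex
% For G = Z, P = N and finite F in Z we have gP = g + N, so
K_F = \bigcap_{g \in F} (g + \mathbb{N}) = \max(F) + \mathbb{N} \neq \emptyset.
% For r in N the condition r \notin K_{\{r,g\}} reads r < g, whence
I_{r^{-1}(r \vee F)} =
\begin{cases}
\bigcap_{s \geq \max(F) - r} \ker \varphi_s & \text{if } r < \max(F), \\
A & \text{otherwise},
\end{cases}
% where \varphi_s denotes the left action of A on X_s.
```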
For every $p \in P$ we define
\[
t_{F, p}(\xi_p) (\eta_r) = M_{p,r}(\xi_p \otimes \eta_r) \in X_{pr} I_{(pr)^{-1}(pr \vee pF)}, \text{ for all } \eta_r \in X_r I_{r^{-1} (r \vee F)}.
\]
This is well-defined as $I_{r^{-1}(r \vee F)} = I_{(pr)^{-1}(pr \vee pF)}$ for all $r \in P$, and $I_{r^{-1}(r \vee F)} = I_{(s^{-1}r)^{-1}(s^{-1}r \vee s^{-1}F)}$ for all $r \in sP$.
Therefore we obtain a representation $t_F:=\{t_{F, p}\}_{p \in P}$ of $X$ on $\L(X_F^+)$ that integrates to a representation
\begin{equation}\label{E:repsPhi}
\Phi_F \colon {\mathcal{T}}(X) \longrightarrow \L(X_F^+).
\end{equation}
Now consider the projections
\[
Q_{g,F} \colon X_F^+ \longrightarrow X_{gF}
\]
and define
\[
\nor{f}_F := \nor{Q_{e,F} \Phi_F(f) Q_{e,F}} \text{ for all } f \in \left[ {\mathcal{T}}(X) \right]_e.
\]
In particular we have that
\[
t_{F,p}(\xi_p) Q_{g, F} = Q_{pg, F} t_{F,p}(\xi_p)
\quad\text{and}\quad
t_{F,p}(\xi_p)^* Q_{g,F} = Q_{p^{-1}g, F} t_{F,p}(\xi_p)^*,
\]
and so $Q_{e,F}$ reduces the image of the fixed point algebra under $\Phi_F$.
\begin{definition}\cite[Definition 3.2]{Seh18}
A Toeplitz representation is called \emph{strongly covariant} if it vanishes on the ideal ${\mathcal{I}}_e \lhd \left[ {\mathcal{T}}(X) \right]_e$ given by
\[
{\mathcal{I}}_e := \{f \in \left[ {\mathcal{T}}(X) \right]_e \mid \lim_F \nor{f}_F = 0\},
\]
where the limit is taken with respect to the partial order induced by inclusion on finite sets of $P$.
We denote by $A \times_X P$ the universal C*-algebra with respect to the strongly covariant representations of $X$.
\end{definition}
That is, $A \times_X P$ is the quotient ${\mathcal{T}}(X)/{\mathcal{I}}_\infty$ for the ideal ${\mathcal{I}}_\infty \lhd {\mathcal{T}}(X)$ generated by ${\mathcal{I}}_e$.
One of the important points of Sehnem's theory is that $A \times_X P$ does not depend on the group $G$ in which $P$ embeds, and that $A$ embeds faithfully in $A \times_X P$.
As a quotient by an induced ideal of ${\mathcal{T}}(X)$, it follows that $A \times_X P$ inherits the coaction of $G$ \cite[Lemma 3.4]{Seh18}.
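To connect with the single-correspondence case (a reconciliation included for orientation; the precise statement appears in \cite{Seh18}): for $G = \mathbb{Z}$ and $P = \mathbb{N}$, strong covariance reduces to Katsura covariance.

```latex
% P = N: the product system is generated by X_1, and a Toeplitz
% representation t is strongly covariant precisely when
t_1(\varphi(a)) = t_e(a)
\quad\text{for all } a \in \varphi^{-1}({\mathcal{K}} X_1) \cap (\ker \varphi)^{\perp},
% where \varphi is the left action of A on X_1 and t_1 also denotes
% the induced map on compact operators. Consequently A x_X N
% recovers Katsura's Cuntz--Pimsner algebra O_{X_1}.
```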
The following is the main theorem of \cite{Seh18}.
\begin{theorem}\cite[Theorem 3.10]{Seh18}
Let $P$ be a unital subsemigroup of a group $G$ and let $X$ be a product system over $P$ with coefficients in $A$.
Then a $*$-homomorphism of $A \times_X P$ is faithful on $A$ if and only if it is faithful on the fixed point algebra $\left[ A \times_X P \right]_e$.
\end{theorem}
The construction of Sehnem \cite{Seh18} encompasses a number of variants that have appeared in the literature.
For example, suppose that $(G,P)$ is a weak quasi-lattice ordered pair and $X$ is a compactly aligned product system such that either $X$ is faithful, or $P$ is directed and the representation of $X$ in ${\mathcal{N}}\O(X)$ is faithful.
In this case Sehnem \cite[Proposition 4.6]{Seh18} shows that $A \times_X P$ is the Cuntz-Nica-Pimsner algebra ${\mathcal{N}}\O(X)$ of Sims-Yeend \cite{SY10}.
Our next theorem confirms that the Fell bundle of the covariance algebra $A \times_X P$ provides the right setup for co-universality.
Indeed, our equivariant C*-envelope coincides with the reduced cross sectional C*-algebra of the Fell bundle determined by the natural coaction on $A \times_X P$.
\begin{theorem}\label{T:co-un is Fell}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$ with coefficients from $A$.
Consider the Fell bundle
\[
\S{\mathcal{C}} X := \{ [A \times_X P]_g \}_{g \in G}
\]
induced by the natural coaction of $G$ on $A \times_X P$.
Then the cross sectional algebra and the reduced cross sectional algebra of $\S{\mathcal{C}} X$ are isomorphic to the covariance algebra and to the C*-envelope, respectively:
\[
\mathrm{C}^*(\S{\mathcal{C}} X) \simeq A \times_X P \quad\text{and}\quad \mathrm{C}^*_\lambda(\S{\mathcal{C}} X) \simeq \mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+).
\]
\end{theorem}
\begin{proof}
For the first part, one argues as in the proof of Proposition~\ref{T:Fock is Fell}, since the strong covariance relations live in the fixed point algebra.
For the second part note that by \cite[Theorem 3.10]{Seh18} we have an equivariant, injective, Nica-covariant representation of $X$ into $A \times_X P$. Hence the co-universality of $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ proved in Theorem \ref{T:co-univ}
implies the existence of a $*$-epimorphism
\[
\phi:A \times_X P \longrightarrow \mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+),
\]
which is injective on $A \hookrightarrow A \times_{X} P$ and maps generators to generators. By \cite[Theorem 3.10]{Seh18}, $\phi$ is injective on $[A \times_X P]_e$ and so it is injective on each fiber of $\S{\mathcal{C}} X$. Therefore $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ becomes a cross sectional algebra of the bundle $\S{\mathcal{C}} X$ with a conditional expectation onto the $e$-fiber $[A \times_X P]_e$. By the minimality property of the reduced cross sectional algebra \cite[Theorem 3.3]{Exe97}, there is a canonical $*$-epimorphism
\[
\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) \longrightarrow \mathrm{C}^*_\lambda(\S{\mathcal{C}} X).
\]
By Theorem \ref{T:co-univ}, the coaction on $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ is normal.
Therefore the conditional expectation on $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ is faithful, and thus the above $*$-epimorphism is injective, hence a $*$-isomorphism.
\end{proof}
Let us see the form of the strong covariance relations for compactly aligned product systems over right LCM semigroups.
The following is proved by Sehnem in \cite[Proposition 4.2]{Seh18} for quasi-lattices, but the same proof carries over to right LCM semigroups as well.
Notice that we consider the restriction to $X_F$ rather than the representation on the entire $X_F^+$.
\begin{proposition}\label{P:sc lcm}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$. A representation $t = \{t_p\}_{p \in P}$ of $X$ is strongly covariant if and only if it is Nica-covariant and it satisfies
\[
\sum_{p \in F} t_{F, p}(k_p) |_{X_F} = 0 \Longrightarrow \sum_{p \in F} t_p(k_p) = 0
\]
for any finite $F \subseteq P$ and $k_p \in {\mathcal{K}} X_p$.
\end{proposition}
\begin{proof}
The proof is identical to that of \cite[Proposition 4.2]{Seh18} by replacing $p \vee q$ with $w$ whenever $p P \cap q P = w P$.
Note that the ideals are defined in such a way that if $F$ is a finite subset of $P$ and $r \in P$ then
\[
\sum_{p \in F} t_p(k_p) t_r(\eta_r) = \sum_{\substack{p \in F \\ r \in pP}} t_p(k_p) t_r(\eta_r) = \sum_{\substack{p \in F \\ r \in pP}} t_r( i_p^{r}(k_p) \eta_r)
\]
for all $\eta_r \in X_r \cdot I_{r^{-1}( r \vee F)}$ and every Nica-covariant representation $\{t_p\}_{p \in P}$ of $X$.
\end{proof}
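Recall that every quasi-lattice ordered pair $(G, P)$ in the sense of Nica is a right LCM pair, since for $p, q \in P$ we have
\[
pP \cap qP =
\begin{cases}
(p \vee q) P & \text{if } p \text{ and } q \text{ admit a common upper bound in } P, \\
\emptyset & \text{otherwise},
\end{cases}
\]
so one may take $w = p \vee q$ whenever $pP \cap qP \neq \emptyset$, in accordance with the substitution used in the proof above.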
Let $P$ be a unital subsemigroup of a group $G$ and $X$ be a product system over $P$ with coefficients in $A$.
Let $q_\lambda \colon {\mathcal{T}}(X) \to {\mathcal{T}}_\lambda(X)$ be the canonical $*$-epimorphism.
Another interesting C*-algebra related to Sehnem's algebra can be obtained from ${\mathcal{T}}_\lambda(X)$ by taking
the quotient of ${\mathcal{T}}_\lambda(X)$ by the ideal $q_\lambda({\mathcal{I}}_\infty)$. We would like to analyze this quotient
and discuss its relation with the cross sectional algebra $\mathrm{C}^*_\lambda(\S{\mathcal{C}} X)$.
It is easy to see that the ideal $q_\lambda({\mathcal{I}}_\infty)$ of ${\mathcal{T}}_{\lambda}(X)$ is induced, hence $\T_\la(X)/ q_\la(\I_\infty)$ inherits from ${\mathcal{T}}_{\lambda}(X)$
the coaction of $G$ and, with it, a topological grading \cite[Proposition~23.10]{Exe17}.
By \cite[Theorem 3.3]{Exe97}, we then have an equivariant $*$-epimorphism
\[
\T_\la(X)/ q_\la(\I_\infty) \longrightarrow \mathrm{C}^*_\lambda(\S{\mathcal{C}} X),
\]
which is known to be an isomorphism if, for instance, $G$ is exact.
We see that
the representations $\Phi_F$ from \eqref{E:repsPhi} used to define the strong covariance relations are sub-representations of $\overline{\delta}_\lambda \colon {\mathcal{T}}_\lambda(X) \to {\mathcal{T}}_\lambda(X) \otimes \mathrm{C}^*_\lambda(G)$ for $\overline{\delta}_\lambda = ({\operatorname{id}} \otimes \lambda) \overline{\delta}$ where $\overline{\delta}$ is the normal coaction on the Fock representation.
Indeed we can identify
\[
X_F^+ = \oplus_{g \in G} \oplus_{r \in P} X_r I_{r^{-1}(r \vee gF)}
\]
with a submodule of ${\mathcal{F}} X \otimes \ell^2(G)$ through the isometry given by
\[
X_{r} I_{r^{-1}(r \vee gF)} \ni \eta_r \mapsto \eta_r \otimes e_g \in X_r \otimes \ell^2(G).
\]
Recall here that ${\mathcal{F}} X \otimes \ell^2(G)$ is the exterior tensor product of two modules (seeing $\ell^2(G)$ as a module over $\mathbb{C}$), and there is a faithful $*$-homomorphism
\[
{\mathcal{T}}_\lambda(X) \otimes \mathrm{C}^*_\lambda(G) \subseteq \L({\mathcal{F}} X) \otimes {\mathcal{B}}(\ell^2(G)) \hookrightarrow \L({\mathcal{F}} X \otimes \ell^2(G)).
\]
We then see that
\[
t_{F,p}(\xi_p) = (\overline{t}_p(\xi_p) \otimes \lambda_p)|_{X_F^+} = \overline{\delta}_\lambda(\overline{t}_p(\xi_p))|_{X_F^+}
\text{ for all }
p \in P,
\]
and likewise for their adjoints.
Thus $X_F^+$ is reducing under $\overline{\delta}_\lambda({\mathcal{T}}_\lambda(X))$.
Recall also that $X_F$ is reducing for $[{\mathcal{T}}(X)]_e$ as the range of the projection $Q_{e,F}$ and so we obtain the representation
\[
\bigoplus\limits_{ F \subseteq G \textup{ finite} } \Phi_F(\cdot)|_{X_F} \colon [{\mathcal{T}}(X)]_e \longrightarrow [{\mathcal{T}}_\lambda(X)]_e \longrightarrow \prod\limits_{ F \subseteq G \textup{ finite} } {\mathcal{B}}(X_F).
\]
In particular, almost by definition, we have for $f \in {\mathcal{T}}(X)$ that
\[
f \in {\mathcal{I}}_e \quad\text{if and only if}\quad \bigoplus\limits_{ F \subseteq G \textup{ finite} } \Phi_F(f)|_{X_F} \in c_0({\mathcal{B}}(X_F) \mid F \subseteq G \textup{ finite} ).
\]
By definition we then get that the following diagram
\[
\xymatrix{
[{\mathcal{T}}(X)]_e \ar[d] \ar[rr] & & [{\mathcal{T}}_\lambda(X)]_e \ar[rr] \ar[d] & &
\prod\limits_{ F \subseteq G \textup{ finite} } {\mathcal{B}}(X_F) \ar[d] \\
[A \times_X P]_e \ar[rr] & & [\T_\la(X)/ q_\la(\I_\infty)]_e \ar[rr] & &
\quo{\prod\limits_{ F \subseteq G \textup{ finite} } {\mathcal{B}}(X_F)}{c_0({\mathcal{B}}(X_F) \mid F \subseteq G \textup{ finite}) }
}
\]
is commutative.
Consequently the $e$-graded $*$-algebraic relations in ${\mathcal{T}}_\lambda(X)$ induce the strong covariance relations; in particular, strongly covariant representations are automatically Nica-covariant.
In particular we obtain the following corollary.
\begin{corollary}\label{C:red seh inj}
Let $P$ be a unital subsemigroup of a group $G$ and let $X$ be a product system over $P$
with coefficients in $A$.
Then $A \hookrightarrow \T_\la(X)/ q_\la(\I_\infty)$.
Moreover a $*$-homomorphism of $\T_\la(X)/ q_\la(\I_\infty)$ is faithful on $A$ if and only if it is faithful on $\left[ \T_\la(X)/ q_\la(\I_\infty) \right]_e$.
\end{corollary}
\begin{proof}
The proof that $A \hookrightarrow \T_\la(X)/ q_\la(\I_\infty)$ follows by combining the commutative diagram
\[
\xymatrix{
{\mathcal{T}}(X) \ar[d] \ar[rr] & & {\mathcal{T}}_\lambda(X) \ar[d] \\
A \times_X P \ar[rr] & & \T_\la(X)/ q_\la(\I_\infty)
}
\]
of $*$-epimorphisms with the fact that $$A \cap c_0({\mathcal{B}}(X_F) \mid F \subseteq G \textup{ finite} ) = \{0\},$$ which is the main argument in \cite[Lemma 3.6]{Seh18}.
The rest now follows by combining this with \cite[Theorem 3.10]{Seh18}.
\end{proof}
Surprisingly, the key to injectivity is the normality of the coaction of $G$ on $\T_\la(X)/ q_\la(\I_\infty)$.
\begin{corollary}\label{C:exa Seh}
Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$ with coefficients from $A$.
Then the equivariant $*$-epimorphism
\[
\T_\la(X)/ q_\la(\I_\infty) \longrightarrow \mathrm{C}^*_\lambda(\S{\mathcal{C}} X)
\]
is faithful if and only if the coaction of $G$ on $\T_\la(X)/ q_\la(\I_\infty)$ is normal.
\end{corollary}
\begin{proof}
First suppose that the coaction of $G$ on $\T_\la(X)/ q_\la(\I_\infty)$ is normal.
Then the equivariant $*$-epimorphism $\T_\la(X)/ q_\la(\I_\infty) \to \mathrm{C}^*_\lambda(\S{\mathcal{C}} X)$ is faithful if and only if it is faithful on the fixed point algebra. By Corollary \ref{C:red seh inj}, this happens if and only if it is faithful on $A$, which is the case because, by Theorem \ref{T:co-univ} and Theorem \ref{T:co-un is Fell}, we have
\[
A \hookrightarrow {\mathcal{T}}_\lambda(X)^+ \hookrightarrow \mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) \simeq \mathrm{C}^*_\lambda(\S{\mathcal{C}} X).
\]
Conversely suppose that the equivariant $*$-epimorphism is faithful.
By Corollary \ref{C:normal}, the coaction $\delta_{\textup{env}}$ on $\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+)$ is normal and normality passes to $\T_\la(X)/ q_\la(\I_\infty)$ via the equivariant map.
\end{proof}
\section{Reduced Hao-Ng isomorphisms}
Let ${\mathfrak{G}}$ be a discrete group.
Suppose that the operator algebra ${\mathfrak{A}} \subseteq {\mathcal{B}}(H)$ admits a ${\mathfrak{G}}$-action $\alpha$ by completely isometric automorphisms $\alpha_{{\mathfrak{g}}}$ for ${\mathfrak{g}} \in {\mathfrak{G}}$.
Then one can define the reduced crossed product ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ of ${\mathfrak{A}}$ by ${\mathfrak{G}}$ as the norm-closed subalgebra of ${\mathcal{B}}(H) \otimes {\mathcal{B}}(\ell^2({\mathfrak{G}}))$ generated by $\pi(a)$ and $U_{{\mathfrak{g}}}$ where
\[
\pi(a) (\xi \otimes e_{{\mathfrak{h}}}) = \alpha_{{\mathfrak{h}}^{-1}}(a)\xi \otimes e_{{\mathfrak{h}}}
\text{ for all } a \in {\mathfrak{A}}
\quad\text{and}\quad
U_{\mathfrak{g}} (\xi \otimes e_{{\mathfrak{h}}}) = \xi \otimes e_{{\mathfrak{g}} {\mathfrak{h}}}
\text{ for all }
{\mathfrak{g}} \in {\mathfrak{G}}.
\]
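With these conventions, a direct computation on elementary tensors yields the familiar covariance relation
\[
U_{{\mathfrak{g}}} \pi(a) U_{{\mathfrak{g}}}^* = \pi(\alpha_{{\mathfrak{g}}}(a))
\quad\text{for all } a \in {\mathfrak{A}} \text{ and } {\mathfrak{g}} \in {\mathfrak{G}},
\]
so that $(\pi, U)$ forms a covariant pair for $({\mathfrak{A}}, {\mathfrak{G}}, \alpha)$.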
It follows that ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ is independent of the representation of ${\mathfrak{A}}$ on $H$.
At the same time the ${\mathfrak{G}}$-action extends to an action on $\mathrm{C}^*_{\textup{env}}({\mathfrak{A}})$ and one can form the reduced C*-crossed product.
Katsoulis \cite[Theorem 2.5]{Kat17} has shown that
\[
\mathrm{C}^*_{\textup{env}}({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \simeq \mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \rtimes_{\alpha, \lambda} {\mathfrak{G}}.
\]
The isomorphism also holds for C*-envelopes of cosystems whose coactions commute with $\alpha$.
\begin{proposition}\label{P:cp env}
Let $({\mathfrak{A}}, G, \delta)$ be a (normal) cosystem.
Let ${\mathfrak{G}}$ be a group acting on ${\mathfrak{A}}$ by completely isometric automorphisms $\alpha_{{\mathfrak{g}}}$ for ${\mathfrak{g}} \in {\mathfrak{G}}$, such that
\[
\delta \alpha_{{\mathfrak{g}}} = (\alpha_{{\mathfrak{g}}} \otimes {\operatorname{id}}) \delta
\text{ for all }
{\mathfrak{g}} \in {\mathfrak{G}}.
\]
Then $G$ induces a (resp.\ normal) coaction $\delta\rtimes {\operatorname{id}}$ on ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ and
\[
\mathrm{C}^*_{\textup{env}}({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}, G, \delta\rtimes {\operatorname{id}}) \simeq \mathrm{C}^*_{\textup{env}}({\mathfrak{A}}, G, \delta) \rtimes_{\alpha, \lambda} {\mathfrak{G}}.
\]
\end{proposition}
\begin{proof}
For convenience suppose that ${\mathfrak{A}} \subseteq \mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \subseteq {\mathcal{B}}(H)$ and $\mathrm{C}^*(G) \subseteq {\mathcal{B}}(K)$.
The action $\alpha \otimes {\operatorname{id}}$ of ${\mathfrak{G}}$ on $\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \otimes \mathrm{C}^*(G)$ gives rise to the crossed product $[\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \otimes \mathrm{C}^*(G) ] \rtimes_{\alpha\otimes {\operatorname{id}}, \lambda} {\mathfrak{G}}$.
To make a distinction we write
\[
\pi(a) \in {\mathcal{B}}(H) \otimes {\mathcal{B}}(\ell^2({\mathfrak{G}}))
\quad\text{and}\quad
\pi'(a \otimes u_g) \in {\mathcal{B}}(H) \otimes {\mathcal{B}}(K) \otimes {\mathcal{B}}(\ell^2({\mathfrak{G}}))
\]
for all $a \in {\mathfrak{A}}_g$, $g \in G$, while we use the symbols
\[
U_{{\mathfrak{g}}} \in {\mathcal{B}}(H) \otimes {\mathcal{B}}(\ell^2({\mathfrak{G}}))
\quad\text{and}\quad
U_{{\mathfrak{g}}}' \in {\mathcal{B}}(H) \otimes {\mathcal{B}}(K) \otimes {\mathcal{B}}(\ell^2({\mathfrak{G}}))
\]
for the different shifts that define the crossed products
\[
\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \rtimes_{\alpha,\lambda} {\mathfrak{G}}
\quad\text{and}\quad
[\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \otimes \mathrm{C}^*(G) ] \rtimes_{\alpha \otimes {\operatorname{id}},\lambda} {\mathfrak{G}}.
\]
Up to a unitary interchanging of $K$ with $\ell^2({\mathfrak{G}})$, we get the $*$-isomorphism
\[
\Phi \colon [\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \otimes \mathrm{C}^*(G) ] \rtimes_{\alpha \otimes {\operatorname{id}},\lambda} {\mathfrak{G}} \longrightarrow [\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \rtimes_{\alpha, \lambda} {\mathfrak{G}}] \otimes \mathrm{C}^*(G)
\]
given by
\[
\Phi[\pi'(a \otimes u_g) \cdot U_{{\mathfrak{g}}}'] = (\pi(a) U_{{\mathfrak{g}}}) \otimes u_g
\]
for all $a \in {\mathfrak{A}}_g$, $g \in G$ and ${\mathfrak{g}} \in {\mathfrak{G}}$.
Consider the completely isometric map
\[
\delta \rtimes {\operatorname{id}} := \Phi \circ {\operatorname{Ind}}_{{\mathfrak{G}}}(\delta) \colon {\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}} \longrightarrow ({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*(G),
\]
where ${\operatorname{Ind}}_{{\mathfrak{G}}}(\delta)$ is the induced representation on ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ given by
\[
{\operatorname{Ind}}_{{\mathfrak{G}}}(\delta)(\pi(a) U_{{\mathfrak{g}}}) = \pi'(a \otimes u_g) U_{{\mathfrak{g}}}'
\text{ for all }
a \in{\mathfrak{A}}_g, g \in G.
\]
The compatibility assumption on $\alpha$ guarantees that
\[
(\delta \rtimes {\operatorname{id}})(\pi(a) U_{{\mathfrak{g}}}) = (\pi(a) U_{{\mathfrak{g}}}) \otimes u_g \text{ for all } a \in {\mathfrak{A}}_g, g \in G, \text{ and } {\mathfrak{g}} \in {\mathfrak{G}}.
\]
Therefore, $\delta \rtimes {\operatorname{id}}$ is a completely isometric map which satisfies the coaction identity on ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
Since $\delta$ is non-degenerate, we have that $\sum_{g \in G} {\mathfrak{A}}_g$ is dense in ${\mathfrak{A}}$, so that also $\sum_{g \in G} [{\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}]_g$ is dense in ${\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
Hence $({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}, G, \delta \rtimes {\operatorname{id}})$ is a cosystem.
When $\delta$ is normal then we can deduce that $\delta \rtimes {\operatorname{id}}$ is also normal by working with $\mathrm{C}^*_\lambda(G) \subseteq {\mathcal{B}}(\ell^2(G))$ and $\delta_\lambda$, in place of $\mathrm{C}^*(G) \subseteq {\mathcal{B}}(K)$ and $\delta$, respectively.
This gives that the $*$-homomorphism
\[
({\operatorname{id}}_{{\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}} \otimes \lambda) (\delta \rtimes {\operatorname{id}}) = \delta_\lambda \rtimes {\operatorname{id}} \colon {\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}} \longrightarrow ({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*_\lambda(G)
\]
is injective, and thus $\delta \rtimes {\operatorname{id}}$ is normal.
For the second part we will use the realization of the C*-envelope of a cosystem from Theorem \ref{T:co-env}.
By combining with \cite[Theorem 2.5]{Kat17} we have the following faithful $*$-homomorphisms
\[
\xymatrix{
\mathrm{C}^*_{\textup{env}}({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}, G, \delta\rtimes {\operatorname{id}}) \ar[rr] \ar@{<.>}[dd]& & \mathrm{C}^*_{\textup{env}}({\mathfrak{A}} \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*(G) \ar[d]^{\simeq} \\
& & (\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*(G) \ar[d]^{\simeq} \\
\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}, G, \delta) \rtimes_{\alpha, \lambda} {\mathfrak{G}} \ar[rr] & & (\mathrm{C}^*_{\textup{env}}({\mathfrak{A}}) \otimes \mathrm{C}^*(G) ) \rtimes_{\alpha, \lambda} {\mathfrak{G}}
}
\]
that induce the required $*$-isomorphism.
\end{proof}
Next we discuss the application of Proposition~\ref{P:cp env} to cosystems. Recall that a
\emph{generalized gauge action of ${\mathfrak{G}}$ on ${\mathcal{T}}_\lambda(X)$} is an action $\alpha \colon {\mathfrak{G}} \to \operatorname{Aut}({\mathcal{T}}_\lambda(X))$ that satisfies
\[
\alpha_{{\mathfrak{g}}}(\overline{t}_p(X_p)) = \overline{t}_p(X_p) \text{ for all } p \in P \text{ and } {\mathfrak{g}} \in {\mathfrak{G}}.
\]
Consider the reduced C*-crossed product ${\mathcal{T}}_\lambda(X) \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ of ${\mathcal{T}}_\lambda(X)$ by ${\mathfrak{G}}$ given by the representation $\rho$ of ${\mathcal{T}}_\lambda(X)$ and $U$ of ${\mathfrak{G}}$. Then $\alpha$ preserves ${\mathcal{T}}_\lambda(X)^+$ and thus we get the nonselfadjoint crossed product ${\mathcal{T}}_\lambda(X)^+ \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ in the sense of Katsoulis and Ramsey \cite{KR16}. The ideal $q_\lambda({\mathcal{I}}_\infty)$ of strong covariance in ${\mathcal{T}}_\lambda(X)$ is $\alpha$-invariant, so that $\alpha$ descends to a group action of ${\mathfrak{G}}$ on $\T_\la(X)/ q_\la(\I_\infty)$ which is also a generalized gauge action and gives the reduced crossed product $(\T_\la(X)/ q_\la(\I_\infty)) \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
We can define a product system by using the concrete representations of $X$ and ${\mathfrak{G}}$.
To this end, for every $p \in P$ we define
\[
X_p \rtimes_{\alpha, \lambda} {\mathfrak{G}} := \overline{\operatorname{span}} \{\rho(\overline{t}_p(\xi_p)) U_{{\mathfrak{g}}} \mid \xi_p \in X_p, {\mathfrak{g}} \in {\mathfrak{G}}\}.
\]
Since $\rho \alpha_{{\mathfrak{g}}}(f) = U_{{\mathfrak{g}}} \rho(f) U_{{\mathfrak{g}}}^*$ for all $f \in {\mathcal{T}}_\lambda(X)$ we can also write
\[
X_p \rtimes_{\alpha, \lambda} {\mathfrak{G}} := \overline{\operatorname{span}} \{U_{{\mathfrak{g}}} \rho(\overline{t}_p(\xi_p)) \mid \xi_p \in X_p, {\mathfrak{g}} \in {\mathfrak{G}}\}.
\]
Consequently
\[
\overline{ (X_p \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \cdot (X_q \rtimes_{\alpha, \lambda} {\mathfrak{G}})} = X_{pq} \rtimes_{\alpha, \lambda} {\mathfrak{G}},
\]
and thus the family
\[
X \rtimes_{\alpha, \lambda} {\mathfrak{G}} := \{X_p \rtimes_{\alpha, \lambda} {\mathfrak{G}}\}_{p \in P}
\]
defines a product system over $P$ with coefficients from $A \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
Furthermore we can write
\begin{align*}
{\mathcal{K}}(X_p \rtimes_{\alpha, \lambda} {\mathfrak{G}})
& =
\overline{\operatorname{span}} \{ \rho(\overline{\psi}_{p}(k_p)) U_{{\mathfrak{g}}} \mid k_p \in {\mathcal{K}} X_p, {\mathfrak{g}} \in {\mathfrak{G}} \} \\
& =
\overline{\operatorname{span}} \{ U_{{\mathfrak{g}}} \rho(\overline{\psi}_{p}(k_p)) \mid k_p \in {\mathcal{K}} X_p, {\mathfrak{g}} \in {\mathfrak{G}} \}.
\end{align*}
As $X \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ is defined concretely we get that
\[
i_{p}^{pq} \left( \rho(\overline{\psi}_p(k_p)) U_{{\mathfrak{g}}} \right) \rho(\overline{t}_{pq}(\xi_{pq})) U_{{\mathfrak{h}}}
=
\rho(\overline{\psi}_p(k_p)) U_{{\mathfrak{g}}} \rho(\overline{t}_{pq}(\xi_{pq})) U_{{\mathfrak{h}}}.
\]
Moreover we see that
\[
U_{{\mathfrak{g}}} \rho(\overline{\psi}_{p}(k_p)) \cdot
\rho(\overline{\psi}_{q}(k_q)) U_{{\mathfrak{h}}}
=
\rho( \alpha_{{\mathfrak{g}}} (\overline{\psi}_p(k_p) \overline{\psi}_q(k_q)) ) U_{{\mathfrak{g}} {\mathfrak{h}}}.
\]
Since $\alpha_{{\mathfrak{g}}}$ defines automorphisms of the compact operators, a proof similar to \cite[Proposition 6.3]{DK20} (see also \cite[Proposition 3.2]{Kat20} for a shorter proof) shows that compact alignment of $X$ implies compact alignment of $X \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
By construction we see that
\[
{\mathcal{T}}_\lambda(X) \rtimes_{\alpha, \lambda} {\mathfrak{G}} \simeq {\mathcal{T}}_\lambda(X \rtimes_{\alpha, \lambda} {\mathfrak{G}})
\quad\text{and}\quad
{\mathcal{T}}_\lambda(X)^+ \rtimes_{\alpha, \lambda} {\mathfrak{G}} \simeq {\mathcal{T}}_\lambda(X \rtimes_{\alpha, \lambda} {\mathfrak{G}})^+.
\]
Since we will be dealing with two different product systems at once, we will write ${\mathcal{I}}_\infty(X) $ and ${\mathcal{I}}_\infty( X \rtimes_{\alpha, \lambda} {\mathfrak{G}} )$ to distinguish the two relevant strong covariance ideals.
\begin{theorem}\label{T:hao ng}Let $P$ be a right LCM subsemigroup of a group $G$ and let $X$ be a compactly aligned product system over $P$ with coefficients from $A$.
Let $\alpha \colon {\mathfrak{G}} \to \operatorname{Aut}({\mathcal{T}}_\lambda(X))$ be a generalized gauge action by a discrete group ${\mathfrak{G}}$.
Then
\[
\mathrm{C}^*_\lambda(\S{\mathcal{C}}(X \rtimes_{\alpha, \lambda} {\mathfrak{G}})) \simeq \mathrm{C}^*_\lambda(\S{\mathcal{C}} X) \rtimes_{\alpha, \lambda} {\mathfrak{G}}.
\]
If in addition the coaction of $G$ on $A \rtimes_{\alpha, \lambda} {\mathfrak{G}}$ is normal, which is the case e.g.\ when $G$ is exact, then
\begin{equation}\label{E:co-haong}
{\mathcal{T}}_\lambda(X \rtimes_{\alpha, \lambda} {\mathfrak{G}}) /q_\lambda({\mathcal{I}}_\infty( X \rtimes_{\alpha, \lambda} {\mathfrak{G}} )) \simeq ({\mathcal{T}}_\lambda(X) /{\mathcal{I}}_\infty(X)) \rtimes_{\alpha, \lambda} {\mathfrak{G}}.
\end{equation}
\end{theorem}
\begin{proof}
For convenience set $Y := X \rtimes_{\alpha, \lambda} {\mathfrak{G}}$.
By construction we see that
\[
({\mathcal{T}}_\lambda(X) \otimes \mathrm{C}^*(G)) \rtimes_{\alpha \otimes {\operatorname{id}}, \lambda} {\mathfrak{G}}
\simeq^{\Psi}
({\mathcal{T}}_\lambda(X) \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*(G).
\]
In particular $\Psi$ is equivariant and by restricting $\Psi$ we get
\[
({\mathcal{T}}_\lambda(X)^+ \otimes \mathrm{C}^*(G)) \rtimes_{\alpha \otimes {\operatorname{id}}, \lambda} {\mathfrak{G}}
\simeq
({\mathcal{T}}_\lambda(X)^+ \rtimes_{\alpha, \lambda} {\mathfrak{G}}) \otimes \mathrm{C}^*(G).
\]
By Theorem \ref{T:co-univ} and Proposition \ref{P:cp env} we have that
\begin{align*}
\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(Y)^+, G, \overline{\delta} \rtimes {\operatorname{id}})
&\simeq
\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+ \rtimes_{\alpha, \lambda} {\mathfrak{G}}, G, \Psi(\overline{\delta} \rtimes {\operatorname{id}})) \\
& \simeq
\mathrm{C}^*_{\textup{env}}({\mathcal{T}}_\lambda(X)^+, G, \ol{\de}^+) \rtimes_{\alpha, \lambda} {\mathfrak{G}},
\end{align*}
and an application of Theorem \ref{T:co-un is Fell} proves the first isomorphism. The second part, \eqref{E:co-haong}, follows from Corollary \ref{C:exa Seh}, and the proof is complete.
\end{proof}
\begin{acknow}
Part of the research was carried out during the Focused Research Group 20frg248: Noncommutative Boundaries for Tensor Algebras at the Banff International Research Station.
Adam Dor-On was supported by the NSF grant DMS-1900916 and by the European Union's Horizon 2020 Marie Sklodowska-Curie grant No 839412.
Evgenios Kakariadis acknowledges support from EPSRC as part of the programme ``Operator Algebras for Product Systems'' (EP/T02576X/1).
Marcelo Laca was partially supported by NSERC Discovery Grant RGPIN-2017-04052.
Xin Li has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 817597).
\end{acknow}
StudioSeena's Blog
Visit StudioSeena's Website
Author: Christina Lorraine Young (a.k.a. seena)
Syndication Story
March 31, 2016 | Christina Lorraine Young (a.k.a. seena)
This is such a fantastic story!
Mark Victor Young
This is a poster/teaser designed for my last comic strip collaboration with Tim Levins, called Built to Last. Here's the story of how it came about:
When we sent out our strip Then Comes Marriage to the six or seven major syndicates, we had an amazing response. We had two development type offers and one request to see more strips. This beat all the form letters and the one or two encouraging handwritten comments we got in reply to Rivertown News hands down. We were ecstatic.
An Arts & Crafts Book Like You've Never Seen Before!
February 17, 2016 February 17, 2016 | Christina Lorraine Young (a.k.a. seena)
I am so excited to have been asked to contribute my art to this amazing project! "Around The World With 80 Artists" is a compilation of arts and crafts projects curated by Mahe Zehra Husain, founder of The Creative Art Academy. From budding crafters to advanced artists, there is a project here for everyone! My contributions will include not only some of the coloring pages from The Art Journal Coloring Book, but also a tutorial on how to make your own coloring pages, easy-peasy!
Best of all, 20% of the proceeds from this project will go to The Malala Fund to help girls gain an education. Girls who might otherwise not get their chance at a basic human right so many of us take for granted.
Click the image linked below to sign up for your own FREE e-copy before it goes on sale!
Adult Coloring Books: Mixing Copics & Colored Pencils
February 7, 2016 February 7, 2016 | Christina Lorraine Young (a.k.a. seena)
Have you tried mixing markers with colored pencils when you color in your favourite adult coloring books? It's a technique I love not only because I get to use more of my stash, but also because markers can ensure a more even color base for adding depth & dimension with shading. Shading is half the fun! Here's a quick video where I show you what I'm talking about.
Did you know that I'm giving away FREE stuff?! Subscribe to my monthly newsletter and get your first prezzie, a free coloring page from The Art Journal Coloring Book! Subscribe here and stay tuned for more fun, prompts, freebies and more!
And when you color a page from my book, please share it on Instagram and Facebook with the hashtag #theartjournalcoloringbook so I can see how your pages look – can't wait!
FOTI: Coloring Books from the 1960s!
December 30, 2015 January 26, 2016 | Christina Lorraine Young (a.k.a. seena)
This is just too much fun to keep all to myself! Check out the full article here.
AVAILABLE NOW!!
December 1, 2015 January 27, 2016 | Christina Lorraine Young (a.k.a. seena)
I'm thrilled to announce that my book, The Art Journal Coloring Book, is now available for online purchases! Simply click the cover to be transported to the amazon site.
Sunday Fun
November 29, 2015 December 1, 2015 | Christina Lorraine Young (a.k.a. seena)
This is a wonderful comic strip!
Levins and Young
As today marks the last Sunday of our Rivertown News run, we thought a formal thank-you was in order for our volunteer colorist. As previously mentioned, we didn't do any official Sunday RN strips. But we thought it would be fun to add some color to a few regular strips to post on Sundays.
The coloring was done by Christina from studioseena.com (aka Mark's wife) using markers on printed scans which were then re-scanned and uploaded. She turned this around in record time and we appreciate her great-looking work. Now, please understand that these "colorized" strips are not canon, and were provided for entertainment purposes only (and in a fashion much classier than when Ted Turner tried to colorize Casablanca).
You can see these colored strips at these links: week 1, 2, 3, and 4. Unfortunately, a scan of a scan of a scan loses some quality with each…
The wait for my ART JOURNAL COLORING BOOK is almost over!
November 26, 2015 | Christina Lorraine Young (a.k.a. seena)
Here's a sneak peek at the cover!
Look for it at amazon.com by December 1st!
FOTI: Doodling Helps Memories Stick
November 8, 2015 | Christina Lorraine Young (a.k.a. seena)
Shelley Paul and Jill Gough had heard that doodling while taking notes could help improve memory and concept retention, but as instructional coaches they were reluctant to bring the idea to teachers without trying it out themselves first.
To give it a fair shot, Paul tried sketching all her notes from a two-day conference. By the end, her drawings had improved and she was convinced the approach could work for kids, too.
"It causes you to…
FOTI: Looking at lovely things—and people—can improve quality of life
The usual markers of happiness are colloquially known as the "Big Seven": wealth (especially compared to those around you), family relationships, career, friends, health, freedom, and personal values, as outlined by London School of Economics professor Richard Layard in Happiness: Lessons from a New Science. According to the Goldberg study, however, what makes people happiest isn't even in the Big Seven. Instead, happiness is most easily attained by living in an aesthetically beautiful city. The things people were constantly…
UK and Ireland tour of Bring It On The Musical cancelled
10 January 2022 | Mark Ludmon
Last Updated on 16 January 2022 by Showcall Editorial Team
Selladoor Worldwide has cancelled the rest of the UK and Ireland tour of Bring It On The Musical after it suffered losses due to Covid-19
Amber Davies and the company in Bring It On! The Musical. Photo: Helen Maybanks
The UK and Ireland tour of Bring It On The Musical has been called off after the production lost "hundreds of thousands of pounds" due to Covid-related cancellations.
The show was due to continue touring from the end of January until July after finishing its six-week run at the Southbank Centre's Queen Elizabeth Hall, where it will continue to play until 22 January.
However, today the producers said the tour could no longer go ahead because of "the impact of rising Covid-19 cases and self-isolation requirements", which led to 13 performances in London being cancelled.
In a statement, production company Selladoor Worldwide said: "Cancelling 13 performances has resulted in an overwhelming loss of income for the production during a peak period that would otherwise have provided a vital financial backbone of the tour.
"This lost income, amounting to hundreds of thousands of pounds, has sadly rendered the remainder of the tour financially unsustainable. It would be irresponsible for us to continue, and we therefore have no option but to cancel the remainder of the tour.
"We have not taken this decision lightly and have explored every possible alternative to avoid cancellation. Selladoor remain incredibly proud of this fantastically received production and are grateful to everyone who has worked so hard on it, and to all the audiences who have cheered us on.
"We are only too aware of the impact this will have on our wonderful cast, crew, musicians and creative team – as well as our audiences, venues and suppliers we had been due to work with during the tour. We are heartbroken that we have been forced into this position, but the impact of these cancellations caused by Omicron has left us with no other choice."
Customers who have purchased tickets for the UK and Ireland tour of Bring It On The Musical will be contacted by their point of purchase. Tickets for performances at the Southbank Centre are unaffected and remain valid.
The tour was originally due to go to Wolverhampton, Southampton, Edinburgh, Blackpool, Aberdeen, Manchester, Cheltenham, Nottingham, Birmingham, Dublin, Woking, Sheffield, Cardiff, Sunderland, Glasgow, Bradford and Milton Keynes.
The show, which originated on Broadway, stars Amber Davies, Chelsea Hall, Alicia Belgarde, Vanessa Fisher, Connor Carson and Louis Smith and was written by Tony Award winners Lin-Manuel Miranda, Jeff Whitty, Tom Kitt and lyricist Amanda Green.
Touring Shows | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,339 |
\section{Introduction}
\indent Geometrical frustration in magnetic materials occurs when
the spins are constrained by geometry in such a way that the
pairwise interaction energy cannot be simultaneously minimized for
all constituents. A special example is an exotic class of
crystalline solid known as spin ice ($Dy_{2}Ti_{2}O_{7}$,
$Ho_{2}Ti_{2}O_{7}$). Recently, Castelnovo \emph{et al.}
\cite{Castelnovo08} have proposed that these materials are the
repository of some elegant physical phenomena: for instance,
collective excitations above its frustrated ground-state
surprisingly behave as point-like objects that are the condensed
matter analogues of magnetic monopoles. Some recent experiments
\cite{Fennell09,Morris09,Bramwell09,Kadowaki09} have reported the
observation and even the measurement of the magnetic charge and
current of these monopoles in spin ice materials; in addition,
simulations also support these ideas
\cite{Jaubert09,Castelnovo10}. Besides, to turn research on
monopoles into a proper applied science, it will be necessary to
ask whether the basic ideas of dipole
fractionalization\cite{Castelnovo08,Nussinov07} that give a usual
spin-ice material its special properties can be realized in other
magnetic settings. One of the most promising candidates for
accomplishing this is the artificial version of spin ice
recently produced by Wang \emph{et al.} \cite{Wang06}.
In this system, elongated magnetic nano-islands are regularly distributed on a
two-dimensional ($2d$) square lattice. The longest axes of the
islands alternate in orientation, pointing along the
two principal axes of the lattice \cite{Wang06}. The
magnetocrystalline anisotropy of Permalloy (the magnetic material
commonly used to fabricate artificial spin ice) is effectively zero, so
that the shape anisotropy of each island forces its magnetic
moment to align along the longest axis, making the islands
effectively Ising-like.
Actually, the fabrication and study of such lower-dimensional
analogues of spin ice have received a lot of attention
\cite{Wang06,Moller06,Remhof08,Ke08,Zabel09,Mol09,Libal09,Moller09}.
Indeed, the ability to manipulate the constituent degrees of
freedom in condensed matter systems and their interactions is of
great importance for advancing the understanding of a variety of
natural phenomena. Particularly in this context, the possibility
of observing magnetic monopoles in artificial spin ices
\cite{Mol09,Moller09} is a timely problem given that these
magnetic compounds could provide the opportunity to see them up
close and also watch them move (for example, with the aid of
magnetic force microscopy). Very recently, the direct observation
of these defects in an artificial kagome lattice was reported by
Ladak \emph{et al.} \cite{Ladak10}. However, such an observation
(or its absence) in artificial square lattices remains a
stimulating challenge, as discussed below.
\begin{figure}
\includegraphics[angle=0.0,width=6cm]{fig_1a.eps}
\includegraphics[angle=0.0,width=6cm]{fig_1b.eps}
\caption{ \label{ModifiedSystem} (Color online) The modified
square lattice studied in this work. Top: top view of the
system. The arrows represent the local dipole moments
($\vec{S}_{\alpha(i)}$ or $\vec{S}_{\beta(i)}$).
Bottom: lateral view of the system showing the height offset
between islands. The original material produced by Wang \emph{et
al.}\cite{Wang06} is two-dimensional with $h=0$.}
\end{figure}
\begin{figure}
\includegraphics[angle=0.0,width=7cm]{fig_2.eps}
\caption{\label{Vertices} (Color online) Top: In the artificial
spin ice proposed in Ref.~\onlinecite{Moller06}, the spins
obeying the ice rule do not point along directions passing through the
center of a tetrahedron as they do in the natural spin ice
compounds. Bottom: Configurations of the spins obeying the ice rule
in a tetrahedron in the artificial (left) and the natural (right)
spin ices. This small distortion of the spin configuration causes
a residual ordering and, consequently, an energetic string
connecting the monopoles in the modified artificial system.}
\end{figure}
In a previous work \cite{Mol09} we have pointed out that monopoles
do not appear as effective low-energy degrees of freedom in
two-dimensional square spin ices, as they do in the
three-dimensional materials $\{Dy,Ho\}_{2}Ti_{2}O_{7}$. Due to the
antiferromagnetic order in the ground-state, the constituents of a
monopole-antimonopole pair become confined by a string, which
prevents them from moving independently. However, we have also argued
that above a critical temperature, the string's configurational
entropy may cause its tension to vanish, leaving the monopoles free. The
quantitative analysis of such a possible phase transition is under
current investigation\cite{workinprogress}. Meanwhile, other
strategies to find monopoles in synthetic spin ices have been
proposed. M\"{o}ller and Moessner \cite{Moller09} have suggested a
modification of the square lattice geometry in which they argue
that, considering a special condition, the string tension vanishes
at any temperature. This modification in the system produced by
Wang \emph{et al.}\cite{Wang06} consists of introducing a height
offset $h$ between islands pointing along the two different
directions \cite{Moller06,Moller09} (see Fig.
\ref{ModifiedSystem}; such a system is currently under
experimental planning\cite{Schiffer-priv-comm}). Their idea is
basically the following: if $h$ is chosen so that the
energies of all vertices obeying the ice rule become degenerate, then
an ice regime is established, leaving the monopoles ``free'' to
move (indeed, there is a Coulombic interaction between the
monopoles)\cite{Moller09}. For point-like dipoles they considered
that a degenerate state is obtained when the interactions between
nearest-neighbors ($J_1$) and next-nearest-neighbors ($J_2$) are
equal, leading to the following value for the height offset where
``free'' monopoles occur: $h_{ice}\approx 0.419 a$ (where $a$ is
the lattice spacing)\cite{Moller06}. Taking into account the
finite extension of the dipoles, the height offset diminishes and
as $\epsilon \equiv 1-l/a \rightarrow 0$ ($l$ is the length of the
island), the endpoints of the islands form a tetrahedron, so that
at $h= \epsilon a/ \sqrt{2}$ the ordering disappears, and the
monopoles become free to move\cite{Moller09}.
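For completeness, the value $h_{ice}\approx 0.419a$ can be reproduced from the point-dipole condition $J_1=J_2$ quoted above. The following is a sketch of our own reconstruction (the notation is ours, not taken verbatim from the cited works):

```latex
% With the offset h, a nearest-neighbor pair consists of two perpendicular
% point dipoles separated by \vec{r} = (a/2, a/2, h), so that
% \vec{S}_i \cdot \vec{S}_j = 0 and only the second term of the dipolar
% interaction survives:
\begin{equation*}
  |J_{1}(h)| = \frac{3}{4}\,\frac{D\,a^{5}}{\left(a^{2}/2+h^{2}\right)^{5/2}},
\end{equation*}
% while next-nearest neighbors are two collinear dipoles a distance a apart,
% with |J_2| = 2D. Setting |J_1(h_ice)| = |J_2| gives
\begin{equation*}
  \left(\frac{a^{2}}{2}+h_{ice}^{2}\right)^{5/2} = \frac{3}{8}\,a^{5}
  \quad\Longrightarrow\quad
  h_{ice} = a\sqrt{(3/8)^{2/5}-1/2} \approx 0.419\,a .
\end{equation*}
```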
Here we numerically calculate the energetics of the ground-states
and excitations in the modified square lattice as a function of
$h$. In our calculations we consider point-like dipoles forming
the lattice. Although the main physical aspects of the system
should be captured by this approximation, some parameter values
(such as the magnetic charge, string tension, critical height, etc.)
will be quantitatively altered in the realistic case in which
the islands have a finite length $l$. On the other hand, since we take into
account all the long-range dipole-dipole interactions, our results
are expected to describe the actual system more accurately. For
instance, while in the calculations of
Refs.~\onlinecite{Moller06,Moller09}, the ground-state changes its
configuration at $h=0.419a$, our results indicate that it occurs
at $h=h_1=0.444a$. Besides, we note that at least one of the several
configurations that satisfy the ice rule does not have the same
energy as the ``ground-states'' ($GS_1$ and $GS_2$) at this very
height, indicating that for $h=h_1$ the system is not in a completely
degenerate state.
We have also shown that the string tension
decreases rapidly as $h$ increases but does not vanish at any
value ($h\leq a$): rather, at $h=h_1$, its strength is about 20
times smaller than in the usual case $h=0$. A
possible cause of the finite string tension even
at $h_{1}$ is the fact that, concerning the spin configurations in
a tetrahedron, the artificial spin ice differs slightly
from its natural counterpart. For the artificial compounds
proposed in Ref.~\onlinecite{Moller09}, the localized magnetic
moments forming a corner-sharing tetrahedral lattice are forced to
point along the longest axes of the islands (here, the $x$- or
$y$-directions, see Fig.\ref{Vertices}), while in the original $3d$
spin ices they point along a $<111>$ axis (indeed, in this case,
the magnetic dipoles point along axes that meet at the centers of
tetrahedra). As a result of this mismatch, there is always a
single ordered ground state in the artificial systems, which is
responsible for the residual value of the string tension and its
anisotropy.
Another interesting result obtained here within the
point-like dipole approximation is that the magnetic charge of the
monopoles jumps as the system undergoes a transition in its ground
state. In addition, the strength of the interaction
between a monopole and its antimonopole is in general anisotropic, depending
on the lattice direction and on the type of order. However, as
expected from the above discussion, we note that the system's
anisotropy diminishes as $h$ goes to $h_{1}$. Actually, as $h$
increases from zero, the differences found in the values of the
``charges'' (as distinct directions for the monopole separation
are taken into account) decrease, and they tend to disappear as
$h\rightarrow h_{1}$, i.e., in the ice regime (nevertheless,
$h=h_{1}$ is not really an optimal ice regime, at least for
point-like dipoles).
\section{The Model and Results}
We model the system suggested in Refs.~\onlinecite{Moller06,Moller09} assuming that the magnetic
moment (``spin'') of each island is replaced by a point dipole at
its center. At each site $(x_{i},y_{i},z_{i})$ of a ``square'' lattice two
spin variables are defined: $\vec{S}_{\alpha(i)}$ with components
$S_{x}=\pm 1$, $S_{y}=0, S_{z}=0$ located at
$\vec{r}_{\alpha}=(x_{i}+a/2,y_{i},h)$, and $\vec{S}_{\beta(i)}$
with components $S_{x}=0$, $S_{y}=\pm 1, S_{z}=0$ at
$\vec{r}_{\beta}=(x_{i},y_{i}+a/2,0)$. Spins pointing along the
$y$-direction and spins pointing along the $x$-direction are in
different planes, separated by a height $h$ (see Fig.~\ref{ModifiedSystem}).
Hence, in a lattice of area $L^{2}=n^{2}a^{2}$
one gets $2\times n^{2}$ spins
(we have studied systems with $n=20,30,40,50,60,70$). Representing the island spins by
$\vec{S}_{i}$ (either $\vec{S}_{\alpha(i)}$ or
$\vec{S}_{\beta(i)}$), the modified artificial spin ice is
described by the following Hamiltonian:
\begin{eqnarray}\label{HamiltonianSI}
H_{SI} &=& Da^{3} \sum_{i\neq j}\left[\frac{\vec{S}_{i}\cdot
\vec{S}_{j}}{r_{ij}^{3}} - \frac{3 (\vec{S}_{i}\cdot
\vec{r}_{ij})(\vec{S}_{j}\cdot \vec{r}_{ij})}{r_{ij}^{5}}\right],
\end{eqnarray}
where $D=\mu_{0}\mu^{2}/4\pi a^{3}$ is the coupling constant of
the dipolar interaction. The sum is performed over all
$n^2(2n^{2}-1)$ pairs of spins in the lattice for
open boundary conditions (OBC), while for periodic boundary conditions (PBC)
a cut-off radius of $na/2$ was introduced.
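To make the pairwise sum concrete, a minimal brute-force sketch of the energy of Eq.~(\ref{HamiltonianSI}) under OBC is given below. This is an illustration only: the class and variable names are ours, each unordered pair is counted once (as in the text), and lengths and energies are measured in units of $a$ and $D$ respectively.

```java
/**
 * Sketch: brute-force O(N^2) evaluation of the dipolar energy of Eq. (1)
 * for point dipoles with open boundary conditions. Lengths are in units
 * of the lattice spacing a and energies in units of the coupling D.
 */
public class SpinIceEnergy {

    /** spins[k] = {x, y, z, sx, sy, sz}: position and unit moment of island k. */
    public static double dipolarEnergy(double[][] spins) {
        double e = 0.0;
        // Each unordered pair (i, j) is counted once, matching the
        // n^2(2n^2 - 1) pairs quoted in the text for a 2n^2-spin lattice.
        for (int i = 0; i < spins.length; i++) {
            for (int j = i + 1; j < spins.length; j++) {
                double rx = spins[j][0] - spins[i][0];
                double ry = spins[j][1] - spins[i][1];
                double rz = spins[j][2] - spins[i][2];
                double r2 = rx * rx + ry * ry + rz * rz;
                double r = Math.sqrt(r2);
                double siSj = spins[i][3] * spins[j][3]
                            + spins[i][4] * spins[j][4]
                            + spins[i][5] * spins[j][5];
                double siR = spins[i][3] * rx + spins[i][4] * ry + spins[i][5] * rz;
                double sjR = spins[j][3] * rx + spins[j][4] * ry + spins[j][5] * rz;
                // S_i.S_j / r^3  -  3 (S_i.r)(S_j.r) / r^5
                e += siSj / (r2 * r) - 3.0 * siR * sjR / (r2 * r2 * r);
            }
        }
        return e;
    }
}
```

For instance, two parallel collinear dipoles one lattice spacing apart return $-2$, i.e. an energy of $-2D$.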
\begin{figure}
\includegraphics[angle=0.0,width=4.2cm]{fig_3a.eps}
\includegraphics[angle=0.0,width=4.2cm]{fig_3b.eps}
\includegraphics[angle=0.0,width=4.2cm]{fig_3c.eps}
\caption{\label{groundstates} (Color online) (a) Ground-state
configuration for $h<h_1=0.444a$, $GS_1$. Note that this is
exactly the same state obtained in
Refs.\onlinecite{Moller06,Mol09}. (b) Configuration of the
ground-state $GS_{2}$ obtained for $h>0.444$. In $GS_{2}$, each
vertex has a net magnetization but globally the magnetization
vanishes. Note that the ice rule is manifested in every vertex.
(c) Another configuration that satisfy the ice rule but has an energy
higher than the configurations shown in (a) and (b) when $h=h_1$.
}
\end{figure}
The results presented here consider a lattice with $n=70$, which
contains $9800$ dipoles (islands) and PBC. We observed exactly the
same behavior for OBC and PBC and the size dependence of the
results is not appreciable. By using a simulated annealing
process (see Ref.~\onlinecite{Mol09}), the first thing to notice
is that the ground-state configuration changes for a critical
value of $h$. Indeed, as shown in
Fig.~\ref{groundstates} (a), for all values $h<h_1=0.444a$, the system
ground-state (hereafter referred to as $GS_1$) has exactly the
same form as that of the usual case in which $h=0$. However, for
$h>h_1$, the ground-state changes to $GS_2$ (see
Fig.~\ref{groundstates} (b)). Indeed, as $h \to h_1$,
the energies of both states are comparable, whereas for $h>h_1$
the state $GS_2$ is less energetic (see
Fig.~\ref{groundstatesenergy}). Such a result is in qualitative
agreement with the findings of Ref.~\onlinecite{Moller06}, which
places the transition at $h=h_{ice} = 0.419a$. As expected, both
configurations obey the ice rule (two spins point in and two point
out in every vertex), but while in $GS_{1}$ the magnetization is
zero at each vertex, in $GS_{2}$ it points diagonally, with
vanishing net magnetization. As shown in
Fig.~\ref{groundstatesenergy}, the energy of $GS_1$ increases
rapidly as $h$ increases while the energy of $GS_2$ is constant.
Actually, for this latter configuration, the horizontal and
vertical sub-lattices are decoupled. We note that these two
ground-states are metastable in the sense that they are local
minima and cannot be continuously deformed one into the other
without spending a considerable amount of energy; bringing
the dipoles from one state to the other costs the inversion of two
spins per vertex (half of the spins have to be inverted in the
whole system). This change has an $h$-dependent energy barrier,
roughly of the order of $160D$ (for $h=0.444a$ and
$n=70$), making this process very unlikely to occur
spontaneously. Thus, considering the system in the $GS_1$ state
and increasing the height continuously from $h=0$, $GS_1$ may
persist even for $h>h_{1}$ because of the large energy necessary
to change to $GS_2$. Besides, in Fig.~\ref{groundstatesenergy} we also
present the energy of the configuration shown in Fig.~\ref{groundstates} (c),
which also satisfies the ice rule but has an energy higher than those of $GS_1$
and $GS_2$ even for $h=h_1$. Consequently, the states satisfying the ice rule
are not completely degenerate.
\begin{figure}
\includegraphics[angle=0.0,width=5cm]{fig_4.eps}
\caption{\label{groundstatesenergy} (Color online) The energy per island of
the two ground-states ($GS_1$ and $GS_2$) and of the configuration
shown in Fig.~\ref{groundstates} (c) (in units of $D$) as a function of $h$ (in units
of the lattice spacing $a$).
Black circles represent the $GS_{1}$ energy while red squares
concern $GS_{2}$ and blue diamonds are for the configuration shown in Fig.~\ref{groundstates} (c).}
\end{figure}
\begin{figure}
\includegraphics[angle=0.0,width=5.0cm]{fig_5a.eps}
\includegraphics[angle=0.0,width=5.0cm]{fig_5b.eps}
\includegraphics[angle=0.0,width=5.0cm]{fig_5c.eps}
\caption{\label{cena10} (Color online) Three of the four basic shortest
strings used in the separation process of the magnetic charges.
Pictures (1) and (2) exhibit strings $1$ and $2$,
respectively used for $h<h_1=0.444a$. The red circle is the positive charge (north pole)
while the blue circle is the negative (south pole). For $h>h_1$
the ground-state is $GS_2$ and we used a linear string-path (not
shown above) and a diagonal path (picture (3)).}
\end{figure}
\begin{figure}
\includegraphics[angle=0.0,width=5cm]{fig_6.eps}
\caption{\label{Coulomb} (Color online) The monopole ``charge''
$q$ (see Eq.~\ref{potential}) obtained analyzing the energy in the
separation process of the charges for the two string shapes shown
in Fig.\ref{cena10} for $h<h_1$. When $h>h_1$, the charge
variation is shown for linear and diagonal string-paths. Here,
$q$ is in units of $Da$ while $h$ is in units of $a$. Note how the
anisotropy of the monopole interaction decreases considerably as
$h\rightarrow h_{1}$ from below.}
\end{figure}
Now, we consider the excitations above the ground-state. In the
two-in/two-out configuration, the effective magnetic charge
$Q_{M}^{i,j}$ (number of spins pointing inward minus the number of
spins pointing outward on each vertex $(i,j)$) is zero everywhere
for $h<h_1$ ($GS_1$) and for $h>h_1$ ($GS_2$). The most elementary
excited state is obtained by inverting a single dipole to generate
localized ``dipole magnetic charges". Such an inversion
corresponds to two adjacent sites with net magnetic charge
$Q_{M}^{i,j}=\pm 1$, which is akin to a nearest-neighbor
monopole-antimonopole pair \cite{Castelnovo08,Mol09}. Following
the same method of Ref. \onlinecite{Mol09}, it is easy to observe
that such ``monopoles'' can be separated from each other without
violating the local neutrality by flipping a chain of adjacent
spins. We choose four different ways they may be separated (see Fig.\ \ref{cena10}). Firstly,
using the string shape $1$ and starting in the ground-state
$GS_{1}$ (for $h<h_1=0.444a$) we choose an arbitrary site and then
the spins marked in dark gray in Fig.\ \ref{cena10} are flipped,
creating a monopole-antimonopole separated by $R=2a$. Next, the
spins marked in light gray are flipped and the separation distance
becomes $R=4a$, and so on. In this case, the string length ($X$)
is related to the charges separation distance $R$ by $X=4R/2$ (the
monopole and the antimonopole will be found along the same
horizontal or vertical line). Secondly, we also consider a
string-path of form $2$ (for $h<h_1$), making the separated
monopoles to be found in different lines (diagonally positioned;
now we have $X=2R/\sqrt{2}$). Two more equivalent ways
were studied for $h>h_1$, in which $GS_2$ is the
ground-state. In this case, however, differently from the
situation in the $GS_{1}$ state, the monopoles can be
separated by using a linear string-path (so that $X=R$) without
any violation of the ice rule. Finally, another monopole
separation studied for $GS_{2}$ is the ``diagonal path'' (or path
3), in which the charges are put on different lines. Our analysis
shows that besides the Coulombic-type term $q(h)/R$ (where
$q=\frac{\mu_0}{4\pi}q_1q_2<0$ is a coupling constant which
gives the strength of the interaction), the total energy cost of a
monopole-antimonopole pair has an extra contribution behaving like
$b(h)X$, brought about by the string-like excitation that binds
the monopoles, say,
\begin{equation}
V(R,h)=q(h)/R+b(h)X(R)+V_{0}(h)
\label{potential}
\end{equation}
where $V_{0}(h)$ is an $h$-dependent constant related to the
monopole pair creation (for instance, for $h=0$, $V_0(0)\approx 23D$
and $V(a,0) \approx 29D$). The results for the ``charge'' $q(h)$ are
shown in Figure \ref{Coulomb} for the range $0<h<a$. When $h<h_1$,
the excitations are considered above $GS_1$, and we observe that
there is a small $h$-dependent difference in the $q$-value for paths
$1$ and $2$, which vanishes as $h\to h_1$. At larger heights,
$h>h_1$, $q$ is evaluated with respect to $GS_2$, and is
$h$-independent for a linear string-path. However, for
path 3, it increases again as $h$ increases. Therefore, the
interaction of a monopole with its partner (antimonopole) is
anisotropic in artificial spin ices. Perhaps, it would be more
appropriate to redefine things in such a way that
$q=\frac{\mu_0}{4\pi}Q_1Q_2 \alpha(h,\phi)$, where
$q_{1}q_{2}=Q_1Q_2 \alpha(h,\phi)$ and the actual value of the
charges $Q_{1}=-Q_{2}$ is independent of the angle $\phi$ that
the line connecting the poles makes with the $x$-axis. In this
case, the anisotropy of the interaction (coming from the
background) is implicitly considered in the function
$\alpha(h,\phi)$ but its complete expression was not evaluated
here. Since $\alpha(h_{1},\phi)$ tends to be a constant (independent of
$\phi$), we set $\alpha(h_{1},\phi)=1$ and so, only around the ice
regime (i.e., $h \approx h_{1}$), the interaction tends to be
isotropic. Thus we can find the genuine strength of the magnetic
charge in this artificial compound as being $ Q_1 = \pm \sqrt{4\pi
\mid q(h_{1}) \mid / \mu_{0}} \approx \pm\ 1.95 \mu/a $, where we
have used $ \mid q(h_{1}) \mid =3.8 Da $. Just for
comparison, using some parameters of Ref.\onlinecite{Wang06} such
as $a=320$ nm, we get a charge value which is about $80$ times
larger than the typical value found for the original $3d$ spin
ices \cite{Castelnovo08} (or about 100 times smaller than the
Dirac fundamental charge). Besides its anisotropy, another
interesting fact about the Coulombic interaction in the artificial
compounds is that it jumps at $h=h_{1}$. Indeed, at this point,
$q$ abruptly changes from $q_{<} \approx -3.8Da$ to $q_{>}\approx
-3.4Da$ when the linear path is taken into account. Such
a discontinuity may be attributed to the ground-state transition
and to the fact that, above $GS_2$, the Coulombic interaction between a pair
somewhat incorporates the residual magnetization stored in each
vertex. On the other hand, keeping a diagonal separation
of the monopoles across the ground-state transition, the magnetic
charge parameter $q$ increases almost continuously.
\begin{figure}
\includegraphics[angle=0.0,width=5cm]{fig_7.eps}
\caption{\label{tension} (Color online) The string tension for the
two string shapes shown in Fig.\ref{cena10} for $h<h_1$ and for a
linear string-path and path 3 (diagonal) for $h>h_1$. The
green dot and the dashed lines represent an extrapolation of our
data.}
\end{figure}
How the string tension $b$ depends upon $h$ is shown in
Fig.\ref{tension}. Note that, while $GS_1$ is the
ground-state ($h<h_1$), $b$ diminishes as $h$ increases. At larger
heights, evaluated over $GS_2$, the tension remains a
small non-vanishing constant for the linear path and increases
again for diagonal separation (path 3). In general,
since $b$ is also a function of $\phi$ (i.e., $b(h,\phi)$), it is
energetically more favorable for a pole and its antipole to reside
on the same line in the array. It should be remarked
that, near the ice regime, $b(h_{1},\phi)$ is almost independent
of $\phi$ (almost isotropic limit) and its value is about 20 times
smaller than that of its counterpart at $h=0$ (at zero temperature). In principle,
this result indicates that free monopoles do not appear in this
system. Then, the modified array \cite{Moller09} faces a small obstacle
in the fact that the islands are placed in such a way that the
spins cannot point toward the center of a tetrahedron as they do in
the $3d$ materials. Indeed, as pointed out before, the spins in
the artificial compound point along its edges (see
Fig.\ref{Vertices}); the islands are rigid objects that permit
the spins to point only along their longest axes. This disparity
causes an ordering in the artificial material, which diminishes as
$h$ increases; eventually it becomes tiny but persists at
$h=h_{1}$. This persistent ordering contributes to the residual
string tension at the ice regime and also to the different string
tension values as the monopoles are located at different angular
positions in the array. Such a difficulty may be overcome when one
takes the limit $l\rightarrow a$ in the modified array. As pointed
out in Ref.\onlinecite{Moller06}, the mechanism responsible for
the equivalence between the artificial ($2d$) and natural $3d$
spin ices is not operational in $d=2$, as it also requires the
dimensionality of the dipolar interaction to coincide with that of
the underlying lattice. Here, we have $d=3$ dipolar ($1/r^{3}$)
and Coulombic (``monopolar'', $1/R$) interactions in a
two-dimensional array. Independently of this, since the state
$GS_1$ is metastable, one could ask whether the excitations could be
considered to lie in $GS_1$ for $h$ slightly greater than $h_1$.
In this case, the extrapolation of our results indicates that the
string tension may vanish at $h\approx 0.502 a$ (see Fig.
\ref{tension}).
\section{Summary}
In summary, we have investigated the energetics of the modified
artificial spin ice, expressing several quantities, such as the
ground-state energies, magnetic charges and string tension, as a function
of the height offset $h$. Our analysis shows that the ground-state
changes from an ordered antiferromagnetic to a ferromagnetic one
at $h=h_1\approx 0.444a$, which is in good agreement with the
value obtained in Refs.~\onlinecite{Moller06,Moller09},
$h_{ice}\approx 0.419a$. We claim that this small difference
comes from the fact that in those works the authors assumed
equal nearest-neighbor and next-nearest-neighbor
interactions, whereas we have taken all the dipole interactions
into account. For the excitations above the ground-state we have
found that the magnetic charges interact through a Coulomb
potential supplemented by a linear confining term with tension $b(h)$,
which decreases rapidly as $h$ increases from $0$ to $h_1$,
assuming a non-vanishing constant value at larger $h$.
Actually, the system presents an anisotropy that manifests
itself in both the Coulombic and linear interactions and it tends
to diminish as $h$ increases, almost disappearing at $h=h_{1}$.
The source of this anisotropy is a residual ordering, which still
persists even in the ice regime (at $h=h_{1}$ for point-like
dipoles). Ordering and anisotropy may disappear completely in the
ideal limit $l\rightarrow a$, $h\rightarrow 0$. Another
interesting result is that the magnetic charge jumps, depending on
the direction in which the monopoles are separated, as the system
undergoes a transition in its ground state. For a separation of
the monopoles, vertex by vertex, along the same line of vertices,
which is possible only in the $GS_{2}$ ground state, the coupling
$q$ exhibits a considerable discontinuity relative to its limiting
value in the $GS_{1}$ ground state.
On the other hand, it tends to grow continuously for the
diagonal path across the transition. Although the residual ordering
leads to a confining scenario for monopoles, its very small
strength for $h\approx h_1$ signals a significant
tendency toward monopole-pair unbinding at a critical (optimal) height
offset, even at zero temperature. Further improvements in model
(1), for instance, taking the actual finite size of the dipoles
into consideration, could shed some extra light on this issue.
Additionally, temperature effects may also facilitate the
conditions for free monopoles. Indeed, the string configurational
entropy is also proportional to the string size and therefore, at
a critical temperature\cite{Mol09} on the order of $ba$, the
monopoles may become free. In view of that, for small $b$, the
monopoles should be found unbound at very low temperatures.
As a final remark we would like to stress that these results
show that the background configuration of spins has a deep effect
on the interactions between the charges, being responsible for the string tension,
anisotropies and a kind of screening of the charges.
\section*{Acknowledgments}
The authors thank CNPq, FAPEMIG and CAPES (Brazilian agencies) for
financial support.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,912 |
Earl Anthony Johnson (born 1953), better known as Earl Zero, is a Jamaican reggae singer whose career began in the 1970s. He is the uncle of Toronto rapper Raz Fresco.
History
Born in 1953 in the Greenwich Town area of Kingston, Johnson was the eldest of ten children; his father was a fisherman and his mother a fishmonger. He began his career in the 1970s, first as a member of the group Rush-It with his childhood friend Earl "Chinna" Smith, recording for producer Bunny Lee, who gave him the name 'Earl Zero' to distinguish him from Smith. His own recording of his song "None Shall Escape the Judgement" went unreleased, but it was a hit for Johnny Clarke, who recorded it in 1974, and Zero first had success as a singer himself with the Al Campbell-produced "Righteous Works" in 1975. He recorded for Don Mais' Roots Tradition label (recording "Home Sweet Home" and "I No Lie"), joined Tommy Cowan's Talent Corporation roster, and had further success with "Please Officer" and "City of the Weak Heart", released on Cowan's Arab label, the latter also recorded by Jacob Miller on his Killer Miller album. In 1976 he moved on to work with Bertram Brown, recording "Get Happy", and again recorded with Campbell on "Heart Desire". He continued recording through the late 1970s, working with Soul Syndicate. His first two albums, Visions of Love and In the Right Way, were released in 1979.
In 1979 he relocated to northern California, and has continued to record since, with several albums released. His latest album, Marketplace, is due for release in Spring 2011 with producer Siahvash Dowlatshahi. The album features members of the Roots Radics, The Greyboy Allstars, The Devastators and others.
Discography
Visions of Love (1979), Epiphany
In the Right Way aka Only Jah Can Ease the Pressure (1979), Freedom Sounds/Student
Roots & Romance (2007)
And God Said to Man (2009), A-Lone
Marketplace (2011), Foreign Key Records
Earl Zero Meets Sideway - Big Fisherman In Dub (2020), Sideway Outernational Records
External links
Earl Zero at Roots Archives
Foreign Key Records
| {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,647 |
Q: Monitoring third-party API calls using Prometheus In my Spring Boot application, I'm consuming third-party public APIs. I want to monitor these third-party API calls: the total number of API hits, the total number of success responses, and the number of error responses. I have already added the Micrometer Prometheus dependency micrometer-registry-prometheus-1.1.3.
How can I get custom API monitoring?
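One common approach (a sketch, not an official Micrometer recipe; the class name, metric name, and tags below are illustrative, and it assumes micrometer-core is on the classpath with a MeterRegistry available for injection) is to wrap each outbound call in a Micrometer Timer carrying an outcome tag. The timer's count gives the total number of hits, the outcome tag splits successes from errors, and latency comes for free:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

import java.util.function.Supplier;

// Illustrative helper: wraps any third-party call and records one timer
// series per API and outcome. The timer's count gives the number of hits;
// the "outcome" tag separates success from error responses.
public class ThirdPartyMetrics {

    private final MeterRegistry registry;

    public ThirdPartyMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public <T> T record(String api, Supplier<T> call) {
        Timer.Sample sample = Timer.start(registry);
        String outcome = "success";
        try {
            return call.get();
        } catch (RuntimeException e) {
            outcome = "error";
            throw e;
        } finally {
            // Appears in Prometheus as, e.g.,
            // thirdparty_api_calls_seconds_count{api="...",outcome="..."}
            sample.stop(Timer.builder("thirdparty.api.calls")
                    .tag("api", api)
                    .tag("outcome", outcome)
                    .register(registry));
        }
    }
}
```

Usage would look like `metrics.record("users", () -> restTemplate.getForObject(url, String.class))`. With Spring Boot Actuator on the classpath, the resulting series are exposed at /actuator/prometheus. Note also that Spring Boot auto-instruments RestTemplate instances created via RestTemplateBuilder under the `http.client.requests` meter, which may already cover the basic per-status counts.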
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,649 |
Da die Wohnung so weit oben ist, ist die Aussicht aus den Fenstern echt super. This is what it means in English (remember: these aren't one-to-one translations): I live in a single-family house. Wir haben keinen Keller, aber hinter unserem Haus gibt es Pferdeställe. It consists of three bedrooms, one living room, one kitchen, one dining room, a computer room, two bathrooms, one guest room, and an attic.
So, if you are living in a single family home check out this example! Since our flat is so high up, the views from the windows are really great. To the left of our house, there is a little guest house for our summer guests. And here is how I would say it in English: I live in a two-bedroom flat on the top floor of a multi-story building with my family.
He has great patience and explains French pronunciation very nicely. Elle habite dans une maison neuve. My desk is quite large, and I have my own computer on it. Meine Wände sind voll von Fotos und Postern.
What is the French translation for 'at my house'?
As in English, dans describes abstract situations and state: Il est dans les larmes. Il est dans la salle à manger. La salle de séjour est très grande et à côté, il y a un petit salon. Ma maison a deux chambres : la première pour moi et ma femme avec un grand lit. These pronouns come before the verb they modify: Il vous aime.
She runs her house well. For example, a massage therapist would be either a masseur or a masseuse. It also means you get this kind of situation: Sa mère. Again, you can't tell the gender of the child in this example, because the possessive adjective is only interested in the gender of the noun it's describing. Note that the final consonant is pronounced only in the female form. In English, when describing something you like, you do not use the article. Many descriptive words used in English are actually borrowed from French.
Hinter dem Haus ist ein toller Spielplatz mit Blumenbeeten drum herum. Weil wir nicht genug Platz in unserem Zimmer für all unsere Sachen haben, haben wir einen Teil unserer Spielsachen und Bücher in den Keller getan. Ils habitent à la ville, mais nous habitons à la campagne. Because we do not have enough space for all our stuff, we put some of our toys and books in our cellar. Unsere Nachbarn sind manchmal ein bisschen laut, wenn sie in ihrer Wohnung feiern, und dann können wir nicht schlafen.
They live in the city, but we live in the country.

Floors:
l'étage (m) (ay-tahzh): level, floor
le rez-de-chaussée (rayd-shoh-say): ground floor, lobby
le premier étage: first floor
le deuxième étage: second floor
le troisième étage: third floor
le plain-pied (luh plan-pyay): single-story apartment, space on the same floor
de plain-pied (duh plan-pyay): single-story, on the same floor
À surface égale, les maisons de plain-pied nécessitent un terrain plus grand.

The home is the center of French family life, so words identifying the house, furniture, and areas of the home are a part of everyday language for French people. Mein Schreibtisch ist ziemlich groß, und ich habe meinen eigenen Computer darauf stehen. In the house, we have five bedrooms, two bathrooms, a guest toilet, a living room, a small library, a study, a laundry room, a big kitchen, and an attic for storage. La cuisine est toute petite et nous y mangeons le soir. We also celebrate our birthdays there.
Copyright © 2019 HubPages Inc. Il est dans sa nature de parler à tort et à travers. They're fun, friendly and stress-free! Der Computer befindet sich im Büro. He likes to talk to you.
Times when noun gender doesn't matter: There are a couple of times when it doesn't matter whether you're talking about a masculine noun or a feminine noun; the possessive adjective will always be the same. I have those papers on my desk. Ma maison est grande et belle. It covers some common adjectives to describe a house, gets students thinking about their opinions on their own houses, and then there is a link to a luxury real estate agency in France at the end, just for fun. The first three are designed for language learners. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,475 |
Every website has to earn a rank to be seen on the search engine results page. To earn this rank, website owners must follow the guidelines set by the search engines, and other strategies are put to work to optimize for the search engines. One needs to come up with unique and trustworthy content as well as gain more and more backlinks, which will play a part in ranking the website. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,078 |
Q: Given finitely many squares whose areas add up to $1$, show that they can be arranged without overlaps inside a square of area $2$.

Problem
Given finitely many squares whose areas add up to $1$, show that they can be arranged without overlaps inside a square of area $2$.
[Taken from the book Putnam and Beyond by Razvan Gelca and Titu Andreescu]
Given Proof
The guess is that a tight way of arranging the small squares inside the big square is by placing the squares in order of decreasing side length.
To prove that this works, denote by $x$ the side length of the first (that is, the largest) square. Arrange the squares inside a square of side $\sqrt 2$ in the following way.
* Place the first in the lower-left corner, the next to its right, and so on, until obstructed by the right side of the big square.
* Then jump to height $x$, and start building the second horizontal layer of squares by the same rule. Keep going until the squares have been exhausted.
Let $h$ be the total height of the layers. We are to show that $h \le \sqrt 2$, which in turn will imply that all the squares lie inside the square of side $\sqrt 2$. To this end, we will find a lower bound for the total area of the squares in terms of $x$ and $h$.
Let us mentally transfer the first square of each layer to the right side of the previous layer. Now each layer exits the square.
$\color\red{\text{It follows that the sum of the areas of all squares but the first is greater than or equal to } (\sqrt 2 − x)(h − x). \text{ This is because each newly obtained layer includes rectangles of base } \sqrt 2−x \text{ and with the sum of heights equal to } h−x}$.
From the fact that the total area
of the squares is $1$, it follows that $$x^2 + (\sqrt 2 − x)(h − x) \le 1$$
This implies that
$$h \le \frac{2x^2 − \sqrt 2 x − 1}{x − \sqrt 2}$$
That is, $h \le \sqrt 2$ will follow from $$\frac{2x^2 − \sqrt 2 x − 1}{x − \sqrt 2} \le \sqrt 2$$
Since $x \le 1 < \sqrt 2$, the denominator $x − \sqrt 2$ is negative, so multiplying both sides by it reverses the inequality; the condition is therefore equivalent to
$$2x^2 − 2\sqrt 2 x + 1 \ge 0,$$
or $(x\sqrt 2 − 1)^2 \ge 0$,
which is obvious and we are done.
My Doubt
What I don't understand is the $\color{red}{red}$ part when the author says
It follows that the sum of the areas of all squares but the first is greater than or equal to $(\sqrt 2 − x)(h − x)$. This is because each newly obtained layer includes rectangles of base $\sqrt 2−x$ and with the sum of heights equal to $h−x$
I wonder if someone could explain why the layer includes such rectangles?
In my opinion when the square is stacked, there is always space beneath each layer since the left square of the previous layer is larger than the squares on the right (beneath this layer).
A: Maybe this extended image helps:
The dotted squares on the right are the copied first squares of the next row.
The red rectangles are covered completely by squares (after moving the first squares down and to the right); each red rectangle has a width of $\sqrt2-x$; the sum of the heights of the red rectangles is the sum of the green bars on the right, which equals the sum of the same green bars on the left; as the green bars on the left connect to one long bar of length $h-x$, we conclude that the total red area is $(\sqrt2-x)(h-x)$, and of course is $\le 1-x^2$.
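The greedy layered packing from the proof is easy to sanity-check numerically. Below is a minimal Python sketch (not from the source; the function name `pack_height` and the random test harness are my own): it sorts the side lengths in decreasing order, fills rows of width $\sqrt 2$ from left to right, and returns the total height used, which by the argument above should never exceed $\sqrt 2$ when the areas sum to $1$.

```python
import math
import random

def pack_height(sides):
    """Greedy shelf packing: sort sides in decreasing order, fill rows
    of width sqrt(2) left to right, and return the total height used."""
    big = math.sqrt(2)
    sides = sorted(sides, reverse=True)
    height = 0.0   # total height of completed rows
    row_h = 0.0    # height of the current row (its first, largest square)
    x = big        # current horizontal position; forces a new row at start
    for s in sides:
        if x + s > big:      # square doesn't fit: start a new row
            height += row_h
            row_h = s        # first square of a row is its tallest
            x = s
        else:                # square fits in the current row
            x += s
    return height + row_h

# Random instances: squares with areas summing to 1 always fit in area 2.
random.seed(0)
for _ in range(1000):
    areas = [random.random() for _ in range(random.randint(1, 20))]
    total = sum(areas)
    sides = [math.sqrt(a / total) for a in areas]  # normalize area sum to 1
    assert pack_height(sides) <= math.sqrt(2) + 1e-9
```

For example, four squares of side $1/2$ form two rows of height $1/2$ each, for a total height of $1 \le \sqrt 2$.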
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,966 |
{"url":"https:\/\/hal.archives-ouvertes.fr\/hal-01304448","text":"# Lattice study of pion-pion scattering using Nf=2+1 Wilson improved quarks with masses down to their physical values\n\n2 CPT - E1 Physique des particules\nCPT - Centre de Physique Th\u00e9orique - UMR 7332\nAbstract : We use 2HEX smeared gauge configurations generated with an $\\mathrm{N}_\\mathrm{f}\\mathrm{=2+1}$ clover improved Wilson action to investigate $\\pi\\pi$ scattering in the $\\rho$ channel. The range of lattice spacings (0.054 to 0.12 fm) and space-like extents (32 and 48) allows us to extract the scattering parameters through the volume dependence of the $\\pi\\pi$-state energies according to L\\\"uscher's formalism. The pion masses (134 to 300 MeV) are light enough to allow the decay of the rho and the level repulsion observed indicates that our data are sensitive to the interaction. We analyse our data with a multi-channel GEVP variational formula. Our results are in good agreement with the experimental values and consistent with a weak pion mass dependence of the $\\rho\\pi\\pi$ coupling constant.\nDocument type :\nConference papers\nDomain :\n\nhttps:\/\/hal.archives-ouvertes.fr\/hal-01304448\nContributor : Laurent Lellouch <>\nSubmitted on : Tuesday, April 19, 2016 - 6:10:38 PM\nLast modification on : Wednesday, January 23, 2019 - 2:38:30 PM\n\n### Identifiers\n\n\u2022 HAL Id : hal-01304448, version 1\n\u2022 ARXIV : 1410.8447\n\n### Citation\n\nThibaut Metivet, On Behalf of The Budapest-Marseille-Wuppertal Collaboration. Lattice study of pion-pion scattering using Nf=2+1 Wilson improved quarks with masses down to their physical values. 32nd International Symposium on Lattice Field Theory (Lattice 2014), Jun 2014, New York, United States. 
\u27e8hal-01304448\u27e9\n\nRecord views","date":"2019-04-21 04:20:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.30906206369400024, \"perplexity\": 5079.21508030958}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578530176.6\/warc\/CC-MAIN-20190421040427-20190421061348-00039.warc.gz\"}"} | null | null |
_Controlled Burn_
By Shannon Stacey
Rick Gullotti lives the good life. He fights fires alongside men he considers brothers and dates beautiful women—though none for very long. And thanks to helping out the elderly couple who own his building, his rent is low. But when his concerns about their health lead to a long-legged southern California girl appearing on his doorstep, life as he knows it starts getting away from him.
Jessica Broussard has no interest in leaving sunny San Diego or her cushy corner office for the rougher side of Boston, but her father dispatches her to deal with the grandparents she's never met. She's unprepared for the frigid winter, loving relatives who aren't the monsters she's been led to believe, and the scruffy-but-hot danger-junkie firefighter who lives upstairs.
At first, Jessica is determined to get out of Boston Fire country and back to her comfortable life as quickly as possible. All she has to do is talk her grandparents into selling their monstrosity of a house. But she underestimates Rick's dedication—and his considerable charm. Nobody's taking advantage of his friends on his watch, even if that makes the very tempting Jessica his adversary. Unfortunately for them both, the only thing more urgent than the matter at hand—and more powerful than their differences—is their sizzling chemistry, and it's quickly becoming too strong to resist.
73,400 words
Dear Reader,
You know what goes great with holiday cookies, long checkout lines (or queues, for those of you not in North America) and a stressful holiday season that makes you want to sneak off and steal a few seconds for yourself? Carina Press books! Buy them from your phone or iPad and read them while hiding in the closet with a batch of cookies, or while leaning against your shopping cart in line while you try to block out the incessant holiday tunes being piped in over bad speakers in the store.
Right in time for those days off over American Thanksgiving is the release of _Controlled Burn_ , the second in Shannon Stacey's Boston Fire series. The firefighters in this book are so hot, you'll be tempted to light your turkey on fire in order to get a visit from your local firemen, but please don't do that. And for those of you not in the US, you might not have Thanksgiving days off, but that's okay—you're not going to want to wait to read it anyway, so go ahead and call in sick to work! Not to worry, if you haven't read _Heat Exchange_ , the first in this trilogy, these romances stand alone so you can read _Controlled Burn_ now and have another steamy hero to look forward to later.
Geek extraordinaire Lexi Carmichael is back in this newest mystery romance, and she's a fish out of water without her beloved technology in the deadly jungle of Papua New Guinea with Chinese thugs on her tail. She'll need to survive on wits alone...how hard could that be? Pick up _No Room for Error_ by Julie Moffett this December, or go back to the beginning of this zany, romantic series in _No One Lives Twice_.
If you love Lauren Dane's books as much as I do, you've been waiting not-so-patiently for a year to see Rowan kick some butt in the newest Goddess with a Blade urban fantasy, _At Blade's Edge_. Rowan Summerwaite is no ordinary woman. Raised at the knee of The First and honed into a weapon by the Hunter Corporation, she wields ancient knowledge from the Goddess Brigid...and is newly married to a powerful Vampire scion. But instead of being on a much-anticipated honeymoon, Rowan is in London gathering her allies and the evidence necessary to drive out the rot within Hunter Corp. and expose whoever is at the top. She'll let no one get in her way.
Kate Willoughby has another sexy, fun stand-alone contemporary romance for us in _Under the Spotlight_. Veteran NHL hockey player Joe Rutherford is accustomed to being in the spotlight, but when he falls for an actress whose career is just starting to take off, he must face the fact that his own might be ending sooner than later. And don't miss _On the Surface_ , _Across the Line_ and _Out of the Game_.
We welcome Annabeth Albert to the Carina Press lineup with her new #gaymers series. In _Status Update_ , a quirky video game designer is stranded, and a charming but reclusive archeologist comes to his rescue. Their sizzling attraction blooms in the middle of a snowstorm, but forging a future together means thawing out frozen hearts and unlocking closet doors.
Last this month is the trilogy end we've all been waiting for—book three of Caitlyn McFarland's Dragonsworn trilogy. We fell in love with dragonlord Rhys and his dragonmate Kai in _Soul of Smoke_ , we cheered and cried for them in _Shadow of Flame_ , and now we finally get to experience the end of their journey, the thrilling conclusion to the battle...and the beginning of their HEA in _Truth of Embers_.
That's our new-release lineup for this December, because we're taking a few weeks off from new releases over the peak of the holiday season, but never fear, we have a backlist of nearly one thousand titles for you to browse, including an incredible selection of _holiday-themed novellas_ you may have missed the first time around!
Paranormal romance fans should definitely read _A Galactic Holiday_ , or _Winter Wishes_ with novellas by Vivian Arend, Moira Rogers and Vivi Andrews. For those who love male/male romance, take a look at _His for the Holidays_ , or _Men Under the Mistletoe_ , which features Josh Lanyon, K.A. Mitchell, Harper Fox and Ava March. If a little extra heat in your holiday is what gets you moving, check out erotic holiday romance anthology _Season of Seduction,_ or _Red Hot Holiday_ with Anne Calhoun, Leah Braemel and K.A. Mitchell.
And if you love a good, sigh-worthy, make-your-heart-happy contemporary romance, we have quite a selection for you to choose from, including anthologies like _Romancing the Holiday_ , _All I'm Asking For_ , _Holiday Kisses_ , and novellas by Jaci Burton, Shannon Stacey, Brighton Walsh, Kat Latham and more. And don't miss one of my personal favorites— _Starting from Scratch_ by Stacy Gail.
Whatever you read and wherever you are, the team at Carina Press thanks you for making 2015 an incredible year of publishing and wishes you the very happiest of holiday seasons, with only wonderful books to help you make it through!
And, as always, until next month here's wishing you a wonderful month of books you love, remember and recommend.
Happy reading!
Angela James
Editorial Director, Carina Press
Contents
Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
Chapter Six
Chapter Seven
Chapter Eight
Chapter Nine
Chapter Ten
Chapter Eleven
Chapter Twelve
Chapter Thirteen
Chapter Fourteen
Chapter Fifteen
Chapter Sixteen
Chapter Seventeen
Chapter Eighteen
Chapter Nineteen
Excerpt from Exclusively Yours by Shannon Stacey
Author Note
Acknowledgments
Also by Shannon Stacey
About the Author
About The Boston Fire Series
Copyright
Chapter One
"Five bucks says she requested Ladder 37 when she called 911."
Rick Gullotti glared at Gavin Boudreau, then shook his head. "That's bullshit."
They were back at the station after a run and, as the lieutenant of Boston Fire's Ladder 37, he had to stay in the bay with the guys and take care of the gear. Even if they were being idiots. In the bay next to him, the guys from Engine 59 were doing the same. Stowing the gear, checking tanks and supplies. The ladder truck and the pumper engine that shared the three-story brick firehouse always rolled together, and the guys of L-37 and E-59 operated well as a team.
A team whose members loved to give each other shit, Rick thought as Scotty Kincaid yelled from the other side of the bay. "That's the fourth time that woman's needed the fire department in six months, Gullotti. Must be rough when all your emergencies happen while you're still in your lace nightgown."
"Maybe it's you she's after," Gullotti called back.
"It wasn't me she hugged with so much...gratitude."
Yeah, that had been awkward. He didn't mind being offered cookies or invited to stay for lunch, but the hugging he usually managed to avoid. Thankfully he hadn't taken his bunker coat off, so the feel of a curvy woman in satin and lace hadn't gotten through, but he was going to have to be more careful in the future.
"She was definitely grateful." Chris Eriksson—who was one of the older guys in the house, but avoided promotions due to an extreme aversion to paperwork—paused in the act of wiping down L-37's bumper to smirk at him.
Rick's phone vibrated in his pocket, and he pulled it out, anticipating a summons from upstairs. It wasn't going to take long for the story to circulate, and he knew they'd have to come up with a way to gently discourage the woman's attempt to date via frivolous emergency calls. Not only was it a waste of time and money but, if it escalated, she could accidentally burn down her house.
But the text was from Karen Shea. She was a nurse he'd dated for a while before she met a guy who had the potential to be the one she wanted to spend the rest of her life with.
They just brought Joe into the ER. Stable, but he took a fall and Marie got upset.
Shit. He'd rented the third floor of Joe and Marie Broussard's house for years, and the elderly couple had become more than just landlords. They were like family, and worry settled in the pit of his stomach.
We're wrapping up after a run. I can sneak over for a few mins.
I'll tell them. Marie's having tea and Joe's griping about having to wait for scans.
They were okay, then. And he knew Karen would keep an eye on them until he got there.
"Tell me you didn't give her your number," Eriksson said, nodding at the phone in Rick's hand.
"Who?"
"The grateful lady in the lace nightgown."
"Hell, no. It's Joe and Marie. They're in the ER with Karen."
"Damn. Is it serious?"
"Joe fell and she got upset, I guess. Nothing critical, but I need to tell Cobb I'm heading out and get over there. If a call comes in, bring my gear and I'll meet you there."
"Will do." Chris snorted. "And we'll leave you some of this grunt work to do, too. Trust me."
The emergency room wasn't busy, so he asked if Karen was free instead of asking for the Broussards. He wanted more information before he saw the older couple. About five minutes later, Karen came into the waiting room and smiled at him.
He gave her a quick hug because they'd stayed friends, but a flash of light caught his eye. There was a diamond ring on her left hand, and he took hold of her fingers to give it a look.
"That was fast," he said.
She was practically beaming. "Yeah, but when it's right, it's right. And we have a little incentive to make it legal."
It took a few seconds for her words to sink in, and he realized she was pregnant. Genuine happiness for her came first, but on the heels of that was a pang of regret. He really liked Karen and he wished they'd had whatever chemistry it was she shared with the lucky guy she was going to marry.
But how many times had he heard himself referred to as not the marrying kind? More times than he could count, even if he wasn't totally sure what that meant.
"Congratulations," he said, making sure she could see his sincerity on his face. "He's a good guy."
"He is." It looked as if she was going to get all misty-eyed, but then she put her nurse face back on. "Okay. I probably shouldn't have texted you. Marie's calmed down and it's looking like Joe's going to be punted out as soon as the scans are done. But her blood pressure was up and she looked a little dizzy when they brought them in."
"Always text me," he said. "Where did he fall?"
"At the bottom of the stairs. He was trying to measure to see about putting in a stair lift so Marie can get upstairs to her craft room and he says his sock slid on the hardwood tread because she didn't get all the Murphy Oil Soap wiped up."
Rick sighed and rubbed the back of his neck. "The house is too much for them. And Marie won't let me hire a cleaning service for them no matter how hard I push."
"I hate to tell you this, but Joe's doctor was here making rounds, so the ER doc pulled him in. They want to talk about elder care options."
"It's probably time to start having those discussions, I guess. If he sets up a time, I can be with them and keep them honest. They're still in denial when it comes to their limitations."
Karen hesitated, then exhaled. "The other nurses and I call you because we know you, but Joe and Marie haven't updated their legal information. Dr. Bartlett already left a message at the last known contact for their son."
"They called Davey?" Rick shook his head. "That douche bag probably won't even return the call."
"I just thought you should know before you see them."
"Do they know? About the call, I mean?"
"I don't think the doctor's been in to follow up with them yet, so probably not."
He should tell them himself, before the doctor did, so Joe and Marie wouldn't be taken off guard. Their son was a painful subject and they were already having a shitty day. "We should probably make sure Joe isn't making a break for it."
Recognizing the change of subject for what it was, Karen led him through the security doors and down the hall to a curtained-off room.
Marie stood when she saw him and held out her arms. Rick hugged her, some of his worry eased by the steadiness in her slim, tall figure. Even at seventy-eight, Marie was strong. Neither of them was as strong as they used to be, though, and it was becoming a problem.
"They shouldn't have called you," Joe grumbled from the bed. Rick let go of Marie to put his hand on the man's shoulder. Taller and four years older than his wife, but not quite as thin, Joe had once been rugged as hell. Age and a stroke had taken a toll, though, and Joe was having trouble reconciling with the fact he wasn't fifty anymore.
"If a call comes in, I'll have to go, but we'd just finished a run. Pretty lady in a lace negligee thought she smelled smoke."
"Same one as last time?" Joe asked, leaning back against the stack of pillows.
"Yup."
"You said she was pretty," Marie said. "Maybe you should ask her on a date. She obviously likes you."
"Jesus, Marie." Joe scowled at his wife. "You can't encourage that or half the women in the city will be setting their tablecloths on fire."
Rick laughed and sat on the exam stool, leaving the visitor's chair for Marie. Hoping it would be a few more minutes before the doctor came back, he listened to the familiar banter between the two people who'd come into his life as landlords and become like family. And he tried to figure out how to tell them the hospital had reached out to their son because Joe and Marie knew as well as Rick did that Davey probably wouldn't reach back.
* * *
Jessica Broussard parked her rental car at the curb and flexed her fingers because they practically ached from her death grip on the steering wheel. Driving in Boston was certainly no joke.
Having learned through previous experience that navigation systems weren't infallible, she squinted to make out the brass numbers tacked to the front of the tall blue house. Then she looked at the address she'd punched into the GPS and took a deep breath.
This was it. Her grandparents' home.
The flight from San Diego to Boston had given her plenty of time to obsess about all the ways this trip made no sense. Whenever her father was unavailable, Jessica checked his voice mail in order to keep Broussard Financial Services running, but she hadn't known what to do about the call from the Boston doctor. Reaching out to her father had resulted in a brusque demand for her to deal with the problem before she even got a chance to tell him it was personal.
But she couldn't deal with it. The doctor wouldn't speak to her about Joe and Marie Broussard, the grandparents she'd never met, because she wasn't on the form. And, when she was tossing and turning at two in the morning, she wondered if it was because they didn't know she even existed. The plan formed—seemingly brilliant as many insomnia-born plans were—to deal with her father's problem and to meet the people David Broussard had barely spoken of, and never kindly.
A curtain in the house twitched, and Jessica realized she'd been staring. It was time to get out of the car, or drive back to the airport and force her father to call the doctor.
She climbed out of the car, bracing herself for the blast of cold air, and walked toward the front door as a pickup drove past and then turned into the driveway. Jessica paused with one foot on the bottom step, but the man who got out of the truck definitely wasn't one of her grandparents.
"Can I help you?" he asked, walking toward her.
"I'm looking for Joe and Marie Broussard."
He nodded. "I'm Rick Gullotti. I rent the apartment upstairs. They expecting you?"
No, they most definitely were not. That two-in-the-morning plan had also included not giving the Broussards the opportunity to tell her not to come. "No, they're not. But I'm...their granddaughter. Jessica."
The man froze in the act of extending his hand to shake hers, and his eyebrows rose. He had great eyebrows, which was ridiculous because when had she ever noticed a man's eyebrows before?
"I wasn't aware they have a granddaughter," he finally said, and she could tell he was trying to be careful with his words.
"To be honest, I don't know if they're aware of it, either."
"Okay." He dropped his hand. "Do you mind if I ask why you're here? Is your visit related to the doctor calling Davey?"
Davey? Not once in her entire life had Jessica heard her father referred to as anything but David.
She took her time answering, assessing her options. On the one hand, it would be easy to dismiss him as a tenant who should feel free to mind his own business. But on the other, he knew her grandparents well enough to call their son Davey and she didn't know them at all. When it came to moving them into a better living situation and getting the house on the market, he could be her strongest ally.
"The doctor refused to talk to me and my father is unavailable. If Joe and... If my grandparents add me to their paperwork, I can help them navigate their options."
After a long moment spent staring at her as if trying to read her mind, he nodded. "I'll introduce you."
When Jessica stepped down to let him go in front of her, she realized how tall he was. She wasn't sure she had an actual type, other than a preference for men taller than she was, but circumstances had led to her last few relationships being with younger men. Judging by the hint of gray peppering his short, dark hair and scruff of a beard, Rick Gullotti definitely wasn't younger. His blue eyes were framed by laugh lines, and she got the feeling he laughed a lot.
Worn jeans hugged his bottom half, and a T-shirt did the same for the top. He'd thrown a hoodie on over it, but it wasn't zipped—which meant he had to be crazy—so his body was well displayed. Very well.
"How can it be this cold already?" she asked, trying to divert her attention away from the view before she said something stupid, like asking him just how many hours per day he worked out to look that amazing.
Rick shrugged. "It's that time of year. It's going to be warmer the next few days—maybe back up to fifty—and then there's snow in the forecast. Welcome to Boston in December."
"Snow." She'd gone on a ski trip once, during her college days. There had been a fireplace and alcohol and as little snow as possible.
"I hope you brought boots."
"I won't be here that long."
He gave her a hard look she couldn't quite decipher and then opened the front door without knocking. She followed him in, trying to block out her father's voice in her head.
Crass. Alcoholic. Bad tempers. When she was eleven, she'd had to do a genealogy project in school. They're just not our kind of people, Jessica, and you're upsetting me. I don't want to hear about this nonsense again. That was the last time she asked about her grandparents. Her project was entirely fictional and earned her an A.
"Rick, is that you?" she heard a woman call from the back of the house, and Jessica's stomach twisted into a knot. "Did you get the... Oh. You have company."
Jessica looked at her grandmother, emotions tangling together in her mind. Marie was tall and slim, with short white hair and blue eyes. And Jessica knew, many years from now, she would look like this woman.
"Where's Joe?" Rick asked, and Jessica was thankful he seemed to want them together because it bought her a few more seconds to gather herself.
"He's in the kitchen. Come on back."
When Marie turned and walked away, Jessica looked up at Rick. He nodded his head in that direction, so she followed. Other than a general sense of tidiness and a light citrus scent, she barely noticed her surroundings. Her focus was on her grandmother in front of her and an awareness that Rick Gullotti was behind her.
Her grandfather was sitting at the kitchen table, working on some kind of puzzle book with reading glasses perched low on his nose. When he looked up, he frowned and then took the glasses off to stare at her.
"I found Jessica outside," Rick said. "She says she's your granddaughter."
Marie gasped, and Jessica felt a pang of concern when she put her hand to her chest. "What? She can't be."
"If her hair was short, she'd look just like you did years ago, Marie," Joe said, and she wished she knew him well enough to know if the rasp in his voice came from emotion or not.
"I can't believe Davey wouldn't tell us he had a baby."
"Davey hasn't told us anything in almost forty years."
"I'm thirty-four," Jessica said, as if that explained everything, and then she immediately felt like an idiot. "I'm sorry. I should have called first."
"Did Davey send you because that damn doctor called him?"
"I came because of the call, yes." She couldn't bring herself to admit yet that her father had no idea she was here or why.
Silence filled the kitchen, and she became aware that the Broussards had a real clock hanging in their kitchen—the kind with a second hand that marked the awkward seconds with a tick tick tick.
Jessica was torn. The logical analyst voice in her head—the part of her that had earned her a cushy corner office in her father's investment business—wanted her to set up a time to speak with them about the doctor's call and then check into the hotel room she'd reserved. But her inner eleven-year-old wanted to hug her nonfictional grandmother.
"It's a long flight," Rick said, stepping out from behind her so she could see him. "You hungry?"
His quiet words breaking the silence also seemed to break the tension, and Marie gave her a shaky smile. "Have a seat and tell us all about yourself. Rick, are you going to stay for a while?"
"I'll stay for a little bit," Rick said, and though his voice was even enough, the look he gave Jessica made it clear he wasn't just a tenant in this house and he wasn't sure what he thought of her yet. "I want to hear all about Jessica."
Chapter Two
Rick wasn't sure exactly what to make of Jessica Broussard. The only thing he knew for sure about Joe and Marie's surprise granddaughter was that she smelled pretty damn good for a woman who'd just flown across the entire width of the country.
She didn't look too bad, either. Her blond hair was pulled back in a long, straight ponytail, and if she was wearing makeup, it was subtle. A soft sweater that looked more fashionable than warm reached her thighs, which were encased in black leggings that disappeared into similarly nonfunctional boots. The soft leather might make her legs look amazing, but they weren't keeping her feet warm. And she was tall enough so it wouldn't be awkward to kiss her.
Not that it mattered, since he had no intention of kissing Jessica. But, being tall himself, it was something he tended to notice about women.
But what he didn't know about her was why she'd flown all the way from San Diego to Boston at the drop of a hat to show up on the doorstep of people she didn't even know.
"I'm really not hungry," Jessica said, pulling out a chair to sit. "But I'd love a glass of water if it wouldn't be too much trouble."
"It's no trouble at all." Marie pulled out the chair next to Jessica's. "Rick, would you get Jessica a glass of water, please?"
Smiling, he opened the cabinet and took down one of Marie's "company" glasses, rather than grabbing one of the plastic tumblers they usually used. After rinsing it out, he filled it with ice and water from the fridge.
"Thank you," Jessica said when he set it down in front of her. But she didn't take a sip immediately. She wrapped her hands around it as if she just needed something to do with them.
Instead of taking the fourth seat at the table, Rick leaned against the counter and folded his arms across his chest, watching her.
"What do you do for work, Jessica?" Marie asked, and he felt a pang of sadness at the anxiety in her voice. She would try not to show it, but the woman was a wreck on the inside.
"I work for my father, actually, at Broussard Financial Services. We do financial planning and manage investments and things of that nature. As his vice president, I handle everything when he's unavailable, so of course I returned Dr. Bartlett's call yesterday. It sounded urgent."
"Are there other people in the office?" Marie asked. "If you're here, who's running things now?"
"We do have staff. And I have my laptop. Other than rescheduling a few face-to-face meetings, most of my work can be done remotely."
"Let me ask you something," Joe said, fiddling with his reading glasses. "Does your father know you're here?"
"No, he doesn't," she answered after a long silence, and Rick got the feeling she didn't want to answer the question. The granddaughter they didn't know showing up in Boston unannounced when their son couldn't even be bothered to return a call was interesting, but he really hoped she wasn't up to no good in some way. "The doctor couldn't discuss your situation with me because I'm not on the form, but my father is unavailable, so I decided to come in person."
Unavailable. She'd used that word outside, too, and he wondered what it meant. Most people would say he was on vacation or at a remote fishing cabin or chained in a basement somewhere. The use of unavailable seemed deliberate, meaning she didn't care for them to know what Davey Broussard was up to.
"I feel bad that you came all the way out here," Marie said. "Dr. Bartlett overreacted and shouldn't have called."
"Needs to mind his own damn business," Joe muttered.
Rick cleared his throat. "Maybe he did overreact this time, but it's not a bad idea to go over your legal papers and discuss your options once in a while."
"We can talk about all that tomorrow," Marie told them. "Right now I want to hear about my granddaughter."
Rick did, too, actually. He watched her slowly relax as she told them about growing up in San Diego. She'd graduated second in her class and gone to the University of Denver for her degree. Then she'd joined her father's company and worked her way up to second in command, poised to take the reins when he retired. She'd never been married, but she owned a lovely condo and drove a convertible Audi.
He wondered if Joe or Marie would press for the details she'd left out. There was no mention of her mother or siblings. Had she wanted to join her father's company or was it simply expected of her? For almost an hour he stood there while they talked, but she never said anything that wouldn't be out of place in a professional bio.
"I'm so glad you're here," Marie said after a while, resting her hand on Jessica's arm. Rick watched the younger woman's gaze settle on the touch, her smile a little on the shaky side. "I should start supper. Is there anything you don't like? Or do you have any food allergies?"
"I... No. I like most foods and I'm not allergic to anything that I know of."
"Oh, good. I have a lasagna in the freezer. I can pop it in the oven so we can get you settled in while it cooks."
"Oh, I appreciate the invitation, but I really should go and get settled into my hotel. Is there a time we can get together tomorrow to talk?"
Rick and Joe exchanged amused looks when Marie held up her hands and shook her head. "Oh, you don't need a hotel, honey. We have a guest room upstairs. It has its very own bathroom and everything."
"That's really generous, but I already have a reservation."
"No sense in wasting money like that," Joe argued.
"I'll be working a lot, too. Just me and my laptop, you know?"
"You can work here," Marie said. "We have really good internet so Rick can talk to all of his girlfriends on Facebook."
"Hey!" He laughed, shaking his head. "I don't have girlfriends on Facebook. And that's not why we have internet."
"Imagine what people would think if my granddaughter stays in a hotel," Marie pushed.
"None of your friends know you have a granddaughter," Jessica pointed out.
Joe snorted. "Trust me, they will."
Rick pushed away from the counter and walked toward the table. "You may as well just give me the key to your car so I can bring your bags in."
"Go ahead and pull the car into the driveway, too," Joe said. "Get it off the street."
"I..." Jessica gave Rick a look that was clearly a plea for help, but there was nothing he could do for her. Marie had made up her mind and she was possibly the most stubborn woman he'd ever met.
"I would really like for you to stay with us," Marie said quietly, touching Jessica's arm again.
Her granddaughter just nodded, her smile less anxious this time, and pulled the rental's key out of her sweater pocket to hand to Rick.
After parking the very compact car in the shadow of his truck, Rick popped the trunk and pulled out her suitcase. Then he wheeled it around to the other side of the car.
He wasn't sure what to do about the stuff on the front passenger seat. While he'd noticed she had a small pocketbook on a thin chain across her body, she'd left a tote bag and some other stuff in the car.
After a moment's hesitation, he zipped the expensive-looking sunglasses into the case he found and dropped it into the top of the bag. A pen and a tin of mints went in after it, and then he looped a scarf through the tote's handles.
He lifted the tote out of the car and noticed a small legal pad had been under it. The house's address was scrawled across the top, so she'd probably pulled the pad out to enter it into the navigational system. But the list of addresses under Joe and Marie's house, written in much smaller letters down the page, caught his eye as he was in the process of putting it in the bag.
The street names were all familiar and when he read the abbreviations and dollar amounts listed with each one, he realized they were meant to be comps—lists of houses for sale in the area that might have a comparable value to Joe and Marie's.
So it looked as if their granddaughter had amused herself on the plane by researching their home's worth. What she might not be aware of was that, with an actual backyard, a two-car garage with its own driveway and a spacious third-floor apartment, he'd take a wild guess at high six figures.
Or maybe she was aware of it and the amount factored into her urgent need to meet her grandparents after thirty-four years. For all he knew, her unavailable father had something to do with it.
He didn't want to believe it, though. He'd seen her face when Marie had walked into the room, and Jessica wasn't going to be winning any poker tournaments anytime soon. She'd been trembling. It was subtle, but he'd noticed. And there had been a lot of emotion in her big-eyed expression. He didn't know her well enough to read it all, but it was obvious meeting her grandmother meant something to her beyond dollar signs.
Jessica Broussard was definitely a mystery, and the only thing Rick was certain of was that, for Joe and Marie's sake, he was going to have to keep a close eye on her.
* * *
A wave of relief had washed over Jessica when Rick walked out of the kitchen. The entire time she'd been talking to Joe and Marie, trying to make a connection with her grandparents, a part of her had been distracted by the man leaning against the counter.
He hadn't been looming, exactly, but he was a big guy and made for a definite presence in the room. His arms being folded had stretched his lightweight sweatshirt across his shoulders, and when he crossed one ankle over the other, it had the same effect on his jeans and thighs. He was very, very distracting.
And then he'd laughed, turning her somewhat wary awareness into a much more potent, very different kind of awareness. His laugh was not only warm and rich, but loud, and she realized she didn't have men in her life who laughed like that. Her father rarely laughed at all, and the men around them tended toward polite laughter.
"It breaks my heart to have to ask this because I feel like I should already know," Marie said, breaking into Jessica's thoughts, "but is your mother that girl he met at college? I don't remember her name now and he never brought her home to meet us, so I can't even tell you what she looked like."
"My mother's name is Emily and I know they met at college, but I don't know if she's the same one." She took a long drink of water, wishing there was a way to avoid telling the rest. "She left us when I was three, so I don't really remember her."
"Oh." Marie fell silent, giving her the sympathetic look Jessica had come to expect years ago on the rare occasion her mother was brought up. "Did he remarry?"
That made her laugh, though it sounded harsh and humorless. "Several times. He's currently in the process of divorce number four."
And, even though he invariably brought those failed marriages down on himself, divorces were hard on her father and one of the reasons the reins of BFS were currently in her hands.
She didn't even want to imagine how he was going to react when he learned she'd handed those reins over to the staff. Not totally, of course, but she'd delegated like she'd never delegated before in order to manage this trip, and her father wasn't going to like anything about it.
"I'm sorry to hear that." Marie sighed. "I've always tried to imagine him happy, even if he didn't keep in touch."
Why? The word was on the tip of Jessica's tongue, but for some reason she didn't ask the question. If her grandparents felt anything like she did on the inside right now, they all had enough on their emotional plates without digging into the reasons behind their estrangement from their son.
"Davey hasn't been happy a day in his life," Joe said, his voice gruff with some emotion that went deeper than anger.
Every time she heard the name Davey, Jessica's mind tripped over it. These strangers knew her father, but they seemed to know a different version of him and that fascinated her. She wanted to know more about him.
She heard the front door and then the thump of footsteps on the stairs. Rick must be bringing her bags upstairs, and she fought down a rush of panic. Was she really staying here? With her grandparents?
"We should get you settled in," Marie said, standing up. "Being on a plane all day like that must be exhausting."
Jessica couldn't disagree, especially considering the amount of anxiety that had accompanied her, and the shift in time zones wasn't going to help. She followed Marie to the stairs, but paused halfway up when a framed photo caught her eye. She'd barely noticed all the family pictures on display, but this one had been blown up.
Even though he was just a child—young enough to show off two missing front teeth in a huge smile—Jessica had no trouble recognizing her father. And Joe and Marie hadn't changed very much, either, even though Joe had been a little beefier. They all looked so happy, smiling for the camera, and the ache in Jessica's stomach intensified.
She had a few pictures of her mother. There was even a photo of them together, taken just before her third birthday. They'd both been looking at the camera with solemn eyes. Jessica's mouth had been turned down in what looked like sadness and her mother's lips had been pressed tightly together.
There were no happy family portraits on David Broussard's walls.
When she heard Marie pause at the top of the stairs, she forced herself to look away and climb the rest of the steps. Maybe later she'd look at all the framed photos and try to get a handle on her emotions before having any conversations with her grandparents.
Halfway down the hall, they passed Rick, who was heading back for the stairs. He smiled at Marie, but some of the sparkle went out of his eyes when he turned it on her. "I put your bags in your room."
"Thank you." She already knew she'd lose some sleep trying to solve the mystery of Rick Gullotti. Was he afraid she was there for nefarious reasons? Or did he have nefarious plans of his own that her presence could derail?
Marie led her to the last door on the right, which was standing open, and Jessica saw it was a slightly barren but very clean guest room. Her suitcase and her tote were set just inside the door, and he'd added the stuff she'd left on the seat to the bag.
"I don't think it's too dusty in here." Marie pulled off the sheet draped over the bare mattress before walking to a closet. She pulled out a pile of fresh bedding and together they made the bed.
"Can I ask you a question?" Jessica asked when they were almost done.
Marie smiled at her from across the bed, but her eyes were wary. "Of course."
"How did you get my father's business number to put on your forms? I know he's had the number a long time, but...not that long." Somehow she doubted making sure they had his current contact info was high on her father's priority list.
"Sometimes I type his name into the Google on the computer at the library," Marie said, a hint of sadness creeping into her voice. "I'm not very good with computers, but I clicked on the first thing in the list it gave me and it was a website for his business—and yours, I guess. It has a phone number so I put it on the form, and there's a picture of him, too. I look at it a lot."
Jessica had no idea what to say to that, so she kept her mouth shut, but it made her sad to think this woman had been pining for her son. A son who seemed to harbor no good feelings toward her at all. She tried to remind herself that people changed and almost forty years was a long time.
Although, her father never seemed to change.
"There." Marie ran her hand over the quilt to smooth out a wrinkle, and smiled at Jessica. "It's no five-star hotel, but I think you'll be comfortable."
"I know I will. I'm glad I'm staying." And she was. It was going to be awkward, of course, but distance wouldn't help make it any less so.
They started toward the door, but at the last second, Marie turned to face her again. "I know this is probably weird for you, but would you mind if I gave you a hug?"
"I...I'd like that."
When Marie wrapped her arms around her, Jessica sighed and rested her head on her shoulder. Tears blurred her vision, so she closed her eyes and let herself soak in the emotion.
She knew the coming days would be a mess. Her father would be angry. There would be doctors, real estate people and perhaps lawyers to talk to with her grandparents, and there would probably be some emotional conversations about the family's past.
But for now, she was content to hug her grandmother.
* * *
Rick walked through the door of Kincaid's Pub and just the sight of Tommy Kincaid and "Fitz" Fitzgerald sitting at the bar relaxed him. Both retired firefighters, they'd been fixtures in the place even before Tommy bought it, enabling Fitz to claim the back stool by right of best friendship.
Kincaid's wasn't pretty, but firefighters had made it their own decades before—even before Tommy bought it—and it was like a second home for the guys of Ladder 37 and Engine 59. Memorabilia and photos from the local stations decorated the place, along with a signed photo of Bruins legend Bobby Orr screwed right to the wall to keep anybody from walking off with him.
Lydia Kincaid was behind the bar tonight and she waved to him when he walked in. She'd left the family business—and Boston—for a while, but came back to help out on a temporary basis a few months before. Temporary until she hooked up with Aidan Hunt, who was assigned to Engine 59 with Scott, her brother and his best friend. The firehouse had been a little tense when that relationship news broke but now, almost four months later, the drama was forgotten. Scott and Aidan were as tight as they'd ever been and Lydia had a diamond on her left hand.
And she had a beer in her right hand, which she set down on the bar next to the one she'd poured for her brother. Scotty was alone, so Rick walked up and draped his arm over his shoulders. "You hanging out with all your friends?"
"Screw you. I thought Aidan might show up, but Lydia's making him do responsible adult shit, I guess."
His sister rolled her eyes. "He's grocery shopping because we like to eat. He said he might stop by to shoot some pool later, or he might not."
"I figured he'd spend more time here, not less," Rick said. "Since you're here."
"I don't go hang around the firehouse just because Aidan's there."
He shrugged. "True. But he was hanging out here long before you became the reason why."
"He'll probably be in, unless there's an animal documentary on. Then he'll sit down and end up asleep."
Rick watched her mouth curve upward in an affectionate smile and took a few swallows of beer as she walked away. He was happy for her. He'd known her for years, since she was Scott's sister and she'd been tending the bar since he was old enough to drink. And he was happy for Aidan, too. He was a good guy.
"You're antsy tonight," Scott said, and Rick realized he was tapping his fingers against his mug. "What's up?"
"My landlords' granddaughter showed up from San Diego today."
"Joe and Marie have a granddaughter?"
"That's what I said when she showed up."
"And?" Scotty prompted when he didn't offer up any more details.
"And what?"
"Where has she been? Why is she here now?"
Rick filled him in on what little he knew, pausing now and then to sip his beer. It didn't take him very long to tell the story, of course, since he had a lot more questions than answers when it came to Jessica.
"So she's basically vice president of a financial management company, but she gets on a plane to Boston with no advance notice because her father got a call about her grandparents, who she's never even met?" Scotty frowned. "That's a little weird, don't you think?"
"I'm not sure what to think. I don't like the fact she's already researching the value of the house, though."
"What I can't believe is that they haven't updated their legal situation in how many decades? From what you've told me, their son wants nothing to do with them."
Rick nodded. "Yeah, but what else are they gonna do? With Davey out of the picture, it's just the two of them."
"And you."
"No." Even the suggestion Joe and Marie would disinherit Davey in his favor made him uncomfortable. "When push comes to shove, I'm their tenant. It's bad enough they cut me such a break on the rent. They don't need to be giving me more than that."
"They're not just giving you a break on the rent for no reason, though. They want to keep you because they trust you and because you take care of the house. And the yard. And pretty much everything else a son would do for them."
"I don't want the house. Or their money. I just want them to be comfortable and safe. If that means selling the house to find them something more manageable or to pay for one of those assisted-living places, so be it. I'm a big boy. I can find a new place to live."
"So you're just going to stay out of it?"
Rick took a long drink, considering the question. "No. Maybe Jessica's here because her father's unavailable, whatever the hell that means, and she wants to help out her grandparents and maybe even get to know them. Or maybe they got a phone call and saw dollar signs. I'm not going to sit back and watch father and daughter shuffle Joe and Marie off to some shit hole and take control of their finances."
"I don't know your landlords as well as you do, but they don't seem like the type to fall for something like that."
"I hope you're right," Rick said. "But their son left a big hole in their lives and... If you could have seen Marie's face when it hit home that Jessica was really her granddaughter. They're vulnerable, even if they don't see it."
"We don't have another shift until Tuesday morning, so you'll be able to keep an eye on her."
"And what woman are you keeping an eye on now?" Lydia asked. She'd been passing by, carrying a couple of empty mugs from some guys at the end of the bar, and she stopped in front of them.
"Not that kind of keeping an eye on," Rick said. "I don't have my eye on anybody in that sense right now."
Her eyebrow arched. "It's not like you to be single for long."
She walked away before he could respond, but he wasn't sure what he'd say, anyway. It made him a little uncomfortable to hear her say that, he realized. He dated a lot. So what? He was single and his relationships almost always ended mutually. Most of his ex-girlfriends were still women he considered friends.
Like Karen. He turned his head to face his friend. "Did you know Karen's engaged?"
Scotty nodded. "Did she tell you or did you hear it somewhere? I don't think too many people know yet, actually."
"I saw her ring when I was in the ER for Joe and Marie yesterday."
"She tell you the rest of it?"
"About them having a baby? Yeah." Rick took a long swallow of his beer. "I'm happy for her."
"Really? Because you look kind of like a man who's one more beer away from writing a bad country song on the back of a bar napkin."
"Sure, I liked her. But it wasn't a forever kind of thing." He shrugged. "The first time I saw her with the new boyfriend, I knew there was something between them we didn't have."
"You will someday. Probably."
Scott Kincaid was probably the last guy he should be talking about relationships with, but there was nobody else around. "What do you think not the marrying kind really means?"
Scotty snorted. "Hell if I know, but I've been told more than once if you look it up in the dictionary, you'll find a picture of me."
"Maybe next time a woman says it, I should ask her to be more specific."
"I'm not sure I want to know."
Rick wasn't sure he did, either. But seeing how happy Karen had been lately made him see how big a difference there was between having a woman in your life and having a woman you wanted to spend the rest of your life with.
Once the issue with Jessica Broussard had been resolved, he was going to have to give some serious thought to making himself into the marrying kind. Whatever the hell that meant.
Chapter Three
Jessica opened her eyes and blinked at the sun shining through the frilly white curtains. She'd struggled with sleep issues her entire life, so she had room-darkening drapes in her bedroom at home and only ever knew what time of day it was by looking at the clock.
There was certainly no doubt it was morning right now. And it was her first full day in Boston, in her grandparents' house.
She rolled onto her back and stared at the ceiling, surprised she'd slept at all. The last thing she remembered was the clock ticking over to one o'clock as she tried to reconcile the Joe and Marie she'd met yesterday with the crass, alcoholic, bad-tempered people her father had refused to talk about.
Even given the fact people changed and her grandparents were different now than when they'd raised their son, Jessica's gut told her something wasn't right about the way he'd cut his parents out of his life. Maybe she'd always suspected that, but it had taken something of a perfect storm for her to face it. She'd become painfully aware most of her friends had married and started families, while she was still acting as her father's business partner and hostess, and she wasn't sure how she felt about that. At thirty-four, she needed to figure out if she even wanted those things, or if she liked her life just how it was.
There had been a lot of introspection, though. And a realization that, when her father eventually passed away, she'd be alone. Then the call had come from Boston. And her father had been unavailable to stop her from getting on the plane.
Maybe she'd find out that, once the element of surprise wore off, Joe and Marie weren't very nice people, after all. If that was the case, she could just get on a plane back to San Diego. She'd be sad, but at least she'd know her father had been right all along.
Now she desperately wanted a cup of coffee, but she wasn't sure what would happen when she went downstairs. Marie might not mind if she went through her usual morning routine of catching up on stock movement and financial news on her phone while waiting for the caffeine to kick in. Or she might want to chat and make breakfast together. As lovely as that sounded, business came before family. She'd learned that at her father's knee.
Ten minutes into scanning reports, though, the craving for coffee burned through her good intentions. Coffee was too ingrained in her morning routine to attempt productivity without it. After getting dressed, she grabbed her phone and her laptop and went down the stairs.
Her grandparents were sitting in the living room when she reached the bottom, and they both looked over at her. Joe was sitting in a leather recliner, a mug of coffee on an end table next to it. And Marie was seated on the couch with her feet up on the coffee table, flipping through a magazine.
"Good morning," she said, feeling awkward all over again. She really shouldn't have let Marie talk her out of staying at a hotel.
"Good morning," Joe said, and then he turned back to the television. It was turned up pretty loud, so she guessed they didn't really do morning small talk.
Marie smiled. "Good morning, honey. There's coffee in the carafe. And I left some muffins and a few slices of bacon on a plate for you. There's a paper towel over it. If you want eggs, I can fry you up some."
"No, thank you. A muffin will be plenty."
"You can go ahead and do your computer stuff at the kitchen table if it's comfortable. We're watching our morning shows for another hour, at least."
"Thank you."
Once she'd fixed her coffee and inhaled a cranberry muffin and two strips of bacon, Jessica sat down at the kitchen table and got to work. It wasn't ergonomically ideal, but she wouldn't be spending enough time on the computer to worry about it today. Not only did she use her phone a lot, but she felt as if it would be rude to ignore her grandparents on her first day there.
By the halfway point of her second coffee, she'd cleared her inbox and exchanged a few emails with Sharon, her father's secretary and the woman who'd be doing the heavy lifting as far as keeping the office up and running. There were several messages from clients to respond to, but overall things were quiet. Most people were wrapped up in ski trips and the upcoming holidays once December hit.
She jumped, almost bumping her coffee cup, when the back door opened and Rick walked in. He'd skipped the sweatshirt today and she admired the way his navy T-shirt clung to his upper body before forcing herself to look at his face. He looked tired.
"Good morning," she said, watching him walk to the coffeemaker.
"Morning. Where are Joe and Marie?"
"In the living room watching television. Marie said they have morning shows they usually watch, so I could go ahead and work at the table." And speaking of work, why wasn't he at work? She hadn't expected him to be around until later in the day, if at all. "Do you have a job?"
He stopped in the process of pulling the carafe off the brew plate to look at her. "Excuse me?"
She felt the heat in her cheeks. Polite conversation was usually a lot easier for her, but there was something about Rick that made her feel awkward. "I'm sorry. I was surprised to see you because I guess I just assumed you'd be at work today, but I didn't mean to be so abrupt about it."
"I'll be at work tomorrow if there's some reason it matters."
Jessica wasn't sure what he meant by that, but there was some bite to his tone. "Are you upset that I'm here?"
After pouring himself a mug of coffee, he set the carafe back in place and then turned to face her. He leaned against the counter, just as he had the day before, and looked at her over the rim of his mug. She waited, saying nothing, while he drank a few sips of coffee.
"No." He cradled the mug in his hands and shook his head. "I'm not upset. But I find it a little funny you think two people who didn't know you existed twenty-four hours ago will just put their legal and financial affairs in your hands."
Her eyes widened as what he wasn't saying sank in. "You think I'm here to take advantage of Joe and Marie?"
"I really hope you're not, for their sake, but I don't know you so I'll probably be around a little more than you thought, just to keep an eye on things."
She tried not to take offense, but the implication she was running some kind of con on her own family stung. "Or maybe you're unhappy I'm here because you want to be in charge of their legal and financial affairs."
He snorted. "Sorry, Jess, but you're barking up the wrong tree there."
Jess? When was the last time anybody had called her that? High school, maybe. She couldn't remember, but she knew she'd gone only by Jessica in college because her father had explained it was a stronger name, and she'd probably be taken more seriously.
She didn't correct Rick as she usually did other people, though, and she wasn't sure why. "It's a valuable piece of property and it's obvious you've been helping them maintain it for quite some time. It's not unreasonable to think you might feel entitled to something."
"Maybe it's not unreasonable, but it's wrong."
There was no way to force trust. "So I guess we're at an impasse and we'll have to take each other's word for it."
"For now." He pulled out the chair directly across the table from her and sat down. "I'm a firefighter. That's my job."
"Really?" Now that he was sitting in front of her, she realized the small logo on his T-shirt's pocket said Boston Fire.
"Yeah. I work two twenty-four-hour shifts each week and I don't have a second job like some of the guys, so I'm around quite a bit."
"Is that a warning?" She smiled to let him know she was joking with him, and was relieved when he smiled back.
He had a great smile. It softened the hard angles of his face and deepened the laugh lines around his eyes. He was even scruffier today than he'd been yesterday, and for the first time in her life, she got the appeal. It was too easy to imagine how that gray-flecked scruff would feel against tender skin.
"You okay?"
She wished he'd stop arching his eyebrow like that. It was distracting. "I'm fine. Just a little warm, I guess."
"Maybe because you're wearing a coat."
Jessica looked down at the thick, fleece zip-up she'd bought on a whim at the airport. "It's not really a coat, exactly. And it's cold here, remember?"
"That was yesterday, and you were outside. We heat the inside with these newfangled things called furnaces."
She laughed and unzipped the fleece so she could pull her arms free of it. "Are you from here? Do you have family nearby?"
"My parents still live in Fall River, where I was born. It's about an hour and a half south of here. And I have an older brother, who lives and teaches high school science in the next town over from them."
"So he's not a danger junkie, like you?"
There went that damn eyebrow again. "Danger junkie?"
"Don't you have to be a little bit of a danger junkie to be a firefighter?"
"Or maybe I became a firefighter because I'm a safety junkie." He took a sip of coffee, his gaze locking with hers.
She wasn't sure she bought that. "Maybe a little of both."
"Oh, Rick, it's you." The eye contact was broken when Marie spoke, and they both looked toward her. "I thought I heard talking, but I wasn't sure if Jessica was doing one of those video meeting things."
"We do have a setup for video conferencing in the office, but I mostly talk to the team by text. It's easier. Except for my father, who hates texting. He either calls or summons me to his office."
Marie's mouth pinched a little at the mention of her son and that bothered Jessica. Her father—and the company—was a huge part of her life, so she tended to talk about him a lot. If hearing about him made her grandparents unhappy or uncomfortable, that was going to be a problem.
"Joe made a couple of phone calls and we can see the doctor next week, because he wants to follow up after the fall he took anyway, and we're waiting for a return call from our lawyer. It's been a long time since we talked to him. I hope he didn't retire. Anyway, can you stay that long?"
Jessica hesitated. Things were going smoothly in the office and her accounts were all in order. Even though it was only her second day away from the office, she felt confident everything could be handled in her absence. Truthfully, with technology the way it was, it almost made no difference whether she was in her office or in a kitchen in Boston. But eventually her father was going to surface and when he did, he was going to be livid.
She looked at the hopeful expression on her grandmother's face and smiled. "I can stay."
* * *
The next day, Rick stepped over a hole in the charred roof and walked to the edge to look down at the scene on the street. They were six stories up, so he had quite a view of the neighborhood. There were probably a dozen engines jammed around the corner lot, along with support vehicles and the police cruisers. The bystanders were wandering away now that the fire was out and there was just the boring stuff left.
Roof fires were never fun, but there had been no injuries and it hadn't spread. As long as none of them tripped over a line and fell through a hole or a weak spot, all would be well.
Jeff Porter was sitting on the brick fascia, and Rick hoped like hell it wouldn't crumble out from under him. Porter was a big guy. "All clear?"
"Yeah, we can start picking this shit up anytime we're ready," Rick said. "I'm just taking a few minutes to relax. It's pretty quiet up here."
"I hear that."
It was warm, though the weather wasn't going to last long. The temperature was already dropping and there was snow in the forecast, but for now they were seriously overdressed. Rick slid off the big bunker coat and tossed it next to Porter's before turning to watch the guys from E-59 head down L-37's ladder with hose in tow. He stood with hands tucked in his suspenders, soaking in the sun.
"Took the wife to Kincaid's for lunch yesterday," Porter said. "I hear your landlords have a surprise granddaughter."
Of course he'd heard. By now, everybody probably had. He'd told Scotty. Both of Scotty's sisters—Lydia and Ashley—worked the bar at Kincaid's. And they were both with guys assigned to Engine 59. Lydia was engaged to Aidan, and Ashley was married to Danny Walsh, the engine company's LT. And each of the guys had helped him with some project or other at the Broussards' over the years, the most recent being the handicapped ramp in the back of the house, so they knew Joe and Marie in varying degrees.
"Yeah, her name's Jessica. She showed up day before yesterday."
"Must be awkward."
"It is. Joe and Marie are over the moon to have her there, of course, but they're all still dancing around the issue of Davey."
"That's their son, right? Her dad?" Rick nodded. "How long is she staying?"
"Not sure yet. They've got some meetings next week, I guess, but she's kind of a big deal at her old man's company from the sounds of it, so she'll have to go back to San Diego eventually."
"San Diego." Porter snorted. "Went there once. Hated it."
"Next time don't take your mother-in-law. Or the kids."
They were laughing when Rick got a heads-up from Danny Walsh that relaxation time was over and they needed to hustle. Their trucks were blocking another company from leaving and they wanted to unclog the streets before the elementary school up the street dismissed.
Once they'd repacked and made the drive back to the house, they backed the ladder truck and the pumper engine into the side-by-side bays and went through the post-run routine of checking and restocking equipment, and cleaning the trucks. Rick and Danny went up to the second floor to take care of some paperwork, while the rest of the guys went up to the living space on the third floor of the old brick building.
He did step into the bathroom and wash away the soot he'd managed to get on his neck and up one side of his face. But he knew if he went up and made himself a coffee or pulled up some couch for a few minutes, he wasn't going to drag himself back to the hated desk.
By the time he made his way upstairs, he could smell the big pot of chili that had been simmering for most of the day. There were some drawbacks to feeding a building full of guys chili, of course, but Chris Eriksson's recipe was too good to resist. And anything that simmered, slow-cooked or could be shut off and reheated made for a good meal because the dispatchers couldn't say, "Hey sorry, but they're eating so it'll be an hour or so."
The somewhat outdated space on the third floor never felt as small as it did at mealtimes, when the guys all came together. Cobb had come up, getting a break from the office in which the chief oversaw both companies. His own guys from Ladder 37. Jeff Porter. Gavin Boudreau. Chris Eriksson. And the guys from Engine 59. Danny Walsh. Aidan Hunt. Scott Kincaid. And the kid, Grant Cutter. All together, they made a good team, and they were like brothers.
Then Rick watched Grant jostling for space in front of the shredded cheese and crackers with Gavin—who was only a few years older—and felt old. In some cases he was starting to feel more like an uncle or other mentor to the younger guys, and he wasn't sure he was ready to be that guy yet.
Once he'd scored a bowl of the chili and topped it with some shredded cheese and garlic salt, Rick went into the living room to watch the news while he ate. Most of the guys would hang in the kitchen and shoot the shit, even if it meant standing while they ate, so he was able to grab a seat on the battered love seat with Aidan. Jeff and Scott were on the big couch, and Cobb was sitting in one of the wooden rockers.
Because they were all busy eating, he was able to watch the news in peace. There was footage of the roof fire and they watched the district chief give a statement for the cameras. Rick knew him, of course, but it wasn't Cobb in front of the cameras because they'd been called out when additional alarms were struck, so the scene wasn't theirs. But he knew Joe and Marie would ask him about it later anyway since they were sitting in front of their television watching the same news broadcast.
He wondered if Jessica was watching it with them. Probably curled up at the opposite end of the couch from her grandmother, maybe wrapped in the fleece blanket Marie kept draped over the back of the couch once the weather turned cooler. Even though they'd had a decent couple of days, the chill had to be a bit of a shock coming from San Diego.
"What's so funny, Gullotti?"
Rick jerked his gaze to Cobb, who was scowling at him, his dark and caterpillar-like eyebrows almost meeting over his nose. "What?"
"They're talking snow in the forecast and you're the only guy in the room grinning like somebody just told you there are naked twins waiting for you in the bunk room."
"Oh, I didn't even hear the forecast. I was thinking about something else."
"What's her name?" Scott asked with a smirk.
Jessica. "Maybe I was thinking about the time you got stuck going through a window and I had to push you through like that cartoon bear."
Before Scotty could come back with a smart-ass response, the alarm sounded and they all groaned. Rick shoved his way into the kitchen to dump his bowl in the sink and then joined the stampede down to the bays.
As he stepped into his boots and pulled the suspenders on the pants up over his shoulders, he hoped this wouldn't be a long call because chili was a bitch to clean up after the fact. And as he grabbed his bunker coat and helmet off their hooks, he wondered what Jessica would think if she saw him on the late news in all his gear. A lot of women tended to find firefighters sexy, but he had no idea if she was one of them or not.
Rick swung up into the seat, scowling. He also had no idea why he cared.
* * *
After dinner was eaten—far earlier in the day than she was accustomed to—and the dishes were washed, Jessica excused herself to her room. She'd heard her phone ringing in the distance while they ate, and that particular ringtone was only assigned to her father.
She hadn't answered it, of course, but she hadn't heard the voice-mail tone. That meant, if she didn't call him back very soon, he'd try again.
"Are you going to come watch the news with us?" Joe asked before she left the kitchen. "We watch the six o'clock news together every night."
"The news?" She almost said no, because checking in on financial news online would be a more productive use of her time than watching highlights of budget fights and Boston sports games on the television. But there was something about the way he said it that made it sound less like a polite question and more like an invitation to join them in a family activity. "Sure. I'll make sure I'm finished in time."
The smile on his face made her smile in return, thankful she'd made the right call. "Great. We'll make extra decaf tonight."
At least their third-floor tenant wasn't around tonight, she thought as she went up the stairs to her room. Not that she didn't like him. That wasn't the problem at all.
The problem was that whenever he was in the room, she had to resist the urge to look at him. She kept telling herself it was because he was tall and broad at the shoulder. Of course he'd draw the eye. But she'd also found herself wondering if his hands were as strong as they looked and what the scruffy beard on his face would feel like against skin, and she was pretty sure neither of those things had anything to do with how much space he took up in the kitchen.
Jessica had just closed her bedroom door behind her when her cell phone rang again, vibrating in her pocket while playing the distinctive ringtone that signaled a call from her father. Sighing, she pulled it out. She'd been hoping to do a quick sweep of her email and make sure nothing was happening at the office before calling him back.
Talking to him had been inevitable. While she hadn't expected him to step foot in the office for several more weeks, at least, he usually checked in with her or Sharon every so often. As tempting as it was to mute the ringer and let his call go to voice mail, Jessica knew he'd only keep calling back until he got through to her. And he would get angrier with each attempt.
"Hi, Dad."
"What the hell do you think you're doing?"
So he knew where she was, which meant he'd called Sharon before calling her. "I called to tell you about the message from the doctor, but you chose not to listen."
"What is it you think you're going to accomplish?"
"You told me to handle it. I'm handling it." More or less.
"Jessica, why didn't you tell me my parents are involved?"
Because he hadn't given her a chance to talk before barking out his demands and hanging up on her, like usual. But she recognized by his tone that he wasn't in the mood to admit any fault on this one. All she could do was try to keep anything emotional off the table. "I wanted to solve their problem for you as quickly as possible, and coming to Boston seemed like the most efficient way to accomplish that."
"I expect your assistance when it comes to the company, but this is personal. My family is none of your business."
Jessica was glad they were having this conversation by cell phone so he couldn't see her actually look at the phone and cock her head sideways in an are-you-serious-right-now kind of way. "I'm your daughter."
"I know who you are. And you're also vice president of Broussard Financial Services."
"I am your daughter," she repeated. "I am your family. That makes your parents—who are my grandparents, by the way—very much my business."
He was quiet for a few seconds, and she waited, knowing he was pondering the best route to take. "I told you a long time ago that they're not our kind of people, honey. And you know how much I depend on you in the office. I can't do it without you. I can hire somebody to help out my parents, but nobody can run this business for me like you do."
In the past, she would have given in. Not because she was flattered. Regardless of the truth in what he said, she knew he was saying it to manipulate her. For years she'd been telling herself that she let him get away with it because it made her life easier, not because it was actually effective.
But she wasn't finished in Boston. The initial awkwardness of staying in the house with her grandparents was wearing off, and she was enjoying getting to know Joe and Marie. Their conversations were still of the getting-to-know-you variety, though. They were almost comfortable enough with each other to maybe start having some heart-to-heart discussions and if she left for California now, it might not happen. Who knew when her father would free her up to return to the East Coast again?
They seemed hale and hearty enough to Jessica, but she reminded herself she was here to discuss their elder care options because Joe had ended up in the emergency room. And they were both on a variety of medications. If she returned to San Diego and something happened to one of them before she could get back...
"I prefer to stay and continue working remotely while helping Joe and Marie consider their options," she said firmly. "I have my laptop and my phone. That's all I ever use in the office, and they have good Wi-Fi and don't mind if I work at the kitchen table. The staff texts me when they need to and, as you know, that's how they usually communicate with me, anyway."
"They don't mind if you work at the kitchen table," he repeated in a flat voice, and she realized she'd given away the fact she wasn't talking to him from a hotel room. "Where are you staying?"
"I'm staying at their house. With Joe and Marie." The silence went on so long, Jessica glanced at her phone's screen to make sure the call timer was still running and he hadn't hung up on her. "The office is fine, Dad. Everything's running as smoothly as usual. Sharon and I are in contact several times per day. And, as I said, I'm perfectly set up for remote work."
"What about me? This isn't an easy time for me, Jessica."
A lifetime of conditioning kicked in and she nodded her head, but when she opened her mouth, the words wouldn't come out.
She didn't want to go home, but she knew she wouldn't get anywhere with him by playing the sentiment card. Instead she tried speaking his language. "They've already set up meetings for next week. Imagine how it would look if word got out you weren't willing to help your own parents with their affairs. It could be a PR disaster if word hit the right circles."
"My parents do not travel in the right circles."
Jessica closed her eyes and said a silent apology to Rick. "Their tenant could be a problem. They're very close and he's protective of them, so I wouldn't put it past him to cause a fuss. And he works for the city, so he probably knows a lot of people."
It was the truth, even if she knew her father wasn't envisioning a firefighter, but rather a guy in a suit at city hall. She was in a tough spot because she wanted to stay with Joe and Marie a while longer. But she also couldn't lose her father and possibly derail her career for people she'd just met, no matter how much she wanted to get to know them, so it would be a balancing act.
"You have clients," he said, but she could hear the weakening in his voice. He was probably nearing the point where he'd give her what she wanted just so he could get off the phone and have a drink.
"My clients are being taken care of. And Sharon's the only person who knows why I'm here. Everybody else believes I'm wooing a potential client."
"The meetings are next week?"
"Yes. I haven't set up a meeting with a real estate agent yet, but hopefully I'll find one who can come out on short notice."
"Don't get too close to them," he warned. "Keep our personal business to yourself. But I'll let you stay until this matter's resolved so I don't have to hear about it again."
She let the statement of granted "permission" slide. "Thanks, Dad. I'll talk to you soon."
Once he'd hung up, she sat on the edge of the bed for a few minutes to calm herself. Her father was always draining to a point, but never so much as when he was drinking.
After a few minutes, she opened her laptop and lost herself in her inbox and stock reports. She kept an eye on the clock in the corner of the screen, though, so when it was almost six o'clock, she saved everything and went down the stairs.
Joe was in his chair and Marie at her usual end of the couch. And there was a mug of what she assumed was decaffeinated coffee sitting on the coffee table in front of the other end. She smiled a greeting and then curled up in the corner before reaching for the mug.
"Thank you," she said, and then took a sip.
"Are you okay?" Marie asked. "You look tired all of a sudden."
"My father called. He didn't know I was here and he's concerned about my not being in the office while he's unavailable." There. That was mostly the truth. She didn't see any reason to tell them he was more upset that she was with them.
"Do you want to talk about it?"
"Maybe another time," she said. "It's time for the news, anyway."
Because she seemed to live in a constant state of not-quite-warm-enough, even in the house, Jessica pulled the blanket off the back of the couch and tucked it around herself as the news began.
"Oh, I wonder if Rick will be on TV," Marie said as they started into a story about a roof fire.
Jessica didn't lean toward the screen as they ran footage of the fire somebody had taken with a cell phone. She wasn't sure how she would tell which one was Rick with all the gear on, but she tried not to blink anyway.
And when she didn't see him, she tried not to be disappointed. And she really tried not to wonder if she'd see him tomorrow. According to Joe, Rick worked a twenty-four-hour shift and then had forty-eight hours off. Then he worked another twenty-four hours and had seventy-two off. This was his second of the week, so he'd be home for several days.
And he'd already made it clear he intended to keep his eye on her. She just needed to remember it was because he didn't trust her and not let herself develop a crush on the man. She was too old for crushes. And, in this case, it couldn't end well.
Chapter Four
The next morning, Jessica was dressed and ready to head downstairs by seven-thirty, since that seemed to be when Joe and Marie ate breakfast. She wasn't much of a morning eater, herself, but she recognized that Marie wanted to feed her and there was no sense in disrupting their routine.
As soon as she walked into the kitchen, she felt as if the vibe had changed somehow, as if they'd been talking about her and stopped when she came downstairs. Joe wasn't a cheery morning person to begin with, but this morning he spent a lot of time staring into his coffee cup. And Marie's lips kept pressing together as if she was trying not to say something that would be upsetting.
She'd set down three plates of scrambled eggs and toast before it appeared to Jessica she couldn't hold it in anymore. "Can I ask you something about Davey?"
Jessica nodded, pushing some eggs around on her plate. "Of course."
"Did he tell you we were dead?" Marie's voice was almost a whisper.
Jessica froze, her heart breaking at the question. How painful would it be to have a child who'd rather pretend you were dead than admit you were alive and he just didn't want to see you? "No. He never said that."
The breath seemed to rush out of her grandmother's lungs. "Oh, good. We thought maybe that was why... Well, it doesn't matter. You're here now."
Jessica took a big, bracing gulp of coffee because it was time to put it all out there. She'd rather do it now than have her grandparents thinking she hadn't wanted to know them. "I asked about you sometimes when I was little. But it made Dad angry, so eventually I stopped asking. He said...he said you weren't our kind of people. That you were crass and drank a lot. I was a little girl, so I never questioned what he said. But when the doctor called...I wanted to meet you."
The hypocrisy of her father damning anybody for drinking burned in Jessica's stomach, but she took a bite of her toast to calm it. Marie dabbed at her eyes with her napkin, but she was smiling. It was Joe who spoke, though, after a gruff clearing of his throat.
"I guess during Davey's teen years, things were a little hard. Work was tight and we hit a rough spot in our marriage. I made some mistakes. We fought a lot and I drank too much. I admit it."
"Davey was always different," Marie said quietly. "He always wanted better and I always felt like he was embarrassed of us."
That sounded like her father and Jessica had certainly felt as though her father was judging her and finding her wanting a few times in her life. "I guess he hasn't changed very much."
"We've mellowed with age," Joe said. "I won't deny that. But the last time he was home, your grandmother was hurt that Davey wouldn't bring his girlfriend home to meet us."
Marie shook her head. "Joe, don't."
"She has a right to know," he said, looking across the table at Jessica. "He said he'd never bring a girl to this shit hole and called Marie trash. I'm not proud of it, but I put my hands on him. I put him up against the wall and told him if he couldn't respect his mother, he could leave. He never came back."
"He was young and stupid," Jessica said, and then she covered her mouth with her hand. "I'm sorry. I shouldn't be trying to defend him. It's a habit, I guess."
"We reached out to him a few times," Marie said, "but there was nothing. It was like there had never been a relationship with us. Eventually it hurt too much to keep trying, but I always hoped when he was older and had his own kids, he'd come around."
"He's very...self-centered," Jessica said quietly. "And I'm so sorry his relationship with you has been so painful."
"Has he been a good father to you?" Joe asked.
"Yes," she said without hesitation. "Not perfect, of course. Who is? But we're close and he's done the best he could. I have a great life."
"That's what matters, then." Her grandfather gave her a warm smile, which she returned.
Even though he tended to wreak havoc on her senses, Jessica was relieved when the back door opened and Rick walked in. She could take the hard conversations in small bits, but she wasn't used to emotional talks.
"Hey, everybody," he said, and Jessica noticed he did a bit of a double take when he looked at Marie. She was smiling, but her eyes were still a little wet and her cheeks flushed. "How's everybody doing this morning?"
"We're good," Marie said. "Do you want some breakfast? I can whip up some more eggs with no trouble at all."
"I already ate. Today seems like a good day to get some errands done and I was thinking I'd drag Joe along. We need to make new plywood tents for your bushes in the backyard because we trashed the old ones last year, remember?"
"I meant to ask you about that, but I forgot," Marie said. "I have a list somewhere."
"Didn't you just work twenty-four hours?" Jessica was surprised to see him looking maybe a little tired around the eyes, but mostly ready for a day of yard work.
"We sleep between runs," he said. "Last night was quiet. Trust me, there are days when the only thing I do when I get home is strip off my clothes and crawl into bed."
"I can imagine," she said. Good lord, she could imagine him stripping without any effort at all. "I mean, I can imagine working twenty-four hours if it was busy would be hard."
"Oh, today's Saturday," Marie said. "Joe and I are supposed to go to a barbecue lunch for Valerie's grandson's birthday today."
"A barbecue?" Jessica almost dropped her fork. "It's winter. You guys do know it's winter, right?"
They all laughed, and then Joe shook his head. "It's not even cold yet. If the hamburger doesn't freeze between the time you walk out the door to the time you put it on the grill, it's warm enough to barbecue."
"I'll take care of the bushes," Rick said, walking toward the coffeepot. "You already told Valerie you'd be there and you know how she is. Every time you see her for the entire year, she'll find a way to take a dig at you for missing the party."
"Yes, that's true." Marie looked at Jessica. "Do you want to go?"
Besides the fact she didn't feel nearly hardy enough to stand around outside eating burgers with strangers, whether the burgers froze or not, she didn't feel up to facing the questions that would inevitably come her way. "I'm a little behind on work, actually. If you don't mind, I'll take the time to catch up."
"Good call," Joe muttered. "Valerie's husband has two degrees of grilled burgers. Raw on the inside or hockey puck."
"Rick going in and out won't distract you or keep you from working, will it?" Marie asked.
Jessica looked at Rick, who was watching her with those blue eyes. Oh, Rick going in and out would definitely distract her. "He won't keep me from working."
He raised that damn eyebrow of his and grinned, as if she'd just issued a challenge.
* * *
If a woman did all of her work on her cell phone and laptop, why she needed to have a pen was beyond Rick. But Jessica had one and it seemed like every time he walked through the door, she was sitting at the table playing with her damn pen.
He'd seen her tapping it against her bottom lip, which naturally made him notice her bottom lip and how utterly kissable it looked. She tapped it on her teeth. And on this trip inside he saw she appeared to be concentrating on the screen particularly hard while she sucked on the cap.
And, dammit, he forgot what he'd gone inside for. That was assuming he'd even had a reason and wasn't subconsciously coming up with excuses to see Jessica. Grabbing a drink had made sense. Another trip inside to rummage through Marie's junk drawer for a permanent marker had made sense. But since he lived upstairs, there were only so many reasons he needed to be in there.
"Hi," she said, dropping the pen onto the table, and he realized he'd been staring at her. "Do you need something?"
That was a good question. He'd obviously needed something since he'd gone inside, but seeing her lips puckered around the pen had wiped his mind clear of everything except her mouth. He had to say something, though. "I wanted to see if you could give me a hand for a second, but you're obviously busy."
"No, I'm not." She closed the laptop so fast he got the impression she'd been looking for an excuse to be done. "It's time for a break, anyway."
"Great. Appreciate it." And now he had to come up with something for her to do. "You don't have any paper."
"Excuse me?" She got up and pushed her chair in. "Do we need paper? I have a legal pad upstairs if we do."
"No. We don't need paper for what we're doing." Of course she had a legal pad. She'd been using it to figure out what her grandparents' property might be worth. "I just thought it was funny you have a pen, but nothing to write on."
Her mouth twisted in a wry smile. "I quit smoking six years ago. I'm a fidgeter by nature and quitting seemed to make it worse, so I was constantly fidgeting with the pen. After a while I realized having that keeps me from wanting to get up out of my chair constantly, so I always have a pen in my hand when I'm working."
And in her mouth. "Congratulations on quitting. It's not easy, from what I've heard."
"Thanks. It wasn't easy to quit, but most of the time I'm proud of myself." She laughed. "Sometimes I wonder what the hell I was thinking because the cigarettes sure made life easier, but I know better."
"Joe used to smoke cigars, until Marie made him quit. Does your dad smoke?"
She shook her head. "Never has, that I know of. I started in high school, when it seemed like a good diet plan. Don't comfort eat. Just comfort smoke instead."
He wasn't sure what to say to that. He wanted to ask her why she'd needed comfort. Or tell her she certainly didn't have to worry about a diet plan because her body was pretty damn perfect just as it was. But neither was any of his business, so he just smiled and led the way to the backyard.
She trailed her hand down the railing of the handicapped ramp Joe and Marie used. "This looks fairly new."
"Yeah, some of the guys from my station helped me build it a few months back. The front steps can be a bitch in the winter and the back ones needed replacing. Seemed like a good time to do it." He picked up the two pieces of plywood he'd built a frame around and formed them into an A shape. "I'll show you how to hold them while I screw the hinges in."
"Okay." Either she didn't realize this wasn't something he really needed help with, or she didn't care. "What are these for?"
"They make tents over the bushes so the weight of the snow won't crush them or break the branches off."
She stopped in the act of bending over to look at the plywood panels. "You get that much snow?"
"Sometimes."
When she had the two pieces of wood lined up, Rick bent to drive in the screws. It put their heads very close together and, when he inhaled, he could smell her shampoo or soap or something. It wasn't strong enough to be perfume, but it was enough to be distracting.
Together, they made quick work of the first two, but he didn't like the way the third lined up. Rather than risk his drill slipping and catching her fingers, he set it down and put his hands over hers to adjust her hold on the wood.
The touch must have startled her because her head jerked up. Her face was so close to his, he would barely have to lean forward to kiss her. And with her hands so small and soft under his, and her gaze locked with his, he really wanted to. As if she could read his thoughts, her face flushed and her lips parted slightly. Unless he was totally misreading the signal, she wouldn't slap his face for trying.
But it would be a huge mistake, so he very reluctantly dragged his attention back to the task at hand. "Here, hold it like this so I don't accidentally nick you with the drill."
Jessica nodded and dropped her gaze back to their hands. He wasn't sure if the sudden scowl was one of concentration or if she was thinking about what had just passed between them. Or hadn't, as the case may be.
As soon as her fingers were in the right place, he removed his hands from hers and picked up the drill again. The sooner they were finished with this, the sooner he could stop calling himself every kind of an idiot for making up a stupid reason for having her out there in the first place.
She helped him do the fourth frame and then stepped back as he put them in place over Marie's more delicate bushes. Rick was keenly aware Jessica was watching him as he screwed a small cross member on each one so they couldn't collapse if the snow was heavy enough to shift the hinges.
"It's snowing!"
He turned to see her with her face turned up to the sky, watching scattered flakes fall. "Yeah, they said we might get some flurries off and on before the actual snow starts."
"Is it true that if you stick your tongue to a metal pole, it'll stick?"
"If it's cold enough, hell yes, it'll stick. We responded to three calls for that last winter."
Her eyes widened, making him chuckle. "You're kidding."
"Nope. And now that all the kids want funny videos or selfies for the internet, they do dumb shit like that all the time and we get to lecture them while saving them from themselves. Licking metal poles seems to be popular for some reason."
"So how cold is too cold?"
"It's definitely not cold enough yet." He put his hands on his hips and looked at her. "Is licking a pole on your list of things to do while in Boston or what?"
As soon as he said the words, his inner twelve-year-old boy snickered, but he hoped she wouldn't catch the accidental innuendo.
"I have no intention of licking any poles while in Boston, thank you." Yeah, she'd caught it. He could tell by the way her lips tightened in an effort not to smile. The tiny quirk at the corners gave her away, though.
He had to stop paying so much attention to her mouth. After putting the battery drill back in its box, Rick wrapped the cord around the circular saw and put them both, along with the square and a few other miscellaneous tools, back in Joe's toolshed. After he'd snapped the padlock closed, he turned, expecting Jessica to have gone back into the house.
But she was still in the yard, frowning at the snow flurries that were barely worth noticing. "Joe and Marie will get home before the roads get slippery, right?"
He smiled. "Yeah, they will. This is just a flurry, I promise, and the roads won't be affected. The snow's supposed to pick up some later in the day. And speaking of driving, do you need to do something about that rental?"
"No. I already talked to them because I anticipated having it for a few days, but since it was already open-ended, they don't really care. I'm not sure about driving it in the snow, though."
"You don't need to. Joe or I can drive you if you absolutely need something before the roads are clear. It's still early in the winter, so you shouldn't have any problems."
"You don't worry about Joe driving?"
It took him a second to realize she probably meant because of his age and not because of the snow. "Not really. There have been a couple of times Marie or I have had to taxi him around, but unless his doctor tells him he's done driving, there's no reason he can't."
"And the doctor isn't concerned?"
"Not that I've heard. It seems to be living arrangements he's concerned about."
She sighed and tilted her head way back to take in the three-story building. "It's a lovely house, but it's so big."
"They like it. And I can tell you right now, they'll fight to stay here."
"Be honest, though. If you move out, can Joe and Marie still take care of the property without you?"
That was a tough question to answer. He definitely didn't want them doing some of the stuff he took care of. The idea of Joe up on a ladder cleaning the gutters, for instance, made him ill. And he didn't know if they could afford to hire people to do all those tasks because he'd never asked about their finances. They were none of his business.
"I don't know," he said, going for honesty. "But I don't plan on going anywhere anytime soon."
"What if you fall in love and get married and want to start a family?"
Maybe that had been on his mind a little lately, but it didn't appear it was going to happen anytime soon. "Don't worry, I'm not the marrying kind."
She rolled her eyes. "That's what all guys say and then, bam, wedding rings and minivans."
"No minivans. An SUV, maybe." He didn't really want to think about what vehicle he'd cart his hypothetical family around in and preferred to talk about her. After fending off that urge to kiss her, he needed to put a little more distance between them again. "Joe and I spend a lot of time talking, just so you know."
"What's that supposed to mean?"
"Just that if you try to push them in the direction you want them to go instead of the direction they want to go, I'll hear about it."
Jessica looked at him a long time, her mouth in a grim line, before she shook her head. "I know I can't make you trust me, but they're my grandparents. I'm not going to try to screw them out of anything."
"You've been with Davey for thirty-four years. You've been with Joe and Marie for three days. Can you blame me for wondering where your loyalty lies?"
"Says the man who's sunk a lot of time and hard work and maybe even money into a property that he has no claim to other than through the affection of its owners."
It should have pissed him off, but he found himself smiling. He admired the way she stood her ground without letting temper get the better of her. "As long as we both have Joe and Marie's best interests at heart, we shouldn't have a problem."
"I guess we'll just have to wait and see how it turns out, but you can trust me," she said, and then she walked up the ramp and back into the house, letting the screen door slam behind her before she closed the big door with a solid thud.
Rick bent to pick up the scraps of wood with a sigh. With the Broussards' future on the line, he definitely hoped their granddaughter was right. He wanted to trust her. But he needed to trust her for the right reasons, and neither her mouth nor the dreamy expression on her face as she watched the snowflakes fall was among them.
One thing he was certain of was the fact he didn't want Joe and Marie to come home and find out he'd pissed off their granddaughter. Once he'd picked up the yard, he'd go inside and make sure he hadn't offended her too badly.
* * *
Jessica tried opening her laptop, but she gave up after ten minutes or so and closed it again.
For a few crazy seconds outside, she'd thought Rick was going to kiss her. There was something about the way he looked at her—especially her mouth—that made her sure he wanted to.
What a disaster that would be, she thought. Since he'd all but accused her a few minutes later of wanting to take advantage of Joe and Marie financially, kissing Rick could only add to the weird emotional place she'd found herself in.
To distract herself from the sexy firefighter she absolutely couldn't kiss, she reached across and picked up the puzzle book sitting open on her grandfather's end of the kitchen table, along with the pencil. She'd already figured out that Joe loved his puzzle books, but only the language puzzles. The math ones were rarely even started, never mind finished.
Even though she immersed herself fairly quickly in numbers, she heard Rick's footsteps outside before the door opened. She was thankful because it gave her a few seconds to focus on not looking as if she'd been thinking about kissing him.
"Hey," he said, closing the door behind him. "Sorry if I'm bothering you again, but I just wanted to make sure you're not too mad at me."
It took her a few seconds to realize what he was talking about, and then she smiled. "I'm not mad. I mean, I didn't like the implication, but I understand where you're coming from. Plus I'm doing Joe's math puzzles and I'm one of those weird people who find numbers soothing."
He looked at the puzzle book and then arched an eyebrow when he realized she'd already finished the puzzle. "I guess if you're in charge of taking people's money and making it into more money, you must be pretty good at math."
"I am. My father made sure of that."
"How do you make sure somebody's good at math? Isn't that a you-are-or-you-aren't kind of thing?"
"He told me when I was a little girl that I have a natural aptitude for it."
Rick grinned. "Of course you do. Your grandmother taught advanced high school math for almost forty years."
"Really? I guess numbers must run in my family. I don't know why, but I just assumed she was a homemaker. Maybe because she's so good at it and Joe seems so...old-fashioned, I guess."
"It's only been a few days, Jess. You and your grandparents aren't going to learn everybody's life stories overnight."
"I don't know why I didn't ask, though. Or why she wouldn't have mentioned it, since math's a big part of my job."
He took up his usual position, leaning against the kitchen counter. "I think she just wants to know about you so much she doesn't think to tell you much about herself. They're still wrapping their minds around the fact you even exist, you know."
She nodded, feeling as if there was a lump of emotion clogging her throat. "He made them sound pretty horrible, you know. And it made him so angry when I asked about them that I stopped. Maybe I should have kept asking."
"You were a kid. And why wouldn't you believe him? You were only getting one side of the story and you had no reason to doubt what he told you." He shifted his weight, crossing one ankle over the other. "I'm a little surprised you never reached out to them when you were an adult, though."
"It would have made my father unhappy."
"A lot of things make parents unhappy. They get over it."
"Do they?" She fiddled with the pencil, rolling it between her fingers before tapping it on the book. "I guess my mother didn't get over it, since she never came back."
His expression turned serious, and he inhaled deeply through his nose. "I'm sorry about that. It's a pretty shitty thing for a mother to do, but I highly doubt you were the one who made her unhappy enough to abandon being a mother."
Jessica shrugged, trying to hide how much she wanted that to be true. "Maybe not. But what I do know is that my mother took off, and she was an only child whose parents had both passed. My paternal grandparents were supposedly awful people, and stepmothers come and go. When you only have one person in your life who's family, you try not to piss him off too much."
He nodded his head, as if he could see her point. "Since we're kind of on the subject, what does unavailable mean?"
It was tempting to pretend she didn't know what he was talking about, but it was a core word in her vocabulary. I'm sorry, but my father is unavailable at the moment... "He drinks. Which is really ironic considering it's one of the things he holds against his parents. Or Joe, at least."
"So Davey's an alcoholic?"
"It sounds so weird to me, the way everybody here calls him Davey. He's always David now. Not even Dave." She paused and shoved her hands into her coat pockets. "And I honestly don't know if he's an alcoholic. He'll go a long time without drinking at all. Or he'll have a few cocktails here and there at social events. But if things get rough he...binge drinks, I guess you'd call it. He just disappears and spends days drunk. Sometimes weeks. He's unavailable right now because my most recent stepmother is about to join my previous three stepmothers in the ex-wives club."
"Ouch."
"He's not an easy man to live with." That was a bit of an understatement.
"Yet you've built your entire life around him."
There was no censure in his voice. No inflection implying she was an idiot. It was just a statement of fact, but it still made her wince inside. "I've built my life to suit me, but he is the only family I've ever had before now. We're a team."
It was a habit to defend him, she supposed. She'd done it often enough with the staff and trying to play peacemaker with his wives. But it was also the truth. Other people, including her mother, had come and gone, but she and her father had always been a team.
"Family should be a team," Rick agreed. "And I'm glad you're taking the time to get to know Joe and Marie because they're your family, too. And they're good people."
"I think so, too."
"Good. While I'm thinking of it, I'm going to check the filters on the furnace because I think it's time to change them out. It's in the cellar, though, so I shouldn't be in your way."
Jessica stood and pushed the puzzle book and pencil back to Joe's end of the table. "I'm probably going to do some laundry or something, anyway. I'm not in the mood to sit in this chair today."
"Sitting at the desk doing paperwork is the only part of the job I don't like," Rick said, shaking his head. "I don't know how people who work in offices stand it."
"Well, I don't have to climb giant ladders and risk my life in smoke and fire. So there's that."
He laughed as he walked toward the door to the cellar. "Good point."
Because the rich sound of his laughter did funny things to her nerves, Jessica gave a little wave and walked out of the kitchen. Everything in her life seemed to have changed so much and so fast with that one voice mail from Joe's doctor, so she knew she had to be careful about being vulnerable emotionally.
She needed to squash this attraction she seemed to have for Rick, and the best place to start was probably getting out of the kitchen and not staring at the cellar door, waiting for him to reappear.
Chapter Five
Jessica loved exploring the house. Every time she looked around, she seemed to notice something new. And since she was too antsy after her conversation with Rick to sit in front of her laptop, she went into the big living room.
She'd already looked at the framed family photos scattered around. There weren't many, and she got the sense Marie hadn't been much for taking pictures. The staircase wall had pictures of her dad, and she'd spent some time yesterday looking at them. There was very little of the boy growing up in the variety of frames in the man she knew. He'd been cute with no front teeth, but it was obvious he didn't like having his picture taken. And there were no photos of him at all after his senior portrait, in which he glared sullenly at the photographer in front of what looked like a department-store studio backdrop.
It was the treasures that she really enjoyed. Her father wasn't a knickknack kind of guy, and certainly wasn't sentimental about things, so she'd grown up in a very uncluttered household. But on display in Marie's curio cabinet was all manner of things. The bride and groom figurine from her grandparents' wedding cake. A clay cup her dad had made them in elementary school. A gilt-edged teacup so old the fine age cracks made the flowers look almost mosaic. According to Marie, it had belonged to Joe's grandmother and was the only piece of china left from the set that had come from Nova Scotia with her.
Today she wandered to the bookshelf and, tilting her head, scanned the spines. There were a lot of old Westerns and Agatha Christie titles, which made her smile. And on the top shelf was a framed newspaper article. She realized it was a picture of a firefighter and leaned closer.
All of the gear obscured the identity of the man helping an extremely pregnant woman onto the ladder while the black smoke billowing from the window framed them. But the caption told her it was Rick, and that the woman's water had broken halfway down the ladder and her daughter had been born in the ambulance on the way to the hospital.
"That was a helluva day."
Jessica turned at the sound of Rick's voice, caught off guard because she hadn't heard him come back up the cellar stairs. "They were both okay?"
"Yeah." He tucked his fingers into the front pockets of his jeans and shrugged. "Three stories up on a ladder with the most pregnant woman I'd ever seen was already hairy. Then her water broke and she started panicking. There was no way to throw her over my shoulder, so she just leaned back against me while I tried to get us both to the ground in one piece."
His gaze was fixed over her shoulder, probably on the framed clipping, but he had a faraway look. Jessica couldn't wrap her mind around the fact doing stuff like that was his job. "Is it always like that?"
He snorted, shaking his head. "No, thank God. We get our share of fires, but there are accidents and medical calls. Cats stuck in trees."
"Why did you become a firefighter? And don't tell me it's because you're a safety junkie. If you wanted safety, you'd probably be a teacher, like your brother."
"My teachers would be horrified at the thought." He gave her a grin that made her whole body tingle. "Guy I played hockey with sometimes was at the fire academy and there was some trash talking and, to make a long story short, I became a firefighter to prove I could. Almost like a dare. I guess I still do it because it pays good, the benefits don't suck and I really can't imagine myself doing anything else."
She wanted to ask more, but they heard the faint squeak of the back door's hinges, followed by Marie's voice. Despite being disappointed her conversation with Rick was at an end, since he'd turned and walked away, Jessica was relieved her grandparents were home. She knew it was silly to be worried about a few snowflakes, but she also knew that the older people got, the worse their reflexes were.
Following Rick into the kitchen, she listened to them all talk about the barbecue. The conversation mostly consisted of news about a lot of people Jessica didn't know, so it was tempting to grab her laptop and go upstairs. It had been several hours since she checked her email and that might have been a record for her.
But she liked the easy rhythm of their interactions. They even seemed to have their own individual spots for talking. Joe sat at the table, with his word search book open in front of him. Marie puttered around the kitchen, getting ready to make dinner. And Rick, as usual, leaned against the counter.
Because it seemed the logical thing to do, Jessica had sat in the chair she'd been using since she arrived on Wednesday. While Joe, Marie and Rick were almost a family unit, having a usual spot of her own made her feel as if she belonged, too.
But she jumped in her chair when her phone, still sitting on the table next to her laptop, rang. It was her father's ringtone, and his name flashed on the screen. The others looked at it, since it was hard to ignore the sound, and she realized Marie could see the name when her body stiffened.
"Sorry," Jessica said softly, tapping the option to send his call to voice mail, and then flipping the switch to silence her phone in case he called again. He almost certainly would.
"You can answer that, you know," Joe said. "It could be important business."
"It's nothing that can't wait." It would be too awkward to talk to him while they were in the room. "It's not like Rick's job, which is literally life-and-death."
"And cats in trees," he added, making Joe and Marie laugh.
The bubble of tension popped, and Jessica smiled. "Can't forget the cats."
"I should head upstairs," Rick said a few minutes later. "I left laundry in the washer and I hate when I forget it's in there and have to wash it again."
"You'll come down and eat supper with us, won't you?" Marie asked.
Jessica saw him hesitate as his gaze met hers. Then he looked away with a sigh. "You don't need me underfoot. And besides the laundry, I've got a list of other stuff waiting to get done."
"You took care of my bushes today. And I'm making stuffed manicotti."
He groaned. "You know I'm a sucker for any of your pasta dishes."
"Come back down in about two hours, then."
Jessica watched him go, and then snatched her phone off the table when it vibrated loudly against the wood. She should have anticipated that. After rejecting her father's second call, she cradled the phone in her hands under the table where it wouldn't be as noticeable if it went off again.
"I'm going to go watch some television, I think," Joe said, pushing himself up off his chair. He winced a little as his knees straightened, but he gave Jessica a wink. "I get nervous when I'm the only man in the kitchen."
Marie snorted. "You should be. And Jessica probably needs to go upstairs and deal with work stuff, anyway."
Yes, she needed to. But after a few seconds, she shook her head. Then she powered off her phone completely and tossed it on top of her laptop. "Actually, I've never made stuffed manicotti. If you don't mind teaching as you go along, I'd love to help."
The smile that lit up her grandmother's face made Jessica's heart ache, and she knew she'd made the right choice. She'd probably still regret it when she finally had to return to her father's phone call, but for right now, she was going to hang out in the kitchen and cook a meal with her grandmother.
After the men vacated the kitchen, they got to work. Jessica wasn't surprised Marie didn't have to pull out a recipe card or cookbook, though she did promise to write it down for Jessica after dinner if she liked it.
"This was one of your dad's favorite meals growing up," her grandmother said.
"It still is, actually. I probably would have made it before now, except when we dine together, it usually doubles as a business meeting. It's a lot easier to do that in a restaurant."
"Do you cook at home for yourself, though?"
"Sometimes, but definitely nothing like stuffed manicotti. I have an indoor grill I love and I'll toss a quick salad to go with whatever meat I grilled for dinner. I'm not very creative, I'm afraid."
"And none of your stepmothers taught you how to cook?"
Jess sighed. "Most of them haven't been very fond of me, I guess. I'm a big part of my father's life and he would defer to me a lot even for household decisions."
"Why haven't you ever married?" Marie said, popping the lid off of a tub of ricotta cheese. "If you don't mind my asking. I'm being very nosy, I guess."
"Confession time. I don't really care how to make stuffed manicotti. I just wanted to spend time getting to know you and the kitchen seems like a good place, so nosy is kind of the point. I found out from Rick that you taught math in high school and I can't believe I hadn't already asked you that."
Marie laughed. "I think I've been hogging all the questions. But I'd really rather hear about your love life than my teaching career, that's for sure."
Jess snorted and shook her head. "I wouldn't call it a love life. I date, of course. But for some reason, most of the men I've gone out with have been younger than I am, maybe because they're not beating the time-to-start-a-family drum."
"And you're not ready for that yet?"
"I think I'm getting there, but not quite yet. And besides the age issue, there's my father. The men I've dated have either wanted a chance to work with my father or they've been scared spitless of him. It's really annoying, so I haven't dated much at all lately."
"You'd be surprised how many friends I have with grandsons that would be perfect for you."
Jess side-eyed Marie, who laughed at her. "I have enough on my plate right now."
"Oh, but let me tell you about this one young man. Well, young meaning forty-five or so."
Two hours of blind-date dodging and a lot of laughter later, Jessica found out why Rick had been willing to forgo doing his own chores to come back downstairs for dinner. Even though the ones Jessica had stuffed looked a little messy, the manicotti tasted amazing and she ate until she couldn't bear to put another bite in her mouth.
Then she leaned back in her chair with a groan. "I can't keep eating like this. I swear my jeans are already getting too tight and I have a closet full of pencil skirts. Those are not forgiving."
"What's a pencil skirt?" Rick asked from across the table.
"They're long, like midcalf length, and they hug your...let's just say they're somewhat form-fitting." When he raised an eyebrow, she tried not to blush. "They're flattering, but they won't be for long if I keep having seconds of everything Marie cooks."
"You have a beautiful figure," her grandmother said, and Jessica didn't miss the slight nod of Rick's head before he quickly turned his attention back to his plate.
"Not for much longer. Our office building has a fitness center in it, so I usually work out at the end of the day. Only for half an hour or so, but I can gather my thoughts and sweat out any frustrations before heading home. And it keeps my jeans from getting too tight, I guess."
"Rick, you belong to a gym, don't you?" Marie asked. "Even though she doesn't need it as far as I can tell, you should take her to work out with you if it makes her feel better."
Jessica's imagination coughed up an image of a shirtless, sweaty Rick and the instant hot flash made her feel anything but better.
* * *
Rick had just put a big chunk of stuffed manicotti into his mouth and he took his time chewing it. He didn't need a flashing neon arrow to see what direction Marie was going with that question, but he had a suspicion the place he had a membership to and the San Diego office building "fitness center" Jessica used were on totally opposite ends of the gym spectrum.
"Sometimes," he said once he couldn't put off swallowing his food any longer. "We have some workout equipment at the station, so I don't really get to the gym very often. I probably talk about it a lot more often than I actually see the inside of it."
Marie set her fork down and took a sip of her drink before turning her laser focus on him. "There aren't any of those gyms just for women nearby, I don't think, and I don't want Jessica running around the city trying to find one. But I don't want her going to your gym with a bunch of strange men all by herself, either."
"It's honestly okay, Marie," Jessica said, but Rick knew it was a lost cause. "I don't think I'll actually outgrow my jeans before I get back to San Diego."
"But you use it to clear your mind, too, and it's probably stressful being out here while you have a business to run all the way across the country."
The obvious answer would be for Jessica to go back across the country and just run her business, but Rick wisely kept that suggestion to himself. "I can take you to the gym. You want to go tomorrow?"
"I..." She gave him a look across the table that clearly said she wasn't sure she wanted to at all, but she was getting the hint that Marie wasn't going to give up on the idea. "Okay."
"How's ten o'clock? It's not far. Down the street and around the corner, so we can walk. And there's no place to change, so wear what you want to work out in."
"Sounds good."
"That's settled, then." Looking very satisfied with herself, Marie picked up her fork again. "Since you're going to the gym tomorrow, do you want another manicotti?"
Once they were done eating and the kitchen had been cleaned up, Rick made his second escape of the day. They'd invited him to watch television with them, but he'd reminded them he had stuff he needed to get done.
It wasn't entirely true. He didn't have much of a to-do list other than the laundry and some light housekeeping, but he'd pretty much had his fill of Broussards for the day. They were exhausting, really. He'd been worried about Joe and Marie for so long, he should have been relieved to have Jess there to take up some of the slack. But he couldn't help but wonder if it had been a coincidence that they were meeting with the doctor on a Tuesday, when they knew he'd be at the station.
And then there was Jessica. He found it difficult to concentrate around her and, when he wasn't around, he found himself looking forward to seeing her again. That was a disaster in the making. If she was anybody but Joe and Marie's granddaughter, he'd probably have asked her out to dinner already. He'd get to know her and maybe, if the attraction was mutual, do something about the growing ache that intensified every time he was around her. But the last thing this situation needed was the two of them getting entangled in any kind of a relationship. He wasn't sure how Joe and Marie would feel about it but, good or bad, it would change everything.
After finishing his laundry and doing some other chores, Rick stretched out on his couch and did some channel surfing. Nothing caught his interest, so he stopped flipping on an action movie he'd seen at least a dozen times and tossed the remote onto the coffee table.
He woke up just before six the next morning with a stiff neck and groaned. A nice king-sized bed in the next room, and he slept on the damn couch. Being able to sleep anywhere, in almost any position, made his job easier, but it sure sucked when he did it at home.
Before jumping in the shower, Rick made himself a coffee and walked to the window. The snow hadn't amounted to much and it would probably melt off on its own by noon. He'd sweep off the ramp and throw some sand down before Joe and Marie left for their traditional Sunday brunch at the senior center, but he could ignore the rest.
He scrambled a few eggs and dropped a couple of slices of American cheese on top to melt while he toasted an English muffin. Then he showered and threw on a pair of sweatpants and a T-shirt that was so old the Boston Red Sox logo was almost worn away. If he was taking Jessica to the gym, he may as well get a workout in, too. It wasn't as if he could stand around and watch her, whether he wanted to or not.
At nine-thirty, he pulled on a hoodie and went down the back stairs to take care of the ramp. That didn't kill enough time, so he swept off the vehicles while he was at it. And, when the back door finally opened and Jessica stepped out, he laughed.
She stopped halfway down the ramp and scowled at him before looking down at the parka that covered her from neck to knee. "There's snow. Snow means cold, so Marie lent me one of her coats."
That was a February kind of coat, but she was a California girl, so he just grinned. "As long as you're comfortable. You ready?"
They walked in silence for a few minutes while Jessica looked around the old neighborhood. It was a nice neighborhood, if a little shabby in places, and geared toward families. There were no plazas or big box stores in sight, but there was a market on almost every corner and a lot of small shops along the main street.
"Since you have to work on Tuesday, is there anything in particular you'd like for me to ask the doctor?" she asked as they rounded the corner onto a side street.
"At this point, not really. Mostly I'd just like to know if he has any specific concerns, you know what I mean?" As far as he knew, the appointment was mostly a formality. A follow-up for Joe's fall and to replace Davey with Jessica on their paperwork. "If there's something serious going on they're not telling me about, I'd like to know."
She was quiet for a moment, and then he caught the sharp nod of her head from the corner of his eye, as if she'd made a decision. "If there's something going on, I'll make them tell you."
Rick didn't miss the phrasing. She wasn't going to tell him herself, maybe not wanting to break their confidence. It annoyed him a little that she'd be in the loop and he wouldn't be, but he also had to respect the fact she was taking this—and her newfound loyalty to her grandparents—seriously.
He stopped in front of a metal door with chipped blue paint and the gym's logo. Inside, the gym was dimly lit and smelled like sweaty socks, and a few guys were already working out. Jessica didn't hesitate. She simply looked around as he led her to a long bench that had hooks hung over it. They only had one locker room and it looked like a science project gone bad, so Rick never used it. He stripped off the hoodie and hung it on a hook before looking at her.
"You ready to sweat?" he asked.
She nodded, but didn't meet his gaze, which he found amusing. Then she unzipped the parka and shrugged it off.
Jess in gym clothes wasn't something he'd really given a lot of thought to. Now he was afraid he wouldn't be able to think of anything else for a long damn time.
No baggy sweats and shapeless T-shirt for her. She had on tight black leggings that hugged every single curve. Her calves. Her thighs. Her hips. And, when she turned slightly, her ass. Most definitely her ass. And she was wearing a similarly body-hugging tank top with a scooped neck that showed a hint of cleavage. Her arms were bare and toned, and still tanned whereas most of New England's citizens were already losing their summer color for the winter.
Whatever she did in her fancy executive fitness center, it was definitely working for her.
"I ran to the store after supper last night and picked up a few things." She scowled, looking down at the new clothes. "When I go home, I'll either have to buy and check a second suitcase or mail a box back to San Diego."
Rick could tell by the silence that he wasn't the only man in the room appreciating Jessica's choice of workout gear, but he forced himself not to turn around and glare at anybody. "As you can see, we don't have a lot of fancy stuff. Weights. There are a few speed bags and a couple of heavy bags if you want to hit things. There are two bikes over in the corner. And more weights."
"I think I'll just spin for a while."
He frowned. "Spin?"
"The bikes. I'll just ride the bike for some cardio."
"Ah, okay. I'll probably lift some weights for a while."
"Hey, Gullotti, you got any tickets for the game on you?" a guy called from across the gym.
"Not on me, but if you stop by the station, any of the guys can hook you up."
"What game?" Jess asked after the other guy had nodded his thanks.
"Charity hockey game," he explained. "Fire versus police."
"Oh, I've heard of those. They're a big deal."
Rick smiled. "Yeah. This isn't the big battle of the badges game. Just a smaller neighborhood game to raise money for Toys 4 Tots. And most of the people not only buy the tickets, but they bring toys, too. It's a tradition and the turnout's always good."
"Is it soon?"
"Next Saturday."
"Oh." She looked thoughtful for a few seconds. "Do Joe and Marie go?"
"Every year. I can get you a ticket if you think you'll still be around," he offered, even though he was pretty sure she'd head west once the doctor appointment was out of the way. She didn't need to be there for the lawyer, and Joe and Marie weren't anywhere near ready to consider a real estate agent yet. If at all.
"I haven't taken a vacation in years," she said. "I'd like to see a hockey game, I think. Especially with Joe and Marie. I'll stay that long."
Even though she sounded sure of the decision, he could see the worry in her eyes and the set of her mouth. Her old man was not going to be happy with her. "I'll save you a ticket, then. Joe and Marie will love having you there."
He wasn't so sure how he felt about it, though. Watching her walk to the exercise bikes, her ass perfectly displayed by the yoga pants or tights or whatever the hell they were called, was excruciating. And the longer she was around, the harder it would be to keep telling himself he didn't want her.
Deliberately choosing a weight station that didn't have him facing the exercise bikes, Rick wondered how long he'd have to work out in order to wear his body out to the point it didn't react to Jessica. But he knew, no matter how much he made himself sweat, he didn't have that kind of time.
Chapter Six
"Today would be a good day to drive over to Brookline and pick out a Christmas tree."
Jessica glanced over at her grandmother from the stove, where she was scraping and folding eggs in Marie's big cast iron skillet. She'd never made eggs this way—scrambled in a little of the leftover bacon grease—and eating them was going to cancel out what little good she'd accomplished at the gym yesterday, but they were going to be worth it. She hoped. When Marie made them, they were delicious, but this was Jessica's first attempt without help.
"You mean a real tree?" she asked. "Wouldn't it be easier to have an artificial one?"
"Easier, maybe, but we both like the look and smell of a real tree, so as long as we have Rick to help Joe carry one in and set it up, we'll stick with tradition." Marie pushed another four slices of bread into the toaster and smeared butter across the slices that had just popped. "I know you have to go home for the holidays, but if we get a tree now we can at least share a little Christmas spirit while you're here."
"What are you scheming now, woman?" Joe asked as he walked into the kitchen, no doubt lured in by the smell of bacon and coffee.
"Did Rick mention having any plans today?" Marie asked instead of answering the question.
Jessica turned off the burner and divvied up the scrambled eggs between the three plates on the counter as her grandparents discussed whether or not they should bother Rick and argued about how long the tree had lasted last year and if they should wait another week. She had a feeling Joe would lose on that point, since Marie's primary motivation seemed to be sharing the experience with her.
She wasn't sure she could take another day with Rick, though. It was one thing to be attracted to him and indulge in a very secret fantasy crush. But when they'd built the plywood frames for Marie's bushes, he'd looked as if he was going to kiss her. And then, at the gym, they'd both spent the entire time trying—and failing—to pretend they weren't sneaking looks at each other. If he felt the attraction as strongly as she did, separation might be the only way to resist temptation.
And it was Monday, though the only way she'd been sure when she woke up that morning was by looking at her phone. Since her grandparents were retired and Rick's work schedule was so different from the norm, she was having trouble keeping track of what day it was. While she'd done a round of email responses and market research reading before they started breakfast, she'd need to video chat with Sharon later. And she needed to check on the many end-of-year processes, especially the tax forms.
"What kind of Christmas tree do you like, Jessica?"
She realized Joe was talking to her. "I don't know. Aren't Christmas trees pretty standard things?"
He laughed. "Besides the fact there are a lot of different kinds of real trees to choose from, there are also artificial ones. Some even come in weird colors. Those don't seem like Christmas decorations to me, but young people don't always embrace traditions."
Jessica added the toast and bacon Marie had made to each of the plates of scrambled eggs and carried them to the table. "I have a small artificial tree, and it's the kind with the lights already strung on it, but it still looks traditional. My father likes the fiber optic trees, and the one we use for the office party has colors that change in time to a music playlist. It's fun, I guess, but not one I'd want in my home."
"You need to experience a real tree," Marie said. "Joe, after you finish your breakfast, call Rick and see if he's busy today."
Three hours later, Jessica was bundled into her grandmother's parka and warm boots and sitting in the backseat of Rick's pickup with Marie. She was on the passenger's side, behind Joe, so she had a perfect view of Rick's profile as he drove. He seemed relaxed as he navigated what looked to Jessica like an insane network of narrow streets, talking about sports with Joe.
Marie chattered away about the neighborhoods they passed through and Jessica was able to pay attention well enough to say the right things at the right times, but she couldn't stop herself from looking at Rick. He hadn't shaved that morning, and she was free to admire the scruffy line of his jaw. When he smiled at something Joe said, his eye crinkled at the corner.
"Oh, there's a wonderful secondhand store near here," Marie said. "It's all high-end and designer stuff, so of course it's all barely worn before they get rid of it to make room for the newest trends. A friend was telling me about all the bargains she got on school clothes for her grandchildren."
"It's always nice to save money," Jessica said, watching Rick as he took a big gulp of coffee from the travel mug he'd brought. His throat worked as he swallowed, and she had a crazy urge to run the tip of her finger down over his Adam's apple.
As the truck rolled to a stop at a red light, Rick turned and looked back at her. The questioning arc of his eyebrows and the amused tilt to his mouth told her he was definitely aware of her watching him. Blushing, she turned her head and looked out her window.
"I bet we could find you some nice winter things there," Marie continued, seemingly unaware of the silent look Jessica and Rick had just exchanged. "Some sweaters, maybe."
Jessica laughed. "I'm definitely going to have to ship boxes back to San Diego. There's a limit to how many suitcases the airline will lug across the country for me."
"Why wouldn't you just leave the winter clothes here? You have the closet and the dresser, and we can always get some of those vacuum bags to store the sweaters and things in if it's going to be a while between visits. Oh, and you can pick out new bedding, of course. It's been so long since we redid that room it's not even funny."
"It's fine," Jessica assured her, but her chest ached a little at the thought of the room not being a guest room, but being her room. And it did make sense to leave the winter clothes there because, as she smiled at her grandmother, she knew that she'd be making frequent trips between San Diego and Boston for what she hoped would be many years to come. "And you're right about leaving the winter clothes here. That way I won't be carrying them to California and back for no reason, since I definitely won't wear them there."
Left unsaid was the possibility she wouldn't have a room for long. While leaving behind the few cold-weather belongings she'd accumulated wouldn't be an issue, she wouldn't make herself too much at home while their future in the house was still uncertain. But if they moved into a smaller place, it wouldn't stop her from visiting. She'd either stay at a hotel nearby and pack her suitcase for the weather, or she'd rent a small apartment or look into a time-share or whatever she had to do.
When they arrived at what looked like a real farm—something Jessica hadn't expected to see within a short driving distance of her grandparents' neighborhood—she was touched by the care Rick showed in helping Marie climb out of his truck. Then he walked around the front end while Joe was getting out and offered his hand to her.
After a moment's hesitation, she put her palm over his and their fingers curled together as she stepped out onto the running board and then hopped down. When her feet hit the ground, though, he didn't let go right away. She met his gaze as the touch lingered, and the awareness hung between them. He didn't want to let go of her hand and she didn't want him to.
"Ten bucks says we look at every tree on the lot and end up buying one of the first three she looks at," Joe said.
After holding her gaze for a few more heartbeats, Rick released her hand and turned away. "No way in hell I'm taking that bet. This isn't my first Christmas-tree-lot rodeo with you two."
But it was Jessica's first time, so she tried to shake off the lingering effect Rick's touch had on her nerves and lose herself in the experience. Joe wasn't kidding when he told her there were all different kinds of Christmas trees. Some had tinier needles than others, and some had almost a bluish tint. There were tall, elegant trees, and round ones so full it would probably take yards and yards of garland to wrap them from top to bottom.
"Jesus, Marie, that tree would take up half the living room," she heard Joe say, and she laughed at the massive tree her grandmother was checking out.
Rick leaned close enough to speak quietly in her ear, putting his hand to the small of her back. "He says that at least twice every single year."
She could barely concentrate on the words he said with his mouth so close to her ear. And even through her thick coat, she could feel the weight of his hand. "They're a funny couple. I can't imagine being married to somebody as long as they've been married."
"At the rate I'm going, I'd have to live to be a hundred and thirty or so."
She laughed, then started walking as her grandparents moved on to another stand of trees. Rick moved with her, his hand still on her back. "I think you must have some fundamental flaw I haven't seen yet in order to still be single. Don't most women think firefighters are sexy?"
"I think they find the idea of firefighters sexy, but the reality can be tough. Long shifts. Sometimes the hours are erratic. There's a lot of worrying and waiting when you're married to a firefighter. And all of that's before you start factoring in the emotional toll the job can take on the guys. Sleep problems. PTSD. Alcoholism. Anger management issues. The long-term toll on our health."
"You're right. That's not sexy." She tilted her head back so he could see her smile. "You should tell people you're a financial advisor. No excitement there. We don't have sexy T-shirts, though."
"I have a newfound appreciation for the sexiness of financial advisors, actually."
She turned her attention back to her grandparents when a hot blush spread across her cheeks. There was nothing subtle about his flirting now, or the hand on her back, and she was suddenly anxious. With five more days until the charity hockey game, there would definitely be enough time for them to get into trouble before she went back to San Diego. And she wanted it—wanted him—but there was no denying it would be a short-lived fling that could have long-term repercussions.
Watching Joe and Marie squabble over a tree she seemed to think was too scrawny, Jessica reminded herself she was here to help them conduct business. And their interests might not align with Rick's. Being Joe and Marie's granddaughter was already a huge conflict of interest. This thing that may or may not be happening with Rick would be even worse.
"What kind of Christmas trees do you like?" she asked, feeling a need to change the subject.
"I like them all, from scraggly little Charlie Brown trees to big old city square trees. My very favorite real trees are ones that are well-watered and have yearly inspected light strings on them. And no extension cords."
She laughed. "You're right. You're totally a safety junkie."
"I can't help it. And as for my own personal tree, I rarely get one. I usually work Christmas Day, and I spend Christmas Eve with my family in Fall River or with Joe and Marie. I have a ceramic one from possibly the 1970s that plugs in and has little light-up bulbs that Marie gave me, and that's enough."
"Rick, what do you think of this one?" Marie called, and he let his hand fall away from her back. "Joe says it's too fat."
He shrugged. "It's a little...round, to be honest. I think it'll either block part of the television or it'll block any light coming through the window, depending on where you put it."
Marie sighed. "Jessica, what do you think? You should pick one you like."
Jessica was new to family politics, since she'd only really had to worry about keeping one person happy for most of her life, but she figured the best way out of this was to suggest a tree neither Joe nor Marie had already presented a case for. "There's one we just passed that seems like a perfect height and it's not too round. And it has that bluish look that's really pretty."
Fifteen minutes later, the tree had been run through a machine that wrapped it in netting. After Rick put it in the bed of his truck, Jessica closed his tailgate while Joe helped Marie into the passenger seat.
"Nicely done, by the way," Rick said in a quiet voice. "That's the quickest I've ever seen them agree on a tree."
"If I ever join a dating site, I'm going to make that a test question. Do you like tall and skinny Christmas trees or round, fat ones?"
"A smart man will respond with whichever tree my wife falls in love with."
She wasn't sure about smart, but she was pretty sure that was the answer a guy like Rick would give. She hadn't known him long, but she'd known him long enough to know that he'd choose whatever put a smile on his family's faces.
An ache spread through her body, and she gave him a quick, tight smile before going around him to get in the truck.
Whoever finally won Rick's heart was going to be one very lucky woman. It kind of sucked that, thanks to their circumstances, she didn't have a shot at it.
* * *
Rick wasn't surprised when Marie tried to veto the traditional practice of letting the tree sit for a while to let it settle. She was all in on Christmas while she had Jessica there.
"You have to give it twenty-four hours to let the branches fall," Joe told her. "You know that."
"The sooner we get it all decorated, the longer Jessica has to enjoy it."
"I'll be here at least through the weekend," Jess said. "And I'll probably take a lot of pictures and make them my screensaver at work so I'll feel festive and think of you every time I'm at my computer."
"I suppose you're right." Marie sighed and looked at the tree, still in the netting and leaned against the wall.
Rick smiled from his spot on the couch. It was fun watching Jess figure out how to make both of her grandparents happy without actually taking one's side over the other. It was a skill she was developing pretty quickly considering she'd only known them for a week.
He had to believe it was because she felt genuine affection for them. Maybe she'd come to Boston with the intention of pushing them into selling the house and getting her hands on a slice of that valuable real estate pie, but he couldn't be sure. What he could be sure of was the fact she was seriously bonding with her grandparents and their genuine well-being would be her primary consideration going forward.
"We can at least get the decorations out," Marie said.
"I'll go drag the boxes out of the garage," Rick said, thankful to have something to do. "We can put it in the stand and give it some water, plus we can inspect the lights. Jess, you want to give me a hand?"
She looked startled for a second, but recovered quickly. "Sure."
"The storage loft in the garage doesn't have a light because I need to replace the bulb socket and keep forgetting. I just need you to hold a flashlight for me."
He wasn't sure why he'd made the request. There were windows at either end of the loft and enough light filtered in so he'd managed to drag the Halloween decorations out and put them away without a flashlight. It was just like the plywood frames for the bushes all over again. He didn't need her help. He was just using it as an excuse to spend time alone with her.
It would be awkward to back out now, so he waited while she shoved her feet into sneakers and put a sweatshirt on for the short walk to the garage. After unlocking the side door, he flipped on the main lights and led her to a narrow wooden staircase at the back of the garage. The loft was basically just plywood over ceiling joists, but it was dry and a lot easier to access than the attic space in the house.
"Wow, there's a lot of stuff in here," Jessica said as the flashlight beam danced around the boxes, plastic totes and miscellaneous junk stacked high.
"Yeah, but it's more organized than it looks. Stuff that probably should have been thrown out fifteen years ago is in the back, under the eaves. The closer you get to the center, the more recently the stuff's been used. The front's mostly decorations for the various holidays."
Suddenly Jess squealed in a choked, horrified way that made it sound as if she was being strangled, and Rick turned, hoping she hadn't cut herself on anything. Non-adventurous, indoor sort of people tended to go a long time between tetanus shots.
But when he saw her face, he couldn't hold back the laughter. She must have strayed too far into the corner because she had a mess of cobwebs in her hair and across her face and chest.
"Where's the spider?" she asked in a small voice.
"What spider?"
She narrowed her eyes at him. "What do you think made the cobwebs? A stray cat?"
"They're probably old. Like a spider starter home, and it's already moved on to something bigger and better." He set down the box and moved toward her. "Are you afraid of spiders?"
"Not usually, but I'd rather not have one on my face. And I hate cobwebs."
"Stay still. I'll get them off." He wiped the sticky strands off her face first, his fingertips skimming over her soft skin. Her lips parted slightly when he touched her, and he found himself staring at her mouth.
She shivered slightly, but he couldn't be sure if it was from his touch or if she was still freaked out about the spiderwebs, so he quickly picked the few off her sweatshirt and wiped them on the corner of a box.
There were more in her hair, and those weren't as easy to get out. Most of them he was able to lift off with his fingers, but a few he had to kind of scrape out by running strands of her hair between his thumbnail and the knuckle of his index finger.
After brushing the last of the cobwebs off on the box, he ran his fingers through her hair to make sure he'd gotten it all. Her hair was as silky as it looked, and he loved watching it slide through his fingers in the dim light of the loft. And what the hell. She hated cobwebs, so he did it again, just to make extra sure.
"I think you're just playing with my hair now," she teased as the last strands slipped free.
"Maybe." He grinned. "You never know, though. Spiders can be sneaky bastards."
"I wasn't complaining."
Her voice was soft and he heard the invitation in it. This time when he buried his fingers in her hair, he didn't slide them free again. He pulled her closer and watched her lips part.
And when she lifted onto her toes, her head tilted back in obvious invitation, he lowered his mouth to hers. Her hands ran up his arms and over his shoulders, pulling him close, while he gripped her hip with one hand and kept the other entwined in her hair.
He dipped his tongue between her lips and she opened to him with a sigh that made his entire body tighten in response. Kissing her was everything he'd imagined it to be, and he'd spent a lot of time thinking about it. Deepening the kiss, he felt a rush of satisfaction when she moaned against his mouth.
Then her fingernails grazed the nape of his neck and for a few seconds all he could think about was getting her naked and kissing every inch of her body. But on the heels of that thought came the awareness that they were in a filthy storage loft and he was not supposed to be putting his hands on her. Or his mouth.
Reluctantly, he broke off the kiss and took a step back. She opened her eyes, her gaze soft with desire. Hooking her bottom lip with her teeth, she looked at him as though she'd also forgotten they were surrounded by boxes, dust and cobwebs.
"Shit." He ran his hand over his hair and blew out a breath. "I shouldn't have done that. We should forget it happened."
"I don't think I'll be forgetting that anytime soon."
She said the words in a light tone, but he could see the confusion in her eyes. "I'll take that as a compliment, but we both know this is a bad idea."
Jessica tilted her chin up, looking him in the eye. "Is it?"
"Don't you think so?" His mind coughed up the whole list of reasons why he should keep his hands off of this particular woman...but maybe he was wrong.
"I guess you're right. It's not like you and I could go anywhere and I wouldn't want Joe and Marie to feel like they're caught in the middle."
Well, damn. That had definitely been on his list of reasons not to kiss her and the fact she agreed meant he probably wasn't wrong. "Yeah. So we'll forget this happened and try not to do it again."
"Like I said, I'm pretty sure I won't forget it, but I agree we should try not to do it again."
At least she looked as disappointed as he felt. "Let's get these Christmas boxes inside so Marie can start the festivities. And try to stay out of the cobwebs, okay?"
"Ha-ha. You're a funny guy, Rick Gullotti." She gestured at the rows of boxes. "Hurry up before the spiders figure out we're wrecking the joint."
Chapter Seven
They met with Joe and Marie's doctor on Tuesday, in a very cramped office that made Jessica feel slightly claustrophobic. She was just glad Rick had to work because she wasn't sure the room would have held them all. She was also afraid she wouldn't be able to concentrate with him there after that kiss, but mostly it was the small room.
Or maybe it wasn't actual claustrophobia, but the magnitude of being there, trying to help these two people she barely knew but had almost instantly fallen for figure out what they were going to do with their lives.
Managing investments for her clients was a high-pressure job. Not even for a second did she ever forget there were families depending on that hard-earned money and she took that responsibility very seriously. But she'd never had an emotional connection to a client before, and it was making her stomach feel queasy.
Once introductions had been made, the doctor didn't waste any time getting down to the business at hand. "It's time to consider downsizing. From what I've heard, that house is going to be too much for you pretty soon."
Jessica listened to the doctor, but she was watching Joe and Marie out of the corner of her eye. They were sitting in chairs directly across from the doctor, but she'd been given a chair wedged in slightly ahead and to the side of them, so she could see them both.
And they didn't look very happy. Joe looked as if he might get stubborn about it, but her grandmother's mouth trembled until she pressed her lips together.
"Maybe if somebody lived with you, it would be different," the doctor continued. "But you've had a stroke, Joe, and you fell the other day. And Marie, with your blood pressure and that arthritis flaring up more and more often, do you want to be trying to keep up with that house? Going up and down those stairs?"
"Rick lives with them," Jessica pointed out.
"Rick being the third-floor tenant?" When she nodded, he started tapping his pen lightly on his legal pad. "Does Rick clean the house? Is he on hand to run up and down the stairs for them at any hour? Can he hear if one of them calls for help?"
"Maybe we could get those necklaces with the buttons you push in an emergency."
"I'm not wearing any damn necklace," Joe muttered.
"And Rick's a firefighter," Jessica continued, figuring that was a fight for another time. "He knows CPR and, well, whatever else firefighters have to learn when it comes to first aid, which is probably a lot."
"A firefighter." The doctor nodded. "So he works long shifts, then. They moved to a 24-hour shift recently, didn't they?"
"Well yes, but..." She let the sentence trail away, not sure what she should say. Or if she should say anything. Her intention had been to determine what was in Joe and Marie's best interests, not to help them further entrench themselves in a house they possibly shouldn't be in anymore. She thought she'd be able to stay impartial, but maybe she couldn't.
"What about you, Miss Broussard?"
"Call me Jessica, please. And what about me?"
"She lives in San Diego," Marie said. "She's just visiting so we can get the paperwork straightened out. We want you to meet her and know you can contact her personally in the future, instead of our son."
Jessica belatedly realized the doctor had been asking her if she would be able to take care of her grandparents, and she was thankful Marie had jumped in. She wouldn't want to say no directly, but it really wasn't feasible. Her life was in California. She had a home and friends and a business there.
And her father, she thought, wincing at the fact he'd almost been an afterthought.
"Look, this isn't a decision you have to make today." The doctor finally stopped tapping his pen and leaned back in his chair. "It's a dialogue I like to have with my patients before it becomes an urgent issue. It gives you a little time to work through it in your minds. Maybe imagine yourself in a cute little assisted-living condo in a social community. No worrying about mowing the lawn or shoveling snow or doing repairs on anything. No stairs."
They didn't have to worry about most of that, Jessica thought, since Rick took care of everything for them. But no matter how much they liked him or he liked them, her grandparents shouldn't be dependent on their tenant when it came to whether or not they were capable of staying in their house. As she'd told him, he might fall in love and go off to have a family of his own. And he was a firefighter. Something could happen to him on any given day. If he got hurt on the job and was laid up, she had no doubt Joe was the kind of guy who'd try to make do on his own rather than ask for anybody else's help.
"I think it would make more financial sense, too," she heard herself saying, and everybody turned to look at her. "The upkeep and utilities for that house must be astronomical, especially the heating costs. Between the value of your property and the savings you'd see, you'd probably be very comfortable."
"I'm comfortable right where I am." Joe crossed his arms, glaring at the framed medical certificates over the doctor's head.
Marie sighed. "He said we don't have to make the decision right now, but it's something we'll have to think about."
"You're both in pretty good health, all things considered," the doctor said. "But wearing yourself out taking care of a house that's substantially larger than the two of you need could change that."
Once that conversation hit a dead end, Marie and Jessica were sent out to the reception area so the doctor could take a look at Joe and make sure there were no lingering concerns from the fall he'd taken. Then they did the paperwork to remove David Broussard from their forms and make Rick and Jessica their emergency contacts, with her listed as the next of kin.
As they walked out of the office together, the mood seemed a little grim, and Jessica wasn't sure what to say. She wasn't sure there was anything she could say. She'd come here with the intention of treating Joe and Marie like clients. After explaining their most beneficial option was to sell the house and move into something more sensible in both a management and financial sense, she would help them implement the plan and then keep her thumb on the process long-distance from San Diego.
But they weren't clients. They were her grandparents and they didn't want her there to make spreadsheets and bar graphs. All Joe and Marie wanted was to get to know her. They wanted a granddaughter and if she pushed too hard, she'd disappoint them.
With a heavy sigh, Jessica climbed into the backseat of Marie's car. Judging by the increasingly volatile voice mails from her father, she was already letting him down. The last thing she wanted to do was disappoint her grandparents, too.
Looking out the window, she watched the neighborhood go by as they drove back to the house in silence. Maybe what she needed was another sweaty, stress-busting workout with Rick. But thinking about Rick made her think about kissing him.
Shit.
Not exactly the first word a woman wanted to hear after a kiss that had raised the bar so high she wasn't sure she'd ever be kissed like that again. But he said he shouldn't have done it and they should forget it.
She was trying like hell, but that just wasn't going to happen.
* * *
"Goddamn this fucking job." Chris Eriksson whipped his helmet so hard it bounced off the brick wall of a delicatessen and rolled back to him.
Rick sat against the front bumper of L-37 with his elbows propped on his knees and his head hung low. What an incredibly shitty way to kick off the tour. "We saved a kid, Chris."
"Maybe. Maybe we saved the kid. And if he wakes up, they get to tell him his mom and his baby sister are dead because the douche bag driving was shooting up heroin at ten-thirty in the morning."
Rick tried to think of something to say. It was part of his job to keep the guys' heads right, but he had nothing. The toddler wearing a pink nightgown under an obviously handed-down blue coat hadn't been strapped into her car seat correctly and she'd been DOA. The mother probably wouldn't be declared until the emergency room, but she wasn't going to make it.
They'd focused on the boy, extricating him from the mangled wreck sitting in the middle of a major intersection while the EMTs administered Narcan to the driver. That bastard was going to make it, while the boy was touch and go, and emotion swelled in Rick's throat. Their job was to save everybody they could, and they did, and let the justice system deal with the aftermath. But sometimes it was a hard pill to swallow.
When Chris sat against the truck's tire and dropped his head into his hands, Rick didn't miss the way Aidan and Scott closed ranks in front of him. They looked like they were shooting the breeze, big bunker coats dangling from their hands, but the news cameras across the street wouldn't be able to capture the firefighter sitting on the ground.
Danny Walsh joined Rick on the bumper. "Victims have cleared the scene. They're bringing in the ramp trucks now."
Rick nodded. The guy who'd been driving the sanitation truck was an acquaintance of his, and he knew an ambulance had taken him, too. He hadn't been hurt when he T-boned the car and drove it into a pole before he could stop, but he'd been so shaken up there was nothing else they could do with him. And they'd have to run blood tests in the hospital for the paperwork.
They watched Jeff Porter walk over to Chris and hold a bottled water out to him. After pulling his T-shirt up to scrub at his face, the other man took it and unscrewed the cap. Jeff put his hand on his shoulder for a moment and then picked up the tossed helmet to set on the truck.
Chris and Jeff both had kids, and scenes like this one tended to hit them particularly hard. He knew they'd eventually turn their minds to the boy who would hopefully survive, but right now all they saw was the little girl who hadn't.
Danny pulled out his phone and looked at the screen for a few seconds before cursing and shoving it back in its holster. "You're not going to believe this."
"You won the lottery and we're all retiring on your dime."
"Jesus, that sounds nice. Cobb just got another phone call about the damn decorations. We're the only house that doesn't have our decorations up yet and there's been some question about our community spirit."
Rick snorted. The freaking Christmas decorations were turning into a major pain in the ass. Usually hanging them was no big deal, or even an enjoyable way to break the monotony. But when they'd changed up the way tours were scheduled, everything got messed up, and it had become a lot easier to leave odd jobs for one of the other crews to take care of. Throw in a busy early winter and the decorations were still sitting in boxes in the storage room.
"The guys are pretty emotional right now. I don't know if hanging Christmas decorations would help cheer them up or make them focus even more on losing a kid today," Rick said.
Chris was on his feet now, talking to Jeff, Aidan and Scott. He'd seen Gavin and Grant talking to a couple of the cops earlier, though he couldn't see them now. Those two were younger and tended to bounce back emotionally a little faster so maybe he could pawn the job off on them.
"Cobb wants it done today," Danny said. "Said he doesn't give a shit if the entire city burns down. He hates talking on the phone and doesn't want another complaint call."
"We'll see how they are once we're back at the house. Routine helps. And food doesn't hurt. I think they'll be okay by afternoon."
"Even if it's Christmas related, busywork's better than sitting around dwelling on it, too."
"I agree," Rick said, standing and stretching his back. "We'll be here a while yet, but let's get them moving."
Routine did help and as the guys moved around the accident scene, cleaning up and repacking the trucks, Rick felt his anger at the situation seeping away. Maybe hanging Christmas decorations wouldn't be a bad way to spend the afternoon. It was fairly mindless work, but it was also hard not to feel festive when light strings were involved. And people in the community would come to watch and share stories, which served as a nice distraction.
As he worked, Rick found himself thinking about the upcoming holiday. He'd work Christmas Day, since it was his usual shift. But even when shifts rotated, he often worked the holidays, covering for guys with kids whenever he could. If possible, he went to his parents' house for Christmas Eve to celebrate with them and his brother's family. If not, he visited with Joe and Marie.
He'd bet money Jess's plans for Christmas Day were already weighing heavily on Marie's mind. They'd been alone in their house for decades, making do with celebrating with friends. In the past they'd traveled to his sister's house or one of her brothers' to visit with their nieces and nephews until spending hours in the car at their age sucked the festivity out of the occasion. But now they had a granddaughter and it didn't sound as if Davey was a really festive kind of guy. Would Jessica stay that long? Or come back to the East Coast to spend Christmas with them?
According to Joe, Marie had wanted to get the Christmas tree just to share the experience with Jessica, but they weren't going to put any pressure on her to spend the holidays with them. Because Davey was such an ass, they didn't want to make her feel in any way as if she was in some kind of family tug-of-war game.
Maybe he should get her a present just in case she was around. A nice wool scarf, maybe. It was a good gift for a friend of the family and God knew she needed cold-weather stuff. But kissing her complicated things. Would she see the gift as a token because of their mutual relationship with Joe and Marie, or would she take it as a sign of something more from the man who'd kissed her?
That was a question for another day. Rick climbed up in the cab of L-37 to back it up a few feet, out of the way of a ramp truck, and tried to put Jessica out of his mind. Things were definitely changing around him, but he had a job to do.
* * *
Once they reached the house, Marie decided she wanted to go to the craft store, which Joe wanted no part of. "Bad enough I had to listen to a damn lecture from that old quack. I'm not watching you stand around and yap with your friends in the craft store."
"I'll go with you," Jessica volunteered, the words leaving her mouth before she really gave any thought to them. She should work, even though she was kind of on vacation.
Just thinking about the phone call from her father that morning made her shudder. Sharon must have finally told him Jessica was using vacation time to extend her stay long enough to go to the charity hockey game, and he'd been not only very angry, but also still intoxicated.
"You know one of us needs to be on top of things," he'd said when a simple demand she return hadn't gotten the result he wanted. "The clients depend on it."
She'd taken a deep breath and then closed her eyes. "Then maybe you should have a pot of coffee and take a shower and go in to the office."
He'd hung up on her, which he rarely did, and hadn't called her back right away as he had in the past. She'd sat with her phone for almost fifteen minutes, battling an urge to call him back and tell him she'd fly out as soon as the medical meeting was over. Then she'd turned it off and turned her focus to getting ready to meet Joe's doctor.
It was the first time she'd ever called her father on being unavailable or not done what he needed her to do, but she was tired of it. She wanted to go to a stupid hockey game with her grandparents.
Marie drove, which was hard on Jessica's nerves. She had to admit, though, she wasn't sure her doing the driving would have been any easier. At least Marie knew where they were going and found a parking space without too much trouble.
"We should see if the boys are around."
Jessica was confused. "Boys? What boys?"
When Marie stopped walking and waved her hand toward the other side of the street, Jessica realized she meant men. And one man in particular.
They were across from a tall brick building that looked pretty old, and had two big openings on the first floor. The huge garage doors were up and Jessica could see the fire trucks, each parked inside with a plaque screwed to the arched brickwork. Engine 59. Ladder 37.
Great. She needed more Rick Gullotti in her life, since tossing and turning and trying not to think about kissing him wasn't torture enough. Watching his muscles flex and hearing the soft grunting sounds he made in the gym had been torture, and she hadn't been able to ride the bike hard enough to sweat the desire out. And that was before he'd kissed her. The feelings that seeing him triggered in her body were escalating from want to need at an alarming rate.
"I thought you wanted to go to the craft store," Jessica said.
But then a man in a Boston Fire T-shirt walked around the front of Engine 59 and happened to glance across the street in their direction. After a few seconds, he smiled and lifted a hand in greeting. "Hey, Mrs. Broussard! How you doing?"
Jessica surrendered to the inevitable and followed Marie across the street, waiting patiently while her grandmother accepted a kiss on the cheek from the firefighter. He looked younger than Rick, and had short dark hair and brown eyes.
"Jessica, this Scott Kincaid. And this is my granddaughter, Jessica."
She shook his hand. "It's nice to meet you."
"You, too. I've heard a little about you." Rick had talked about her? Jessica clamped her mouth shut, not wanting to embarrass herself by asking what he'd said. "Let me grab Rick for you."
She expected him to walk away, but he took out his cell phone and dashed off a text. A few seconds later, his phone chimed and he read the reply. "He'll be down in a few minutes. How do you like Boston so far, Jessica?"
"It's definitely different from San Diego. And colder."
He laughed and waved them inside the big bay. "It's not even cold yet, so we've got the doors open. We love fresh air and we'll keep the door at the top of the stairs closed if we've got heat on upstairs, but we keep these open as much as possible."
Jessica looked at the fire truck practically gleaming wherever the sun or the overhead lights touched it. They obviously took good care of them. "They're so huge. How do you drive them around this city without hitting anything?"
"We pretty much always have the right of way, and let's just say we usually drive them around the city without hitting anything."
He was walking as he talked, and she followed him around the truck, enjoying the close-up view. "Are those pictures on the internet of fire hoses run through broken windows on cars real?"
"They're real." He shrugged. "We try not to damage things because the paperwork sucks, but we're also not going to let somebody get hurt or lose half a city block because some jerk parked in front of a hydrant."
"Guess they don't do it again."
"You'd be surprised. And you should ask Gullotti about the time a call came in that there were kids trapped on the top floor of a burning three-decker. When he got there, some rookie cop trying to help secure the area had parked his cruiser in the way and wasn't right there to move it."
Jessica gave him a look of disbelief. "Don't tell me he pushed a police car out of the way."
"He pretty much wrecked the hell out of that cruiser. But that's the LT. Until he gets the ladder up, we can't save anybody, so he doesn't mess around."
"LT?"
"Lieutenant. For Ladder 37, anyway. I'm with Engine 59, so Danny Walsh gets to boss me around, but we always roll together. One pumper engine and one ladder truck."
"What about the lieutenant?"
Jessica whirled at the sound of Rick's voice, hoping the rush of heat she felt didn't show on her face. He was wearing the same navy Boston Fire T-shirt as Scott, tucked into blue uniform pants. The shirt was snug and her gaze traveled over delicious biceps before jerking back to his face.
"I was telling her what a pain in the ass you are," Scott said. "And how bossy you are."
"Yeah. Speaking of being bossy, everybody's eating. Go grab something because then we're dragging the boxes out and, unless we get called out, nobody's leaving until the decorating's done."
Scott rolled his eyes. "Good to see you, Mrs. Broussard. And it was nice to meet you, Jessica. A bunch of us are going to play pool tomorrow night at my old man's bar. You should have Gullotti bring you."
"I don't know, but thanks for the invite." Jessica watched him disappear into a back corner, where she assumed there was a set of stairs, and then turned back to Rick. "I asked him how you guys drive these massive trucks around and he told me to ask you about the police car."
"That wasn't my fault," Rick said sternly, but the corners of his mouth quirked up. "They buried me in paperwork, let me tell you."
"I'm going to go to the craft store while you two chat," Marie said. "Give her a bit of a tour and I'll be back in a few minutes."
She was gone before Jessica could protest, moving fast for a woman who was supposed to be in declining health. Not sure what to do, she turned back to Rick. "You don't have to give me a tour. You should go eat and I'll catch up with Marie."
"Already ate. And, trust me, you don't want to be at the craft store with her. The owner's one of her best friends and they can literally talk for hours. I took her one time, when she was on a medication that banned her from driving, and I actually fell asleep in a wooden chair with my head on a pile of quilt squares on the table."
"I don't have a lot of patience for crafts, so that sounds like a nightmare. What are you decorating?"
"For Christmas. We put up lights, including some cool big ones around the bay doors. There's a big wreath we always hang above the plaques. Electric candles in the windows that face the street. That sort of thing. And each of the trucks has a wreath for the front grill."
He showed her around the trucks and all the gear, which was more interesting than she would have thought. Or maybe she just liked the sound of his voice. Then he brought her to the second floor, where there were offices and a room where the officers slept. They skipped over most of it, though he described the rooms as they passed them.
He sent and received a text, and then led her up another flight of stairs to what she assumed was the top floor. "They know you're coming."
She heard the noise right away. Men's voices—a lot of them—and laughter rang through the third floor, making her smile. "It looks like an apartment."
"It essentially is. In the back we have a shower room and some workout equipment, along with a couple of bathrooms. This, as you can see, is the living room, and the bunk room's through that door."
There were a couple of long couches, as well as comfortable and battered-looking chairs set wherever they fit. All of them faced a large television screen, which was currently off. She was surprised by how neat everything was and said so.
He laughed. "There are a lot of guys sharing this space. Not only the ones you can hear in the kitchen, but the other shifts that are here when we aren't. Just one person not cleaning up after himself can be a problem."
When she followed him into a huge kitchen and dining space, her eyes widened. He wasn't kidding about a lot of guys sharing the space. In chairs, leaning against counters, rummaging in the fridge. The room was full of men and Rick went quickly through their names, probably not expecting her to remember any of them.
Scott Kincaid, she'd met. And Danny Walsh, Aidan Hunt and Grant Cutter were assigned to Engine 59 with him. And with Rick were Jeff Porter—who was even bigger than Rick—Gavin Boudreau, and Chris Eriksson, who she thought was older than Rick if she judged by the gray in his beard. There was also an older man he only called Chief, who'd just bitten into a thick sandwich and waved to her from the head of the table.
Jessica smiled and gave a general wave in everybody's direction, and then followed Rick back downstairs with a sigh of relief. She didn't like being the center of attention and they'd definitely turned all eyes on her when she walked into the kitchen. Whether it was idle curiosity or whether they were trying to figure out what—if anything—she was to Rick, she didn't know, but she'd felt awkward.
"Do you think you'll get all the decorations up before you get called out?" she asked when they were back on the ground floor.
He shrugged and leaned against the side of his ladder truck. "I hope so, just so we can check it off the list. They're supposed to be up by now, but when the alarm's struck, we've gotta go. Tuesdays don't usually get too wild and crazy, but you never know."
"Thanks for the tour," she said, suddenly feeling shy. It was stupid because it wasn't as if this had been a date or anything, but she still wasn't sure how to say goodbye. "I guess I'll go find Marie and let her introduce me to her friend. I'm probably quite the gossip fodder this week."
"But the good kind," he said, not bothering to deny it. "I grabbed you a ticket to the hockey game, by the way. We can all ride over together."
"I'm looking forward to it."
"Do you want to go to Kincaid's tomorrow right? I heard Scotty ask you."
"I don't know." Did he mean as a date? Or was he only following up on an invitation Scott had technically extended to her. "I've never played pool."
"We can teach you." She was going to respond to that when an alarm sounded and Rick's entire body language changed in the blink of an eye. "Gotta go."
"I'll get out of the way."
"When you go out, stay on this side of the street." He talked to her as he stepped into tall boots with some kind of pants scrunched into them. Once his feet were in, he pulled up the pants and looped suspenders over his shoulders. "We swing wide coming out and we've been known to hop the opposite curb a time or two."
He winked at her, and then turned to grab a heavy-looking coat with reflective stripes on it and a helmet. She yelled a goodbye as guys flooded into the bay and then stepped out onto the sidewalk, under a flashing red light over the bay doors. She noticed the other pedestrians stopped, none of them passing in front of the fire station, and a car down the street stopped.
Faster than she would have thought possible, a siren wailed and Engine 59's nose appeared. It pulled out into a right turn, not quite going up on the curb, but she saw the entire stretch was marked for no parking. The men waved to her as they went by, and so did the guys from Ladder 37 when it pulled out. Rick was in the shotgun seat and he gave her a grin along with the wave.
She wasn't sure if it was the excitement of the lights and sirens or Rick's grin that made her heart pound in her chest, but Jessica watched the two trucks until they turned out of sight and then went to find her grandmother.
Chapter Eight
Jessica was propped against her pillow with her tablet, reading an article about a possible product recall and considering what impact it could have on stock prices, when her cell phone vibrated. It was sitting on the bed next to her, and she frowned when she saw her father's name on the screen.
Apparently he was going to be the first to flinch in their game of telephone chicken. She was surprised he'd made it all the way to Wednesday afternoon without calling with another attempt to bend her to his will. Or to tell her he'd emptied her office and all of her stuff was in cardboard boxes on the sidewalk. She took a deep breath to calm her nerves, swiped to answer and said hello.
"Jessica, it's your father."
Which she knew since her phone—the same model as his—had told her so. And she realized at that moment he never referred to himself as Dad. Hey, honey, it's Dad. Maybe that was why she rarely thought of him that way. He was always her father in her mind. But she called him Dad when she spoke to him directly because calling him Father would sound cold and awkward, especially in front of others. Never, even as a little girl, had she ever called him Daddy, though. "Hi, Dad."
"I'm in the office today and I don't know if I should be pleased or insulted by how well things have been handled while you're away."
She wasn't sure if it was a compliment or a prelude to letting her know he didn't need her anymore, but at least he sounded sober. He must be, since he never drank at the office. "You already know Sharon's amazing and, like I told you before, my laptop and phone don't care where I am when I use them."
"How are things in Boston?"
Did he mean her? His parents? The weather? "Fine. We met with Joe and Marie's doctor yesterday and they're in generally good health, though he thinks they should strongly consider downsizing now, rather than later when one of them has a crisis."
"Are you putting the house on the market?"
She laughed. "I can't, since it's not my house. And they're not sold on the idea yet. They're understandably reluctant to leave their home, and they have Rick upstairs."
"So there's nothing you can do there right now, then."
"I'm going to a hockey game Saturday," she said, knowing that was not at all what he'd meant.
"A hockey game." He was quiet for a few seconds, and she pictured him staring out the window as he considered what to say next. "Our holiday party is a week from Saturday."
"I know. I've been working on it with Alicia, who has been assisting me with it for the last five years, so it's all under control. And I'll be home for that. I'm not sure what day yet, but you know I wouldn't miss it."
"I wasn't sure at all. I don't know if they've turned you against me."
Jessica sighed and wiggled down the mattress until she could lie on her pillow and stare at the ceiling. "They're not like that, Dad. They're really nice and they haven't said a bad word about you. They know you're my father and they wouldn't put me in the middle like that. They just want to get to know me."
"You shouldn't get emotionally involved with them. It muddies the business."
"They're my grandparents." Not that family seemed to mean much to him. "You know, I've seen their wedding photos and some pictures when Marie was about my age. I'm like a clone of her."
"Trust me, I know. Let me know as soon as you know which day you're flying home, okay?"
"Okay." She wanted to dig at him over the phone. To push him into revealing some emotional response to her looking so much like his mother. But he was at work, he was sober, and he was giving her some space, no matter how reluctantly. She didn't want to rock that boat. "And I'm going to forward you a few articles to read, too. We might need to strategize about a few accounts when I get back to San Diego."
"I'm looking forward to it," he said, sounding a little more chipper. "That's what we do best. We're a good team."
She smiled when she hung up, glad he was choosing to be reasonable in the face of not getting his way. He seemed content to wait until she came home for the holiday party, but she knew he'd play hardball if she tried to leave again. And that was a problem.
Every Christmas Eve, she and her father—along with his wife if he had one that year—had dinner together and exchanged gifts. He spent Christmas Day at his country club, where a charity golf game had become a tradition with the less than jolly set. She spent the day being lazy in her pajamas and watching any movies she could find that didn't revolve around Christmas and families with moms and dads.
This year she found herself wondering what it would be like to come down the stairs and have breakfast with her family and then open presents under the tree. Or maybe they were the kind of people who opened presents first and then ate breakfast.
She couldn't be here for Christmas this year. She knew that, even though it was tempting to imagine getting on a flight back to Boston once the party was over so she could spend the holidays with her grandparents. But her father sounded as if he was making an effort to accept that she was going to have a relationship with his parents, and he was also going to be alone for the first time in four or five years, since he was divorcing. Maybe next Christmas she could get away without too much guilt.
A knock startled her and she looked over to see Rick standing in her open doorway. He leaned against the jamb and crossed his arms, giving her a crooked grin. "Working hard?"
The fact he was filling her bedroom doorway while she was flat on her back on her bed threatened to put all kinds of ideas in her head, but she just smiled. "It's a strategic brainstorming session."
That made him chuckle. "Sure it is. Marie sent me up here to change the lightbulb in your bathroom, as long as it won't disturb you."
"Oh, I changed it earlier. I found some bulbs in the pantry and grabbed one." She sat up and swung her legs over the edge of the bed. "While you're here, can I ask you a question?"
He raised his eyebrow. "You can ask."
"There was an accident on the news. The guy who overdosed while driving with his girlfriend and her kids. Joe said you responded to that." He nodded, his expression shifting slightly. "How do you even do that job? I mean, how do you just forget that and move on?"
"We never forget. Ever." He shifted his weight against the doorjamb. "But we save a lot more than we lose and somebody has to do it. If we start dwelling on the times we fail, we might hesitate and we can't do that."
"Sorry. I guess that was a personal question. It's just that it was only a few hours later Marie and I were at the station and you introduced me to everybody and...I don't know. There was food and Christmas decorations."
"It was a rough morning, but we had to get through the shift, you know? Every guy handles the hard days differently, but that's for later, at home."
"How do you handle the hard days?" Way to follow up a personal question with an even more personal question, she thought, wishing she could take the question back.
But Rick gave her a slow, sexy smile that made her sock-covered toes curl into the carpet. "There are a lot of ways to cope with stress."
"I... Oh." It was so tempting to say something provocative and get him to cross the few feet between the door and the bed, but her grandparents were downstairs.
Then he chuckled again, and she hoped it wasn't because he could read her thoughts on her face. "I went to the gym, actually, and beat the shit out of the heavy bag for a while."
"That sounds very satisfying." Not as satisfying as where her mind had taken her, but hitting the heavy bag sounded like a good way to vent. "What are Joe and Marie up to?"
"Marie's putting together her grocery shopping list for tomorrow and going through her coupons, so that means Joe's probably hiding in the garage. I told him I'd give him a hand changing the belt on the snowblower."
Jessica grabbed her phone and stood up. "I'll go hang out with Marie, then. Keep her company, at least."
"You still planning to go to the pub with me tonight? Maybe learn how to play some pool? And they have pretty good burgers, too."
"That sounds fun. It'd be nice to get out for a little while."
"Good. I'm pretty sure Marie has beef stew in the slow cooker, so if you're not there for supper it just means more leftovers for Joe tomorrow."
"I'll let her know I'm going out." Assuming the conversation was over, she started toward the door.
Rick didn't move right away, though, and Jessica had to stop short to keep from plowing into him. He looked at her for what felt like a long time, his expression frustratingly unreadable, before standing aside and waving for her to go first.
She walked down the hallway, hoping he wasn't watching her ass. Or maybe hoping he was. She wasn't sure.
* * *
Rick had brought quite a few women into Tommy's bar with him over the years, so he couldn't explain the low-level anxiety he felt as he opened the door to Kincaid's Pub and gestured for Jess to go in.
She was definitely a white-collar woman and Kincaid's was a blue-collar kind of bar, but he wanted her to like it. And he wanted her to like the guys, too, though he wasn't really sure why that mattered so much. He already knew that, no matter what, the guys would be nice to her and make her feel welcome.
Lydia was behind the bar, so he made that their first stop. "Hey, Lydia, this is Joe and Marie's granddaughter, Jessica."
"Hey, I heard you were in town." Lydia reached across the bar to shake her hand. "It's good to meet you. It's too bad my dad's not here. He and Fitzy had a wake to go to."
"I'm sorry to hear that," Jessica said.
"Thanks. Though, to be honest, I'm not even sure how well they knew the guy. I think they just go to see if any old flames are back on the market."
"Jesus, Lydia." Rick groaned. "I don't even want to go there."
"At the rate you're going, you'll be widow-hunting with them before you know it." Lydia threw him a saucy wink and then looked back at Jessica. "Maybe scaring him with horrific glimpses into his future will get him to settle down."
Rick gave Lydia a quelling look. The problem with all of them being like a family was that, like with real family, they didn't always know when to shut their mouths. "Funny. We're going out back. Jess, you want a beer? Or...what else is there besides beer? Soda. Coffee. Water. Juice?"
"Hey, let me do my job," Lydia said. "I'm better at it than you."
Jess said she'd have whatever Rick was drinking so once they each had a chilled mug of beer, he led the way through the mostly empty tables to the alcove where the pool table sat.
Only three guys had shown up—Aidan, Scott and Gavin—and she remembered their names, but he introduced her again just to be polite. "You all remember Jessica, the Broussards' granddaughter? She stopped by the station yesterday with Marie."
"Glad you came," Scott said, shaking her hand. "You want to play? You can take over for me if you want."
Rick watched Jess smile at Scott and give a little shake of her head. "I think I'll watch for a while. Maybe I'll figure it out."
"Okay, but don't watch Gavin over there. He sucks."
"Noted."
Rick pulled out a chair at one of the small round tables against the wall and gestured for her to sit. Then he sat across from her and set his beer down as he rocked the chair back onto two legs out of habit. For some reason most of them did it and Tommy had learned years ago not to buy flimsy chairs.
"You don't have to sit here with me, you know," Jess said. "You can play pool with your friends and I'll watch."
"I can see them from here and I'd rather sit with you. You're prettier than they are."
"I don't know. Gavin's kind of pretty," Aidan said from his spot by the rack of cues. "But you're right. Jessica is prettier."
"Screw you, Hunt." Gavin blushed and looked over Jessica. "Excuse the language, ma'am."
"Ouch," Jessica muttered. "Ma'am? Really?"
"He can't help it," Scott said. "He calls every woman ma'am. He even called the cashier at the gas station ma'am earlier and I'd be surprised if she was seventeen years old."
"There's nothing wrong with manners," Jessica said, smiling at Gavin.
Rick watched the conversation as it went on, enjoying the way she interacted with the guys. They genuinely seemed to like her, and she looked as though she was having a good time.
When she was about halfway through her beer, she excused herself to go to the restroom and Rick couldn't stop himself from watching her walk away. The jeans, which he was pretty sure were new since she'd arrived in Boston, made her ass look amazing. Or maybe her ass made the jeans look amazing. Either way, they were a combination he couldn't look away from.
"I like her," Scotty said once she was out of earshot.
"Me, too," Aidan added.
"I swear you two are like twins. And you don't even know her."
Scotty shrugged. "It's a vibe. My gut says I like her."
"As long as that's the only part of you that likes her."
All three men looked at him, and Aidan gave a low whistle. "I guess we don't have to ask if you like her."
"I wouldn't have brought her here to hang out if I didn't like her, dumb-ass."
"I don't know. You keep talking about out how she's Joe and Marie's granddaughter. Like really making a point of it, so it sounds like you're stressing that you're hanging out with your landlords' granddaughter. Maybe you should just introduce her as Jessica, who's with you."
"You all know Joe and Marie, so it makes sense to let you know she's their granddaughter."
Scotty chalked the end of his pool stick. "Yeah, and you did that yesterday. And now you did again. You reminding us or yourself?"
"What the hell is wrong with you guys? No more afternoon talk shows for you three."
They went back to their game, and Rick sipped his beer, waiting for Jess to come back. Then he replayed his introductions over in his head—yesterday's and tonight's—and had to admit they might have a point. And if they did, then Scott's question was also valid.
Why did he feel such a strong, subliminal need to keep that distance between them? And was he protecting her, them or himself?
* * *
On her way back from the restroom, Jessica had to pass by the bar, so when Lydia waved, it seemed only polite to stop and chat for a minute.
"How's the pool playing going?" the other woman asked.
"Good, I guess. I think Scott's winning, though I'm not sure."
"I'm not sure how, but my brother usually wins. I think he talks so much nobody else can concentrate."
Jessica laughed. "Possibly. Who is that a picture of on the wall? It looks signed and...is it really screwed right to the wall?"
"I was going to give you hell, but I guess you get a pass since you're from California. That's Bobby Orr, one of the greatest Bruins to ever play hockey. One of the greatest of anybody to play hockey."
"I'll remember that. I'm going to the charity hockey game with them tomorrow, and it'll be the first time I've ever seen a game."
"With them? Oh, you mean Joe and Marie?"
"Yeah, and Rick, too. I guess we'll all go together."
"It'll be fun. I'll probably see you there, since everybody from the station sits together and the Broussards sit with Rick, so you will, too."
Of course, since she was their granddaughter and nobody could possibly miss that fact.
It was hard not to notice how carefully he introduced her as Joe and Marie's granddaughter every single time. Not as his friend. Not as a friend of the family. Certainly not as his date. He was keeping her at a distance, as if he was just doing his landlords a favor by showing her the town.
"We should go out sometime," Lydia said. "Like a girls' night out. I like to do that every once in a while since it feels like every night is a boys' night out at the bar."
"That would be fun. I don't have much free time before I go back to California, but maybe the next time I come back."
"Sounds good."
"I should get back there and see how it's going," Jessica said.
"Make them show you how to play," Lydia said. "Oh, and do me a favor and let them know I'm not walking all the way back there to take their orders. If anybody's hungry, they can come order at the bar."
When Jessica walked into the back room, she saw that Rick had abandoned the table and was leaning against the wall talking to Gavin. He smiled when he saw her and nodded his head for her to join them.
"You ready for a lesson?"
"I guess so. And Lydia said if anybody's hungry, go to the bar because she's not coming back here."
"I'll take everybody's orders up," Aidan said, and then he shrugged when they all looked at him. "Hey, I never pass up an excuse to talk to my future wife."
Jessica enjoyed the good-natured ribbing the guys gave him, though she wondered about Scott and Aidan. They appeared to be the best of friends, and she knew they worked together, yet Aidan was marrying Scott's sister. She thought there was some kind of man code about dating your best friend's sister.
Showed how much she knew about men.
After they'd given Aidan a list of how they wanted their burgers, he disappeared and Rick handed Jess a pool stick. "I'm guessing you've figured out the basic rules by now."
"Use the stick to knock the white ball into the other balls to make them go in the nets around the table."
"Close enough."
Usually Jessica didn't like learning new skills with an audience. She felt awkward and she didn't like the pressure. But these guys were fun and she'd spent enough time with them tonight to know they'd definitely laugh, but they'd be laughing with her and not at her.
She tried to put her hand like she'd seen Scott do and set the stick across her knuckles. Then she jabbed it and it caught on the green table, not even hitting the white ball.
"Jesus," Rick said, "if you rip the felt, Tommy will put my balls on display in a pickle jar on the bar."
Jessica snorted. "God forbid. I'll do my best not to wreck the table, then, although I blame you since you're supposed to be teaching me how to do this."
"Tommy wouldn't waste a pickle jar on your balls, old man," Scott said. "Probably just use a shot glass."
Jessica laughed—she couldn't help it—but then squealed when Rick wrapped his arm around her waist and hauled her close.
"Think that's funny, do you?"
She playfully jabbed at his stomach with her elbow. "A little bit, yeah."
"Since I prefer my balls where they are, let me show you how to hold the cue before you get us both in trouble."
Jessica was pretty sure the way he leaned over the table with her, molding his body against hers, was what would get them both in trouble. He was tall enough so her ass wasn't actually nestled against his crotch, but she knew there wasn't much space between them. And his chest was pressed against her back as he took her left hand in his and stretched their arms out on the table. After showing her how to hold her fingers, he closed his right hand over hers and they went over how to hold and move the cue.
She wasn't going to remember a single word he said. All of her focus was on the way his body covered hers and the feel of his hands on hers and his breath on her cheek.
"I need to hit the head," she heard Gavin say.
"We need refills," Scott said. "I'll go now so you can help carry them back."
Jessica gave a laugh that sounded breathy and full of anxiety. "Was that their subtle way of leaving us alone?"
"Incredibly subtle," Rick said before he pressed a kiss to her neck that made her entire body shiver. "This was not one of my better plans for not kissing a woman."
"And yet neither of us have moved."
When he stood up straight, she regretted saying the words. Even if it was a dumb idea, she liked the feel of his body so close to hers. But when she laid the pool cue across the felt and straightened up, he put his hand on her elbow and turned her to face him.
"It's still a bad idea," he said, his voice low.
"I agree." She stepped into the curve of his arm. "So after you kiss me, we should make another agreement about how we won't do it again."
That seemed to be all the urging he needed. His mouth closed over hers so swiftly she gasped against his lips. It was hot and urgent and she stood on her toes, arching her back to get more of him.
His hands were at her hips, holding her against his body, and she wrapped her arms around his neck so he couldn't pull away. He nipped at her bottom lip and she moaned, wishing they were alone—really alone—and that maybe this time they wouldn't stop.
"The burgers will be—oh. Sorry."
They broke apart to see Aidan standing on the other side of the pool table, his body language making it clear he wasn't sure if he should stay or go back out to the bar.
Jessica stepped back, feeling a hot flush over her face. "Rick's showing me how to play pool."
She had to respect Aidan for holding a straight face as long as he did, which was about fifteen seconds before he laughed. "I was going to tell Gavin he should ask Rick for some pointers on how to beat Scotty, but he might want to watch some YouTube videos instead."
Before either of them could respond, Scott and Gavin came around the corner, each carrying a tray of beer mugs.
"You better tip me," Scott said when he'd managed to set the tray on one of the tables without spilling the beer.
"Rick can handle that," Aidan said. "He gives good tips."
Jessica blushed again, but Rick only laughed. "Here's a tip. Don't eat the yellow snow."
As the other guys each claimed a fresh beer, Rick stepped close to Jessica and leaned close so only she could hear him. "Is this where we agree not to do this again?"
Sadly, she nodded. "That's what we said."
"Remind me next time not to agree to that in advance." He winked at her and then joined the others. "Come get your beer before one of these guys chugs it, Jess. You'll want it when the burgers come out."
She claimed her mug, and then took a seat at the table again since she didn't think she'd survive any more pool lessons tonight. But as she watched Rick joke around with his friends, she ran his words through her head again.
Remind me next time not to agree to that in advance.
Next time.
Chapter Nine
By the time the hockey game rolled around on Saturday, Rick was almost numb with exhaustion. The normal twenty-four-hour shifts didn't bother him. He could usually get enough sleep to function just fine and he liked having five days to play with. Sometimes he'd cover a tour for somebody else, but he'd been starting to play with the idea of a second job. He just needed to figure out what he wanted to do.
But sometimes the stars were out of alignment or the moon was full or maybe there was something in the water, but they ran their asses off for twenty-four hours and made do with battle naps when and where they could grab them. That had been Friday.
He'd managed to sleep for a few hours in his own bed, but what he really wanted to do was close the room-darkening blinds and hibernate for the entire weekend.
Instead, it was time to go downstairs and see if the Broussards were ready to head to the rink. There were guys from a few different stations playing, since they couldn't all play, but any firefighters from the represented stations who didn't show up with a toy had better have a good excuse.
As soon as he stepped out onto his deck, he winced. The weather was turning and there was a cold snap in the forecast. Trying not to imagine all the space heater, woodstove and chimney disasters in the city's future, he made his way down the stairs and walked through their back door.
The second he stepped into the kitchen, he realized he wasn't as exhausted as he thought. Jessica was in jeans, with her hair in a ponytail and only the lightest touch of makeup on her face. And she was wearing a navy blue sweatshirt that was too big for her, but said Boston Fire across the chest.
He wanted to back her up against the counter and kiss those strawberry-tinted lips until their legs wouldn't support them anymore and they slid to the tile floor in a tangle of arms and legs.
"Hey, Rick," Joe said, and Rick jerked his attention to the older man standing in the doorway. "I already put the toys in the trunk. I swear Marie thinks she's Santa Claus."
Rick swallowed hard and managed a smile. "It's all for a good cause. And I see you also got Jess a sweatshirt."
"Yup. Marie only has the one she wears, but I have two, so I lent her one. People need to know which side she's on."
His side, Rick thought. As beat as he'd felt this afternoon, he almost wished he was playing just so he could spot Jess in the crowd, cheering for him and calling his name.
Marie walked into the kitchen wearing a sweatshirt that matched the other two. Rick was wearing his T-shirt with a hoodie over it because he tended to run hot and crew neck sweatshirts drove him crazy. And since he hadn't totally cooled off yet from the passionate, if imaginary, kiss with Jessica, he was ready to get back outside.
"We need to get going," he said. "It's Jess's first time, so we want good seats."
And so started a debate between Joe and Marie that lasted all the way to the rink as to what constituted good seats. And since Joe was in the shotgun seat next to Rick and Marie was in the backseat with Jessica, it meant the older man spent a lot of time turned in his seat, yelling past Rick's ear.
"You sit close and you can hear everything and practically smell the sweat," Joe said.
"But if you sit near the top, you can see everything," Marie argued.
"I'd like to see everything," Jessica said, "but it's hard to resist the smell of sweat."
She said it so sincerely, Rick had to choke back a laugh. He didn't offer an opinion, though. Truth be told, the other guys from the station and their families would have staked out a section already. They'd take the best four seats that were left together.
Getting inside, dropping the bags of toys in the collection box, and making their way to those seats was probably quite the adventure for Jess, though. It didn't seem as if they could walk ten feet before she had to be introduced to somebody else. When they finally found the guys from his station, though, he was pleasantly surprised by how many of their names she remembered.
He knew she'd met some of them a couple of times—at the station when Marie stopped by, and then at the bar—but the first time had been nothing but a barrage of names, and some of them hadn't been at Kincaid's the other night. Remembering names was probably a skill that helped make her good at her job, he thought. It made people feel valued.
Another thing that surprised him was how often he had to stop himself from touching her. He wanted to put his hand on the small of her back to guide her through the crowd. Or lace his fingers through hers when they were talking to people. And maybe if she'd been any other woman, he would have. But whenever the temptation got too strong, he'd remember they were there with Joe and Marie. And they were trying not to do the kissing thing anymore, by mutual agreement.
"Aidan!" he heard her say, and watched her shake the other guy's hand. "It's good to see you again."
Then he watched her greet everybody else, amazingly able to keep them all straight. Aidan was engaged to Lydia, who was Scott's sister. And their dad, Tommy, was in attendance, as well. And Scott and Lydia's sister, Ashley, who was married to Danny Walsh. He wondered if she'd made a spreadsheet or something in advance and studied it.
"I had a great time at your bar the other night," she told Tommy, who grinned at the praise. "And the burger was to die for."
"I'm glad you liked it," the old, retired firefighter said, his chest puffing a little. "Heard you played a little pool with the guys."
Rick hoped that was all he'd heard, but Jess just smiled. "Did you close down to come for the game? It looks like you're all here."
"Nope. Karen's watching the place for us tonight. She's a friend of Rick's."
"We should find some seats," Rick said before that conversation could go any deeper. "Good to see you, Tommy."
They finally found an empty stretch of bleacher long enough for the four of them. Rick had assumed Jessica would sit between her grandparents, but Joe went in first, with Marie on his heels. Jessica sat next to her, leaving Rick at her side on the end. It wasn't a big deal, except for the fact he was going to spend the entire game with his thigh pressed against hers.
"How do you remember everybody's names?" he asked her while they waited for things to get under way.
"It's just a knack I have. I tend to remember details about people pretty easily if I'm trying. When I was growing up, I always wanted to be an event planner. Even now, one of my favorite things to do is plan the annual holiday party we throw for our employees and some of our clients. Doing it at a distance isn't as fun, but I still get to handle the details."
"How come you didn't do that, then, if that's what you wanted to be and you're good at it?"
She laughed. "My father would never have gone for that. And what's the point of building a successful business if your only child is organizing baby shower party favors or wedding venues?"
"So you have to live his dream because you're an only child?"
She gave him a sharp sideways look and he shut his mouth. Luckily, the announcer chose that moment to turn their attention to the ice, and Jessica not only laughed but clapped her hands for the guy dressed as Santa on skates.
As always, the crowd noise rose to earsplitting decibels during the team introductions. The police department's team skated out first, followed by the fire department. Judging by the explosion of sound when Aidan and Scott were introduced, Lydia and Ashley were sitting not too far behind them and off to the left.
Once the game started, it quieted a little, though. He watched the play, but he was also very aware of Jess's long leg pressed against his. Especially when, about halfway through the game, she tilted the rest of her body toward his to ask a question.
"How come they're not punching each other?"
Rick leaned closer, because there was no way he could have heard her correctly. "Did you ask me why they're not punching each other?"
"Yes, or hitting each other with the sticks or something. Except for sometimes pushing each other into the walls, they're not hitting each other at all."
So he had heard her correctly. "I have to say, you keep this bloodthirsty side of yourself pretty well hidden."
"I've seen highlights during news broadcasts, of course, and when I found out we were coming here, I read up on hockey on the internet. I thought there would be fighting and blood and stuff."
"It's a charity game being played by guys whose calling in life is to help people."
"Yes, I get that. Protect and serve and all that." She waved her hand. "But this is hockey. I had expectations."
He forced himself not to laugh at her, because he didn't want her to feel foolish. But it was hard when that pretty face seemed so genuinely dismayed by the lack of violence. "Sorry to disappoint you, but the game's not over yet. There's still a chance Kincaid could get riled up. He's got a bit of a temper."
"Scott and Aidan are both from your station, so how come you didn't play?" She gave him a slow once-over that would have made his cheeks red if he was the blushing kind. "Not good enough?"
His eyebrow arched and he held her gaze until she proved she was the blushing kind. "Oh, I'm good enough, honey. Trust me."
"Don't call me honey."
"Don't question my manhood."
She really blushed then. "I didn't question your...manhood. I questioned your ability to play hockey."
"Same thing."
"What an incredibly guy thing to say."
"Thank you."
She turned her attention back to the ice with an exasperated sigh, and he had to stifle a chuckle. When he nudged her knee with his, she crossed her arms and tilted her chin a little, so it was clear she was ignoring him. But a smile played with the corner of her mouth and that was good enough.
He might have been tired earlier in the day but now, as far as he was concerned, this game could go on all night.
* * *
Jessica couldn't remember enjoying an event as much as she enjoyed the hockey game. Of course it didn't hurt that the fire department won and the crowd had managed to set a new record for tickets sold and the number of toys collected for charity.
She probably shouldn't have had a steamed hot dog, though. Or the nachos. Or the popcorn or the cotton candy. And she didn't even want to think about how much soda she'd had. But as best she could tell, questionable food choices and sports were a package deal.
When it was time to leave, she stood and put her hands to her back to stretch. She had no idea how Joe and Marie managed to sit on the bleachers for so long, but maybe they were just used to it. And her leg felt cold without the constant hot pressure of Rick's thigh against hers. At first, she'd felt compelled to draw her knee away to give him more space and to save herself from the distraction, but there simply hadn't been enough room.
When the crowd they were being swept along with finally got through the exit, the cold night air was like a slap in Jessica's face. It hadn't been cold when they left the house and she had a shirt on under the sweatshirt, so she hadn't bothered with a coat.
"I'll go get the car," Rick told Joe and Marie, who were chatting with friends on the sidewalk. "Save you the walk. And it'll take a bit in this traffic, so you'll have time to visit."
"Take Jessica with you," Marie said. "I can tell she's already freezing."
While she wanted to protest and try to at least pretend she could hang with the native New Englanders, the idea of a warm car and a nicely contoured seat was too much temptation to resist. She walked up the street next to Rick, trying to remember how far away the car was parked and hoping they got there before she embarrassed him by freezing to death in the midst of a bunch of people who didn't even look cold.
Then she stepped in a shallow puddle. Or rather, she stepped onto it. Her foot slipped on the ice and she would have landed hard on her ass if not for Rick's quick reflexes. In a flash, his arm was looped under hers and he yanked her upright before she could actually fall. It wasn't comfortable, but it was better than a busted tailbone.
"Thanks. I guess I should add ice being slippery to my journal of things I learned in Boston."
"I can throw you over my shoulder if you want. I've had professional training."
She gave him a look that might have scorched that fancy fire coat he wore. "Do they give classes on being a caveman at the fire academy?"
"That part just comes naturally to some of us. But they have to teach us how to lift with our legs and not with our backs, I guess."
This was ridiculous. Jessica was freezing, and now Rick had threatened to throw her over his shoulder.
Okay, that wasn't really fair, she forced herself to admit. He'd offered to throw her over his shoulder. A small distinction, but one he'd probably consider an important one. "I think I can manage on my feet, thank you."
"Let me know if you change your mind. Practice keeps the skills sharp."
She realized they'd continued walking, and his arm was still tucked under hers, with his hand curled around her elbow. Whether he was afraid she'd slip on the ice or if he just liked it there, she couldn't be sure. But she certainly didn't mind it, so she just kept walking.
"Did you have a good time tonight?"
"I definitely did, though I ate a lot of foods I don't think were meant to go together."
"Any food eaten while watching sports is exempt from any kind of nutritional standards." He smiled at her, his teeth gleaming in the dim light. "Especially if you're actually at the arena or stadium in person. Ask any sports fan."
"I should watch a game on TV. With professional hockey players, I mean."
"Just so you know, there's not always fighting and blood in professional hockey, either."
She rolled her eyes at him. "Not for the fighting. I had a really good time tonight and I think I could become a fan."
"It's too bad you're not staying longer. I could probably get Bruins tickets."
"I'm not sure which team I'm supposed to root for. I think I've heard the news talk about Ducks. Does that sound right?"
His expression made it clear he didn't think much of that. "You've gotta be a Boston Bruins fan. Your family's from here. This is where you were introduced to hockey. And why have a duck when you can have a bear?"
"There probably aren't many Bruins fans in San Diego."
"That just makes you exceptional in a city with very little taste."
She laughed and bumped against him, taking him off guard and knocking him a couple of steps sideways. Since his arm was hooked in hers, she went with him. He nudged her back and she was tempted to rest her head against his arm as they walked. It was nice, just the two of them.
But then she saw the car and it brought reality back. This wasn't a date. It was a family outing planned before she'd even arrived. And they'd agreed they weren't going to kiss anymore, so there would be no kiss good-night.
Once she was in the car and the engine started generating enough heat to spare some for the vents, Jessica sighed and snuggled into the comfort. Considering how many people had told her it wasn't even cold yet and she should experience January and February, she should probably be thankful she was going home soon.
"The Bruins play Monday night," Rick said as he worked the car through the traffic toward where they'd left Joe and Marie. "I'll be watching it if you want to come up and watch it with me."
Okay, that sounded almost like a date. "You don't watch the games with Joe?"
He shook his head. "He follows the Bruins box scores and he likes the charity games, like tonight, but he doesn't watch many of the games until postseason. He's more of a baseball guy."
"Okay." She probably should have said no, but that wasn't what came out of her mouth. "I'll bring some junk food."
"Sounds like a plan."
A plan. Not a date. Just a plan, like friends would make. But it was a plan she was already looking forward to.
Chapter Ten
On Sunday afternoon, Rick glanced out his window and happened to notice the side door to the garage was open. Since he was too bored to watch television, but not bored enough to reach out to his friends and see if anybody was doing anything interesting, he decided he'd take a walk down and see what Joe was up to.
Rick found him in front of the workbench along the back wall of the garage, rummaging through one of the many chock-full drawers in the various storage towers and toolboxes. "Hey, Joe. Looking for something specific?"
"No, just looking. When did I accumulate all this junk?"
Rick shook his head. "Before I got here, although I've probably helped contribute to the collection since I moved in."
"Why would I ever think I needed to save some of this?" He held up a bolt that had seen better days, judging by the fact the nut screwed onto it looked rusted right to the threads.
"You never know what you might need." Rick was pretty sure it was just a guy thing. He'd never met a garage-owning man yet who didn't have jars and coffee cans full of miscellaneous metal things. "Soak it in some oil and it'll work just fine."
"Can you imagine how long it would take us to muck out this house if we sold it?"
"Are you considering selling?" He wasn't sure if the subject had been temporarily tabled or if he just wasn't in the loop, but he hadn't heard much about it since they filled him in after the appointment with the doctor.
Joe shrugged and opened another drawer. "We're not not considering it."
Rick wondered if Jessica had been talking to them, or if they were just naturally working themselves toward that direction. "It's a big house. Needs a lot of upkeep and I know the utility bills and the taxes must be a bitch."
"I hate surrendering to it. It's like an admission I might be getting old."
"Hell, half the people you grew up with sold their houses and bought condos in Florida ten years ago or more."
Joe snorted. "We went to Florida once. I hated it."
"Well, there are plenty of options around here. Maybe something away from the city, where it's quiet, and you and Marie can sit on a doublewide swing and listen to the birds sing or whatever."
"I like this neighborhood. We know where everything is and we can walk to almost anything we want. I'm a city boy at heart. But it is a big house, and Marie's not getting any younger."
It struck Rick how Joe wouldn't admit to being too old to take care of the house, but maybe he'd consider selling it for Marie's sake. "How does she feel about it?"
"Same as I do, mostly. We like it here and don't really want to move. But we also don't want the other to end up in a bind if something happens, you know?"
"You don't have to rush into anything. And the most important thing is that the two of you are on the same page about it and screw what anybody else thinks."
"Yeah. Jessica helped Marie set up an appointment with a real estate agent for Wednesday. She says we can't really think about our options without a solid idea of what we might be able to get for the house."
"She's probably right." She hadn't mentioned the real estate agent to him at all, and it had to have been set up before the hockey game. Even if they'd made the appointment on Friday while he was working, it seemed odd she didn't say anything to him.
"I guess Jessica's flying back to San Diego on Thursday," Joe continued. "Marie's going to be heartbroken, even though she knew it was coming eventually, but I told her we'd finally break down and get her one of those smartphones. Waste of money if you ask me, but they can send text messages to each other and do that video chat thing."
She was leaving Thursday. She hadn't mentioned that to him, either. He knew she'd go soon, since her company was having its Christmas party, but he hadn't known which day. "I don't think you have to worry about Jess not staying in touch. It's meant a lot to her, getting to know you."
"We wish she didn't live so damn far away."
"I know, but I bet she'll fly out a few times a year to visit. If not more."
"I hope so. It's been just Marie and I for so long, so it's nice to have family. Even if Davey had come back, it would've been hard to forgive him for the hell he's put his mother through. But Jessica, she's not like him. And we're both so thankful to have her now, even if we won't get to see her all the time. It's nice knowing we have somebody good out there in the world with our name."
Rick had no doubt the Broussards would be changing their wills any day, if they hadn't already started the process, and he didn't blame them a bit. They were the kind of people who would have left the house and any money they had to their son simply because he was their son. But knowing they'd be gifting their property to Jessica would make them much happier.
"What do you suppose this went to?" Joe asked, holding up an oddly shaped chunk of metal.
Rick took it from him to give it a closer look. "I have no idea."
He and Joe ended up killing almost two hours in the garage, looking through containers and sorting piles of junk. They didn't really accomplish anything, but sometimes that wasn't the point. Especially on a lazy, early winter Sunday. The days of digging out hydrants, snow-related accidents, clearing roofs and extra shifts were right around the corner. Spending a couple of hours with a good friend, sorting bolts and talking about everything and nothing was a good way to relax.
Of course he managed to get dirty enough so when it was time to call it quits, he needed a shower. After stripping down and tossing his dirty clothes in the hamper, Rick turned on the shower and set it to pretty damn hot. The shower was huge, as was the entire master bathroom because Joe and Marie had given him the freedom to remodel however he wanted as long as he paid for it. And being a big guy who'd gone from growing up in a house with a small shower to renting an apartment with an even smaller shower, the bathroom was where he'd spent the most money.
He let the hot water beat against his skin for a couple of minutes before grabbing the shampoo and scrubbing his hair. Even as his muscles relaxed, his mind turned to Jess and the fact she'd be leaving Thursday. In just a few days, she'd be getting on a plane to San Diego and he didn't know when—or really even if—she'd be back.
Last night had almost gotten the better of him. When he'd taken her arm to keep her from falling, he should have let go as soon as she was steady and kept his hands to himself. But they'd just kind of kept talking and kept walking, and he liked the contact. Then she'd gotten flirty, bumping his shoulder because he was being funny, and he'd thought about kissing her.
He thought about kissing her a lot. Her mouth fascinated him, and he could only imagine how it would feel to press his lips against hers. The feel of her hair tangled in his fingers. The softness of her skin.
After rinsing the shampoo out of his hair, Rick grabbed the bar of soap and lathered up. He scrubbed hard at his arms, which he'd managed to get greasy as well as grimy, and then rinsed the soap away. It wasn't so easy to push the image of Jessica away, though, and he found himself wishing she was in the shower with him. It was definitely built for two.
Rick braced one hand against the tile wall and closed the fingers of his other hand around his dick as he imagined the hot water running down her naked body. The steam would make the hair around her face damp and her skin would be tender and flushed. He'd kiss her while pressing her back against the tiles, and he didn't have to imagine that. He knew how her mouth felt and how she tasted. But now, in his mind, he slid his hand between her legs.
Picturing her blue eyes widening, he wondered if she'd be shy and whisper his name. Or maybe she'd be bold and demanding, her head thrown back with abandon.
Stroking himself harder, he imagined the feel of her slick, hot flesh under his palm. The sounds she'd make when he slid his fingers over her clit. Her nipples would be hard in his mouth, and he'd suck each in turn until she squirmed against his hand. He'd whisper in her ear, telling her all the things he was going to do to her, while she came. And then, when the last shudder faded away, he'd turn her around and have her brace her hands against the tile as he slowly eased his cock into her.
With a groan, Rick came, his dick pulsing in his hand as he stroked and the fantasy faded. Then he leaned his forehead against the shower wall and closed his eyes. The relief would be temporary, he knew. He wanted the real Jessica, naked and under him, and jerking off to imaginary her wasn't enough anymore.
* * *
After climbing the two flights of exterior stairs to Rick's small deck, Jessica raised her fist to knock on his glass sliding door, only to realize she didn't have to knock. She could see him, and he was looking straight at her.
He was also almost naked.
Obviously fresh out of the shower, he had a towel wrapped around his waist, but there was plenty of skin for her to look at. And look she did. She wasn't sure if it was his job or the gym or both, but his body was so well toned he could be on a magazine cover. Not so ripped it would be a vanity point, but he was definitely in shape.
And his calves were almost as impressive as his biceps. That was probably from going up and down the huge ladder on his truck and climbing stairs, but she'd never really noticed a man's legs before. She certainly noticed his.
Whose bright idea had it been for them to put an end to any making out when they'd only gotten as far as kissing?
Then she realized he was waving her in and felt stupid. Gawking at the man through his door wasn't one of her finer moments and she hoped he'd let it go without commenting. She opened the door and stepped inside, sliding it closed behind her.
"Sorry," he said. "I wasn't expecting company."
"Marie sent me up to ask you for...a thing."
"A thing?" He grinned. "A specific thing, or can I just grab whatever's at hand and give it to you?"
She sighed and held up her hands in defeat. "Look, you know you're an attractive guy. You know I like kissing you but we're not doing that anymore. You also know you're only wearing a towel and you smell delicious. I don't think you need to mock a woman for forgetting what she came here for."
"I wasn't mocking you. Just clarifying whether or not any old thing would do."
"Who takes showers at this time of day, anyway?"
"People who get dirty digging around in decades' worth of crap in the garage."
She gave a self-deprecating laugh. "Forget it. You can take showers whenever you want. Just bad timing on my part."
"Okay, to get back to the thing at hand, what was Marie doing when she asked you to come up here?"
"She was looking through a plastic box that had a bunch of recipes in... Oh! She said you borrowed her big slow cooker a few weeks ago and she wants to make a roast this week. It won't fit in her everyday one." She held up a hand. "I didn't even know they came in different sizes or that there was such a thing as an everyday slow cooker."
"She bought a small one because she's usually only cooking for the two of them and it's mostly soups and stews. And she makes a mean chili." He frowned in the direction of the kitchen, as if trying to remember where he'd left it. "Let me throw some clothes on and then I'll dig it out for you."
"Okay." She would have been more than happy to watch him rummage around his kitchen in just the towel, but she could imagine he might feel awkward. Especially if the towel slipped and ended up on the floor.
Flushed, Jessica waited until he'd closed his bedroom door and then looked around the apartment. It had been remodeled at some point, far more recently than the downstairs, and she guessed he'd sunk his own money into it. It was as open concept as the old house's structure allowed, with the living room area flowing naturally into the kitchen. He had a table and chairs set up near the slider, and everything was leather or chrome and glass. The kitchen island and countertops were a dark granite, and he had nice stainless steel appliances. There was a half bath off the kitchen, but she only saw one bedroom door. Considering the footprint of the house, it must be one hell of a master suite.
There were two big bookcases in the living room, stuffed full of books, and some family pictures sat in frames on top of them. She guessed one of the photos was of Rick's brother, along with a woman and two young boys taken on a boat. There was a strong resemblance between him, his brother and their dad, and his mom was pretty, too. She was smiling at the camera in one framed picture, with her husband's arm around her and her two sons bookending them. Her pride was evident in her body language and expression, and Jessica sighed.
She'd be lying if she said her mom abandoning her when she was little didn't hurt. Usually it was a dull, barely there ache. But sometimes it was a gaping wound she knew would never heal. Even though she'd learned with age—and through multiple failed marriages on her father's part—to accept that her mother had probably been leaving her father and not just her, she'd always wondered why she hadn't fought for her. Even if she couldn't handle full custody, she could have stuck around for visitations.
As with his own parents, Jessica's mother wasn't a topic her father wanted to discuss. He'd claimed she was unstable and they were better off without her. But he'd said a lot of bad things about her grandparents, too, and now she knew better. Or at the very least, they weren't the same people they'd been while raising their son. So maybe her dad was wrong about her mom, too.
The big difference, as far as Jessica was concerned, was simple. Joe and Marie hadn't fought to see her because they hadn't known she existed. Her mother had known. She'd made a conscious choice to remove herself from Jessica's life, and no amount of therapy or logical pep talks could ever make her understand.
"You okay?"
She jumped at the sound of Rick's voice, since she hadn't heard his bedroom door open. "What?"
"You look lost in space, and that's not a happy expression."
"I was just looking at your family pictures. You all look really happy. Is that your brother's boat?"
"It's my dad's, actually. But my brother uses it more than anybody else, I guess."
"I've been out on a boat a few times. It's nice at first, and it can be relaxing, but then I get bored and want to go for a walk or something. That doesn't work so well."
He chuckled, but his eyes remained serious. "I'm guessing boat envy didn't put that sadness in your eyes."
"I never have sad eyes."
"Yeah, you do. Maybe you just don't know it."
She turned back to the photos so he couldn't see her face anymore. "Sometimes it's hard seeing family pictures with such happy, obviously proud mothers. And then there's the obvious reaction to seeing three generations smiling together. I guess that's what families are supposed to look like."
"Families look a lot of different ways, and there's no right or wrong or supposed to about any of them."
"Mine has always been just me and my father, and most of the time he seems more like my boss or a business partner. Nobody's ever used a photo of us to sell picture frames, that's for sure."
"I can turn them around if you want."
"You can't be serious," she said, turning to face him with an incredulous look. When she saw his face, she realized he definitely wasn't serious and was simply trying to lighten the mood. Maybe he'd guessed she wasn't used to people being able to see her emotions that way, and that was true. Nobody had ever told her she had sad eyes before.
"Are you coming down for dinner tonight?" she asked, wanting to change the subject.
He shook his head, and she felt a pang of disappointment. "I'm going to meet some of the guys tonight. I don't usually eat with Joe and Marie, actually, even though it probably looks that way. I think having you around made all the meals into special occasions, so she kept inviting me."
"And you accepted so you could keep an eye on me." She realized belatedly how that sounded and felt heat in her cheeks. "So you could make sure I'm not fleecing them out of all their money, I mean."
"I don't think that. I mean, I think your job makes you more inclined to make decisions based on financial reasons and not their emotional well-being, but I don't think you'd ever fleece anybody, never mind Joe and Marie."
She tilted her head. "You don't think I can balance fiscal responsibility with their emotional well-being?"
"I think you can, but it's probably not your first instinct." He walked to a cabinet in the kitchen and pulled out a slow cooker, which he set on the island. "Joe told me you set up a meeting with a real estate agent."
"Marie did, actually. And she discussed it with him first."
"But you had a hand in it."
"They're not listing it with her. She's giving them a fair market value appraisal and that's all unless they decide to move forward. How are they supposed to make informed decisions without determining the worth of their most valuable asset?" She walked over to the island. "Unless you're worried about having to find a new place to live."
"I think we already had this conversation. I'm not worried about me. My only concern is Joe and Marie."
"Mine, too." She picked up the slow cooker. "I'll give this to Marie."
"Tell her I said thank you." She nodded and was almost to the door when he spoke again. "Hey. You still going to watch the game with me tomorrow? I'll order pizza."
Maybe if he was questioning her motives because he was a jerk, she'd reconsider. But it was hard to hold his concern for her grandparents against him, especially since she'd know he was still there for them when she left. Just as he'd been there for them before she arrived. "Of course. I'm looking forward to it."
He gave her a look she couldn't quite decipher, but that sent a sizzle through her body. "I am, too."
"Don't look at me like that. We're agreed we're not moving in the direction that look wants me to go, remember?"
"I remember. And I'm sure if I think long and hard enough, I can remember why we agreed to that."
For once, Jessica was grateful when she stepped outside and the cold air instantly chilled her body. Maybe if she left her window open, she'd actually get some sleep tonight.
* * *
Rick walked into Kincaid's Pub, ready for a beer and a burger and maybe a few games of pool in the back room. Ashley was behind the bar tonight, and that meant Aidan probably wouldn't show up. Lydia worked a lot of nights, so when she took a night off and Aidan wasn't at the station, he was with her.
"Hey, Rick, where's your girl?" Tommy bellowed from the back corner of the bar, and everybody turned to look at him.
"She's not my girl," he said to the room in general, since most of them were people he knew. "She was at the game with her grandparents. So was I. That was the extent of us being there together."
When he reached the bar, Ashley set a frosted mug on a coaster in front of him. "Yeah, we could see you guys from where we were sitting. And I heard about your pool lesson the other night. She might not be your girl yet, but that's totally happening."
"She's going back to San Diego on Thursday." It wasn't a total denial of any chemistry they may have witnessed at the bar or at the hockey game, but it should put an end to the speculation.
"Oh." She frowned at him, her expression of annoyance so like Scott's, he almost laughed at her. Scott and Lydia had the fiery Kincaid temperament, like their old man. Ashley tended to be levelheaded and calm, though she'd pop off if she was pushed hard enough. "That's what sexting is for."
"Don't you think I'm a little old for sexting?"
"Of course not." She paused, and then smiled. "Although, if you send her a dick pic, you might want to use a filter."
It's a good thing he hadn't taken a drink of his beer yet, or he might have choked on it. "Gee, thanks a lot."
"And a zoom lens."
Rick turned to see Danny stepping up to the bar. He hadn't seen Ashley's husband when he came in, but he should have known he'd be here. "You two are freaking hilarious tonight."
"I'm not sure why you and my wife are talking about your dick, but that makes you fair game for any insult I want to throw your way." He set his empty mug on the bar. "You didn't bring Jessica with you?"
"And that's why we were talking about his dick," Ashley said as she replaced his empty mug with a full one.
"Okay." Danny took a sip of his beer and then turned to face Rick. "Since she's not here with you, are we talking about your literal dick or are you being a dick?"
"I'm done with this conversation." Rick picked up his beer and headed for the alcove at the back of the bar where the pool table was.
A game was going on, but there were quite a few guys gathered around to watch. Rick shook hands with Scott and Gavin, plus there were several guys from a nearby station and two police officers he recognized. Of course Danny followed him back, but luckily didn't bring up the conversation they'd been having at the bar.
Once Scott lost and no longer had the run of the table, he grabbed his half-empty beer mug and walked over to sit with Rick at the small round table he'd claimed. "How's it going?"
Rick shrugged. "It's going, I guess."
"You going to play tonight?"
"I don't think so. Not really in the mood. Is Aidan with Lydia tonight?"
"Yeah, since we'd talked about getting together tonight, Danny wanted in. And Ashley said if Danny was going to be here, she may as well work and Lydia could have a night off. It's not like Sundays are hectic." Scott tipped his chair back onto two legs so it balanced against the wall. "How are things going with Jessica and the Broussards?"
Rick had an instant flashback to the moment he'd spotted Jess outside his door, her hand frozen in mid-air, just shy of knocking. Considering what he'd just been up to in the shower, he wasn't surprised to feel a rush of heat in his cheeks but, luckily, she hadn't been looking at his face.
"Things are good," he said. "But she's leaving Thursday."
Maybe if he said it enough times, he'd be able to figure out how he felt about it. He should be thankful things would go back to normal around the house. But the idea of not seeing Jessica whenever he popped in downstairs didn't sit well.
"For good?"
Rick shrugged. "I'm sure she'll come back to visit Joe and Marie, so I'll see her around here and there. You seeing anybody new?"
Scott sighed and dropped his chair back onto four legs. "No. I think I'm going to stop trying to see anybody for a while."
"Interesting. And unlike you."
"Yeah. But maybe I spend too much time with women who don't see marriage in our future—or see marriage for the wrong reasons—and it's keeping me from finding the right one."
Rick knew Scott had been burned recently by a young woman who'd pressured him for a ring, but only because she wanted access to his benefits and had even been willing to get pregnant to get him to the altar. She hadn't gotten pregnant, but he was pretty sure that girl was a turning point for Scott.
"I don't think I have a right one," Rick said before taking a long swig of his beer.
"From what I hear, she might be right under your nose. Literally, like she's sleeping one floor below you."
He had to guess Scott had gotten that little tidbit from his sisters, since he'd been on the ice the entire game and hadn't seen them together. And Scott hadn't seen the kiss in the pool room, but Aidan had probably filled him in. "Pretty sure the right woman for me would at least live in the same time zone."
"Good point." Scott sighed and raised his mug. "Here's to being not the marrying kind."
Rick touched his mug to Scott's and then took another drink. "Well. This is kind of depressing."
"Yup." Scott nodded. "Another night alone in our apartments to look forward to. Might as well order some burgers and then stay here until they throw us out."
Rick agreed, but he was thinking about tomorrow night, when he wouldn't be alone in his apartment. And he might feel low at the moment and she might be leaving on Thursday, but Jess, hockey and junk food was a potent combination he'd spend the next twenty-four hours thinking about.
Chapter Eleven
Jessica did a little yoga in her room after wrapping up her work late Monday afternoon. And she even threw in some bonus planking just because she hadn't been back to the gym with Rick but had continued to stuff herself with Marie's cooking.
The fact it was a good excuse to take a shower at that time of day was purely coincidental, she told herself. And since she was taking a shower anyway, she went ahead and shaved her legs while she was in there. A little scented body cream and she was ready to go watch a hockey game.
She felt silly, though, when she had to go down to the kitchen to get the bag of junk food she'd bought at the corner market during her lunch break. Marie was there, preparing dinner for her and Joe, and she breathed in deeply.
"You smell lovely, honey."
"Thanks. I took a shower." That sounded awkward. "I was doing yoga in my room and I'm out of shape so I got sweaty, so I wanted to clean up."
Marie gave her a knowing look. "You don't want to watch hockey if you don't smell pretty."
Jessica laughed and took a seat at the table. "It's good to take a shower after yoga. It keeps the muscles limber. Honest."
"Mmm-hmm." Marie hit the button to turn the oven off and cracked the door to let the accumulated heat escape. "Are you sure you don't want to eat with us?"
"Rick said he'd order pizza. You're supposed to eat junk food with hockey."
"Pizza's not junk food," Joe said as he walked into the kitchen.
"I should make homemade pizza soon," her grandmother said. "I haven't done that in ages."
Jessica frowned. "Isn't the whole idea of pizza that not only does somebody else make it, but they deliver it to your house?"
"Mine's better."
She didn't doubt that. Marie was an amazing cook and Jessica had never realized how much her own skills were lacking until she came here. She might look like a younger version of her grandmother, but the cooking gene had definitely skipped her. "I'm going to go answer some last-minute emails while you guys eat. Everybody's trying to clear their desks before the holiday party on Saturday kicks off the holiday pseudo-break."
"Pseudo-break?" Joe asked, pulling out his chair. "That doesn't sound like time off, exactly."
"It's the closest we get to time off. We close the office for the week Christmas falls in, but the market doesn't take time off, so we all have to stay on top of things from home. And we have a lot of top clients who expect us to be as dedicated to work as they are."
"All work and no play is no way to live," Joe said sternly.
"Hey, you're talking to a woman who's in Boston about to watch her first Bruins game while her office in San Diego is going nuts. I'm learning." She'd been walking past him, but on impulse, she leaned over and kissed his cheek.
He smiled and grabbed her hand, giving it a little squeeze. Then he let it go without saying anything. By now Jessica knew he wasn't emotionally demonstrative, so the small gesture spoke volumes.
"You going to watch the news with us before you go upstairs?" he asked.
"Of course." She gave them a small wave and then took her phone out to the couch, where she dealt with emails and read articles she'd bookmarked until it was time for the news.
She had trouble focusing on the broadcast because her gaze kept straying to the time displayed on their cable box. The snow in the forecast caught her attention, but it didn't look as if it would amount to very much in New England terms, so she didn't worry about it. If Joe didn't comment on the forecast, she knew there was nothing to worry about.
When it was finally time, she said good-night to her grandparents, since they'd already be in bed by the time the game ended, and grabbed the paper bag of snacks. Then, after taking a deep breath—telling herself a hockey game was no big deal, really—she went up the stairs to Rick's apartment.
As soon as he opened the door, Jessica knew they'd reached a point of no return. They were either going to drop the futile game they'd been playing, dancing around their attraction for each other, or they were going to fully commit to absolutely no more kissing or touching and mean it.
She doubted he'd been doing yoga in his bedroom, but Rick had definitely just gotten out of the shower. And he was freshly shaven. That was slightly disappointing, since she'd spent a lot of time wondering how that scruffy jaw would feel against her skin.
But both of them taking showers late in the day meant eating pizza in front of the television wasn't the only thing on their minds.
"Black sweater?" He grinned as he stepped back to let her in. "Good choice."
She hooked her thumb under the yellow bra strap and pulled it out far enough so he could catch a glimpse of it. "It was the closest I could come to black and yellow."
His gaze seemed locked on the thin strip of her bra and his jaw tightened. "Black and gold."
She frowned and released the bra strap, which slid back into place. "The colors look like black and yellow."
"I know. But it's gold. Trust me." The grin reappeared. "But that bra definitely qualifies."
"Oh, good. I'd hate to get disqualified from fandom for the wrong color underwear." She handed him the paper bag. "I brought junk food to go with the pizza."
He led the way to the island, where he unloaded the bag. She'd bought chips and dip and a bag of pretzels. Two different kinds of cookies. A variety of artificially flavored cupcakes packaged in pairs. And at the bottom of the bag, a squeeze bottle of liquefied cheese.
"You really know how to bring the junk food." He held up the squeezable cheese. "What are you going to do with this?"
"I'm not sure. Maybe dip the pretzels in it?"
"Maybe I should have ordered a small pizza."
She laughed. "We don't have to eat it all. I only brought it because you're supposed to eat junk food with sports, though I would have preferred nachos. And cotton candy. I bet if you bring it to work, it won't go to waste."
"Go to waste? I'll be lucky if it makes it to the kitchen. What do you want to drink with your pizza? Beer, soda or water?"
"I'll take a soda." She wasn't a big beer drinker and the two she'd had at Kincaid's Pub filled her quota for a while.
He went to the fridge and grabbed them each a soda. "The pizza's already on the coffee table and the game's about to start."
Jessica ate a slice of pizza while they went through a big lead-in for the game. The announcers talked about a lot of people she'd never heard of and used terms that meant nothing to her, but she enjoyed watching them skate around on the ice before they got ready to drop the puck, as the guy on TV said.
Then, after all the talking, the referee dropped the puck between two hockey players and they all erupted into action. It was so fast-paced she had a hard time keeping track of the puck and she winced every time somebody got slammed into the glass enclosure, but it was still fun. Every once in a while Rick would yell at the television, making her smile.
Several times, she opened the browser on her phone to search for the definition of a term so she could get a better handle on what was happening. She'd first done it after the tenth or so time she heard icing, and she was looking up penalty kill when Rick caught her.
"What are you doing? You're not working during a game, are you? That's absolutely not allowed."
She looked up from her phone. "No, I'm not working. I'm looking things up on Google."
"Things?"
"Yes, things. I don't know what the announcers are talking about and that makes it hard for me to follow. If I look up some of the terms as they use them, it's easier to understand the game."
"You could always just ask me." He winked. "I know what most of the words mean."
She lowered the phone to her lap and shrugged. "Isn't it annoying if you're trying to watch the game and somebody's asking you really basic questions about it the whole time?"
"Or maybe I look at it like I'm sharing my knowledge of the game so somebody else will learn to love it, too. Sharing the passion, so to speak."
Laughing, she locked her phone screen and tossed it onto the coffee table, trying not to think too much about Rick sharing his passion with her. Hockey. He'd been talking about hockey. "If I was some guy who'd come over to watch the game with you and had to ask what icing means, would you feel the same way?"
"No." He didn't even pretend to think about it. "I'd tell him to shut up and read a book about the game before he came back again."
"So I guess I'm special, then?"
"Absolutely." He gave her a look that seemed to say whether they were talking about hockey or anything else, he did think she was special.
The sizzle of sexual attraction she expected every time she and Rick locked eyes came, but this time it was wrapped in a warm, emotional feeling that scared the crap out of her.
Not at all sure how to respond—either to his words or to her reaction to them—she turned back to the game. As confusing as it was, at least she knew the final objective of the men on the screen. Get the puck in the other team's goal more times than they got it in yours.
But in real life, she wasn't sure what the objective was anymore. Maybe she needed men in suits with microphones to narrate it for her.
* * *
Something had spooked her, Rick thought, and based on the way her expression had changed, he had to guess it was when he said she was special.
What he couldn't figure out was why. She had to know he liked her. It wasn't a coincidence they'd both showered before this game—something he'd noticed as soon as he opened the door. Hell, he couldn't keep himself from kissing her, even when they'd agreed not to, and he wasn't in the habit of kissing women he didn't like.
Maybe she didn't know how to take a compliment like that, which was sad. She was smart and funny and beautiful and caring and the people in her life should have been telling her that all along.
"Do you want another slice of pizza?" he asked, hoping to get her to look at him so he could see her face.
"I don't know. I'd hate to be too full to try that liquid-cheese-in-a-bottle stuff." She turned to look at him and he was relieved to see her humor reached her eyes. Whatever had bothered her seemed to have been fleeting.
"I can't believe you've never had it."
"I can't believe you have."
"We all have things in our past we're not proud of." He paused to watch the Bruins' goalie block a dangerous shot, and then turned back to her. "Right?"
She shrugged. "My closet is skeleton- and squeezable-cheese-free, I think."
"Oh, come on. You must have had a rebellious phase or dated questionable guys or something."
"Not that I recall. I think my father has even approved of all the boyfriends I've had, and I guess that's probably unusual."
It was probably unusual for most women, he thought, but not for her. She didn't seem to like making waves with her father. "I have a feeling I don't fit into your usual taste in men."
"No, not really. I usually date younger guys who wear suits and ties to work. They drive environmentally conscious compact cars or sedans that mimic the look of luxury cars they hope to afford someday."
"Younger guys, huh?" Great. No pressure or anything.
"Not a lot younger, but a few years, usually."
"I guess younger guys probably have a lot of stamina."
"I will neither confirm nor deny," she said in a prim voice before amusement ruined the effect. "I think it's mostly because my friends are a little younger, too, so I meet younger guys. And in my field, a lot of the guys my age are already married. Plus, like you said, there's the stamina thing."
"Funny." He took her hand and ran his thumb across her palm. "I guess there's something to be said for stamina. Although there's probably something to be said for experience, too. You learn things as you go along."
"Really? What kind of things?"
"I guess I could list them for you." He stopped making circles against her palm and linked his fingers with hers. "Or I could show you."
She arched an eyebrow at him. "And miss the rest of the hockey game?"
"There's a game on?"
She laughed and then it faded away to a sigh. "And are we supposed to agree in advance it won't happen again?"
Rick already knew there was a good chance he'd want her again before he'd even caught his breath from the first time. "How about we agree to just enjoy each other's company until you have to get on the plane back to San Diego?"
"That sounds like a plan we can actually stick to," she said, giving him a smile that took his breath away.
He let go of her hand to hit the power button on the television remote before standing up. Then, when she stood, he put one arm around her back and the other behind her knees to sweep her off her feet.
When she squealed and wrapped her arms around his neck, he gave her a quick kiss. "First on that list of things we old guys have figured out? Women like romantic gestures."
* * *
Jessica had never been carried before, and certainly never into a man's bedroom. It was thrilling and sweet, and she ran her fingertips over the nape of his neck as he turned sideways to get her legs through the doorway.
There was just enough light from the street shining through the curtain to show her she'd been right about him having one hell of a master suite. The room was big, with a lot of open floor space dominated by a huge bed.
When he set her down on the edge of the mattress, she actually sighed. "Comfy."
"See? Number two is knowing making out on the couch is a lot less comfortable than it looks on TV."
She laughed as he pulled his shirt over his head and tossed it aside. "Is this list written down somewhere?"
"Nope." He stepped between her legs and she ran her hands over his naked stomach and chest. The muscles twitched, and she skimmed her fingernails down his abs just to see them clench.
Rick cupped her face in his hands and bent to kiss her. It was slow, simmering, and she felt her pulse quicken in anticipation. His tongue flicked over her lips and she opened her mouth to him. She felt as though she could kiss him forever and never grow tired of it.
Then his hands left her face, skimming down over her shoulders and arms before his hands cupped her breasts through the soft knit of her sweater. She sucked in a breath when his thumbs brushed over her nipples and she felt his smile against her mouth.
Jessica took matters into her own hands and grabbed the hem of the sweater. She had to break off the kiss to pull it over her head, but she didn't want fabric between her body and Rick's hands anymore.
"God, you're beautiful," he told her, his voice rough. He tucked his hands under her arms and lifted her so she was farther back on the bed before sliding one finger under the strap of her bra. "And I do love this color."
She touched her fingertip to his mouth, intending to trace his lip, but he caught it between his teeth before sucking gently. When he swirled his tongue over the tip, she moaned and pulled her finger free so she could kiss him.
"You do love kissing," he said after a few minutes.
"I've never really thought much about it," she confessed. "I think it's you."
"Good." He nipped at her lip. "I like kissing you."
He kissed her mouth again before moving to her jaw and the hollow at the base of her throat. Then he unhooked the yellow bra and slid it over her arms so he could kiss her breasts. She dug her fingers into his biceps when he closed his mouth over one nipple and then the other, sucking hard enough to make her squirm.
His fingers skimmed over her stomach to the button of her jeans. He popped it with one hand and then worked the zipper down. "I don't want to stop touching you long enough to get these off of you."
"The sooner you get them off, and get yours off, the sooner you can be inside of me."
He groaned against her breast, then ran his tongue over a nipple. "Are you in a hurry?"
"For you to be naked? Absolutely."
He chuckled before pushing up off the bed. It only took him seconds to pull her jeans and socks off, though he did take a moment to appreciate the yellow lace panties before adding them to the pile of clothes on the floor.
Then he stripped out of his jeans and boxer briefs, and Jessica had a moment to fully appreciate his ruggedly built body before he stretched out alongside her. He ran his hand over the flat of her stomach and she laughed.
"You'd think a man as experienced as yourself would know better than to touch a woman's stomach like that. It makes us self-conscious."
"I love every inch of your body." He rolled so he was looking down at her. "Did you just call me old?"
She reached down and ran her fingers up the length of his erection, watching his jaw clench. "Definitely not."
"Jesus, Jess. You can't do that."
She closed her fingers around him. "I want you now."
"I think I'm supposed to be proving a point about experience versus stamina or something and you are not helping."
Still stroking him, but not with too much pressure, she captured his bottom lip between her lips and kissed him again. "Just kissing you feels amazing. I don't want to wait anymore."
He shifted his weight so he could reach his nightstand and open the drawer to pull out a condom. Jessica reluctantly stopped touching him long enough for him to put it on, and then he was kissing her again. Rough and demanding, he kept kissing her while reaching between their bodies to guide his cock into her.
Jessica dug her fingernails into his shoulders as he slowly filled her, each slow stroke deeper than the one before until she'd taken him completely. He pushed her hair away from her face and his gaze locked with hers as he moved his hips.
She said his name and he smiled, a slow grin that made her smile back at him. "That feels so good."
"You feel so good." He cupped one of her breasts and lowered his mouth to hers, kissing her just the way she liked as he moved between her thighs.
Jessica wrapped her legs over his, her heels digging into his calves. "Please, Rick."
"Please what?" he said against her mouth.
"Faster."
He obliged and she scraped her fingernails over his back as he thrust into her. Her back arched off the bed as she came, biting down on the side of her hand to keep from screaming his name.
He hooked his hand under her knee and lifted so he could thrust harder and it was only a moment before he found his own release with a guttural moan.
Jessica, still breathless from her own orgasm, ran her hands over his back as he came. Then he buried his face in her neck and pressed small kisses against her skin. Smiling, she stroked his hair as their breathing slowly returned to normal.
Then he tossed the condom in the trash basket under the nightstand and rolled onto his back, pulling her with him. She pressed her body along the length of his, loving the feel of his naked skin.
"I'm so glad we didn't agree not to do this again," she murmured after a few minutes.
He ran his fingers through her hair, making her shiver. "No shit. Although, even if we had, I'd be trying to break it in about fifteen minutes or so."
She kissed his chest. "I should come watch hockey with you more often."
"God, I hope they play Wednesday."
Chapter Twelve
"Hey." Thump. Thump.
Rick opened his eyes to see Jeff Porter standing over him, kicking the base of the couch. The station was relatively quiet from what he could hear, so resting his eyes hadn't turned into sleeping through alarms. "What?"
"You have company," Jeff said. "Downstairs."
Swinging his feet to the floor, Rick sat up so fast he was surprised he didn't get dizzy. "Jess is here?"
"Jess? You mean your landlords' granddaughter?" Jeff narrowed his eyes. "You sure jumped to that conclusion pretty quickly."
It made sense he'd wake up thinking of Jess, since he'd nodded off thinking about her. "Is it her or not?"
"Sorry, guy, but it's not her."
Rick scrubbed his hands over his face, trying to shake off not only the lingering sleepiness, but the disappointment, as well. It was probably stupid to be sorry she hadn't shown up at the station when he'd spent last night with her. And if she was smart, she'd told Joe and Marie she had a lot of work to do and holed up in her bedroom to catch up on her sleep. They hadn't gotten much last night.
"Who is it?" he asked Jeff after pushing himself to his feet.
"As much as we'd all get a kick out of seeing your face when you're caught off guard, I like you today so I'll give you the heads-up. It's the friendly lady with the nightgown and the false alarms."
"Shit." It had been a couple of weeks since the last time she'd called for help she didn't need, so he'd put her out of his mind.
"She's carrying a plastic container, so there might be food in it for us. Maybe even the freshly-baked-from-scratch kind of food."
"If she has baked goods, I'm surprised she didn't leave them in the oven long enough to set her smoke detectors off."
Jeff laughed, but Rick wasn't amused as he walked down to the ground floor and toward the front of the open bay. Aidan and Scott were making a big deal out of inspecting a length of hose, but Rick suspected it was their way of keeping an eye on the woman while having a good excuse not to make conversation with her. While their house hadn't had any problems, there were cases of people taking their infatuation with firefighters to dangerous levels.
The woman in question was clutching her plastic container and looking at everything she could see from the doorway with wide eyes. She was pretty and she'd seemed nice in the past, but Rick knew he had to shut her down as firmly as possible without upsetting her.
"Hi," he said, stepping up to her. "I heard you're looking for me?"
She smiled, her face flushing as she nodded. "I just wanted to stop by and thank you again for helping me."
"We're just doing our jobs, ma'am." He managed to deliver the corny—but true—line with a straight face.
She giggled. "It's Deena. And I still want to thank you. I baked you some muffins. I wasn't sure what kind you'd like, so there are blueberry and cranberry and chocolate chip."
"We're not really supposed to accept gifts, so—"
"I make good muffins," she interrupted, "but they're not worth more than fifty dollars."
So she knew the rules. Maybe she'd looked them up just to be on the safe side, or maybe she had some experience in showing up with muffins for firefighters. He couldn't be sure, but he knew there was no way out of accepting the token thanks without hurting her feelings, which he'd hate to do. And next she'd introduce the possibility of returning soon for the plastic container while dropping strong hints that she wouldn't mind if he dropped it by her place.
"Okay," he said, accepting the box. "We'll all enjoy these, I'm sure."
"I hope so. Maybe I can stop by in a couple of days or whenever you're here and pick up the container."
"Sure." He looked down at the box in his hands. "They won't last that long, though. I'll probably have to hide a cranberry one in my truck to sneak it home to my girlfriend. She loves them."
"Oh, definitely. Like I said, I'm pretty good at muffins." She gave him a smile. "On second thought, the container's one of those disposable ones, so you can just toss it when they're gone. It's not really worth coming back for."
He thanked her again, hoping the relief at extricating himself didn't show on his face, and she seemed happy enough when she waved goodbye to the guys behind him and walked away. As soon as she turned the corner out of sight, Aidan and Scott appeared next to him.
"Did she say chocolate chip?" Scott asked.
"I'm not sharing."
"You told her we'd all enjoy them," Aidan pointed out.
"You two didn't seem to be in a hurry to help me out. An urgent call I needed to take or something would have been nice."
Aidan shrugged and grabbed the container while Rick was off guard. "You were holding your own. If she hadn't taken the girlfriend hint, we would have stepped in."
"Pretty slick, the way you threw that lie in there," Scott added.
He supposed it had technically been a lie. Though he'd spent the night with Jess and would like to spend a lot more nights with her, she was going back to San Diego on Thursday and he had no idea if or when she'd be back. "If anybody knows that claiming to have a girlfriend is a standard way of getting out of a situation, it would be you, Kincaid."
"I'm reformed." When Rick just stared at him, Scott finally snorted. "Okay, I'm thinking about reforming."
"I think that translates to him running out of women to date," Aidan added.
Rick listened to the two best friends swap insults for a few seconds, and then pulled out his phone to type a message to Jess. Hey, what's your favorite kind of muffin? Blueberry, cranberry or chocolate chip?
The response came so quickly, he guessed she was probably working with her phone in hand. Blueberry.
"Hold on," he said before the guys could wander off. "Give me one of the blueberry ones."
"I thought you liked cranberry," Aidan said. One of the problems with practically living with a bunch of guys was that they got to know you too well, making it hard to put one over on them. "And you said your imaginary girlfriend likes cranberry, too, so who's it for?"
"Don't worry about it. Maybe I just want something different."
His phone chimed in his hand and he looked down at the screen to see another text from Jess. Why?
Leaving the guys to the muffins, Rick walked around to the far side of Ladder 37 and sat in one of the metal folding chairs that were always kicking around. We got a thank-you batch of muffins and I'm saving you one.
I wouldn't bet money on its chances of making it out of the station.
He laughed, and then immediately stifled it because he didn't want to attract attention. If the guys figured something had finally happened between him and his landlords' granddaughter, it would go one of two ways. They'd think he was just getting lucky with a pretty woman who was sleeping under the same roof, and that sounded cheap. He didn't like that. Or they'd think he was getting into a real relationship and, since he didn't know how to respond to that, he'd rather not open the door.
Seniority means nothing if I can't keep one muffin off-limits.
He imagined her smiling at her phone and wanted to call her so he could hear the smile in her voice, instead. But he didn't want to talk to her with the guys around, and he had no idea if she was in her room alone or downstairs with Joe and Marie nearby.
If you declare two of them off-limits, we can have a muffin date tomorrow.
"Hey, Hunt! Kincaid!" Rick stood and rushed back across the bays and caught the two of them halfway up the stairs. "Give me one of the cranberry ones, too."
Once he'd secured the second muffin, he cradled the two of them in one hand while texting with the other. I've got two, so it's a date.
He'd never had a muffin date with a woman before, but as the remainder of the tour loomed in front of him, Rick thought maybe he'd never looked forward to a date more.
* * *
Jessica not only set her phone down on the table when Marie walked into the kitchen, but she put it facedown, as if she'd been doing something she didn't want her grandmother to see. It was silly, of course. She and Rick were adults and they were having a conversation about muffins. There was nothing wrong with that.
Still, she couldn't help but feel it was best if her grandparents didn't know she and Rick had taken their relationship to an intimate level. Joe was still something of a closed book to her, but she had no doubt Marie would start making them all one big, happy family in her mind. And when it didn't turn out that way, she might feel torn between the two of them and Jessica didn't want that.
"You're still working?" Marie asked, heading to the refrigerator. "You've been sitting at that table for hours."
"We had to hire a new caterer for the holiday party this year and Alicia had some concerns, so she forwarded the contract and a few email chains to me. It looks like they want to charge us top-shelf prices for the bar, but they're not serving top-shelf liquor."
"You serve alcohol? That's not a problem?"
Though Jessica hadn't said too much about her father's life because she felt caught in a weird conflict of loyalties, she'd said enough so Marie knew alcohol was an issue. "Dad never drinks in the office and especially not at the holiday party. Sometimes he'll have a drink if he's out in a social situation, but he never drinks too much unless he's at home."
"That's good. I hope you get your catering issues sorted out so you can enjoy the party."
"I've got a handle on it, I think. I made a couple of phone calls and then followed them up with an email. I think they got the hint."
"I'm going to miss you when you leave," Marie said, sitting at the table with her. "I've gotten used to you being here."
"I'm going to miss you, too. If not for the party, it would have been nice to stay for Christmas." She picked up the pen next to her laptop and fiddled with it. "Maybe I could come back."
"Oh, honey, you are welcome to come back anytime you want. And I'd love to spend the holidays with you. You know that. But I don't want Davey alone, either. We've obviously had troubles, but I'm still his mother and he's been going through a hard time. It would break my heart to imagine him all alone for Christmas."
"This particular hard time is of his own making," Jessica pointed out.
"That's true, but he's going through a divorce and you coming out here hasn't been easy for him, I'm sure, on either a personal or a professional level. You don't want to throw too much at him at one time. Not that his well-being is your responsibility, but I know you've always worried about him."
Marie's immediate willingness to sacrifice her own happiness for her son's despite the fact he'd abandoned them and broken her heart was something Jessica had a hard time understanding. Her father loved her and he'd done his best raising her, but his needs came first. Maybe it was because his needs were so often wrapped up with the needs of Broussard Financial Services, but Jessica couldn't remember the last time she'd been number one on the family priority list.
"You're right, I guess," she said finally. "Before I leave, though, I'd like to get you a smartphone for Christmas. So we can keep in touch."
Marie laughed. "Even that fancy phone of yours will accept calls from an old landline."
"True. But it would be nice to be able to text sometimes, too. And send each other pictures."
"Joe did say maybe I should get one so we can do that."
"I'd love for it to be my gift to the both of you, even though I know you'll have to twist his arm to get him to do a video chat. It would mean a lot to me to keep in touch."
Her grandmother wrapped her arms around her. "I don't care if we have to send carrier pigeons back and forth across the country. We're not losing touch with you."
Jessica blinked back tears and squeezed Marie. "I totally agree. A smartphone would definitely be faster, though. We can go get one today so I can show you the basics before I leave."
"That does sound like fun. And I want you to be able to send me pictures of your office and your home. I want to see what your life looks like." She let go of Jessica and wiped at her eyes. "Maybe you could sneak me a picture of Davey. Not the fancy picture on his website, but a candid shot from the party or something like that."
"I can try," Jessica said, hedging. She wouldn't try to sneak a picture of her father to send to his parents because that felt like an invasion of his privacy. But she would try her best to get him to agree to let her send one. It seemed like the least he could do at that point. "And I want you to promise you'll keep me in the loop when it comes to your health and any decisions you make about the house."
"Of course we will. I think Joe and I have a lot of conversations about it in our near future, but it's a hard thing to wrap your mind around."
"You can call me anytime to talk about it. It's a big decision."
"The biggest." Marie was quiet for a few seconds, and then she clapped her hands together. "Okay, we only have today and tomorrow left together, so that's enough with tears. Let's go have some fun."
* * *
Rick tried to stifle a yawn, but he was bone-tired and he couldn't stop it, though he did cast an apologetic glance at Jessica. "Sorry. I swear it's not you."
"Usually somebody yawning every five minutes in my company would give me a complex, but you had a busy night according to the morning news. And you didn't get a lot of sleep Monday night."
"I don't regret a single second of Monday night," he said, putting his empty paper plate on the coffee table before sitting up straighter on the couch. "How's your muffin?"
"I feel like I shouldn't be enjoying it so much since it was baked for you by a woman who was so infatuated with you she was using false alarms to get close to you." She sighed deeply. "It's a really good muffin, though."
"Maybe I should have tried the muffin before I told her I had a girlfriend."
She gave him a stern look that only lasted a few seconds. "I'd throw this at you for saying that, but then I wouldn't be able to eat the rest of it."
"I'll look away while you lick the crumbs off the plate if you want."
"That's sweet of you to leave me some dignity like that." She chewed and swallowed another bite. "So you told her you have a girlfriend."
"Yeah."
"Is that the standard defense when a woman shows up at the station looking for attention from one of you?"
He shrugged, not really sure how to answer that. "We use it a lot, I guess."
"So it's not really specific to you?"
"Considering you'd spent the night before in my bed, it might have been a little specific, I guess."
When she smiled before popping the last bite of muffin into her mouth, Rick realized he'd managed to say the right thing.
"So if you were introducing me to people now, would you still introduce me as your landlords' granddaughter?"
"No. I mean, it might come up, but I want everybody to know you're with me." He watched her smile and dreaded the upcoming separation even more. "Are we going to do the long-distance thing?"
She sighed. "It's a very long distance. And even though I'm going to be visiting Joe and Marie, it's not like I'll be coming back to Boston multiple weekends per month or anything. Maybe it doesn't need definition. We'll just keep in touch and see how it goes."
"I hope it goes good," he said, and then yawned again. "I'm going to miss you."
"I'll miss you, too."
"What time is your flight tomorrow?"
"It takes off at 5:45."
He winced. "In the morning?"
"Yeah, so I'm leaving here about four. I took the early flight so I can get there and run by my house, but still have time to get into the office. I didn't leave myself a lot of time to oversee the last-minute stuff, plus I'm sure my father will want to have a catch-up meeting."
"I can set my alarm and go with you."
She laced her fingers through his and rested her head against the couch. "You don't have to. I still have the rental, anyway, so it's not like you could drive me."
"I could drive the rental and then take a cab back."
"Rick, they won't let you past security, anyway. There's no sense in you losing sleep and throwing your schedule off to drive me to the airport for no reason."
"I don't think there has to be a reason." The reason was spending every last minute he could with her.
She inhaled deeply and then turned her head to look him in the eye. "It's going to be hard enough leaving Joe and Marie. I don't know if I could stand an airport goodbye with you. I'm not very good at that sort of thing."
"So it's easier if you just get up and get in your car and go?"
"Honestly, yes."
He nodded. "Then that's what you should do. But text me when you land so I know you got to California okay."
"I think if we don't get there okay, you'll hear about it on the news."
"Don't say that." Once she'd put her plate down, he pulled her closer so she was tucked against his side and kissed the top of her head. "Just text me when you get there."
"You want me to sit like this so I can't see you yawn, don't you?"
"I swear it's not personal."
"Why don't you go to bed for a while and then you can come down and have dinner with us."
"Sounds good. Why don't you come to bed with me?"
She laughed, and he felt her head shake even though he'd closed his eyes. "I think it would be hard not to take those yawns personally."
"Just to lay down with me," he muttered. He was losing the sleep battle and he wanted to go stretch out on his bed. But he didn't want to lose a minute with Jess, even if he was asleep for it. "I'll sleep better."
She got to her feet and grabbed his hands to help him to his feet. "I still have some packing to do and I have to figure out what I'm leaving here, but I'll lay down with you for a little while."
His last thought before he slipped into a heavy sleep with his body curled around Jessica's was how much he didn't want her to go.
Chapter Thirteen
Even though she practically tiptoed through the house, Jessica wasn't surprised when Marie shuffled into the kitchen in her well-worn slippers and bathrobe just before four o'clock the next morning.
"I almost missed you."
Her throat tight, Jessica took her hand off the doorknob and set her carry-on bag on top of the suitcase. "I told you not to bother getting up."
"You're not leaving this house without a hug from your grandmother. But I know you didn't want a big emotional scene so I made sure I only left enough time for a quick goodbye."
They met in the middle of the kitchen, and tears spilled over Jessica's cheeks as her grandmother wrapped her arms around her. "I'm going to come back and visit as soon as I can."
"I can't wait. I love you, honey."
"I love you too...Gram."
"I like the sound of that." After a big sniffle, Marie pulled back and wiped the tears off Jessica's face. "Your grandfather's not good at this sort of thing, so I let him sleep, but he loves you, too."
"Give him a kiss from me when he wakes up."
"I will. Now you go before you miss your plane. We don't want to have to do this again tomorrow."
After another quick hug, Jessica slung her carry-on over her shoulder and grabbed her suitcase. "Bye, Gram."
She managed to get the door closed and walk down the ramp without her eyes welling up again, but she stopped walking when she reached the driveway.
Rick was leaning against the rental, his shoulders hunched against the cold. The frigid air made his eyes sparkle and his cheeks were pink, and he looked utterly delicious for four o'clock in the morning. His smile was full of warmth and understanding as he uncrossed his legs and made his way across the driveway to take her suitcase.
"I take it Marie got up to see you off," he said, swiping at a leftover tear on her jaw with his thumb.
"Neither of you listen worth a damn." She was going for light and funny, but there was too much emotion clogging her throat to pull it off. "You should go back to bed."
"I might, after your plane takes off. I'm going to spend every minute I can with you before you go, so give me your keys."
He started the car before popping the trunk and made her get into the passenger seat to keep warm while he stowed her bags. Then he slid the driver's seat back a few more inches and got in.
"Got everything?"
She nodded because she wasn't sure she'd be able to speak. This was exactly what she'd been hoping to avoid when she told everybody to stay in bed. If she hadn't hugged Marie or felt that kick in her chest when she saw Rick leaning against the car, maybe she would have been able to lie to herself about how much she didn't want to leave.
Once he'd backed the car out onto the street and put it in gear, Rick put his left hand on the steering wheel and reached his free hand across to take hers. After lacing their fingers together, he set their joined hands on the center console and navigated through the still sleepy streets.
Jessica leaned her head against the headrest, trying to blink back tears as his thumb stroked her index finger. How the hell had this happened? She'd come here to fulfill her curiosity about her father's parents and help them plan for their future, and her entire life had changed.
She had grandparents now—real grandparents—and she had this guy she'd only known a couple of weeks, but already couldn't imagine not seeing tomorrow. Or the next day. Or any day in the near future.
"Hey." His voice was soft and he didn't say anything else until she turned her head to look at him. "You're going to text me to let me know you landed okay, right?"
"Yeah."
"Good. And I know you like to text because it's fast and easy, but I want to hear your voice sometimes, too."
Some of the tightness in her chest eased. "So you really think we can make a long-distance thing work?"
"Honestly, I don't know." He squeezed her hand. "I hope so. But I do know I'm not ready to just say 'hey, that was fun, thanks' and not talk to you again."
"I'm not, either."
He took his eyes off the road for a second to smile at her. "We'll figure it out as we go along, then."
They talked a little bit, mostly about Marie and Joe, until they reached the airport, where the conversation turned to where the hell they were supposed to be going.
"I thought you'd have this, being from here," she teased.
"I'm not from the airport." He moved over a lane so abruptly she almost squealed. "Trust me. Nobody has any clue where they're going here."
He finally navigated successfully to the rental car building, then waited with her luggage while she turned the car in. During the shuttle ride to the terminal, he was quiet. But he held her hand and she liked that. She'd forgotten how comforting the gesture could be, and she tried to draw strength from it so she wouldn't cry when it was time for him to leave.
After checking her suitcase at the curb, they went inside and she decided they had time to grab a coffee together before she had to get in the security line. Even if it was only a few minutes, she'd take them.
"Did you leave your car at the airport on the other end?" he asked when they'd found a spot to drink their coffees and watch people.
"I wasn't sure how long I'd be, so I took a cab. I'll just take one to the house and drop off my luggage before I go to the office."
"How do you think your father's going to be?"
She shrugged. "I think he's had time to come to terms with what happened."
"He shouldn't really have to come to terms with you visiting your grandparents, you know."
"I do know that." She shrugged again. "But he didn't want me to know them and I knew that when I got on the plane here. And I stayed when he wanted me to go back. I guess the details aren't important. What matters is that I did something I knew would upset him and complicate his life."
Rick frowned. "You defied him."
"It's not that, exactly. That makes him sound like a tyrant, when in reality, the manipulation is much more subtle. And it's as much my fear of making him unhappy as it is control on his part."
He reached over and squeezed her knee. "That's not really how family's supposed to work, Jess."
"Aren't you the one who told me families look a lot of different ways, and there's no right or wrong or supposed to about any of them?"
He nodded, his mouth curving in a smile. "You got me there."
Usually when she was in an airport, waiting for a flight, time seemed to slow to a crawl, but the minutes flew by and all too soon they had to throw away their empty cups and head toward the security checkpoint.
"Don't forget to text me when you land."
"I won't forget to text you." She'd probably do nothing but think about him for the entire flight, so she'd hardly have forgotten him by the time the plane touched down.
He put his hand on the back of her neck and kissed her gently. She sucked in a breath, trying to shove down the emotion, but tears blurred her vision.
"No tears," he murmured against her lips. Then he pulled back and gave her a crooked grin. "No sad eyes."
"I told you it would be harder for me to leave if you were here."
"Good." His hand fisted in her hair, tilting her head back so she looked into his eyes. "I want it to be hard for you to leave me because it's sure as hell hard to let you go."
"We were supposed to just enjoy each other's company until I got on the plane."
"We did. We enjoyed each other's company a lot, I guess."
She sighed, letting herself imagine for a few crazy seconds what would happen if she just didn't get on the plane. They'd go back to the house and after spending a few minutes with Marie and Joe, they'd go upstairs to Rick's apartment. It was so tempting she almost opened her mouth to tell him she wasn't leaving.
But even if she took her father out of the equation, she had a life in San Diego. And responsibilities. She was supposed to be hosting the company Christmas party in two days for people who'd worked hard for her and her father for years and who'd helped make it possible for her to spend the past two weeks in Boston without any prior notice.
And, whether he should be part of the equation or not, she had to consider her father. He was probably on shaky ground and the holidays were coming. If Marie could worry about him being alone after all he'd put her through, Jessica wasn't going to beat herself up about doing the same.
"You have to go," Rick said quietly, and she realized she'd been staring up at him, saying nothing.
As much as she wanted to share her reluctance with him, it wouldn't change anything and would only make it harder. "I'll text you."
He kissed her one more time and then ran his finger down her cheek. "Bye, Jess."
"Bye," she whispered as he turned and walked away.
She watched him until he disappeared from her sight, knowing by the set of his head and shoulders he wouldn't look back. If he did, he'd probably come back and want to kiss her again and she'd never leave.
Smiling, Jessica moved into the line and waited her turn for the security screeners. Once she was through, she'd buy a muffin and some fruit, and then open her laptop to work until it was time to board. Hopefully between the work waiting for her and the last-minute party details, she could distract herself long enough to get on the plane without any more tears.
Hours later, her mind addled by the time zones and her heart heavy, Jessica unlocked the door to her condo and stepped into what had been her home for several years. With nobody to please but herself, she'd decorated it in a simple, classic style with warm colors and an eye for comfort over fashion. An end unit, it had a lot of light, a tandem garage and access to a pool.
It felt empty now, but she knew that feeling would fade. This was her home and when she was in her favorite pajamas, curled up on the couch with a drink and a book, she'd remember everything she loved about this house and her life.
But right now, with the brief text exchanges to let Marie and Rick know she'd landed still fresh in her mind, all she could think about was what she'd left behind in Boston.
* * *
Rick made it through most of Friday without taking anybody's head off his shoulders—although Gavin came close when he balked at doing some housekeeping—but he started getting restless as the sun went down.
He hadn't slept well the night before because he'd been tossing and turning, thinking about Jess. He'd known he would miss her. He hadn't guessed just how much, though.
Rather than stare at the television screen or listen to whatever conversations the other guys were having, he went down one floor to the office space he shared with Danny Walsh and tried to catch up on some paperwork. But his mind kept wandering and finally he just rocked back in his chair and closed his eyes.
Jess hadn't wanted to leave him. He'd seen it in her eyes and felt it in the way she'd kissed him goodbye. Maybe she'd been caught up in the moment, though. Coming to Boston had been quite the emotional trip for her and maybe their relationship had gotten tangled up in that.
Back in San Diego, surrounded by her everyday life, maybe she felt differently. The intensity of her feelings—whatever they might be—would fade and eventually so would the memories. She'd realize long-distance relationships seldom worked out. He didn't like to think that, but this wasn't his first rodeo and he knew it was a possibility.
Or maybe she was all the way on the other side of the country thinking about him. Maybe she was even wondering if he'd started moving on the minute she got on the plane. After a moment of hesitation, he pulled out his cell phone.
Hey, you busy?
He waited for the dialogue bubble to pop up, letting him know she was typing a reply, but instead her name flashed on the caller ID as the phone vibrated in his hand. "Hello."
"Hi. I figured since you sent a text, you must not be too busy to talk. You're working, right?"
Just the sound of her voice soothed his ragged nerves and he smiled for the first time that day. "I'm in the office pretending to do paperwork."
"I'm in my office, too, but I'm pretending to read emails."
"What are you really doing?"
He heard her small, breathless laugh. "Honestly? I'm staring out my huge window at San Diego, wishing I was still in Boston."
"I wish you were still in Boston, too," he confessed, feeling the tightness in his chest ease. There was no cooling off in her voice—no sign she was ready to put some distance between them and move on.
"Why did my dad have to choose California? Why not Connecticut? Or New York, if he really wanted a city?"
He chuckled. "I don't know but, even with the gas mileage in that truck of mine, I'd be driving to see you."
There was silence for a few seconds and then she sighed. "That would have been nice. Have you seen Joe and Marie since I left?"
"I saw them yesterday, and I talked to Joe on the phone today. They miss you, of course, but Marie's telling everybody about the fancy phone her granddaughter bought her and showing off the pictures she already has stored on it."
"I sent her a few I had on my phone, just so she'd have them. She wants me to send her a picture of my father, too."
"Will you?"
"I'll ask him. I think he should, really, but I'm leaving it up to him. I can't get in the middle of their relationship and start playing mediator. I have him and I have them. I hope they don't always have to be separate, but I'm not sacrificing one for the other."
"Good for you." He was glad she seemed confident about balancing those relationships. Especially since it was good for her to have Joe and Marie in her life. "How's the party planning going?"
She told him all about it and while normally it wasn't the kind of thing he cared about, he was content to listen to her talk. Because it was something she enjoyed, her voice was animated and he smiled as he listened to her.
Then, because it was just his luck, the alarm sounded. She must have heard it because she paused in midsentence. "You have to go."
"I'm sorry."
"Will you call me tomorrow?"
He was on his feet, but the call was on his cell phone, so he took it with him. "Isn't your party tomorrow? Call me when it's over."
"It doesn't usually wrap up until eight. By the time I get home it'll be almost midnight for you."
"I'll take a nap. I want to hear about your party."
"Okay, I'll call you. But you can't put on your gear one-handed, so you have to hang up now."
She was right. "I'll talk to you tomorrow."
"Good night, Rick."
After reluctantly shoving the phone in his pocket, he stepped into his boots. Then he pulled up the pants and yanked the suspenders over his shoulders, before grabbing the coat and his helmet.
He crossed paths with Scott on their way to their respective trucks. "Giving the long-distance thing a shot, huh?"
Rick shrugged. "Since she lives in California, it's the only shot I've got."
As he climbed up into the cab of L-37, he was a mess of mixed emotions. On the one hand, she'd asked him to call her tomorrow. There was no out of sight, out of mind thing going on with her.
But on the other, it really sucked that he'd finally met a woman who turned him so inside out she had to be the one, and she was on the other side of the country.
Chapter Fourteen
As usual, the Broussard Financial Services holiday party went off without a hitch. Jessica sipped the cranberry margarita that would be her one and only drink for the night and watched her coworkers mingle. The clients who'd attended had already made their exits and now that it was just the BFS employees, the atmosphere was very relaxed.
"You did a wonderful job, as usual."
She turned to face her father, who she hadn't heard approaching thanks to the expensive carpet. "Thank you."
"I shouldn't have doubted you'd pull it off, even from Boston."
She was surprised he mentioned Boston. Between her arrival at the office late Thursday afternoon and now, he'd managed to avoid the topic, as if she'd simply been on vacation or out sick. "Like I said before, as long as I have my phone and my laptop, it doesn't matter where I am."
He pulled out his phone and started tapping the screen. "I'm texting you a photo."
"Why? I'm standing right here, so just show it to me."
"You'll see."
When her phone chimed, she pulled up the text to find a picture of her and her father, taken earlier in the night. They'd obviously been having a discussion, but had turned toward whoever took the picture. They were both smiling and Jessica was surprised to find herself a little choked up. They might not be a picture-frame-selling family, but this was probably the first genuine, happy family photo of them.
"Derek took that and when I saw it, I asked him to text it to me. It's a nice picture of us."
"It is." She saved it to her phone's photo album.
"You asked me earlier if you could send a picture to your grandparents—to my mother—and I think she'd like that one."
Jessica nodded, smiling when she imagined Marie's reaction to the image. And it was a moment she really wanted to share with her grandparents, and with Rick. David Broussard might not have been a good son and he certainly had some faults as a parent, but she wanted them to know she and her father did have a good relationship overall. "She'll love it. Thank you."
"Let me know. What she says, I mean."
"I will." She glanced around and saw that they were almost alone, but not quite. But her father seemed vulnerable tonight—maybe even nostalgic—and she had a question she wanted to ask him. "Can we step into your office for a minute?"
"Of course."
His mouth tightened and she knew he was bracing himself for something unpleasant. Maybe he thought she was going to tell him she was leaving the company and heading back to Boston for good.
As he closed the door behind them, she wondered what his response would be. And for a moment, she was actually tempted to say the words. But then her father was staring at her expectantly, and she lost the nerve to turn her life upside down.
"I have a question for you," she began, "and I want you to answer it. And not like you've answered my questions in the past. I don't want you to deflect or try to make me feel bad for asking or anything else."
"I'll try."
She took a deep breath. "How come my mother didn't fight for custody of me? Or at least visitation?"
Even though he had to know something serious was on her mind, her father still looked taken aback by the question. "I'd rather not discuss this, Jessica. I don't like talking about that time in my life."
"Yeah? Well, I didn't like growing up without a mother and I'd really like to know why I did."
He blinked, clearly surprised by her tone, but she didn't apologize for it or try to make excuses. She simply waited him out.
"It was because of me," he said finally. She'd already guessed that much and was going to push him for more, but then he spoke again. "I discovered cocaine in college. It brought your mother and I together and, if I'm being honest, is a big part of the wedge between my parents and I, even though I'm quite sure they never knew about the drugs."
"They don't know about that, no." She wasn't sure about Marie, but Joe would have mentioned it.
"She managed to clean up a little when she got pregnant with you, but not totally. It's a miracle you were born so perfect. But it didn't last and we were destroying ourselves and each other and we were going to destroy you in the process. I lost a great job at a financial firm and it was a wake-up call. I got clean, but she couldn't and eventually she took off. It was hard to stay clean but it was easier without her, so I let her go."
Jessica stared at the liquid in her glass, swirling it a little as tears blurred her eyes. "Why couldn't you just tell me that?"
"What father wants his daughter to know he was a cocaine addict?"
"I would rather have known my father did drugs in college than spend my entire life wondering why my mother didn't want me."
He flinched. "I'm sorry, Jessica. I didn't...I don't know what else to say. I've done some thinking lately and I'm a self-centered person, I guess."
She took a sip of her drink to hide the snarky really? smile as that thought popped into her head. But then she forced herself to let it go because it wouldn't help. "I guess if you're aware of it now, you can work on it."
"I'm going to try."
There was a knock on the door and Sharon poked her head in. "Everybody's getting ready to leave and they want to say goodbye."
Her father surprised her by giving her a quick hug, and then they went out to close out the holiday party. She was exhausted, but she smiled and wished everybody a happy holiday as they trickled toward the door. Once they were gone, she could go home. And then she could talk to Rick.
She managed to lock her door and kick her heels off, but she had her phone in her hand when she sank onto her couch. Even her pajamas could wait. She pulled up the photo of her and her father and sent it in a text to Rick.
Can't send this to Marie until tomorrow because it's late, but it was a nice night. Calling now but wanted to send pic.
He answered on the second ring, and she felt the familiar thrill at hearing his voice. "You look happy in that picture. You had a good time?"
"I did. I always enjoy the party, but it was also great that my father said I could send that picture to Marie and Joe. I thought about having it printed and framed for them for Christmas, but she was so hopeful I'd get a picture of him at the party that I can't handle how disappointed she'd be between now and then."
"I wouldn't wait, either. And just having it will be a great Christmas gift for her, even if it's a little early. And, hey, if it's in a frame, it's awkward to carry around the neighborhood, showing all her friends."
Jessica laughed, turning sideways on the couch to put her feet up. "Did you do anything fun today?"
"I slept. Then I shoveled snow. Did some errands. Basically, no. Until now, of course."
"I couldn't wait for everybody to leave so I could get home and send you the picture. And talk to you."
"I love the picture. I won't even crop Davey out of it," he said. She laughed, shaking her head even though he couldn't see her. "Anything else happen?"
"My father and I talked a little bit, but that's not fun stuff. The cranberry margaritas were delicious. I wish I didn't have a one-drink rule at the company parties."
"Jess." The way he said her name cut off her chatter. "You can tell me the not-fun stuff, too. I want to hear about it."
She smiled and pulled the throw blanket off the back of her couch to cuddle with. It wasn't Rick, but at least it would keep her bare legs warm. "I asked him about my mom."
They talked for a while about her mother, and about her father's seemingly sudden bout of self-awareness. Then he caught her up on hockey news. The sport still didn't make a lot of sense to her, but she was learning and she loved hearing him talk about it.
Until she heard him trying to stifle a yawn and realized it was now the middle of the night in Boston. "You should go to bed."
"Yeah. You have plans for tomorrow?"
"Not really. I might hit the fitness center for a while in the morning and then I'll probably hang around here. Go through my mail. Spend some quality time with Netflix."
"I'll call you tomorrow, then. Probably late afternoon your time."
"I'll be here. Good night."
"Sweet dreams, Jess."
They definitely would be. Sweet, agonizingly sexy dreams that would make her wake in the morning feeling unsettled and longing for him. Long-distance relationships were hell on sleep.
* * *
On Christmas Day, Rick sat on the battered couch in the basement of his parents' house, once the playroom and then the teen hangout and finally the man cave. But when John's boys came along, it had circled back to playroom again.
He usually worked the holiday, but at the last minute the LT from a nearby station had ended up in a bind. Due to a traveling in-law situation, a very pregnant wife and the potential for more family drama than any one guy should have to handle, his family had to celebrate on the twenty-third. If he couldn't get the day off, he was probably going to have to run away from home. So Rick had worked his tour, spent Christmas Eve with the Broussards and had landed with his family for the big day itself.
Presents had been opened and there was wrapping paper everywhere. And Rick had managed to get himself on the family shit list by gifting his nephews big superhero Lego sets. They'd been opened over their mother's objection and were now strewn from one end of their grandparents' house to the other. Now he, his brother and the kids were banished to the basement to digest dinner while their dad napped in his recliner and the women relaxed before dessert.
"When are you going to settle down, Rick?" John waved a hand at his sons. "It's time for my boys to have some cousins with the same last name as them because they are seriously outnumbered by my in-law's kids right now."
He laughed. "I'm not getting married just so there can be even teams in backyard football games."
"Are you at least seeing anybody?" John took a sip of his soda. "I was pretty surprised you and Karen broke up, to be honest."
"We're just good friends. And I've been seeing somebody for a few weeks. Kind of. Right now I'm not seeing much of her in the literal sense." John frowned and made a hand motion for him to continue. "She's Joe and Marie's granddaughter and she came out from San Diego to meet them and help them with some financial stuff."
"Ah. And now she's back in San Diego, so you're seeing her, but not literally. Got it."
Rick pulled out his phone and checked the time. "I'll be seeing her literally in a few minutes. I sent a note with her Christmas present that she couldn't open it until we were in a video chat so I can see it. She should be calling anytime."
"You sent her a Christmas present?"
"I would have sent it home with her, but I'm surprised her suitcase didn't explode at the seams as it was."
"Did she send you a Christmas present?"
"She left one with Joe and Marie and sent me the other." He smiled at the memory of the awkwardly wrapped snow shovel, with the bent handle that was supposedly better for the back.
That had made him laugh, but the book that came in the mail had touched him. His favorite mystery series was in hardcover on the top shelf of his bookcase, except for the first book, which he'd picked up in paperback on a whim at a yard sale his mom had dragged him to. He'd bought the rest in hardcover but never got around to hunting down the hard-to-find first one. Jessica must have noticed, because she'd sent it to him.
"Did you open them already?"
"Yeah. There was no note with mine saying not to."
"You would have anyway. Growing up, you always found your presents and peeked because you couldn't stand waiting."
Rick laughed, and then his phone rang. He saw that it was Jess. He tried to get to the steps so he could go up and find a private spot, but the boys were blocking the way and then he stepped on a Lego. That hurt like a son of a bitch in stocking feet and he wasted a few precious seconds trying not to swear in front of the kids. Rather than risk missing her call, he answered it where he stood.
"Merry Christmas," she said, and he could tell she was on her laptop, sitting at her table.
"Merry Christmas, Jess. I'm trying to find a quiet spot, but my brother and my nephews are in the room at the moment."
"Hi, Jess," John called out, so Rick had to turn his phone so Jess could "meet" him. And then the boys each had to talk to the pretty lady, including telling her every single thing they got for Christmas. He cut them off when they started gearing up to tell her everything they'd eaten, though.
"Okay, it's my turn to talk to her."
"Upstairs," John said. "It must be almost time for you two to top off your sugar highs."
"Sorry about that," he said once he was finally alone in the basement. He sat back on the couch, trying to find a comfortable spot. It was a little awkward holding the phone at the right angle, but he didn't care.
"They're so cute. And very excited about their gifts."
"They can be a handful, but they're not usually this wound up. Christmas does that to them, I guess." He shifted so his knee was helping to support his hand. If he'd been thinking, he would have brought his laptop. "John's after me to have some kids. He says it's so they're not outnumbered by the cousins on my sister-in-law's side, but I think he just wants me to suffer with him."
"Do you want kids?"
He could tell by the way she didn't look directly at the camera that it wasn't just a polite question. "Yeah, I want to be a dad. I've always assumed I would be someday, though I guess I should start watching the clock pretty soon."
She laughed. "You're not that old."
"What about you?"
"I'm not that old, either." He arched his eyebrow at her, making her smile. "I think I want kids. For a long time I guess I've been afraid of being a parent because I don't really have stellar role models and I used my career as an excuse, but now I feel like the person I am matters more than the people my parents are."
"I like the person you are."
"Is that why you're tormenting me by not letting me open my Christmas present?"
He laughed. "Yes. I only torment people I like."
"You also cheat and open your gifts before Christmas."
"It was Christmas Eve, and I still love my book." He'd called her once she'd texted her dinner with her father was over and confessed that he'd opened his gift early. "Go ahead and open yours now."
She held up the box, which she must have had set just out of the camera's view and then ripped open the paper. When she'd sliced the tape and lifted the lid of the box, her laughter came through his phone's speakers so clearly, it was almost like being in the room with her.
Almost, but not quite.
"This is perfect." She lifted out the copy of Hockey For Dummies he'd bought her, and flipped through the pages before setting it aside. Then she held up the Bruins hockey jersey the book had been resting on.
"If you're going to be a Bruins fan, you should look the part."
"Thank you." She pulled the jersey over the V-neck tank top she was wearing and then blew him a kiss.
"You have no idea how sexy you look right now."
"I'll wear it to watch hockey games," she said. Then her smile turned decidedly naughty. "Maybe it's all I'll wear to watch hockey games."
He groaned and dropped his head back against the couch. "You're killing me, Jess."
"Maybe I'll send you a picture of me in my jersey later."
There was no telling where that conversation might have gone—especially considering it was a video chat—if he hadn't heard the thump of his nephews' feet on the stairs. "I'm about to have juvenile company again, but definitely send me that picture."
With privacy and quiet out of the question, they ended the chat with a promise to talk the next day. Ignoring the knowing look his brother shot him as he joined them again, Rick told his nephews he was in the mood to build some Lego sets.
A week later, when he should have been sleeping in preparation for a busy New Year's Day tour, he made sure he was awake at midnight. Usually he and Jessica talked early enough so he didn't stay up late, so he had to set an alarm. He punched in the text message so when the clock ticked over to the New Year, it was ready to send.
Happy New Year, Jess.
A few seconds later, he got a response. Happy New Year to you, too! Shouldn't you be sleeping?
They say what you're doing when the clock strikes midnight is what you'll be doing all year. I was thinking of you.
Long seconds ticked away as he watched the bubble with the dots indicating she was typing a response. I was thinking of you, too. I would have sent you a text letting you know that, but I know you work tomorrow, so I thought you'd be sleeping and I didn't want to wake you.
Are you at home right now? He hated this impersonal way of communicating.
The phone rang in his hand and he answered it. "Hey, you."
"Happy New Year."
Her voice was quiet, but it didn't sound as if he'd awakened her. "Happy New Year. You didn't have to call. I know it's late."
She laughed. "Not here."
"Oh, that's right. Well, at midnight in Boston, I was thinking of you."
"And at midnight in San Diego, I'll be thinking of you, too. But I'm telling you now because you said New Year's Day is always busy and you need to sleep now, so I won't text you."
"I do need to sleep. But I'll text you tomorrow. And maybe call on Sunday."
"Okay. Good night, Rick. And I hope you're right about doing all year what you were doing at midnight."
"I am. Good night, Jess."
When he hung up, he plugged his phone in and pulled the blanket up over his head. He hoped he was a little right about that tradition. He had no doubt he'd spend a good chunk of the New Year thinking about Jess.
He just hoped he wouldn't have to spend the entire year doing it from the opposite coast.
Maybe it would be easier if he just cut her loose and didn't contact her. The text messages and phone calls just made him miss her more and with every conversation he was reminded he couldn't hold her and she was living a life thousands of miles from his. But he couldn't imagine not talking to her at all so, for now, he'd take what he could get.
Chapter Fifteen
Jessica wasn't surprised when Marie's name flashed on the screen of her cell phone. Her grandmother had really taken to her new smartphone and they'd talked—or at least sent texts back and forth—every day over the five weeks since she'd returned to San Diego.
Leaning back in her office chair, Jessica hit the button to answer the video call. She knew Marie got a kick out of seeing her dressed up, with the view of San Diego over her shoulder. "Hi, Gram."
"Hi, honey. Are you busy? I hate to bother you at work."
"It's fine. Honestly. I told you before if I'm too busy to talk, I'll just send you to voice mail and call you back when I get the chance. How's everything going?"
"Good." Jessica tried not to visibly wince as Marie moved on the couch and everything blurred for a few seconds. "But your grandfather and I have been talking and we think we're going to have the real estate agent put the house on the market."
For a few seconds, she didn't respond and she hoped her reaction didn't show on her face. Helping them make the decision to sell their house had been her objective when she went to Boston in the first place, but it had been nothing but a building at the time. Now it was her grandparents' home and she felt an emotional connection to it she never saw coming. "And you both want that?"
"Yes. Since you left, the house has felt more empty and it's helped us see how ridiculous it is for the two of us to rattle around in it alone."
She missed rattling around in it with them. "Do you have something else in mind already?"
"Maybe. Joe was talking to some of his buddies at the market and one of them moved into a new senior facility not too far away. It has elevators and you can get housekeeping services if you want. With Social Security and the money from the house, we should be able to afford it."
"You can't make a decision like that unless you know you can afford it," Jessica said, because fiscal responsibility had been ingrained in her practically from birth. Then she smiled. "How about I fly out and take a look at the place with you? We can figure out the cost, including any fees above the standard lease amount they might not tell you about up front, and we'll go over your financials again together."
Marie's face lit up. "That would be wonderful. Even though we know about what we can expect to get for the house, what we do with that money is important and you can help us figure that out. And you did a lot more research into the housing market than we did, so maybe you could be here when the real estate agent comes back, too?"
"I'll need a couple of days to make arrangements," she said. "Today's Friday, so I'll need the Monday to wrap things up, too. I'll fly out Tuesday and we'll go from there."
When they'd ended the call, Jessica sighed and leaned her head back against her chair. In a few days she'd be back in Boston and she couldn't wait to tell Rick she was coming.
Just like with her grandmother, she and Rick had talked every day over the past few weeks. Sometimes it was only quick text messages and sometimes they did video chats, but usually there were long phone calls at the end of the day.
She'd gotten so she could hear in his voice how a shift had gone. She could hear when he was smiling or when he was so tired she knew talking to her was the only reason he wasn't already asleep. The time zones were a challenge, and she'd gotten in the habit of leaving on time every day, willing to bring her work home so she'd be free to talk to Rick before it was too late on his end.
They'd watched a movie together, their phones on speaker next to them, and last week they'd finally caught a Bruins game on television at just the right time. She ached to physically be with him and touch him, but Rick on the phone was better than no Rick at all.
A sound caught her attention and she lifted her head to see her father standing in the doorway. Judging by the look on his face, he'd been there for at least a few minutes. "Hi, Dad."
"That was my mother."
It wasn't a question, but she nodded. "How much did you hear?"
"Enough to know you're going back to Boston. Do you know how long you'll be away this time?"
So he wasn't going to be stubborn about her going. "I don't know. A couple of weeks, maybe. You already know I can make that work."
"Are they still doing okay, though? Why did they suddenly decide to sell?"
"I don't think it's sudden. It's a conversation they've been having since Joe fell and after so many years in one place, it takes a while to work your way around to the change, I guess." She took a deep breath. "You could have said hello, you know. She would have liked to see your face."
He shook his head so quickly that she guessed it was something he'd considered but already dismissed. "I've been an ass for so many years and I don't know how I'd begin to come back from that. How do I do that, Jessica?"
She smiled. "You knock on the door and when they open it, you say I've been an ass and I'm sorry and you go forward from there. You could come with me, you know."
The fact he thought about it, and seriously judging by his expression, was heartening. "I'm not ready quite yet. I want to feel stronger in case it doesn't go well. Even when I was a kid, my old man and I butted heads a lot."
"I think you've both mellowed. And maybe you won't be as close as some fathers and sons, but being able to get together for Thanksgiving or Christmas once in a while would be a great start. If I ever get married, I'm going to want all three of you there. And if I have a family, I want my kids to have all of you in their lives together."
"I hope that day comes. I really do." He gave her a sad smile. "For now, I'll stay out of the way and let you build your relationship with them, and I'll start by not being too put out you're leaving the office again."
"I'm taking the office with me," she reminded him, waving her hand at the laptop and phone on her desk.
"At this rate, it would probably be more efficient to open an office in Boston and expand the business," he said before walking away, and she couldn't tell if he was being snide or sincere.
After grabbing a juice from the minifridge in the corner of her office, Jessica pulled up the ongoing text message thread with Rick and started typing. I'm coming back for a visit.
It didn't take him long to respond, so she pictured him at the fire station, either working in the engine bay or hanging out in the living space upstairs. When?
Probably Tuesday night.
Shit. I'm spending the weekend at my brother's to dog sit for them and going straight to the station on Tuesday. I'll trade shifts and pick you up at the airport.
Then you'll have to work an extra shift while I'm there. You know Marie's not going to let me out of her sight for hours, so get your shift in while I'm with my grandparents. Then I'll sneak upstairs when you get home.
Naked?
She laughed, shaking her head at her phone. I won't sneak up there naked in case I get caught, but I won't wear anything with too many buttons.
I can't wait. I've missed you.
I've missed you, too. Just a few more days.
* * *
"I swear, one of these years I'm going to carry wads of ten-dollar bills around with me and pay the neighborhood teenagers to shovel." Rick stood straight and stretched his back, glaring at the fire hydrant they'd just shoveled clear of snow. "I'm too old for this shit."
"I know you're old," Scott said, "but it's still pretty damn sad that you think a teenager will pick up a snow shovel for ten bucks."
"Speaking of old, I think Eriksson can shovel out the next one and I'll sit my ass up in the truck." Aidan glared at Chris, who gave them a cheerful wave from behind the wheel.
"Let's go." Rick pulled the map and a pen out of his pocket to check off the hydrant and then they started up the sidewalk to the next one, the truck creeping up the street behind them.
They'd all tossed their bunker coats in the cab a long time ago and were working in just light sweatshirts. Rick probably would have pulled that off, too, except for the fact that doing sweaty work in a T-shirt in the cold could get a body in trouble.
Every time it snowed, every single hydrant had to be cleared of snow to below the valve for the hose attachment and at least a couple of feet out. It only took a few minutes to do each one, but there were a lot of hydrants in their neighborhood. Even if the snow was deep, the guys from Ladder 37 could find most of them just by memory, but Rick still used the map. If he ever missed one and they were delayed knocking down a fire and somebody got hurt or worse because of the time it took them to get water, he'd never forgive himself.
It wasn't bad today. It was their first time out and it was a nice change of pace. The shoveling part sucked, but it wasn't a bad workout and they shot the shit while they worked. Later in the winter, when the snow came more often and it was bitterly cold and they were doing it for the sixth or seventh time, the mood would be a lot more grim.
When they turned one corner, Rick was heartened to see an exposed hydrant. A pack of kids, with the help of a few adults, were going down the street and shoveling them out. The guys made sure to thank them profusely, and Chris gave them a quick salute with the lights and siren before they trudged on to the next block, where the residents weren't quite as civic-minded.
"You heard from Jessica?" Jeff asked, leaning on his shovel while Rick marked off another cleared hydrant.
"About an hour ago," he said. "She texted me to let me know she finally landed. The weather caused some delays, but she's finally on her way to the house."
"Your house."
"Her grandparents' house."
Jeff shrugged. "Same roof. She just back for a visit?"
"I guess Joe and Marie are ready to talk about selling the house. I haven't talked to them since I heard she was coming back because I was down at my brother's, so I don't know the specifics."
"Oh man, that sucks. That was a wicked good setup for you. Great apartment and low rent."
Rick shrugged as they started toward the next hydrant. "I'll miss the shower, but I can find another place. Hopefully the remodel I did on the place will up the value and help pay them back for all they've done for me over the years."
"I think you did as much for them."
Rick sighed. It had been a great setup and not just because he had a great apartment for reasonable rent. Joe and Marie had become like family to him and he actually liked taking care of the place for them. The thought of them selling the house felt a lot like he imagined it would feel if his parents decided to sell the house he and his brother had grown up in. But he also believed it was best for Joe and Marie to downsize, and their well-being came first.
"I might buy a place," Rick said, though it hadn't really crossed his mind until just that second. "I can fix it up the way I want and not worry about whether or not I like the new landlords."
Scott, who'd been listening without comment, paused in the act of stabbing the shovel end into a particularly large snowbank, looking for the clank of metal that would pinpoint exactly where the hydrant was. "You know as soon as you get it just the way you want it, you'll start seeing some awesome woman and you'll want to marry her and then she'll tell you she hates your house. Or the schools suck or there's no place to get her nails done."
"Pretty sure we have more nail places than we do bars now," Jeff said.
That sent the two of them off onto a tangent about manicure and tanning places, and that suited Rick just fine. He didn't want to continue a conversation that included trying to imagine some faceless woman in a strange house she didn't like. Since the beginning of December, the only woman he pictured himself being with was Jess.
He wanted this damn shift to be over so he could see her. The text messages and the phone calls had been nice, but they didn't make him feel the way being in the same room as Jess did. He wanted to see her smile and get her naked in his kitchen.
Eriksson yelled at them from the truck, pounding the outside of the door. "Hey, we've gotta go!"
Rick took a second to get his bearings and circled the hydrant on the map so he'd know where they left off, and then he jogged to the truck. Eriksson hit the siren as they climbed in and pulled away from the curb.
"Looks like we have an electric space heater and shitty extension cord situation," he told them as they pulled on their coats in the confines of the cab. "Home owner thinks it's out, but he's worried about the wall."
Hopefully it would be a quick in and out, Rick thought as Eriksson guided the truck through streets that were even more narrow than usual thanks to the snowbanks. He wasn't sorry to have a break from shoveling, but he needed something to do besides hang around the station and watch the clock.
He still had a lot of hours to kill before he saw Jess again.
* * *
"Good morning, honey," Marie said when Jessica wandered into the kitchen in search of coffee the next morning. "It's so good to have you back."
"It's good to be back." And it was. Joe and Marie had greeted her with warm hugs yesterday, and then they'd taken her to their favorite Italian restaurant because they decided she needed a big meal after the travel headaches she'd dealt with.
"We got some snow during the night." Marie poured her a mug of coffee and set it on the counter so Jess could add milk and sugar. "Not a lot, but enough so it'll have to be cleaned up. I wonder if I should make a big breakfast. Rick just got home and he should go to bed, but if I know him, he'll be out there with a shovel so Joe doesn't try to do it."
"He, uh... I'm going upstairs to have breakfast with him," Jessica said because she didn't see any way around telling her. "He sent me a text a few minutes ago to let me know he's going to take a shower and then start cooking."
"Oh." The corners of Marie's mouth lifted just enough to give away her amusement. "I'm sure you two have some catching up to do since you've been gone for weeks."
"We'll probably talk about the house. He knows a lot about it, of course."
"I'm sure that must be it." And then her grandmother winked at her.
With no idea what to say to that, Jessica took her coffee to the window and looked out at the new snow. It was pretty, she had to admit, but it was definitely nicer from this side of the window. Late January in Boston was very different than December and, even though she'd dressed for the weather, the cold had threatened to steal her breath when she walked out of the terminal.
She'd just finished her coffee when she heard Rick's truck pull into the driveway, so she rinsed the mug and kissed Marie's cheek. "I won't be too long. You're probably right about Rick wanting to get out there and shovel."
Rather than get bundled up to go outside, Jessica used the staircase up to the third floor that was at the end of the hall opposite her bedroom. They rarely used it because the stairs were steep and narrow, but Joe had told her they hadn't locked the door since Rick moved in, and they liked knowing it could be a fire exit for him if necessary.
After a quick knock, she opened the door and stepped into Rick's kitchen. He'd obviously just walked in because he was still pulling off his sweatshirt. She'd been anxious walking up the stairs, afraid that somehow it wouldn't be the same now that she'd been gone, despite the fact they'd talked every day. But the look in his eyes and the warm smile made something shift inside of her. He'd missed her. She could see it and she could see that he was as happy to see her as she was to see him.
Tossing the sweatshirt aside, he strode across the room to her and hauled her into his arms. He kissed her, his mouth hot and demanding, until she was breathless and her knees were weak.
"Jesus, I've missed you," he said against her mouth.
She backed away enough to peel off her shirt and bra, then slid the yoga pants she'd thrown on to the floor. He was even faster and by the time she was free of her clothes, he was not only naked but had rolled on the condom he must have stuck in his pocket for this moment.
"I swear that felt like the longest shift I've ever worked," he said, pulling her body hard against his.
"That was the longest five weeks of my life." She wrapped her arms around his neck and lifted her mouth to his.
His hands cupped her breasts, thumbs running across her nipples. "This time I'll just tell you up front I'm going to kiss you and touch you every chance I get."
Jessica ran her hands down his back to the curve of his ass. "While naked, as often as possible."
When he slid his hand between her legs, her knees weakened and they went to the floor. His free hand was between her head and the tile as he kissed her. With a moan she opened her legs as he slipped a finger deep inside her.
"I can't wait," she said, sliding her heels up the back of his legs.
"Good, because I can't, either."
She gasped when his cock drove into her, her back arching off the floor. His fingertips bit into her left hip as he moved, and he leaned on his other arm so he could look down at her. His gaze as he watched her was intense until he raised his eyebrow in that way she found so sexy, and she smiled.
"What?" he asked.
"You have the sexiest eyebrows."
He rocked his hips in a lazy rhythm. "I don't think anybody's ever told me that before. As a matter of fact, I'm sure of it."
"They were one of the first things I noticed about you the day I got here. We were still outside on the sidewalk and I was distracted by what great eyebrows you have."
"I noticed everything about you. Especially your eyes." He grinned. "And your ass. And your legs."
Jessica wrapped those legs around his hips and he thrust deep enough so she moaned. When she ran her hands up his back, the muscles were tight under her palms. The tension in her body built and her breath quickened along with his thrusts.
She cried out as she came, finding the release her body had been wanting for weeks, and it wasn't long before Rick's body stiffened and she felt his orgasm pulsing through his body.
When he collapsed on top of her, his breath hot and ragged against her neck, Jessica wrapped her arms and legs around him, holding him close. For the first time in weeks, she was totally content and the warm rush of happiness was almost as potent as the post-orgasm glow.
But once she'd caught her breath, Jessica became aware of how hard the floor was under her body. She moved a little, and saw the wince on Rick's face when he took some of his weight off of her by shifting it to his arm and hip.
"Are we cuddling," she asked, "or just trying not to admit we might be too old to have sex on a tile floor?"
He chuckled, and then groaned as he pushed himself off the floor. "I was hoping you wouldn't notice my reluctance to try to get up."
He helped her up and, after kissing her again, walked to his bedroom. Just as she finished putting her clothes back on, he returned. He'd thrown on a pair of sweatpants, but skipped the shirt. She didn't mind at all.
"I'll pick up my clothes later," he said when she looked at them scattered on the floor. "Now that my biggest hunger is taken care of for now, it's time for breakfast. I'm starving. You want an omelet?"
"That sounds delicious." So did the idea of watching her shirtless man cooking her breakfast.
"I missed talking to you," he said, taking a carton of eggs out of the fridge. He set them next to a big mixing bowl and cracked the first egg open. "The phone just isn't the same."
"I missed you, too. What can I do to help?"
"You can grab us some coffee and then sit and talk to me," he said. "And then later, I'm going to drag you outside and teach you how to shovel snow."
She laughed and went to his coffeemaker and poured them each a coffee. "Good luck with that. And Marie was right. She said you should go to bed, but you'd probably shovel snow instead. She was going to make you a nice breakfast to keep you going."
"You should have told me that before I started cracking the eggs." He winked at her. "So I guess you told her you were coming up here to have breakfast with me?"
"I didn't really have a choice." She sat on a bar stool and watched him drop a blob of butter into a frying pan. "Is that okay?"
"Of course. I mean, I guess all I can do is hope they're okay with it. Did you not want to tell her?"
Jessica shrugged. "Joe and Marie are a lot of things, but stupid isn't one of them."
"Very true. So how was your father about you coming out here again? Did he give you a hard time?"
"No, he was really good about it, actually, and said he didn't want to get in the way of me building a relationship with Marie and Joe. I think he's starting to regret a lot of the choices he's made in his life."
"As he should," Rick said, pouring the egg mixture into the pan.
Jessica fought back the automatic reflex to defend her father and said nothing instead. She knew nothing Rick had ever heard about David Broussard would inspire him to like her father, so he'd believe it when he saw it, so to speak.
She changed the subject to the weather while he cooked their breakfasts. It surprised her that a guy who spent as much time outside in the cold as he did wouldn't be jealous of her home city's temperate climate. "You wouldn't even need to own a coat."
He laughed. "I don't mind owning a coat. And I need four seasons. Without cold and snow, how do you know when it's time to start singing Christmas carols?"
"Oh, the department stores will let you know."
The omelets were delicious, but she balked when he told her to go downstairs and borrow some good boots, along with a coat and gloves, from her grandmother. "I have boots and a coat."
"Not for shoveling snow, you don't."
"I don't mind watching out the window."
He laughed and nudged her toward the door. "Meet me outside in ten minutes. You'll have fun, I promise."
Shoveling snow didn't sound at all fun to Jessica, but spending time with him did. And since he was going to be outside in the cold, she did as he suggested and borrowed Marie's coat and boots. Her grandmother also loaned her some brightly patterned wool mittens with a matching hat.
None of which saved her from the first shock of stepping out the door. It felt even colder than yesterday, when she'd arrived in the city, and she wouldn't have thought that was possible.
When Rick stepped out of the garage with two snow shovels, she shook her head. He was wearing a zip-up hoodie—albeit a thick one—and had gloves on, but no big coat or hat. She knew it was just a matter of her being out of her natural climate, but she thought he might be showing off a little, too.
And there was no way she'd let him—or Mother Nature—get the better of her.
Chapter Sixteen
Rick had to admit, Jess was either a lot tougher or a lot more stubborn than he'd given her credit for. He would have bet money she'd make it about fifteen minutes before she gave him his shovel back and went inside.
But she made it almost an hour, shoveling snow in the name of working out, and then practicing her snowball-making skills. The first few she tossed in his direction disintegrated on impact, but he made the mistake of showing her how to take the loose snow and really pack it down, breathing on it to help make it sticky. She managed to make a few small snowballs that actually stung a little bit.
"You have a good arm," he said. "If this was good snowball snow, you could hurt somebody."
"Isn't all snow good snowball snow? How can there be bad snow for snowballs?"
He laughed and explained the difference between the dry and fluffy snowflakes that fell when it was really cold and wouldn't stick together, and the wet, heavy snow that fell in warmer temperatures and could be packed into snowballs that were practically lethal.
"I think my friends would be a little surprised if they could see me right now." She laughed, a short self-deprecating sound, while brushing snow off the wool mittens.
"You look beautiful. Your cheeks are all flushed and your nose is red. It's cute."
She gave him a look that let him know she thought he was crazy. "Yes, red noses are all the rage right now. So tell me, what do you do after you shovel snow?"
"Usually, unless I have errands to do, I read for a while. After being outside, it's nice to curl up on the couch with a blanket and a book and relax."
"I just happened to borrow a book from Marie's bookshelf last night."
"Grab it and we'll go snuggle on my couch and read for a while."
The smile she gave him seemed to grab hold of something deep inside of him and squeeze. And that scared him. It had sucked when she'd gone back to California the first time and, even though she'd only been back in Boston a day, he already knew it was going to suck even worse when she left again. Though he was pretty sure it was already too late, he should be trying to put more distance between them, not getting closer.
Rolling his eyes at himself, he put the shovels away. He was pretty sure it was too late. He knew it was. And even though he knew living on two opposite coasts was going to be a serious problem they'd have to solve in the future, he didn't see himself giving up on this relationship.
A little while later, they couldn't get any closer and he wouldn't have it any other way. Because she'd caught a chill, she'd curled up on his lap and covered them both with the fleece blanket he kept on the back of the couch. When he'd stretched his legs out, she'd stayed put, using him like a heated recliner.
It was awkward, holding his book open with one hand and turning the pages with his thumb, but he managed because it was worth the effort to have Jess stretched out on top of him. Concentrating wasn't easy, either, and every time he started losing himself in the story, she'd sigh or shift slightly and become the center of his awareness again.
Then she snorted. "I think if I was running for my life, hiding from the bad guys in a warehouse, I wouldn't be in the mood to have sex."
"Duly noted."
She laughed. "How's your book?"
"Not as interesting as the sounds you make reading yours."
"I do not make sounds."
His arm was wrapped around her waist, and he squeezed as he kissed the side of her neck. "Oh, you definitely do. A couple of chuckles. A few sighs. And a snort which, judging by the timing, was your opinion of being turned on when somebody's trying to kill you."
"It's so dumb. I mean, if you were in a house that was on fire and you just happened to stumble on a room with no smoke in it, would you feel compelled to stop and have sex in it?"
"No." He paused. "A blow job, maybe, but not sex."
She elbowed him hard enough to make him grunt. "You would not."
"Of course not. With the amount of gear we wear, by the time I could get my dick out, the flames would be knocked down and the guys would be in the truck, laying on the horn."
"Funny." Sighing, she closed the book and tossed it onto the coffee table without getting up. "That's not a very good book."
His was, but she was more interesting to him than any work of fiction, so he did the same. "How long are you planning to stay this trip?"
She thought about it for a few seconds, and then he felt her shrug. "I'm not sure. Marie's left two messages for the real estate agent, but they're playing phone tag. And I'm helping her go through the boxes that represent decades of the worst filing system ever. She has receipts for everything, like the heating system and stuff like that, and prospective buyers will want to know the dates. We just have to find the paperwork."
"So probably at least a week, then?"
"At least. Probably more like two. I told my father it might be a couple of weeks."
Two weeks...maybe. And he had a feeling that they'd be spending a lot of time together over the course of the two weeks. When the time was up and she had to go back to her life in California, he was going to have one hell of a hard time letting her go.
"I should probably go downstairs and let you get some rest or something. You said you shoveled out fire hydrants all day yesterday and then you shoveled snow today. You must be exhausted."
"You're right. I should go to bed." He kissed her neck again, and then gave it a gentle bite. "You should go with me. We can pretend bad guys are chasing us."
She laughed and rolled so she was straddling him on the couch. "Let's see if you're hero material, then."
* * *
Since they'd kept in touch a little by way of Facebook and they'd reached out to her when they found out she was back in the city, Jessica met Lydia and Ashley at a tiny Chinese restaurant they said was within walking distance of Joe and Marie's since parking was almost impossible in that area in the winter. Unfortunately, their idea of walking distance didn't factor in the weather and by the time she stepped through the front door, she wished she'd at least called a cab.
They were already there and they waved her over when they saw her. Jessica didn't even take off her coat before sitting down. Her gloved hands were freezing and she rubbed them together, hoping she could get a hot cup of coffee here.
"You look like you want to cry," Ashley said, sympathy heavy in her voice. "I should have told you to take a cab, but I haven't really paid attention to the weather forecasts lately. It's winter. Winter sucks."
"I'd cry, but I'm afraid my eyeballs would freeze if they get that wet." Both women laughed, probably not realizing she was serious.
"So Rick must be glad to have you back on this coast," Lydia said once they'd all ordered coffees and Jessica had requested a few minutes to thaw out before deciding what she wanted to eat.
She pressed her gloved hands to her cheeks, trying to warm them enough to manage normal facial expressions. "I think so. He seems to be."
"Are you happy to be back?"
"That's...not an easy question to answer," she said honestly. "It's kind of a mess."
"That's why you have friends to talk to," Ashley said. "Sometimes things aren't as messy as they seem when you're the one in it."
"I'm glad to be back because I've missed Joe and Marie. And Rick. But being back also makes it harder because it starts to feel like real life and I like it. But my real life is actually in San Diego, so it screws with my head."
Lydia held up her hands. "This might be overly simplistic, but if you prefer this life to your so-called real life, why not make this your real life? We have financial advisors in Boston. Good ones, even, or so I'm told."
"But it's not just switching jobs," Ashley said. "She built that business with her father and she's probably meant to take it over."
"Yes," Jessica said. "Besides my house and my friends, there's loyalty not only to my father and everybody who works for us, but to the plan I had, you know? I mean, I guess it was mostly his plan, but I've invested most of my life into it."
"I guess we know how that feels," Lydia said. She gave Jessica a sympathetic look. "My dad just assumed Ashley and I would run the bar and I hated that. I hated the whole firefighter thing, too. I even left Massachusetts to get away from it all, but I came back to help Ashley out while she and Danny went through their rough patch. Then Aidan was all smoking hot and sweet and sexy and...well, here I am. But this time it's my choice and I have no regrets."
"What do you mean you hated the whole firefighter thing?" Jessica asked because if there was something specifically bad about firefighters, it would probably be helpful to know that now.
Lydia shrugged. "It's a close community, and that's obviously a good thing. But they're a brotherhood and sometimes it feels like they come first, before families. And it can also be claustrophobic at times. My first husband was a firefighter, too, and I took a lot of shit when I got fed up and divorced him. Forgiving the community for circling the wagons around him took me a while."
"But Aidan's not like that?"
"He is to a point. Their lives depend on each other so they have each other's backs to a degree not a lot of people can understand. They're truly brothers."
"And sisters," Ashley added.
"And sisters. But I know Aidan doesn't put anybody else before me and that matters."
"Wow." Jessica drank another gulp of coffee and then peeled her gloves off. She'd give the coat a few more minutes. "I had no idea. So far my experience with...being involved with a firefighter is trying to spot him on the news, which is dumb, of course. I don't know enough about the fire stations to even know what locations he'd respond to."
"It probably seems fun at first," Ashley said. "Trying to spot him on the news, I guess. But it's easy to become obsessed with that. With knowing he's okay, I mean. It makes the waiting harder."
"I agree," Lydia said. "Everything's on social media. People are live-tweeting fires on Twitter and there are Facebook statuses and videos on Snapchat and Instagram. They're actually streaming as it's happening, and that means it's almost like being there at the fire with them, but being helpless to do anything but stand there and watch."
"I don't follow any of it," Ashley said. "If something happens, they'll tell me. Otherwise, the only way to get through each shift is to assume everything's going okay and his training and experience is keeping him safe."
"But you both chose firefighters anyway?" It was a lot to think about.
"The heart wants what it wants," Ashley said.
"My heart isn't the only part of my body that decided it wouldn't settle for less than Aidan Hunt," Lydia added, her snarky smile looking so much like her brother's.
Jessica finally took her coat off to hang on the back of her chair, shoving her hat and scarf into one of the sleeves like she'd seen her grandmother do. "Firefighters seem to have good...endurance."
Lydia and Ashley laughed, but their server chose that moment to see if they were ready to order, probably signaled by Jessica removing her coat. After a quick scan of the menu, they ordered a variety of dishes to share.
"So I take it things are going really well with Rick, then?" Lydia asked once they were alone again.
"We're...yeah." She smiled. "Things are going well. It's almost like we weren't even apart for weeks."
Ashley smiled. "I'm glad. He's such a good guy."
"So what's holding you back?" Lydia asked Jessica.
"We're kind of thrown together because of my grandparents. Once they're settled, what brings us together? How many times, realistically, can I travel to Boston in a year?"
"Ideally it'll be love that brings you together. But I don't think many relationships could survive that kind of distance. I mean some do, but probably not many. One of you will have to make a hard choice eventually. And eventually might not be that far away."
And it would be her who made the hard choice. Even though she'd lived in San Diego her entire life, she couldn't picture Rick living there. This place—this community—was a part of who he was and that bond was part of what she loved about him.
"Shit."
Lydia and Ashley both looked at her, but it was Lydia who spoke. Jessica had already figured out she was the more vocal of the two sisters. "What?"
"Nothing."
"No, that was a particularly vehement shit. And we're bartenders. We're awesome at picking up those kinds of signals and when somebody hisses shit in that way, something's usually wrong."
Jessica sighed. "I think I just realized—like really admitted to myself for the first time—that I'm in love with him."
"Honey, if ever there was cause to bust out a four-letter word, it's being in love with a firefighter," Ashley said, and they both raised their coffee mugs in a toast to her.
* * *
Usually Saturday night wouldn't be a good time to hit Kincaid's Pub looking for company and a game of pool, but the wind chill was pretty fierce and Rick thought it would keep a lot of people from leaving their homes just to get a beer and some wings.
He sent out a group text that yielded the information he was the only single guy at the station who didn't have a date that night. Aidan had said he might stop by, but Rick didn't see him when he walked in. That was fine. The television was offering up sports highlights and they were company enough.
Karen was covering the bar tonight, which didn't surprise him. He already knew Lydia and Ashley had the night off, since they were going out with Jess, so they had to get somebody to fill in. Even though he owned the place, Tommy preferred sitting at the bar with Fitz to actually working the bar and preferred not to do any of the actual manual labor unless it was an unavoidable situation.
"Hey, Rick," she said, setting a beer in front of him, making the lights flash off her engagement ring. "How you been?"
"Not too bad. Looks like a slow night."
"Trust me, that doesn't break my heart." He knew what she meant. This time of year kept first responders and emergency rooms hopping, so tending the bar on a slow night probably felt like a vacation to her. "You all alone tonight?"
"Everybody had plans or didn't feel like going out in the cold, I guess."
"It wouldn't be so bad if the wind would die down."
"Let me ask you something. The day Joe and Marie were in the ER because he fell and you showed me your engagement ring, I said it happened fast. And you said 'when it's right, it's right.' What did you mean by that? I mean, how did you know it was right?" When she looked at him as if he'd just asked her something outrageous, like her bra size, he looked down and traced the condensation on his mug. "Forget I said anything."
"Rick Gullotti, if you just hinted around that you're even thinking about settling down, this moment is unforgettable."
"Exaggerate much?"
"No." She tilted her head to give him a considering look. "I guess you asking that question when your landlords' granddaughter just happened to return to Boston is a coincidence?"
"She won't be my landlords' granddaughter very long since they're selling the house."
"Nice deflection, Gullotti. But I'll let it slide. How do you feel about them selling? You put so much work into your apartment and now you might have to find a new place and start all over."
He shrugged. "It's just an apartment. And nothing says I'll have to start over. Whoever buys it might want to keep me there, although I'm sure they'll raise the rent. Or there's a possibility we can have the property deed changed so I can buy the apartment as a condo, separate from the rest of the house. I think. I haven't really looked into it yet because I thought I'd have a little more time."
And because it wasn't something he'd wanted to deal with. Not only because he hated paperwork, but because he was afraid if he broached the subject to Joe and Marie, they'd realize he really didn't want to move and they might factor that into their own decision. Now that they'd made their decision, it might be time to look into it.
"So how long is Jessica in town for?"
"A couple of weeks, probably." He took a sip of his beer. "How do you know so much about what's going on?"
She smiled. "You think Aidan catching you kissing a woman in the pool room isn't going to get talked about?"
"It's ridiculous. The city's a little big for such a small-town grapevine."
"The important thing here is that she came back. Focus on that."
"She wants to check out the place Joe and Marie are considering moving into. I guess places like that sometimes tack on a shitload of hidden fees you don't find out about until you've already started the process. She wants to make sure they can afford it."
"She could have done that by phone," Karen pointed out.
He wanted to believe seeing him had factored into Jess's decision to fly all the way across the country again to handle business that probably could be handled by phone and online. "She knows it makes her grandparents feel better having her out here."
"It's nice of her to come all the way to Boston to help them with this. It must be a daunting decision for them."
"Yup."
"But she's getting to know them, right? I mean, she'll still come out here even after the business end is settled?"
Rick knew exactly where Karen was heading with that. "Yeah, she will. Though it's hard to say how often."
"You think she'd consider moving out here? With the right incentive, of course." And she tilted her hand again, letting the light refract off her diamond.
"I doubt it. She's pretty thankful to have Joe and Marie in her life, but it doesn't change the fact she's spent her entire life in San Diego. She has a home and friends and not only does she have a job, but it's a family business. That's not easy to walk away from."
"You're a great guy, Rick. You have no idea how much I used to wish we had that something special I was looking for. If you think she's the one, you need to let her know that. How can she make decisions for her future if she thinks what you two have is just casual sex?"
Just the thought of having that conversation made his chest ache. What if she thought it was just casual sex? He'd been told so many times that he wasn't the marrying kind and he'd laughed it off. Hell, there was a time in his life when it had been some kind of misguided badge of honor. But he had a gut feeling hearing Jess say those words to him would hurt like hell.
"She knows it's not just casual," he said. Moment of panicked doubt aside, he was certain Jessica knew she was more to him than a casual fling.
"Have you thought about moving to San Diego?" Karen asked.
"Maybe I should have gone to a different bar," he muttered.
"In other words, you have."
"I don't really see myself doing that. I love it here. I can't imagine leaving Ladder 37. My family's nearby. And there's Joe and Marie. Just because they're moving into a smaller place doesn't mean they don't need somebody looking after them a bit."
Karen shook her head, tossing her bar towel over her shoulder. "You sure are full of reasons why a relationship with Jessica won't work."
"It's called being practical."
"Or being scared."
She walked away before he could argue with that, which was probably a good thing because the words to deny it wouldn't come to him.
Chapter Seventeen
The next Monday, Rick looked around his apartment and decided it was as good as it would get. He knew the real estate agent was already downstairs with Joe, Marie and Jess. It was only a matter of time before they'd make their way to his place, so he'd given it a quick cleaning and made sure there was no clutter lying around.
The agent would need to take pictures, he supposed. And probably a ton of notes. And he still hadn't decided if he wanted to broach the subject of restructuring the property so he could buy the top floor. The more he thought about it, the less attractive an option it seemed.
Sure, he'd sunk quite a bit into the renovation. He could afford to since the Broussards lowballed his rent. But he wasn't sure he had the heart to live there without them. Another family living under him wouldn't be the same, whether he liked them or not. And he'd renovated this apartment to suit his taste. There was no reason he couldn't do it again.
When the sliding door opened, he looked up from his seat at the island, expecting to see all of them walk through the door. But it was only Jess and she hurried to close the door behind her. They were heading into a deep freeze, weatherwise, and it was already bitterly cold.
"She didn't leave, did she?" That wouldn't make sense, since the property couldn't be listed without mentioning the third-floor apartment.
"I don't think she's ever going to leave, to be honest. Joe and Marie are really taking her request to give them some history of the house seriously. Every room seems to have a dozen stories."
He opened his arms so she could step into his embrace, then wrapped her in a warm hug. She was shivering a little, just from the walk up to his apartment. "Why didn't you use the inside stairs?"
"I had an opportunity to escape and I took it. I was closer to the back door and was afraid I'd get sucked into another description of what kind of wallpaper was in the kitchen forty years before I was born."
He laughed and kissed the top of her head. "Driving you a little crazy?"
"A little? Measurements. Photos. Facts. I thought it would take maybe half an hour, tops."
"Marie doesn't do anything in a half hour. You should know that by now."
"Are you okay with this?" she asked. He wasn't sure what she meant by that, so he frowned. "The real estate agent, I mean."
"Oh. It would be awfully hard for her to list the house with an entire floor of description missing."
"I'm not asking if you know the logical reason for her presence. I want to know if you're okay with it. I know I focus on this being Joe and Marie's home, but it's your home and it can't be easy."
"The difference is that Joe and Marie own the house and I rent an apartment. It's the chance you take if you don't buy."
She backed away from him so she could see his face. "That can't be how you really feel."
He shrugged and moved around the island to put the roll of paper towels he'd been using back on the counter. "I love this apartment. And I love the house and I love Joe and Marie. But they're moving on and I will, too."
She looked at him for a long moment, and he could tell she wanted to poke at his feelings about the house some more. But she must have decided against it because she smiled. "It'll be a shame to give up that master bathroom. Especially the shower. It's so big two people could have sex in there."
The thought of Jess naked against all that white tile made him instantly hard as a rock and Rick was glad he was standing on the opposite side of the island. "Yes. Two people could have sex in there."
She shivered and rubbed her hands on her arms. "I bet it's warm in there, too. I think I caught a chill coming up here."
"I'd take you in the bedroom and warm you up, but you know the second I get my hands on you, they'll walk through the door."
"I'd rather not take the chance of getting caught naked by my grandparents and a very nosy lady with a camera, thanks." But she gave him a smile that just intensified the ache. "Rain check."
A few minutes later they heard Joe's heavy tread on the deck and Jess went to open the slider for them. Once they were in out of the cold, Joe introduced the real estate agent to Rick.
"We'll be quick," she promised, and he saw Jess—who was standing behind the trio—roll her eyes.
It bothered him more than he'd expected, watching the woman take pictures of his home so they could sell it to somebody else. He stayed where he was behind the island, only moving when she wanted to take a shot of the kitchen. Not only did he not care to follow them around the apartment, but every time he looked at Jess, he got that mental flash of her naked and slick and soapy in his shower again.
As Joe and Marie took the agent into his bedroom, Jess joined him at the island. "You okay?"
"Sure. Do I not look okay?"
"You're usually a sociable guy, so it's weird you're not talking."
"They're busy and, unless they want to know what kind of marble vanity top that is, there's nothing I can really add."
She looked at him for a long moment, and then blew an exasperated breath at some hair that had drifted toward her face. "Fine. Don't tell me."
"Okay. It's not fun watching people get ready to sell the place I've made a home, even if I think it's the right thing to do. And I also want to fuck you in my shower. So, yeah, there's a lot going on in my head right now."
She tried unsuccessfully to stifle a chuckle, and then she slid her hand into his. He interlocked their fingers and squeezed as she rested her head against his arm. "I know the house thing is hard. It is for all three of you. The shower thing I might be able to help you with at some point, though."
At some point was going to be about forty-five seconds after the door closed behind her grandparents and their real estate lady, but he didn't bother to tell her that. She'd figure it out when he locked the door and got her naked.
From the other room, he overheard the woman asking about the crown molding and groaned. Assuming he—and his throbbing erection—survived that long.
* * *
Jessica stood at the window, watching the real estate agent leave, and she could still hear the engine out on the road when Rick's arms looped around her waist and he kissed the back of her neck. She shivered and leaned back against him.
"I feel dirty," he said. "We should take a shower."
She laughed, but he didn't. "The lady just left, Rick. Joe or Marie might come up to talk about what she said."
"I locked the door. They'll figure it out."
"Or we could wait until a normal time to take a shower. It's barely evening."
"The thought of you in that shower made my dick so hard I'm afraid I'll be permanently damaged," he said, and she giggled. "Why do you think I stayed behind the kitchen island while they roamed around taking pictures?"
"I thought you were staying out of the way."
"I was. I didn't want to alarm anybody." He nipped at her earlobe, and she felt herself caving.
"I do love that shower."
"Let me give you a really good tour." Sliding his hand under her shirt, he ran his fingers up her spine and then tucked them under her bra strap. "Unless you have something you'd rather be doing."
"Something I'd rather do than be naked with you?" She turned in his arms and, after locking her hands behind his neck, hopped up to wrap her legs around his waist.
He caught her, supporting her ass with his hands, and kissed her before carrying her into the bedroom. There was definitely something to be said for having a guy who had to stay in good physical condition for a living, she thought.
Once he'd set her on her feet, he wasted no time stripping down. "You weren't kidding."
He looked down and then laughed. "I never kid about permanent damage to my dick."
"You're not going to turn that shower on cold, are you?"
"How hot do you like it?"
She pulled her shirt off and tossed it aside. "As hot as you can stand it."
He opened the wide glass door and walked into the massive shower. Jessica continued to strip as he turned the water on, so by the time it was running hot enough for him to step under the stream, she was ready to join him.
After pulling the door closed behind her, she grabbed the bar of soap off the shelf and lathered her hands. He started to turn, but she stopped him with a hand to the shoulder.
"You should let me wash your back."
"Okay, but then I'm washing your front."
She laughed and set the bar back on the shelf before touching him. Her fingertips, slick with soap, glided over his shoulders and down his arms. Then she looped her hands around and washed his chest. Her breasts were pressed against his back, almost aching with the need for him to cup them in his hands.
But she wasn't done with him yet. She slid her hands down his stomach, feeling his abs tighten as her touch skimmed lower.
"Jess." He growled her name, but whether it was a plea or a warning, she couldn't tell. And she didn't care.
She closed her hand around his cock, stroking the hard length of it with her slick hand. He dropped his head back as his hips rocked slightly, but after only a few strokes, he grasped her wrist and stilled her hand.
Then he turned and backed her up against the shower wall. The tile was cold and she arched her back, but there was nowhere to go with his body pressed against hers. He pushed her wet hair away from her face and then kissed her hard.
She opened her mouth to him, surrendering completely as his tongue danced over hers. Then he kissed her chin. The hollow at her throat.
The tile was already warming against her skin and she ran her fingers over his hair when his mouth closed over her nipple. He sucked hard, the sensation sizzling through her body.
Steam was filling the enclosure as the hot water beat down on them, rinsing away the soap and leaving their skin flushed. Or maybe it was his mouth on her body. She didn't know and didn't care. All she knew was the feel of his scruffy jaw on her delicate skin and his hand sliding up her thigh.
He stroked the soft flesh between her legs, and she sucked in a breath when he pressed the heel of his hand against her clit. She ground against his hand, so close to an orgasm she could only whimper. Then he slid two fingers into her and his thumb replaced his palm. As his fingers moved, his thumb brushed over her clit and she dug her fingernails into his shoulders as she came.
When the shuddering stopped, Rick reached around her and turned off the shower. "We've done the tile thing once already and you're right. I'm too old for that shit."
She expected him to grab them some towels, but he pushed open the glass door and lifted her into his arms. "Not on the bed! We're so wet even changing the sheets won't help because we'll soak the mattress."
"Yes, ma'am," he said, setting her on her feet. After grabbing a condom from his nightstand, he kissed her.
She heard the crinkle of the condom wrapper and then he was turning her so she faced the bed. When he put his hand on her back, she leaned forward and braced her palms on the edge of the mattress.
He entered her slowly, each stroke just a little deeper than the one before it. Her fingers curled in the bedspread and she tried not to hold her breath as his hands gripped her hips.
When she finally took all of him, he paused, his breathing fast and shallow. He skimmed his fingers over her back, raking the skin lightly with his nails. She shivered and pushed back against him with a moan.
When his hand slid up her neck and his fingers curled in her hair, she gasped and jerked her hips against his. He thrust hard in response, and she cried out. Every thrust was deep, with one hand on her hip and the other buried in her hair.
Then, just when she thought he was going to come, he stilled. After a few seconds, he withdrew and turned her around. "Screw the bed. We'll sleep on the couch if it gets wet. I want to see your face."
He picked her up and set her on the mattress, then settled himself over her. Brushing her hair away from her face, he looked into her eyes and smiled. "This is better. I like seeing your face."
She felt a hot flush under his scrutiny, but she didn't look away. Instead she cupped his face in her hands and lifted her head to kiss him. She'd never get enough of kissing this man, she thought.
Rick nudged her knees apart and she sighed against his lips when he filled her again. "I love the feel of you inside me."
He grinned, his eyes crinkling. "I love the feel of me inside you, too."
Taking his time, he moved his hips in a steady rhythm that made her want to beat her fists on his back. But no matter how she raised her hips or dug into the backs of his thighs with her heels, he wouldn't be rushed.
The entire time, he drove her crazy with his mouth, kissing her lips and her neck. Kissing her breasts and sucking hard on her nipples, almost but not quite to the point of pain. She squirmed and he reached between their bodies to stroke her clit.
The orgasm hit hard and she moaned as the tremors wracked her body. She clutched at his shoulders and he moved his hips faster. With deep, fast strokes, he pounded into her until his hips jerked and he growled her name.
When he rolled onto his back, they both stared at the ceiling, trying to catch their breath. He captured her hand with his and brought it to his mouth to kiss her palm. "Damn, woman."
She wasn't ready to form words, so she made the same sound she made when she bit into a slab of exceptionally good chocolate cake and closed her eyes.
After a few minutes the mattress dipped and she heard him go into the bathroom. She thought about moving, since she was sideways on the bed, naked with wet hair on top of the covers.
It didn't happen, though, and when Rick came back, he just stretched out beside her again. He stroked her thigh and turned his head to kiss her shoulder.
"Stay with me tonight."
Since she hadn't even been able to muster enough ambition to turn the right way on the bed, it sounded like a great idea. "You have to go to work soon, so you need to sleep."
"I'll sleep." He rolled onto his side and threw his arm over her. "I want you to sleep with me. We can have a very early breakfast together before I leave."
It seemed like a big step and that was ridiculous since her own bedroom was just one floor below them. Technically they lived under the same roof. But it was the right step, she thought. It was foolish for her to get up and creep back to her bed when they both wanted her in Rick's. "I should warn you that going to bed with wet hair doesn't bode well for my morning look."
"Honey, there is no look so bad I wouldn't want to see your face in the morning." He nuzzled his face against her neck. "I might laugh, but I won't be scared off."
She giggled and slapped his arm. "Nice. We should at least turn the right way so our heads are on the pillows."
"Mmm." She could hear his breathing getting slower and deeper as he mumbled. "Five more minutes."
Rick had had a feeling when he walked into the station the next morning it was going to be a doozy of a tour. Whenever the meteorologists started trotting out the record cold temperature graphs and talking in excited voices about the possibility of breaking them, his job got harder.
Some freezing rain had made things interesting, and he knew they'd probably spend most of their time responding to motor vehicle accidents. The tire chains were on the trucks in preparation and there were extra blankets in the cabs in case they were needed by any victims caught without coats.
But then somebody had started a fire in an old warehouse being rehabbed into retail space. Now they were in hell, and it was definitely freezing over.
They'd been first on the scene, but E-59 hadn't even finished laying the lines before additional alarms were struck. They had L-37's ladder in position, but with the relentless freezing drizzle falling out of the sky, it was treacherous and Rick couldn't do anything but pray none of them would slip and fall on it.
The only saving grace was the fact it was an empty building. It was massive and it was going to be a long night of battling the elements along with the flames. Hoses froze. Fittings. Footing was treacherous. Their exposed skin was at risk and if fighting a fire was anything, it was wet. Conditions in which water froze on contact with any surface added a hazardous element there was almost no protection against.
But at least there were no people inside, and that was what Rick tried to focus on as his company worked on breaching the roof.
He lost track of how long they worked on knocking down the flames. There were so many companies involved he couldn't keep track of them all, but that wasn't his job, anyway. That was the incident commander's problem.
Word filtered through they'd transported one guy to the hospital for a possible broken clavicle thanks to slipping on the ice. Rick winced when he heard that because that was a shitty bone to break. Then, a while later, he found out three guys were being transported for smoke inhalation after being rescued from a far corner of the building.
Clearly it was going to be one of those days.
Over time, he noticed the news cameras, and the number of cell phones being held up by the onlookers who could get close enough to catch a glimpse of the fire to impress their Facebook friends. And he wondered if Jess had seen any of it.
She'd be worried. Even to him, the scene looked like something out of a horror movie and he'd served through a lot of winters with Boston Fire. He couldn't imagine what it would look like to somebody who didn't have a lot of knowledge of firefighting or freezing rain and subzero temperatures.
His foot slipped on an icy beam and he went down hard, landing on his knee. Letting loose every swear word he knew through gritted teeth, he slowly pushed himself back to his feet and tested his weight on the leg. It was going to hurt like a son of a bitch tomorrow, but it would do for tonight.
That was what he got for letting his attention wander to Jess instead of focusing on the job. No matter how worried she might be, her life wasn't in his hands. He owed it to these guys to stay focused.
"You okay?" Chris Eriksson appeared at his side, his eyebrows drawn together in a way that was almost comical thanks to the tiny icicles clinging to them.
"Yeah, I'll live. We're going to pull out and take a break soon. I saw Porter slip a few minutes ago. He didn't go down, but we're all getting tired and somebody's going to get hurt."
"I could use a hot chocolate. I can see the volunteer truck from here and those foam cups are a thing of beauty, man. Every time I look at them, I want to cry."
Rick laughed and called in to command. It was time for some hot chocolate, dammit.
Chapter Eighteen
Jessica heard Joe give a long, low whistle and lowered her book to look up at the television screen. Her heart seemed to stop for a few seconds and she audibly sucked in a breath.
A roaring fire filled the screen and, as she watched, the camera angle pulled back and panned the mass of trucks and firefighters surrounding the scene. And while the flames and smoke dominated the background of the shot, ice was everywhere in the foreground.
It glistened on the fire trucks and on the snow. The camera zoomed in on a group of firefighters gathered around an older guy who was obviously in charge, and Jessica actually gasped. Their coats were encased in ice, and icicles hung from the rims of their helmets. One of the men facing the camera had ice in his mustache, and she felt a frisson of fear for a man she didn't even know.
"They shouldn't be out there like that. It's too cold."
"They've got a job to do," Joe said.
"Once the fire is that big, why can't they just let it burn?"
"And let it jump to the next building and keep spreading until it takes out an entire city block or more? That building's being renovated, but the others have people living in them, and businesses."
Jessica wanted to argue with him, but she knew he was right. It was their job to put out the fire, regardless of the weather. But there were a lot of firefighters in the city doing that job. "Maybe Rick and the others aren't at that fire. There are probably hundreds of fire trucks in Boston, right?"
Joe shook his head. "I know where that building is and if they weren't first on the scene, they were damn close to it."
"They train for this," Marie said in a soft voice. "When it's colder than usual, like tonight, it's a challenge, but they know what they're doing. Especially Rick. He's been doing it a long time."
Jessica remembered what Ashley had told her about not watching the news or the Facebook and Twitter updates, and now she was beginning to understand why. Jessica knew she could drive herself crazy, staring at the screen and hoping for a glimpse of Rick. She was already straining to hear the voices in the background, barely audible above the news correspondent's.
Forcing herself to sit back against the couch cushion, she wondered who they would notify first if something happened to Rick. His parents were the obvious choice, but it wouldn't surprise her at all if Joe and Marie were first on his list of people to call.
Five minutes later, she watched Aidan Hunt accept a cup of coffee from a volunteer. His expression was grim and the camera cut away as he lifted the cup to his mouth. She wanted to yell at the television. If the camera stayed with Aidan, maybe she'd get a glimpse of the others.
Then the station cut back to regular programming, promising to update on the fire as needed, and she wanted to drive to the scene herself just to make sure he was okay. Instead, she set her book on the table and went into the kitchen. Her mouth was dry, so she poured herself a glass of water and tried to think about anything else but what Rick was doing at that moment.
"I don't imagine it's an easy thing, loving a firefighter." Marie had followed her into the kitchen and, when her grandmother put her arm around her shoulders, Jessica leaned into the embrace.
"You're probably right. I don't know how Lydia and Ashley do it."
"They've had a lot of practice, and their dad was a firefighter, too. And their brother, so they've both done their share of waiting for news. But I meant you."
"I can't be in love with Rick." Too late, she realized how weird that sounded. She should have said she wasn't, not that she couldn't be. She knew she was, of course, but she didn't really want Marie to know it.
"Why can't you?"
"We live on opposite ends of the country, for one thing." It was weak, she knew. In the past two months, she'd spent more time in Boston than in San Diego. Not that it was a sustainable, long-term solution, but she'd proven she could work remotely. "I have a career and a condo and that's...it's just where my life is."
"I think it might already be too late to tell yourself you can't fall in love with Rick. Even your grandfather noticed you two have feelings for each other and he's not exactly a romantic soul."
"I don't know what my father would do without me." But she couldn't help wishing she'd called him back to clarify whether he'd been joking about a Boston office or not before she'd left California this time.
"He's a grown man, Jessica. He's leaned on you long enough."
It wasn't that simple, but she knew she wouldn't get anywhere arguing the point with Marie. She was going to change the subject, maybe suggest they have some ice cream, when her phone vibrated in her pocket. She squeezed Marie before stepping out of her embrace to pull the phone out, and then she felt a rush of relief when she saw Rick's name on the screen.
"It's Rick," she told Marie as she opened the text message.
Only have a sec, but if you see the news, I'm ok.
It looks awful. Be careful.
Yeah. Going back in soon but don't worry.
She couldn't help worrying, but she could tell by his short messages that he didn't have time to hold her hand by phone. Be safe. See you soon.
After she'd reluctantly put the phone back in her pocket, Jessica looked up to see Marie watching her with a soft expression. "I think the fact he reached out to you during a break in a tough night says a lot, don't you?"
Maybe it did, but the thought of going through this on a regular basis was daunting. And so was the thought of abandoning her father and everything in San Diego to move here. She knew he wouldn't leave Boston. His family was there, and Joe and Marie. And there was the extended family made up of his fellow firefighters and their families. His emotional investment in his home was stronger than her ties to San Diego, which came down to her father and a few coworkers and friends she was close to.
Picking up her water glass, she tried again to chase away the dryness in her mouth. The thought of such drastic changes to her life scared her, and it seemed ridiculous to consider it based on the fact he'd texted her tonight.
But the knowledge that when most of the men were probably taking advantage of a short break to reach out to their loved ones at home, Rick had reached out to her thrilled Jessica in a way she couldn't deny.
But he'd said he was going back in, so there was more waiting and more worrying for the time being.
* * *
They were pulling back, given permission by the incident commander to take their break so fresh companies could move in, when the floor shifted under their feet. Suddenly they were scrambling and there was shouting and confusion. The smoke was thick, making it hard to see, but he could hear the unmistakable sound of the floor caving behind them.
He saw the reflective strips on Gavin's coat and they were facing the wrong way, and that meant the kid was getting turned around. Putting his hand on Gavin's shoulder, he shoved him in the right direction and yelled at him to keep moving. Gavin wasn't a rookie, but there were times a situation gone to shit outpaced a guy's experience and this was one of them.
When they got outside, Rick swiveled his head. Boudreau. Porter. Eriksson. But it was the E-59 crew that caught his eye. Grant Cutter was on his knees, gasping for air, but Aidan had his arms around Scotty, who was fighting like hell to go back in.
Danny Walsh.
He could hear the commands and updates flying. The other companies would focus their efforts on beating the fire back while locating Walsh and getting him the hell out. But Scott couldn't go back in and Aidan was losing his grip on him.
"Scotty." Rick stepped in front of him, putting his hand on the man's shoulder. "We need to get out of the way."
"He didn't come out, LT."
"I know." Some of the fight went out of Scott and Aidan steered him toward their engine. Rick walked with them, the rest of their crews following.
"You have to let me go back in. He's my brother-in-law. He's family."
"They'll get him out, Scotty, but we have to stay out of the way and let them do it. You know that."
"Ashley's pregnant. Barely two months, so only the family knows." Scotty sagged onto the ice-coated bumper of the truck, his eyes welling up with tears. "I've gotta bring him back to my sister."
Shit. The word echoed around Rick's mind. Shit shit shit.
"I can't go home without Danny, Rick. Not after everything they've been through. Not ever."
"That's not going to happen." Fear knotted his stomach. Fear for the guy he'd worked with for years. Fear for Ashley, who'd just gotten her marriage back on track. And fear for all of them. Danny was family to some of them literally, but to all of them figuratively.
His first instinct was to barge into the building and protocols be damned, like Scotty wanted. That was Danny Walsh in there.
But that first instinct was exactly why he took a deep breath and forced himself to clear his mind. If push came to shove, the guys of L-37 and E-59 would do anything to get Danny out, without regard for their own safety or maybe even the safety of others. And that was exactly why they had to stand back and let the other companies work.
None of them stripped out of their ice-stiffened coats, though. They grabbed fresh tanks and double-checked their gear. If they were needed, they'd be ready in seconds.
As they listened to the radio and to the organized chaos around them, the volunteers brought them coffee and hot chocolate from the canteen truck. Rick thanked them and made sure Scott actually drank his, then turned to scan the scene. They were in an area offset from the main action, where the news cameras were aimed. And there were a lot of trucks forming a barrier. He was pretty sure if Ashley or Lydia were watching the news, they wouldn't be able to tell in a sweeping glance that Danny wasn't standing with them. Tommy Kincaid would be listening to that old scanner he kept at the bar, though, and he might know what was going on. Whether he'd tell his daughters or not before the story had an ending, happy or otherwise, Rick couldn't guess.
Somebody shouted and there was a lot of movement at the side of the building. They started moving in that direction, but stopped when the strident beeping warned them an ambulance was trying to back through the crowd.
"Can you see anything?" Scott rocked onto the toes of his heavy boots, trying to see what was going on.
"It has to be Danny."
The EMTs threw open the back doors and hauled out the stretcher, but they were met halfway by three firefighters supporting the weight of a fourth between them. They had him in a hammock carry and even if the lolling of Danny's head didn't give it away, it was obvious to Rick he wasn't conscious.
All they could do was hope he was alive. The EMTs wasted no time getting him on the stretcher and into the back of the ambulance, and the men who'd carried him out gave helpless looks to Scott as it pulled away.
"He was breathing," one of them finally said.
"I should go to Ashley," Scott said quietly, almost as if he were talking to himself. "She needs to get to the hospital."
"Your dad probably already knows, but I'm going to call him and he and Lydia will take care of Ashley. We have a job to finish, repacking this shit, and you're already a man down. We can't go to the hospital until we can get the trucks out and it's going to be a while, so focus on what we're doing. The doctors will take care of Danny and other guys will show up to wait with the family until we can get there. You know that."
When Scott nodded and Rick was sure he had his head on straight enough to stay put, he left him in the care of the others and moved away to call Tommy at the bar. As he suspected, Danny's father-in-law knew what was going on and had been just about to leave to pick up Ashley at her house.
"Do me a favor," Rick said, "and make sure the second you know something, you let Scotty know. He's pretty messed up."
Once he'd gotten that call out of the way, he let his thumb hover over Jess's number. He wanted to hear her voice in the worst way, but then he tucked the phone away. She was a distraction he couldn't dwell on until this hellish night was over.
After knocking the ice off his coat and helmet, Rick grabbed a fresh coffee from a volunteer and went to check in with the incident commander.
* * *
Jessica woke to a weird sound, and it took her a second to realize it was the glass door sliding open. She was in Rick's bed and she sat up when he closed the door behind him. By leaning out over the bed a little, she could see him, and it was probably a testament to his exhaustion that he didn't even jump when he made eye contact with a person he wasn't expecting to be there. Or maybe he was expecting her to be there because where else would she be when he might need her?
"Hey," she said softly as he stepped out of his boots and tossed his coat on a chair before walking toward her.
He pulled the T-shirt over his head and then paused to take off his jeans and socks. He'd obviously taken a shower at some point after the fire. "Hey."
"You're limping."
"I whacked my knee a good one, but it's not a big deal."
When he reached the bed, she moved over and threw back the covers so he could slide in. He wrapped his arms around her and pulled her close. After dragging the covers up to his shoulder, she relaxed into his embrace.
"We almost lost Danny Walsh tonight," he said against her hair. "Or last night, technically. Whenever the hell it was."
"I heard about it on the news and texted Lydia. She said he'll be okay. Right?"
"He's got a pretty bad concussion, smoke inhalation, a broken arm and he busted his leg in two places, but he'll be okay."
"What happened?" She felt his muscles tense slightly. "Actually, never mind what happened. What matters is that he's going to be okay. Is everybody else okay, too?"
"The rest of our guys are okay. Other companies had a few minor injuries, I guess. A guy slipped on the ice and broke his clavicle, and a few were treated for smoke inhalation, but nothing major considering it was fully involved."
"I know you need to sleep now, but I was worried about you and I just wanted to see you for a few minutes when you got home."
"Do you have any plans for this morning? You need to work?"
She could hear the exhaustion in his voice and knew he'd be asleep in a matter of minutes. "There's nothing that can't wait."
He muttered something she couldn't make out and nuzzled his face into her hair. Then, a few seconds later, she felt his muscles go lax as he nodded off. Despite being an early riser, she'd spent a good chunk of the night tossing and turning herself, so she was content to drift in and out of sleep for a while.
It was almost three hours before he stirred a little and rolled onto his back. Jessica waited until he resumed snoring and then slid out of his bed. He'd wanted the comfort of her being there when he first climbed into bed, but she suspected he'd probably sleep better now if she wasn't lying awake next to him. Every time she shifted, he stirred, and she was restless.
Last night had been scary even before the news broke about Danny. And then she'd been afraid for him and for Ashley.
But when she'd seen Rick limping toward her, his face haggard with exhaustion, the fear had settled into the pit of her stomach. He'd been in just as much danger as Danny Walsh had been, and it could just as easily have been him lying in the hospital this morning.
Rather than roam his apartment alone or sit on his couch in silence, she went down the interior stairs and found Marie vacuuming the living room. Her grandmother hit the off switch when she saw her and gestured for her to sit down on the couch with her.
"Good morning, honey. How's Rick?"
"Exhausted. And he hurt his knee somehow. He was limping when he got home."
Marie squeezed her hand. "He's fine, then."
"He was sound asleep when I left him. I think he will be for quite a while actually. I'm not sure what to do now."
"We'll make a casserole for the Walshes," Marie said. "Something that can be put in the freezer and easily heated in the microwave. Ashley doesn't need to be worrying about meals while her husband's in the hospital."
Jessica followed her into the kitchen. "How many casseroles do you think she'll get?"
Her grandmother laughed as she pulled out her recipe box. "At least two dozen. Probably a lot more. Most of them will get thrown away when they need the freezer space."
"But we're going to make one anyway?"
Marie shrugged, her eyes serious. "Yes. It's simply what we do."
Chapter Nineteen
Rick killed the snowblower's engine and looked around the driveway. They didn't have any fresh snow, but the snowbanks along the edges were slowly creeping into the parking spaces, so he was using the snowblower to cut them back.
"I guess that woman's right about the driveway looking bigger," Joe said from the open garage door.
It had been the real estate agent, after looking at photos she'd taken, who suggested some snow removal—or at least rearranging—would make the driveway look bigger, and that was supposedly a huge selling point.
Rick's knee wasn't too bad and he was happy to have the physical activity to help take his mind off Danny's accident, so he'd volunteered to do it.
"I don't know what the holdup is," Rick said, pushing the snowblower inside. "She gave you a value on the property last month. Why didn't she just use that to list it?"
"I guess she gave us a pretty close ballpark figure, but to actually list it, she needs all kinds of photos and information. How old the roof is. The furnace. Crap like that. She wants to price it just right."
"That makes sense, I guess. Prospective buyers will want to know that."
"Seems like a pain in the ass to me."
"Yup."
"Marie and I were talking last night. Jessica said the place we're looking at is really reasonable and we could actually swing it even without a huge profit on the house."
"That's good. Means you won't have to worry about it in the future if you end up with surplus in the bank."
"It also means we could sell it to you if you were interested in it. You've taken good care of this old beast—and of us—for years and we'd like to give you an opportunity to think about buying it before we go ahead and formally list it on the market."
And then he named a price that Rick almost couldn't believe he'd heard correctly. "That means a lot to me, Joe. You know I love you guys and this house, but you can't do that. If I want to buy it, and honestly I have considered it, I'll pay you what it's worth."
"What's a building worth? It's the people that matter and you're like family to us, so think about it." Joe gave him a grin. "Besides, you can't go spending all your money or you won't be able to afford gas for that truck of yours."
Rick laughed because it was true, if a little exaggerated. His truck was so bad on gas he'd changed the digital display so it showed the outside temperature instead of the average miles per gallon just because it was depressing, but he couldn't hide the dent it put in his wallet. But he wasn't driving around this city in a compact car for any amount of savings at the pump.
"I told Marie I'd talk to you about it," Joe said. "But we haven't said anything to Jessica. No sense in muddying the waters if you're not even interested to begin with, know what I mean?"
Rick nodded, his gut tightening. By muddying the waters, Joe meant displeasing their granddaughter, who probably wouldn't like them making that big of a financial decision on emotion alone. "I'm definitely interested, but I don't think I want to hide something like that from her. Things are complicated, I guess."
"Things always are, son."
It was several hours before Rick got the chance to speak to Jessica alone. Joe and Marie were busy at the kitchen table sorting through boxes of papers, and he went upstairs to find Jess sitting cross-legged on her bed, scowling at her computer.
"Hi," she said when she noticed him in the doorway. "Have you heard anything about Danny today?"
"Yeah. He's going home, actually. Nothing to be done but rest and let the breaks heal. But no lasting damage."
"He'll be out of work for a while, then?"
"Yeah. Guys will rotate through and cover for him to get the extra shifts, but they'll have to find a long-term replacement for him, unfortunately." It was always hard when somebody new joined a company that had been together a long time.
"At least he'll be okay eventually."
"Yup." He shoved his hands into his pockets, feeling awkward with her for the first time in a long time. "I wanted to talk to you about something. You got a minute?"
"Of course." She closed her laptop and set it aside. "You look so serious."
"I was talking to Joe earlier about the house, and he surprised me by telling me he and Marie want me to have dibs on buying the house," he said. "Not at the full asking price, though."
"Really? What did they offer it to you for?" When he said the number, he wasn't surprised when her eyebrows shot up. "Not at full asking price? Rick, that's not even half of what it's worth."
"Trust me, I know."
She stared at him so long, he had to fight the urge to squirm. He hadn't done anything wrong and he'd honestly thought they'd put the lack of trust when it came to the house behind them a long time ago. But he could practically hear the wheels turning in her head.
"I wish you'd say something," he told her.
"What do you think I'm going to say? Of course I'm going to recommend they retract the offer and list it with the real estate agent. I'm not letting them get screwed out of money just because they like you."
Screwed. The implication he'd deliberately masterminded the offer to screw over Joe and Marie hurt like a kick to the stomach, and the fact she'd think it of him made him angry. "Worried about your inheritance?"
The color drained from her face and she blinked at him for a few seconds. "Excuse me?"
"If they only get half what the house is worth, they might burn through it and not have anything left to leave you."
"I can't believe you'd say that to me." Red splotches shone on her cheeks, and her eyes sparkled with anger. "I couldn't care less about an inheritance, and I thought you knew me better than that."
"And I thought you knew me better than to think I'd take advantage of Joe and Marie."
"Do you understand my job is to protect people's money? Maximizing investments is what I do, so what kind of financial advisor would I be if I stood back and let my own grandparents take a bath on their house?"
"Your grandparents might not have fancy finance degrees, but they're not stupid and neither wants to leave the other unprotected in the future, so I'm sure they've thought this through."
"They're letting emotion cloud their judgment. Feelings have no place in business."
He rolled his eyes. "Did Davey have that cross-stitched on a pillow for you?"
When her mouth tightened and her eyes went flat, he realized he'd gone too far, but there was no taking it back.
"My success in financial planning has nothing to do with my father and everything to do with my education, instincts and experience. Don't sell me short, Rick."
"I think you're selling your grandparents short."
"I'm sure you'd think so, since you're the person who stands to benefit the most if I'm wrong."
Rick blew out a breath and ran a hand over his hair. This had gone sideways on him in a way he couldn't have imagined. "Look, I don't want to fight with you."
"We've obviously arrived at the conflicting interests phase of our relationship. We always knew it would probably happen. I'll sit down with Joe and Marie today and go through all of their options one more time, including a look at their long-term finances if they choose to sell you their home at a fraction of its value."
"You're phrasing it that way to make it sound worse than it is."
"It's an accurate representation of the situation. I'm sure they'll let you know what their decision is within a day or two."
The dismissal was clear in her voice and he knew in her current mood, he'd probably have better luck beating his head against a brick wall than convincing her he hadn't put Joe and Marie up to anything.
With a heavy heart, he turned and walked away.
* * *
"The last thing we wanted to do was cause a problem with you and Rick," Marie said, setting a big bowl of baked macaroni and cheese in front of Jessica.
The comfort food was killing her. And possibly her wardrobe. "You didn't cause a problem with us. We simply have different philosophies when it comes to protecting your investments."
"I know you see this old house as an investment," Joe said. "That's your job and with the market the way it is, I guess it is pretty valuable. But to your grandmother and I, it's a home. That's what matters to us and it's important to us that somebody loves it as much as we have. Sometimes there are emotions in business, no matter what your father and your business professors told you."
"I've broken down the numbers for you," she said. "You can see the impact it has on your financial future."
"Of course it has an impact," Marie said. "But our future financial security doesn't depend on the full amount. Your numbers show that we can live quite nicely with half the amount."
"That's true. It's not what I recommend, but all I can do is suggest. Ultimately, it's your decision to make."
"I just feel so bad that you're going back to California now," Marie said.
"I was always going back to California, since that's where I live. And now that you've made decisions and have the ball rolling, I don't need to be here. I can handle a lot of things via email."
"What about Rick?"
Jessica looked at her grandfather. "What about him?"
"We're not stupid. We've minded our own business, but it's obvious you and Rick have been in a relationship. If we hadn't offered him the house without talking to you first, what were your plans going to be?"
"I don't know," she said honestly. "We hadn't talked about the future."
"But you must have thought about it," Marie said.
"Of course I had. My father made a comment about opening an East Coast office to expand the business, but I wasn't sure at the time if he was being sarcastic or if he meant it. But the more I thought about it, the more I wanted to seriously consider it. I could still be a part of the family business I've helped build, while being here with you guys. And Rick."
"That can still happen."
"I don't think so. It was fun while it lasted. Now it's time for me to go home and get back to work."
Joe wisely changed the subject to something he'd seen on the news recently before Marie could get too emotional, and Jessica listened to them chatter back and forth until she could escape to her room to start packing.
How was it possible she'd accumulated so much stuff during her two stays in Boston? And ninety percent of it was stuff she couldn't wear in California. She'd planned to leave it behind for her next visit, but if Rick was going to buy the house, that wasn't going to work.
With her mouth set in a grim line, she started sorting the few things she'd carry with her from the majority of it, which she'd ship to her address in San Diego. With one checked bag and her carry-on, she could probably bring home the things she'd want right away.
When she dumped her underwear drawer on the bed, her gaze fell immediately on the yellow bra she'd worn to watch the hockey game with him. Picking it out of the pile, she sat on the edge of the bed, pressed it to her face and sobbed.
* * *
Rick was checking the air pressure in Ladder 37's tires when his cell phone chimed. Technically they had mechanics who took care of all things maintenance related, but he liked to know what was going on with his own truck. And it never hurt to make sure they were doing their jobs properly, either. In frigid temperatures, especially if they had the chains on, tire pressure mattered.
He pulled out the phone and read the words on the screen. Jessica just left for the airport. Her plane takes off in three hours. Just thought you might want to know. Love, Marie.
For a second, he was amused by her message. Someday she'd figure out she didn't have to sign texts like they were letters because the contact information came through with it.
Then the message itself hit him like a wrecking ball. In three hours, a plane was going to take Jess back to California. If they left things as they were, it would be over. If there was no more communication between them, by the time she returned to Boston again they'd be barely more than polite acquaintances who'd once been lovers.
He sank onto the bumper, feeling sick to his stomach. It didn't seem possible he could spend the rest of his life without seeing those eyes smile at him again. He would never kiss her again.
She would be gone and, even if they crossed paths again, what they'd had would be nothing but a distant memory.
"You okay?"
He looked up at Gavin. "What?"
"You look like you got really bad news."
There was no sense in trying to hide anything around these guys. "Jess is on her way to the airport to go back to San Diego."
"Oh, that's too bad. I really liked her."
"Me too, kid. More than liked, actually. I love her."
"Did you tell her that?"
"No."
"Not even on Facebook?"
Rick rolled his eyes. "Seriously, are you even old enough to shave?"
"I'm old enough to know you don't let the woman you love get on a plane without running after her. Have you ever even seen a movie, dude?" When Rick glared at him, the tips of Gavin's ears turned red. "Uh, Lieutenant Dude."
"I think all those movies were made before they changed the security procedures."
"You're a Boston Fire officer."
"I don't think Homeland Security's going to rewrite their manual for me."
"I just meant you could probably talk somebody into having her paged for you, but misuse of power's another way to go."
"I had no idea you were trying to be a comedian, kid. You might want to keep the day job, and don't spend so much time with Scotty Kincaid. You're starting to sound like him."
Gavin laughed and walked away, but Rick couldn't stop thinking about what he'd said. So they'd fought about how to handle the Broussards' house. People in love fought sometimes and they got through it.
But Jess didn't know he was a person in love. She didn't know he thought they had something worth fighting for because he hadn't told her. Assuming she knew it—could see in the way he looked at her or feel it in his kisses—wasn't enough. He had to say it.
He ran up the stairs and poked his head into Cobb's office. "I have an emergency, Chief. Okay, not a real medical emergency or anything. But I have to do something."
"Boudreau just shared the latest gossip with me. Go. And don't bother coming back today. I'll cover because you're either going to be worthless and mopey or you'll get the girl and forget to come back, anyway. And don't you dare flip that emergency light switch in your truck. No lights for personal shit."
Rick used the remote start on his truck so it was already running and ready to go when he climbed into the driver's seat. After buckling his seat belt, he gunned the engine and drove through the city as fast as traffic allowed, taking a few shortcuts here and there.
Using his phone's hands-free connection to the truck, he called Marie. "I need her flight info, Marie. Airline. Gate. I could walk around that airport for three days and not find her without a starting place."
She gave him the information and the hitch in her voice told him she was getting emotional. "Good luck, honey."
When he reached the right terminal, he lucked out and saw a police officer he knew from a charity marathon he used to run back when he thought running was fun. With permission to leave his truck at the curb for ten minutes, he went inside and scanned the lines waiting for security screening. She'd been in a cab and hadn't had that much of a head start on him.
She saw him first. When he found her in line, she was looking at him and those pretty eyes hit him hard. They weren't smiling today. He held out his hands, trying to gesture to her to please just give him a minute.
She hesitated so long, he wasn't sure she was going to give him a chance. Then she made her way back the way she'd come, squeezing past the people in line behind her and trying not to hit anybody with the carry-on bag slung over her shoulder.
"Shouldn't you be at work?" she asked when she reached him.
He wanted to touch her face or to stroke her hair. Something. But he had to earn that right back. "I couldn't let you go without talking to you one more time. I want to apologize for losing my temper when we talked about the house. I felt like you were implying I was trying to screw Joe and Marie over and it hurt."
"I didn't think that and I'm sorry the words I used made you feel that way. It wasn't meant to be personal, but you hurt my feelings, too, so it got messy."
"It was mostly me, Jess. It's no excuse, but the house has been a roller coaster and Danny...the nightmares were bad and I didn't sleep well and it made me oversensitive. I understand that you make those kinds of decisions differently. I really do. I was so stupid and I don't want to lose you because of it."
"I don't...I didn't..." She pressed her lips together, her eyes wide and shimmering with tears.
"I love you." He touched her face then because he couldn't stop himself. Cradling her cheek in his hand, he wiped away a tear with his thumb. "That's the important thing. I love you and I want you to stay with me. I want us to figure out our future together and have children and take them to Joe and Marie's new place for Sunday dinners. I want us to be a family, Jess."
"You say family like it should mean something to me. My mother took off when I was three and never looked back. I just met my grandparents and I'm thirty-four years old. How the hell would I know anything about family? You see how I messed things up because emotion is hard for me to balance."
"You don't have to balance. You just love. Like you love Joe and Marie. They're your family and you're taking care of them because you love them. And how about your old man? You know how being a financial advisor or whatever you call it isn't your dream. You wanted to be an event planner, and you still do, but you do the finance stuff anyway and you kick ass at it because it's his dream and you love him? That's family, Jessica."
"I want the family I see in the picture frames at the store," she said, her eyes shimmering with tears.
"Those are models, Jess. God knows that ain't me. I'm real and I have bad days and sometimes I say stupid things, but I love you." He paused, not knowing what to say next and just hoping the right words would come out. "I love you so much you won't need a picture to capture it because I swear to God you will feel it every day of your life."
"I love you, too," she whispered, and Rick's knees almost buckled from the relief. He'd thought—hoped—she did, but he was afraid he'd blown it so badly she'd never say the words. "I don't know when it happened. Weeks ago? But I know standing in that line, waiting to be thousands of miles away from you, was breaking my heart."
"Don't get on the plane today. Come home with me and let me show you just how much I love you. That way, every time you get on that plane, I'll know you're coming back to me as soon as you can."
"Yes, I'll go home with you." She threw her arms around his neck and smiled up at him. "We'll make our own family."
"I can't wait." He kissed her and then raised his eyebrow because it always made her smile. "As a matter of fact, we should go get started on that right away. You don't absolutely need anything in your checked bag, do you? It's probably zipping around a conveyor belt right now."
Jess reached up and ran her fingertip over his eyebrow. "The only thing I absolutely need is you."
* * * * *
If you fell in love with Shannon Stacey's voice in CONTROLLED BURN
you're going to love THE KOWALSKIS.
See where it all started in
EXCLUSIVELY YOURS by Shannon Stacey.
Available Now.
Chapter One
"You got busy in the backseat of a '78 Ford Granada with Joseph Kowalski—only the most reclusive bestselling author since J. D. Salinger—and you don't think to tell me about it?"
Keri Daniels sucked the last dregs of her too-fruity smoothie through her straw and shrugged at her boss. "Would _you_ want anybody to know?"
"That I had sex with Joseph Kowalski?"
"No, that you had sex in the backseat of a '78 Granada." Keri had no idea how Tina Deschanel had gotten the dirt on her high school indiscretions, but she knew she was in trouble.
An exceptionally well-paid reporter for a glossy, weekly entertainment magazine did not withhold carnal knowledge of a celebrity on the editor-in-chief's most wanted list. And having kept that juicy little detail to herself wouldn't get her any closer to parking her butt in an editorial chair.
Tina slipped a photograph from her purse and slid it across the table. Keri didn't look down. She was mentally compiling a short list of the people who knew she'd fogged up the windows of one of the ugliest cars in the history of fossil fuels. Her friends. The cop who'd knocked on the fogged-up window with a flashlight at a really inopportune moment. Her parents, since the cop was in a bad mood that night. The approximately six hundred kids attending her high school that year and anybody _they_ told. Maybe short list wasn't the right term.
"It was 1989," Keri pointed out, because her boss clearly expected her to say something. "Not exactly a current event. And you ambushed me with this shopping spree."
Actually, their table in the outdoor café was surrounded by enough bags to stagger a pack mule on steroids, but now Keri knew she'd merely been offered the retail therapy _before_ the bad news. It shouldn't have surprised her. Tina Deschanel was a shark, and any friendly gesture should have been seen as a prelude to getting bitten in the ass.
"Ambushed?" Tina repeated, loudly enough to distract a pair of Hollywood starlets engaging in some serious public displays of affection in a blatant attempt to attract the cheap tabloid paparazzi. A rabid horde that might include Keri in the near future if she didn't handle this correctly.
"How do you think I felt?" Tina went on. "I reached out to a woman who mentioned on her blog she'd gone to high school with Joseph Kowalski. Once there was money on the table, I made her cough up some evidence, and she sent me a few photos. She was even kind enough to caption them for me."
Keri recognized a cue when it was shoved down her throat. With one perfectly manicured nail she hooked the 8x10 blowup and pulled it closer.
A girl smiled at her from the photo. She wore a pink, fuzzy sweater, faded second-skin jeans and pink high heels. Raccoon eyeliner made her dark brown eyes darker, frosty pink coated her lips and her hair was as big as Wisconsin.
Keri smiled back at her, remembering those curling iron and aerosol days. If the EPA had shut down their cheerleading squad back then, global warming might have been a total non-issue today.
Then she looked at the boy. He was leaning against the hideous brown car, his arms wrapped around young Keri's waist. Joe's blue eyes were as dark as the school sweatshirt he wore, and his grin managed to be both innocent and naughty at the same time. And those damn dimples—she'd been a sucker for them. His honey-brown hair was hidden by a Red Sox cap, but she didn't need to see it to remember how the strands felt sliding through her fingers.
She never failed to be amazed by how much she still missed him sometimes.
But who had they been smiling at? For the life of her, Keri couldn't remember who was standing behind the camera. She tore her gaze away from the happy couple and read the caption typed across the bottom.
_Joe Kowalski and his girlfriend, Keri Daniels, a few hours before a cop busted them making out on a back road and called their parents. Rumor had it when Joe dropped her off, Mr. Daniels chased him all the way home with a golf club._
Keri snorted. "Dad only chased him to the end of the block. Even a '78 Granada could outrun a middle-aged fat guy with a five iron."
"I fail to see the humor in this."
"You didn't see my old man chasing taillights down the middle of the street in his bathrobe. It wasn't very funny at the time, though."
"Focus, Keri," Tina snapped. "Do you or do you not walk by the bulletin board in the bull pen every day?"
"I do."
"And have you not seen the sheet marked ' _Spotlight Magazine_ 's Most Wanted' every day?"
"I have."
"And did you happen to notice Joseph Kowalski has been number three for several years?" Keri nodded, and Tina leaned across the table. " _You_ are going to get me an exclusive feature interview with the man."
"Or...?"
Tina sat back and folded her arms across her chest. "Don't take it to that point, Keri. Look, the man's eleventh bestseller is going to be _the_ summer blockbuster film of the decade. More A-listers lined up to read for that movie than line up on the red carpet for the Oscars. And he's a total mystery man."
"I don't get why you're so dedicated to chasing him down. He's just an author."
"Joseph Kowalski isn't just an author. He played the media like a fiddle and became a celebrity. The splashy NY parties with that gorgeous redhead—Lauren Huckins, that was it—on his arm. Then Lauren slaps him with a multi-million dollar emotional distress suit, he pays her off with a sealed agreement and then he disappears from the map? There's a story there, and I want it. Our readers will eat him up, and _Spotlight_ is going to serve him to them because you have access to him nobody else does."
"Had. I _had_ access to him." Keri sighed and flipped the photo back across the table even though she would rather have kept it to moon over later. "Eighteen years ago."
"You were his high school sweetheart. Nostalgia, darling! And rumor has it he's still single."
Keri _knew_ he was still single because the Danielses and Kowalskis still lived in the same small New Hampshire town, though Mr. and Mrs. Kowalski lived in a much nicer house now. Very _much_ nicer, according to Keri's mother.
"You've risen fast in this field," Tina continued, "because you have sharp instincts and a way with people, to say nothing of the fact I trusted you. But this..."
The words trailed away, but Keri heard her boss loud and clear. She was going to get this exclusive or her career with _Spotlight_ was over and she could start fresh at the bottom of another magazine's totem pole. And since her career was pretty much the sum total of her life, it wasn't exactly a threat without teeth.
But seeing Joe again? The idea both intrigued her and scared the crap out of her at the same time. "He's not going to open up his insanely private life to the magazine because he and I wore out a set of shocks in high school, Tina. It was fun, but it wasn't _that_ good."
Now she was flat-out lying. Joe Kowalski had set the gold standard in Keri's sex life. An ugly car, a Whitesnake tape, cheap wine and Joe still topped her personal "Ten Ways to a Better Orgasm" list.
Tina ran her tongue over her front teeth, and Keri had known her long enough to know her boss was about to deliver the kill shot.
"I've already reassigned your other stories," she said. It was an act of interference entirely inappropriate for Tina to do to someone of Keri's status at the magazine.
"That's unacceptable, Tina. You're overstepping your—"
"I can't overstep boundaries I don't have, Daniels. It's my magazine, and your promotion to editorial depends on your getting an interview with Kowalski, plain and simple." Then she reached into her purse and passed another sheet to her. "Here's your flight information."
* * *
The reclusive, mega-bestselling author in question was trying to decide between regular beef jerky or teriyaki-flavored when he heard Keri Daniels was back in town.
Joe Kowalski nodded at the cashier who'd actually left a customer half-rung up in an attempt to be the first to deliver the news. It wasn't the first time Keri had been back. If she'd gone eighteen years without a visit home to her parents, Janie Daniels would have flown out to LA and dragged her daughter home by an earlobe.
It was, however, the first time Keri had come looking for him that he knew of.
"She's been asking around for your phone number," the cashier added on, watching him like a half-starved piranha. "Of course nobody will give it to her, because we know how you feel about your privacy."
And because nobody had it, but he didn't feel a need to point that out. But he was surprised it had taken Keri this long to get around to looking him up, considering just how many years Tina Deschanel had been stalking his agent.
"Maybe she's on the class reunion committee," Joe told the cashier, and her face fell. Committees didn't make for hot gossip.
Members of the media had been hounding his agent for years, but only Tina Deschanel, who took tenacious to a whole new level, was Keri Daniels's boss. Joe had been watching Keri's career from the beginning, waiting for her to sell him out, but she never had. Until now, maybe.
While he wasn't a recluse of Salinger-esque stature, Joe liked his privacy. The New England dislike of outsiders butting into their lives, combined with his own fiscal generosity—in the form of a ballpark, playgrounds, library donations or whatever else they needed—kept the locals from spilling his business. By the time he struck it big, classmates who'd moved away didn't remember enough about him to provide interesting fodder.
Nobody knew the details of the lawsuit settlement except the lawyers, his family and Lauren—who would be financially devastated should she choose to break her silence. And, as unlikely as it seemed, he and Keri had never been linked together in the media reports his publicist monitored. He managed to keep his private life pretty much just that, despite the hype surrounding the movie.
"You're not old enough for a class reunion," Tiffany said, batting her way-too-young eyelashes at him.
_A half dozen of each_, he decided, tossing bags of beef jerky into his cart. He had a lot more list than cart space left and he kicked himself for not making Terry come along. She could have pushed a second cart _and_ run interference on nosy cashiers. She was good in the role, probably from years of experience.
As if on cue, the loudspeaker crackled. "Um...Tiffany, can you come back to register one, please? I have to pick up my kids in ten minutes."
The girl rolled her eyes and started back toward the front of the town's tiny market, but not before calling over her shoulder, "She's staying with her parents, but I guess you already know where they live."
Yeah, he guessed he did, too. The only question was what he was going to do about it. He and his entire family were preparing to leave town for two weeks, and it would be a shame if he missed out on whatever game Keri was playing.
Assuming it was even true. Not that she was in town, but that she wanted to give him a call. In his experience, if there wasn't enough dirt to keep a small town grapevine bearing fruit, people would just add a heaping pile of manufactured fertilizer.
Joe gave a row of pepperoni sticks the thousand-yard stare. If Keri Daniels _was_ looking for his phone number, it had to mean somebody had spilled the beans. The rabid pit bull of a woman she worked for had discovered her star reporter had once been the girl of Deschanel's favorite prey's dreams. If that was the case, he and Keri were heading for a reunion and _this_ time Keri could do the begging, just like he had before she'd run off to California.
Two hours later, after he'd unloaded his groceries at his own place, he faced his twin sister across the expanse of their mother's kitchen. Teresa Kowalski Porter was _not_ a happy woman.
"You are one dumb son of a bitch."
Whereas he liked to play with words—savor them—Terry just spat them out as they popped into her head.
"I thought you were a moron for putting up with her shit then," she said. "But now you're going back for a second helping?"
"I'm ninety-nine percent sure her boss sent her out here in order to use our history to manipulate me into giving the magazine an interview."
"Keri Daniels never needed any help when it came to manipulating people. And I don't even want to think about that other one percent on an empty stomach."
The entire Kowalski family had once held some resentment toward Keri, but Terry's had festered. Not only because his sister knew how to hold a grudge—although she certainly did—but because Keri had hurt her even before she'd gotten around to hurting Joe.
Terry and Keri had been best friends since kindergarten, despite how corny their names sounded when said together. The trouble started during their freshman year when Mr. Daniels got a big promotion. Between the new style Daddy's money bought and a developing body that just wouldn't quit, Keri had soon started circling with a new group of friends. By the beginning of sophomore year, Keri had left Terry in her social dust, and she hadn't been forgiven. Joe's relationship with Keri had been the only thing to ever come between him and his twin.
And that's why he'd come to Terry first. "Aren't you even a little curious about how she turned out?"
"No." She pulled a soda from the fridge and popped the top without offering him one—never a good sign. "She broke your heart and now, almost twenty years later, she wants to capitalize on that and sell you out to further her career. That tells me all I need to know about how she turned out, thanks."
Joe kicked out a chair and sat at the kitchen table. "It's just dinner, Terry. Dinner with somebody who used to mean a lot to both of us."
"Why are you even talking to me about this, Joseph? I could give a shit less about Keri Daniels. If you want to have dinner with her, then do it. You're an adult."
"I need you to cover for me with the family."
Terry laughed, then grabbed a list from the fridge to double-check against the army of plastic bins at her feet. "Okay, _almost_ an adult."
"You know Mom's going to be all over my ass about being ready to go day after tomorrow even though I'm the first one packed every year. If I fall off her radar for even a few hours, she'll have a fit."
"You really are a dumb-ass. Mom knows she's in town. Tell her you're going to dinner with the bitch who ripped your heart out of your chest and stomped on it. Do you think three jars of peanut butter are enough?"
"We're only going for two weeks. And I don't want the whole damn town to know I'm going to see her."
"Eight adults and five kids...I guess three will be enough."
"Terry." He waited until she looked up from her list. "Seven adults."
"What? Oh. Yeah." She laughed at herself, but the pain was written all over her face. "Who's the dumb-ass now, huh?"
"He is," Joe said, not for the first time. "Did you call that divorce lawyer my agent recommended yet?"
"I'm putting it off until the trip is over." She held up a hand to ward off the argument she knew was coming. "I never thought I'd say this, but I'd rather talk about Keri Daniels."
"Fine. If she agrees to dinner, I'm going to tell everybody I've got a meeting in Boston tomorrow night. Will you back me up?"
"Why didn't you just tell _me_ that, too?" she asked, clearly exasperated now.
"I thought about it. But I kept seeing Keri a secret from you once, sis, and it hurt you when you found out. I didn't want to do it again."
She sighed and Joe tasted victory. "Okay, I'll back you up, but I still think you're a moron. How many jars of pickles did we go through last year?"
* * *
"You want me to do _what_?"
Joe stretched out on the battered leather couch in his office and tried not to laugh at the tone of horrified shock in his agent's voice. "Dinner date. Reporter from _Spotlight Magazine_. You heard right."
"Did that Deschanel bitch kidnap one of the kids? Threaten your mother? I know people, Joe. I can take care of this for you."
"It's Keri. Keri Daniels."
A loaded pause. "That's great. Sure I want to do that for you, Joe, because with a big movie premiere coming up and a deadline approaching, I absolutely want your head fucked up over your high school sweetheart. And exposing yourself professionally to somebody you've exposed yourself to personally? Great idea."
"Dan. Take a breath."
"Oh, I'm taking so many breaths I'm hyperventilating. I need to put a fucking bag over my mouth. Or maybe put a bag over your head because your brains are leaking out."
"I'm pretty sure Tina Deschanel found out Keri and I dated in high school and I doubt Keri wants to do this any more than I do."
"Then don't do it. Please, for the love of my fifteen percent, don't do it."
"I'm just going to have dinner with her and then she can go back to California and tell her boss she tried."
"Then why don't _you_ call her?"
Good question. One he didn't particularly care to share the pathetic answer to with Dan.
After all these years, he didn't want to be reunited with Keri by telephone. He wanted to see her face at the same time he heard her voice. Okay, if he was being honest, he wanted to know if he could see the Keri he'd loved in her.
Worst-case scenario, whatever business she felt she had with him could be conducted over the phone and he wouldn't get to see her at all. It was just curiosity—for old times' sake—but he wanted to see her again.
"I'm famous," he said lightly. "I pay people to make my phone calls for me."
"Bullshit. And speaking of paying people, why are you dumping this on me? Jackie's in charge of publicity and press."
"Her head would explode."
The silence on the other end lasted so long Joe thought his agent might have hung up on him. But no such luck. "Joe, we've been together a long time and, speaking as a guy who's had your back for almost a decade and a half, I think this is even a worse idea personally than it is professionally."
"I know, but I'm going to do it anyway."
* * *
Keri swallowed another mouthful of non-designer water and resisted glancing at her watch again. Maybe she'd been spoiled by a generous expense account, but meeting in a cheap chain restaurant in the city was too high a price to pay for privacy, in her opinion.
And what was with Joe having his agent contact her to set up the dinner? He couldn't pick up the phone and call her himself? Maybe his overinflated ego interfered with telephone use, so he had to use his agent as though she were a total stranger. As if she didn't know he had a birthmark shaped like an amoeba on his right ass cheek.
Unfortunately, her opinions didn't seem to matter. Tina had made it very clear that if Joseph Kowalski held up a hoop, Keri was to jump through it, wearing a pom-pom hat and barking like a dog if that's what it took to make the author happy.
It really burned her ass to be in this predicament, and just thinking about her boss made her temples throb. The temptation to walk out was incredibly strong but, while she knew she could walk into any magazine editor's office and come out with a job, it would set her back years in her quest to climb to the top of the masthead.
It was only an interview, after all.
There hadn't been a new press or book jacket photo of Joe since his sixth book. That picture had pretty much looked like him, albeit without the grin and dimples. It was one of those serious and contemplative author photos and she'd hated it. But by now, especially considering the coin he was pulling down, he was probably a self-indulgent, fat, bald man with a hunched back from too much time over the keyboard.
She, on the other hand, thought she'd aged well. Nothing about her was as firm as it had been in high school, but she was still slim enough to pull off the pricey little black dress she'd chosen for tonight. Her hair, now sleek and smooth to her shoulders, was still naturally blond, though she would admit to some subtle highlighting.
"Hey, babe," a voice above her said, and just like that the sophisticated woman was gone. She was eighteen again, with big dreams, bigger hair, and an itch only Joe Kowalski could scratch.
She could almost taste the Boone's Farm as she turned, braced for an old, fat Joe and finding...just Joe.
He'd aged even better than she had, the bastard. His face had matured and he had a trace of what men were allowed to call character lines, but he still had that slightly naughtier version of the boy-next-door look. Of course, he wasn't _quite_ as lean as he used to be, but it probably wasn't noticeable to anybody who hadn't spent a significant amount of senior year running her hands over his naked body.
All in all, he resembled the boy who'd charmed her out of her pants a lot more than he did the stodgy author she'd hoped to charm into an interview.
"Hi, Joe." She'd stored up a mental cache of opening lines ranging from cute to funny to serious, and every single one seemed to have been deleted. "Thank you for coming."
He slid onto the bench seat across the booth from her. "Time's been pretty damn good to you, if you don't mind my saying so."
No, she didn't mind at all. "You, too. Interesting choice of restaurant, by the way. An eccentricity of the rich and reclusive author?"
He flashed those dimples at her and Keri stifled a groan. Why couldn't he have been fat and bald except for unattractive tufts of hair sprouting from his ears?
"I just like the all-you-can-eat salad bar," he said. "So tell me, is Tina hiding under the table? Waiting to pounce on me in the men's room?"
Keri laughed, partly because it was such a relief to have the topic out in the open. "No, she refuses to leave the city. Says her lungs can't process unpolluted air."
His smoky blue eyes were serious even though his dimples were showing. "Terry's been expecting you to sell me out for your own advantage since I first made the _NYT_ list."
Hearing his sister's name made her wince, and knowing she still held such a low opinion of Keri just made her sad. During the very rare moments she allowed herself to dwell on regrets, she really only had two. And they were both named Kowalski.
"I'm being professionally blackmailed," she admitted. "If I don't get an exclusive interview for _Spotlight_ from you, I'm out of a job."
"I figured as much. Who spilled the beans?"
Keri pulled the 8x10 from her bag and handed it to him. "I don't know. Do you remember who took that?"
"Alex did, remember? The night we...well, the caption's pretty thorough."
She remembered now. Alex had been a friend of Joe's, but they'd all traveled in the same circle. "But Tina said the blogger who claimed to go to school with you was a woman."
"His name's Alexis now. You wouldn't believe how much he paid for his breasts."
Keri laughed, but Joe was still looking at the photo. Judging by the way the corners of his lips twitched into a small smile and how he tilted his head, Keri figured Tina had been right about the nostalgia angle.
The waitress approached their table, order pad in hand.
Joe still hadn't looked up. "Remember the night you started drinking your screwdrivers without the orange juice and did a striptease on Alex's pool table?"
"I bet the jokes about Alex's pool table having a nice rack went on forever," the waitress said, and _then_ Joe looked up.
"You bet they did," he said easily, but he was blushing.
"There must be a whole new slew of jokes about Alex's rack now," Keri said, making Joe laugh.
The waitress tapped her pen on the tab. "So do you guys know what you want?"
And then he did it, just as he always had whenever he'd been asked that question—he looked straight at Keri with blatant hunger in his eyes and said, "Yes, ma'am, I do."
The shiver passed all the way from her perfectly styled hair to her Ferragamo pumps. Then she watched in silent amusement while he ordered for them both—her regular high school favorite of a medium-well bacon cheeseburger with extra pickles, fries and a side of coleslaw. There was no mention of salad, all-you-can-eat or otherwise.
When the waitress left, she gave him a scolding look. "That's more calories than I've consumed in the last two years, Joe."
He waved away her halfhearted objection. "Let's get down to business."
Keri didn't want to. She was too busy enjoying that sizzle of anticipation she'd always felt when Joe looked at her. Apparently those blue eyes hadn't lost their potency over the past two decades.
Joe leaned back against the booth and crossed his arms. It was probably supposed to look intimidating, but all the gesture really did was draw attention to how tan and incredibly well-defined his biceps were against his white T-shirt. Typing definitely wasn't the only workout his arms got.
"Let's see if I can synopsize our situation," he said. "I never give interviews. You want an interview. No, strike that. You _need_ an interview, because the rabid jackal you work for has made it clear your job is on the line. Am I close?"
The sizzle receded to a tingle. "You're in the ballpark."
"I'm not just in the ballpark, babe. I'm Josh Beckett on the mound at Fenway. If I don't give you what you need, you're hiding behind palm trees waiting for drunk pop stars to pop out of their Wonderbras."
And that pretty much killed the last of the lingering tingle. "Payback's a bitch and all that, right, Joe?"
The dimples flashed. "Isn't it?"
Keri just shrugged. She wasn't about to start putting deals on the table or making promises. After years of dealing with celebrities, she usually knew how to handle herself. But this was Joe Kowalski. He'd seen her naked and she'd broken his heart. That changed the rules.
"I'm leaving town tomorrow," he said. "I'll be gone two weeks."
The tingle flared up again, but this time it was a lot more panic and a lot less anticipation. "There's always the telephone or fax or email."
"Not where I'm going."
She laughed. "Would that be Antarctica or a grass hut in the Amazon Basin?"
"I'm not even leaving the state."
Joe had sucked at cards in high school—he had no poker face—but she couldn't read him now. The instincts that had skyrocketed her to the top of the _Spotlight_ food chain were giving her nothing, except the feeling he was setting her up for something she might want no part of.
The waitress brought their food, buying Keri a few more minutes to think. One thing Joe had never had was a mean streak—if there was no chance in hell of the interview happening, he wouldn't have agreed to meet her for dinner. He'd never had it in him to humiliate somebody for the sake of his own enjoyment.
Granted, the kind of checks he had to be cashing changed a person, but she'd already seen enough of him—and heard enough from her mother—to know Joe was still Joe. Just with more expensive toys.
That didn't mean he wasn't going to have her jumping through hoops, of course. Probably an entire flaming series of them.
She bit into the bacon cheeseburger and the long-forgotten flavor exploded on her tongue. She closed her eyes and moaned, chewing slowly to fully savor the experience.
"How long has it been since you've had one of those?" Joe asked, and she opened her eyes to find him watching her.
Keri swallowed, already anticipating the next bite. "Years. Too many years."
He laughed at her, and they enjoyed some idle chit-chat while they ate. She brought up the movie and he talked about it in a generic sense, but she noted how careful he was not to say anything even remotely interview worthy.
There would be no tricking the man into revealing something that would get Tina off her back.
"You know," she said, still holding half her cheeseburger, "I _really_ want to enjoy this meal more, and I can't with this hanging over my head. What's it going to take?"
"I gave it some thought before I came, and I think you should come with me."
"Where?"
"To where I'm going."
Keri set the cheeseburger on the plate. "For two weeks?"
The length of time hardly mattered, since she couldn't return to California without the interview anyway. But she'd like an idea of what she was signing up for.
"Whether you're there for two weeks or not is up to you. For each full day you stick it out with the Kowalskis, you get to ask me one question."
Keri, unlike Joe, did have a poker face and she made sure it was in place while she turned his words over in her head. "When you say the Kowalskis, you mean..."
"The entire family." The dimples were about as pronounced as she'd ever seen them. "Every one of them."
Her first thought was _oh shit_. Her second, to wonder if _People_ was hiring.
Joe reached into the back pocket of his jeans and pulled out a folded sheet of spiral notebook paper. "Here's a list of things you'll need. I jotted it down in the parking lot."
Keri unfolded the paper and read the list twice, trying to get a sense of what she was in for.
BRING: Bug spray; jeans; T-shirts; several sweatshirts, at least one with a hood; one flannel shirt (mandatory); pajamas (optional); underwear (also optional); bathing suit (preferably skimpy); more bug spray; sneakers; waterproof boots; good socks; sunscreen; two rolls of quarters.
DO NOT BRING: cell phone; Blackberry; laptop; camera, either still or video; alarm clock; voice recorder; any other kind of electronic anything.
She had no clue what it meant, other than Joe wanting her half-naked and unable to text for help.
Copyright © 2010 by Shannon Stacey
Author Note
The processes and organizational structures of large city fire departments are incredibly complex, and I took minor creative liberties in order to maintain readability.
To first responders everywhere, thank you.
Acknowledgments
Thank you to Angela James and Carina Press for your constant support.
Also available from Shannon Stacey
and Carina Press
**THE KOWALSKI** Series from Shannon Stacey
Suggested reading order
EXCLUSIVELY YOURS
UNDENIABLY YOURS
YOURS TO KEEP
ALL HE EVER NEEDED
ALL HE EVER DESIRED
ALL HE EVER DREAMED
LOVE A LITTLE SIDEWAYS
TAKEN WITH YOU
FALLING FOR MAX
HOLIDAY SPARKS
MISTLETOE & MARGARITAS
SLOW SUMMER KISSES
SNOWBOUND WITH THE CEO
HER HOLIDAY MAN
BE MINE
HEAT EXCHANGE
A FIGHTING CHANCE
Also available from Shannon Stacey and Harlequin
ALONE WITH YOU
HEART OF THE STORM
And stay tuned for **FULLY IGNITED**, the next book in the BOSTON FIRE Series from Shannon Stacey, coming soon
About the Author
_New York Times_ and _USA TODAY_ bestselling author Shannon Stacey lives with her husband and two sons in New England, where her two favorite activities are writing stories of happily-ever-after and driving her UTV through the mud. You can contact Shannon through her website, shannonstacey.com, where she maintains an almost daily blog, or visit her on Twitter, twitter.com/shannonstacey, and her Facebook page, facebook.com/shannonstacey.authorpage, or email her at shannon@shannonstacey.com. To find out about other books by Shannon Stacey or to be alerted to new releases, sign up for her monthly newsletter here or at bit.ly/shannonstaceynewsletter.
**Introducing the Carina Press Romance Promise!**
The Carina Press team all have one thing in common: we are romance readers with a longtime love of the genre. And we know what readers are looking for in a romance: a guarantee of a happily-ever-after (HEA) or happy-for-now (HFN). With that in mind, we're initiating the **Carina Press Romance Promise**. When you see a book tagged with these words in our cover copy/book description, we're making you, the reader, a very important promise:
**This book contains a romance central to the plot and ends in an HEA or HFN.**
Simple, right? But so important, we know!
Look for the Carina Press Romance Promise and one-click with confidence that we understand what's at the heart of the romance genre!
Look for this line in Carina Press book descriptions:
_One-click with confidence. This title is part of the_ **Carina Press Romance Promise:** _all the romance you're looking for with an HEA/HFN. It's a promise!_
Find out more at CarinaPress.com/RomancePromise.
Find out more at CarinaPress.com.
Catch up on Shannon Stacey's Boston Fire series
HEAT EXCHANGE (Boston Fire, book one)
And don't miss:
FULLY IGNITED (Boston Fire, book three)
HOT RESPONSE (Boston Fire, book four)
Grab your copies now!
Connect with us for info on our new releases, access to exclusive offers and much more!
Visit CarinaPress.com
We like you—why not like us on Facebook: Facebook.com/CarinaPress
Follow us on Twitter: Twitter.com/CarinaPress
**Get the latest on Carina Press by joining our eNewsletter!**
**Don't miss out, sign up today!**
CarinaPress.com/newsletter
Sign up and get Carina Press offers and coupons delivered straight to your inbox!
Plus, as an eNewsletter subscriber, you'll get the inside scoop on everything
Carina Press and be the first to know about our newest releases!
Visit CarinaPress.com
Other ways to keep in touch:
Facebook.com/CarinaPress
Twitter.com/CarinaPress
We think you have a good book in you!
Do you write in the below genres? Harlequin's Carina Press wants to see your manuscript.
• Contemporary Romance
• Romantic Suspense
• Historical Romance
• Paranormal Romance
• Mystery
• Erotic Romance
• LGBT
• Science Fiction & Fantasy
Submit today! _All manuscripts receive a thorough evaluation by a Carina Press editor and a decision within 12–16 weeks. What do you have to lose?_
To learn more about our submission guidelines, visit us at CarinaPress.com.
ISBN-13: 9781426899973
Controlled Burn
Copyright © 2015 by Shannon Stacey
Edited by: Angela James
All rights reserved. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on-screen. No part of this text may be reproduced, transmitted, down-loaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of publisher, Harlequin Enterprises Limited, 225 Duncan Mill Road, Don Mills, Ontario, Canada M3B 3K9.
All characters in this book have no existence outside the imagination of the author and have no relation whatsoever to anyone bearing the same name or names. They are not even distantly inspired by any individual known or unknown to the author, and all incidents are pure invention.
This edition published by arrangement with Harlequin Books S.A.
® and ™ are trademarks of the publisher. Trademarks indicated with ® are registered in the United States Patent and Trademark Office, the Canadian Intellectual Property Office and in other countries.
www.CarinaPress.com
Q: SP - Error occurred in deployment step 'Recycle IIS Application Pool' in VS

I have privileges everywhere, but when I deploy a project with web parts, Visual Studio shows me the following error:
Error 1 Error occurred in deployment step 'Recycle IIS Application Pool': Object reference not set to an instance of an object.
How can I solve this problem? Thanks in advance!
A:

1. Make sure that you are running Visual Studio as Administrator.
2. Make sure the application pool of the SharePoint web application is running.
3. Check the Site URL of your SharePoint solution and make sure it is assigned properly and correctly: right-click on the solution name > Properties > check Site URL.
4. Try to remove your solution from Central Administration.
5. Check whether the SharePoint Timer service is running.
6. Make sure the "Package" folder exists in your Solution Explorer:
   - If it doesn't, make sure it isn't excluded from the solution, then include it.
   - Otherwise, try creating a new solution and moving your web part into it.
{"url":"https:\/\/research.web3.foundation\/en\/latest\/polkadot\/BABE\/Babe\/","text":"$\\def\\skvrf{\\mathsf{sk}^v} \\def\\pkvrf{\\mathsf{pk}^v} \\def\\sksgn{\\mathsf{sk}^s} \\def\\pksgn{\\mathsf{pk}^s} \\def\\skac{\\mathsf{sk}^a} \\def\\pkac{\\mathsf{pk}^a} \\def\\D{\\Delta} \\def\\A{\\mathcal{A}} \\def\\vrf{\\mathsf{VRF}} \\def\\sgn{\\mathsf{Sign}}$\n\n# BABE\u00b6\n\n## 1. Overview\u00b6\n\nBABE stands for 'Blind Assignment for Blockchain Extension'. In BABE, we deploy Ouroboros Praos [2] style block production.\n\nIn Ouroboros [1] and Ouroboros Praos [2], the best chain (valid chain) is the longest chain. In Ouroboros Genesis, the best chain can be the longest chain or the chain which is forked long enough and denser than the other chains in some interval. We have a different approach for the best chain selection based on GRANDPA and longest chain. In addition, we do not assume that all parties can access the current slot number which is more realistic assumption.\n\n## 2. BABE\u00b6\n\nIn BABE, we have sequential non-overlaping epochs $(e_1, e_2,...)$, each of which contains a number of sequential slots ($e_i = \\{sl^i_{1}, sl^i_{2},...,sl^i_{t}\\}$) up to some bound $t$. We randomly assign each slot to a party, more than one parties, or no party at the beginning of the epoch. These parties are called a slot leader. We note that these assignments are private. It is public after the assigned party (slot leader) produces the block in his slot.\n\nEach party $P_j$ has at least one type of secret\/public key pair:\n\n\u2022 Session keys consists of two keys: Verifiable random function (VRF) keys $(\\skvrf_{j}, \\pkvrf_{j})$ and the signing keys for blocks $(\\sksgn_j,\\pksgn_j)$.\n\nWe favor VRF keys being relatively long lived, but parties should update their associated signing keys from time to time for forward security against attackers causing slashing. 
More details related to these key are here.\n\nEach party $P_j$ keeps a local set of blockchains $\\mathbb{C}_j =\\{C_1, C_2,..., C_l\\}$. These chains have some common blocks (at least the genesis block) until some height.\n\nWe assume that each party has a local buffer that contains the transactions to be added to blocks. All transactions in a block is validated with a transaction validation function.\n\n### BABE with GRANDPA Validators $\\approx$ Ouroboros Praos\u00b6\n\nBABE is almost the same as Ouroboros Praos [2] except chain selection rule and the slot time adjustment.\n\nIn BABE, all validators have same amount of stake so their probability of being selected as slot leaders is equal. Given that we have $n$ validators and relative stake of each party is $\\theta = S\/n$ where $S$ is the total amount of stake, the probability of being selected is\n\nwhere $c$ is a constant.\n\nThe threshold used in BABE for each validator $P_i$ is\n\nwhere $\\ell_{vrf}$ is the length of the VRF's first output (randomness value).\n\nBABE consists of three phases:\n\n#### 1. Genesis Phase\u00b6\n\nIn this phase, we manually produce the unique genesis block.\n\nThe genesis block contain a random number $r_1$ for use during the first epoch for slot leader assignments, the initial stake's of the stake holders ($st_1, st_2,..., st_n$) and their corresponding session public keys ($\\pkvrf_{1}, \\pkvrf_{2},..., \\pkvrf_{n}$), $(\\pksgn_{1}, \\pksgn_{2},..., \\pksgn_{n}$).\n\nWe might reasonably set $r_1 = 0$ for the initial chain randomness, by assuming honesty of all validators listed in the genesis block. We could use public random number from the Tor network instead however.\n\nTODO: In the delay variant, there is an implicit commit and reveal phase provided some suffix of our genesis epoch consists of every validator producing a block and all produced blocks being included on-chain, which one could achieve by adjusting paramaters.\n\n#### 2. 
Normal Phase\u00b6\n\nWe assume that each validator divided their timeline in slots after receiving the genesis block. They determine the current slot number according to their timeline. If a new validator joins to BABE after the genesis block, this validator divides his timeline into slots with the Median algorithm we give in Section 4.\n\nIn normal operation, each slot leader should produce and publish a block. All other nodes attempt to update their chain by extending with new valid blocks they observe.\n\nWe suppose each validator $P_j$ has a set of chains $\\mathbb{C}_j$ in the current slot $sl_k$ in the epoch $e_m$. We have a best chain $C$ selected in $sl_{k-1}$ by our selection scheme, and the length of $C$ is $\\ell\\text{-}1$.\n\nEach validator $P_j$ produces a block if he is the slot leader of $sl_k$. If the first output ($d$) of the following VRF is less than the threshold $\\tau$ then he is the slot leader.\n\nIf $P_j$ is the slot leader, $P_j$ generates a block to be added on $C$ in slot $sl_k$. The block $B_\\ell$ should contain the slot number $sl_{k}$, the hash of the previous block $H_{\\ell\\text{-}1}$, the VRF output $d, \\pi$, transactions $tx$, and the signature $\\sigma = \\sgn_{\\sksgn_j}(sl_{k}||H_{\\ell\\text{-}1}||d||pi||tx))$. $P_i$ updates $C$ with the new block and sends $B_\\ell$.\n\nIn any case (being a slot leader or not being a slot leader), when $P_j$ receives a block $B = (sl, H, d', \\pi', tx', \\sigma')$ produced by a validator $P_t$, it validates the block with $\\mathsf{Validate}(B)$. 
$\mathsf{Validate}(B)$ should check the following in order to validate the block:

- if $\mathsf{Verify}_{\pksgn_t}(\sigma')\rightarrow \mathsf{valid}$ (signature verification),

- if the party is the slot leader: $\mathsf{Verify}_{\pkvrf_t}(\pi', r_m||sl) \rightarrow \mathsf{valid}$ and $d' < \tau_t$ (verification with the VRF's verification algorithm),

- if $P_t$ did not produce another block for another chain in slot $sl$ (no double signature),

- if there exists a chain $C'$ with the header $H$,

- if the transactions in $B$ are valid.

If the validation process goes well, $P_j$ adds $B$ to $C'$. Otherwise, it ignores the block.

At the end of the slot, $P_j$ decides the best chain with the chain selection rule we give in Section 3.

#### 3. Epoch Update

Before starting a new epoch $e_m$, there are certain things to be completed in the current epoch $e_{m-1}$:

- Validators update
- (Session keys)
- Epoch randomness

If there is a validator update in BABE, this update has to be done by the end of the last block of the current epoch $e_{m-1}$ so that the new validators are able to actively participate in the block production in epoch $e_{m+1}$.

The new randomness for the new epoch is computed as in Ouroboros Praos [2]: concatenate all the VRF outputs in blocks starting from the first slot of the epoch to the $R/2^{th}$ slot of $e_m$ ($R$ is the epoch size). Assume that the concatenation is $\rho$. Then the randomness in the next epoch is

$$r_{m+1} = H(r_{m}||m+1||\rho)$$

This can also be combined with a VDF output to prevent the small bias the adversaries can introduce, for better security bounds. BABE is secure without a VDF, but if we combine a VDF with the randomness produced by blocks, we have better parachain allocation.

## 3. Best Chain Selection

Given a chain set $\mathbb{C}_j$ and the party's current local chain $C_{loc}$, the best chain algorithm eliminates all chains which do not include the block $B$ finalized by GRANDPA.
Let's denote the remaining chains by the set $\mathbb{C}'_j$. If we do not have a finalized block by GRANDPA, then we use probabilistic finality in the best chain selection algorithm (the probabilistically finalized block is the block which is $k$ blocks before the last block of $C_{loc}$).

We do not use the chain selection rule as in Ouroboros Genesis [3], because that rule is useful for parties who come online after a period of time and do not have any information about the current valid chain (for parties that are always online, the Genesis rule and the Praos rule are indistinguishable except with negligible probability). Thanks to GRANDPA finality, newcomers have a reference point from which to build their chain, so we do not need the Genesis rule.

## 4. Relative Time

It is important for parties to know the current slot for the security and completeness of BABE. Therefore, we show how a party realizes the notion of slots. Here, we assume a partially synchronous channel, meaning that any message sent by a party arrives at most $\D$ slots later. $\D$ is not an unknown parameter.

Each party has a local clock, and this clock does not have to be synchronized with the network. When a party receives the genesis block, it stores the arrival time as $t_0$, as a reference point for the beginning of the first slot. We are aware that the beginning of the first slot is not the same for everyone. We assume that this difference is negligible compared to $T$, since there will not be too many validators in the beginning.
Then each party divides its timeline into slots.

Obtaining Slot Number: Parties who join BABE after the genesis block is released, or who lose their notion of slot, run the following protocol to obtain the current slot number with the Median Algorithm, and then update it with the consistency algorithm if they see an inconsistency with the output of the median algorithm after running the consistency algorithm.

If a party $P_j$ is a newly joining party, he downloads chains and receives blocks at the same time. After the chains' download is completed, he adds the valid blocks to the corresponding chains. Assuming that a slot number $sl$ is executed in a (local) time interval $[t_{start}, t_{end}]$ of party $P_j$, we have the following protocols for $P_j$ to output $sl$ and $t \in [t_{start}, t_{end}]$.

- Median Algorithm: The party $P_j$ stores the arrival time $t_i$ of $n$ blocks with their corresponding slot numbers $sl_i$. Let us denote the stored arrival times of blocks by $t_1,t_2,...,t_n$, whose slot numbers are $sl_1,sl_2,...,sl_n$, respectively. Remark that these slot numbers do not have to be consecutive, since some slots may be empty, have multiple slot leaders, or have a slot leader that is offline, late or early. After storing $n$ arrival times, $P_j$ sorts the list $\{t_1+a_1T, t_2+a_2T,..., t_n+a_nT\}$ where $a_i = sl - sl_i$. Here, $sl$ is a slot number for which $P_j$ wants to learn the corresponding time on his local clock. At the end, $P_j$ outputs the median of the ordered list as $t$, together with $sl$.

Lemma 1: Assuming that $\D$ is the maximum network delay in terms of slot number and $\alpha\gamma(1-c)^\D \geq (1+\epsilon)/2$, where $\alpha$ is the honest stake, $\gamma\alpha$ is the honest and synchronized parties' stake and $\epsilon \in (0,1)$, the median algorithm guarantees $sl' - sl \leq \D$, where $sl'$ is the correct slot number at time $t$, with probability $1 - \exp(-\frac{\delta^2\mu}{2})$, where $0 < \delta \leq \frac{\epsilon}{1+\epsilon}$ and $\mu = n(1+\epsilon)/2$.

Proof: Let us first assume that more than half of the blocks among the $n$ blocks are sent by honest and synchronized parties and $t = t_i + a_iT$. Then, it means that more than half of the blocks were sent on time. If the block of $sl_i$ is sent by an honest and synchronized party, we can conclude that it was sent no earlier than $t_i' \geq t_i - \D T$. In this case, the correct slot number $sl'$ at time $t$ is $sl_i + \lfloor\frac{t-t_i'}{T}\rfloor = sl_i + \lfloor\frac{t_i + a_iT - t_i'}{T}\rfloor$. If $\D T = 0$, then $sl' = sl$; otherwise $sl' \leq sl_i + \lfloor\frac{a_iT + \D T}{T}\rfloor = sl+\D$.

If the median does not correspond to a time derived from an honest and synchronized party's block, we can say that there is at least one honest and synchronized time after the median, because more than half of the times are honest and synchronized. Let's denote this time by $t_u + a_uT$. Let's assume that the latest honest one in the ordered list is delayed by $\D' \leq \D$ slots. It means that if the median were this one, then $sl_u' - sl \leq \D'$ as shown above, where $sl_u'$ is the correct slot number at time $t_u + a_uT$. Clearly, $sl \leq sl_u'$. Then, we can conclude that $sl' - sl \leq sl_u' - sl \leq \D' \leq \D$.

Now, we show the probability of having more than half honest and synchronized blocks among the $n$ blocks.
If $\alpha\gamma(1-c)^\D \geq (1+\epsilon)/2$, then the blocks of honest and synchronized parties are added to the best chain even if there are $\D$ slots of delay (this is discussed in the proof of Theorem 2) with probability more than $(1+\epsilon)/2$. We define a random variable $X_v \in \{0,1\}$ which is 1 if $t_v$ is the arrival time of an honest and synchronized block. Then the expected number of honest and synchronized blocks among $n$ blocks is $\mu = n(1+\epsilon)/2$. We bound this with the Chernoff bound:

$$\Pr\Big[\sum_v X_v < (1-\delta)\mu\Big] \leq \exp\Big(-\frac{\delta^2\mu}{2}\Big)$$

Given that $0 < \delta \leq \frac{\epsilon}{1+\epsilon}$ and $\mu(1-\delta) \geq n/2$, this probability should be negligibly small with $\delta \approx 1$ in order to have more than half honest and synchronized blocks among the $n$ slots. If $\epsilon \geq 0.1$ and $\delta = 0.09$, the probability of having fewer than half is less than $0.06$ if $n \geq 1200$.

We give another algorithm, called the consistency algorithm, below. It can be run after the median algorithm to verify or update $t$ later on.

- Consistency Algorithm: Let us first define lower consistent blocks. Given consecutive blocks $\{B'_1, B'_2,...,B'_n\} \in C$, a block pair $B'_u$ and $B'_v$, belonging to the slots $sl_u$ and $sl_v$ ($sl_u < sl_v$) respectively, is lower consistent for a party $P_j$ if they arrive at $t_u$ and $t_v$ such that $sl_v - sl_u = \lfloor\frac{t_v - t_u}{T}\rfloor$. We call them upper consistent if for all blocks $sl_v - sl_u = \lceil\frac{t_v - t_u}{T}\rceil$. Whenever $P_j$ receives at least $k$ either upper or lower consistent blocks, it outputs $t$ and $sl = sl_u + \lfloor\frac{t-t_u}{T}\rfloor$, where $sl_u$ is the slot of one of the blocks in the block set.

Lemma 2: Assuming that the network delay is at most $\D$ and the honest parties' stake satisfies the condition in Theorem 2, $P_j$'s current slot is at most $\D$-behind or $2\D$-behind the correct slot $sl'$ at time $t$ (i.e., $sl' - sl \leq \hat{\D}$ with $\hat{\D} \in \{\D, 2\D\}$).
Proof: According to Theorem 2, there is at least one block honestly generated by an honest party within $k$ slots with probability $1 - e^{-\Omega(k)}$. Therefore, one of the blocks among the lower or upper consistent blocks belongs to an honest party. We do our proof with lower consistent blocks; the upper consistent case is similar. Let's denote $\hat{\D} = \D$ or $\hat{\D} = 2\D$. If $k$ blocks are lower consistent, then it means that all blocks are lower consistent with the honest block. If $P_j$ chooses the arrival time and slot number of this honest block, then $sl \geq sl' - \hat{\D}$, because the honest party's block must arrive at $P_j$ at most $\hat{\D}$ slots later. Now, we need to show that if $P_j$ chooses the arrival time of a different block, which does not have to be produced by an honest and synchronized party, then he is still at most $\hat{\D}$-behind. Assume that $P_j$ picks $sl_v > sl_u$ to compute $sl$ for $t$. We show that this computation is equal to $sl = sl_u +\lfloor\frac{t - t_u}{T}\rfloor$. We know because of the lower consistency that $sl_v- sl_u = \lfloor\frac{t_v - t_u}{T}\rfloor$. So $P_j$ is going to obtain the same $sl$ and $t$ with all blocks. Similarly, if $P_j$ picks $sl_v < sl_u$, he obtains the same $sl$.

There are two drawbacks of this protocol. One drawback is that a party may never have $k$ consistent blocks if an adversary randomly delays some blocks. In this case, $P_j$ may never have consistent blocks. The other drawback is that if the honest block among the $k$ consistent blocks is not from a synchronized party, then the consistency algorithm performs worse than the median. However, this protocol can be used after the median protocol to update or verify the slot number: if a party sees $k$ consistent blocks and the slot number $sl'$ obtained with the consistency algorithm is less than the slot number obtained from the median protocol, he updates it with $sl'$.

## 5. Security Analysis

(If you are interested in parameter selection based on the security analysis, you can go directly to the next section.)

BABE is the same as Ouroboros Praos except for the chain selection rule and the slot time extraction. Therefore, we need a new security analysis.

### Definitions

We give the definitions of the security properties before jumping to the proofs.

Definition 1 (Chain Growth (CG)) [1,2]: Chain growth with parameters $\tau \in (0,1]$ and $s \in \mathbb{N}$ ensures that if the best chain owned by an honest party at the onset of some slot $sl_u$ is $C_u$, and the best chain owned by an honest party at the onset of slot $sl_v \geq sl_u+s$ is $C_v$, then the difference between the lengths of $C_v$ and $C_u$ is greater than or equal to $\tau s$.

Definition 2 (Chain Quality (CQ)) [1,2]: Chain quality with parameters $\mu \in (0,1]$ and $k \in \mathbb{N}$ ensures that the ratio of honest blocks in any $k$-length portion of an honest chain is $\mu$.

Definition 3 (Common Prefix): Common prefix with parameter $k \in \mathbb{N}$ ensures that any chains $C_1, C_2$ possessed by two honest parties at the onset of slots $sl_1 < sl_2$ satisfy $C_1^{\ulcorner k} \leq C_2$, where $C_1^{\ulcorner k}$ denotes the chain obtained by removing the last $k$ blocks from $C_1$, and $\leq$ denotes the prefix relation.

We define a new and stronger common prefix property, since thanks to GRANDPA we have a chance to finalize blocks earlier (with smaller $k$) than the probabilistic finality that Ouroboros Praos [2] provides.

Definition 4 (Strong Common Prefix (SCP)): Assuming that the common prefix property is satisfied with parameter $k$, strong common prefix with parameter $k \in \mathbb{N}$ ensures that there exist $k' < k$ and a slot number $sl_1$ such that for any two chains $C_1,C_2$ possessed by two honest parties at the onset of $sl_1$ and $sl_2$, where $sl_1 < sl_2$, we have $C_1^{\ulcorner k'} \leq C_2$.
In a nutshell, the strong common prefix property ensures that there is at least one block which is finalized earlier than the others.

It has been shown [4] that persistence and liveness are satisfied if the block production ensures the chain growth, chain quality and common prefix properties. Persistence ensures that if a transaction is seen in a block deep enough in the chain, it will stay there; liveness ensures that if a transaction is given as input to all honest players, it will eventually be inserted in a block, deep enough in the chain, of an honest player.

### Security Proof of BABE

We first prove that BABE satisfies the chain growth, chain quality and strong common prefix properties in one epoch. Second, we prove that BABE is secure by showing that BABE satisfies persistence and liveness across multiple epochs.

Before starting the security analysis, we give the probabilities of being selected as a slot leader [2], or of no one being selected. We use the notation $sl = \bot$ if a slot $sl$ is empty, $sl = 0_{L}$ if $sl$ is given to only one late honest party ($\D$ behind the current slot) and $sl = 0_S$ if $sl$ is given to only one synchronized honest party.

Similarly, where $\mathcal{P}$ is the set of indexes of all parties, $\mathcal{H}_L$ is the set of indexes of all late and honest parties, and $\mathcal{H}_S$ is the set of indexes of all honest and synchronized parties, using Proposition 1 in [2] we can bound $p_{0_S}$ and $p_{0_L}$ as $p_{0_S} \geq \phi(\alpha_S)(1-c) \geq \alpha_Sc(1-c)$ and $p_{0_L} \geq \phi(\alpha_L)(1-c) \geq \alpha_Lc(1-c)$, where $\alpha_S$ denotes the total relative stake of synchronized and honest parties and $\alpha_L$ denotes the total relative stake of honest and late parties. For the rest, we denote $\alpha = \alpha_S + \alpha_L = \gamma\alpha + \beta\alpha$, where $\gamma + \beta = 1$ and $\alpha$ is the relative stake of honest parties.
In Lemma 1 and Lemma 2, we prove that a late party can be at most $\D$ behind the current slot. If a late party is a slot leader, then his block is added to the best chain only if there are at least $2\D$ consecutive empty slots, because he sends his block $\D$ slots later and his block may be received $\D$ slots later by other honest parties because of the network delay. Having late parties in BABE influences chain growth.

Theorem 1 (CG): Let $k, R, \D \in \mathbb{N}$ and let $\alpha = \alpha_S + \alpha_L = \gamma\alpha + \beta\alpha$ be the total relative stake of honest parties. Then, the probability that an adversary $\A$ makes BABE violate the chain growth property (Definition 1) with parameters $s \geq 6 \D$ and $\tau = \frac{\lambda c\alpha(\gamma+ \lambda \beta)}{6}$ throughout a period of $R$ slots is no more than $2\D Rc \exp({-\frac{(s-5\D)\lambda c\alpha(\gamma+ \lambda \beta)}{16\D}})$, where $c$ is the constant from the leadership probability and $\lambda = (1-c)^{\D}$.

Proof: We define two types of slots. We call a slot $2\D$-right isolated if the slot leader is one late party and the next $2\D - 1$ slots are empty (no party is assigned). We call a slot $\D$-right isolated if the slot leader is only one synchronized honest party (not a late party) and the next consecutive $\D-1$ slots are empty. Now consider a chain owned by an honest party in $sl_u$ and a chain owned by an honest party in $sl_v \geq sl_u + s$. We need to show that honest parties' blocks are added most of the time between $sl_u$ and $sl_v$. Therefore, we need to find the expected number of $2\D$-right isolated slots between $sl_u$ and $sl_v$, given that the relative stake of late parties is $\alpha_L = \beta \alpha$, and the expected number of $\D$-right isolated slots, given that the relative stake of synchronized honest parties is $\alpha_S = \gamma\alpha$. Remark that a slot can be either $2\D$-right isolated or $\D$-right isolated or neither of them.
Consider the chains $C_u$ and $C_v$ in slots $sl_u$ and $sl_v$ owned by the honest parties, respectively, where $sl_u$ is the first slot of the epoch. We can guarantee that $C_u$ is one of the chains of everyone in $sl_u + 2\D$, and the chain $C_v$ is one of the chains of everyone if it is sent in slot $sl_v - 2\D$. Therefore, we are interested in the slots between $sl_u + 2\D$ and $sl_v - 2\D$. Let us denote the set of these slots by $S = \{sl_u + 2\D, sl_u+2\D+1,...,sl_v-2\D\}$. Remark that $|S| = s-4\D$.

Now, we define a random variable $X_t \in \{0,1\}$ where $t\in S$. $X_t = 1$ if $t$ is $2\D$- or $\D$-right isolated, with respect to the probabilities $p_\bot, p_{0_L}, p_{0_S}$. With $\lambda = (1-c)^{\D}$ and $\alpha = \alpha_L+\alpha_S = \beta\alpha+ \gamma \alpha$, we can then lower bound $\Pr[X_t = 1]$.

Remark that $X_t$ and $X_{t'}$ are independent if $|t-t'| \geq 2\D$. Therefore, we define $S_z = \{t\in S: t \equiv z \text{ mod }2\D\}$, where all $X_t$ indexed by $S_z$ are independent and $|S_z| > \frac{s-5\D}{2\D}$. We apply a Chernoff bound to each $S_z$ with $\delta = 1/2$.

Recall that we want to bound the number of $2\D$- and $\D$-right isolated slots. Let's call this number $H$. If for all $z$, $\sum_{t \in S_z}X_t \geq |S_z|\mu/2$, then $H = \sum_{t\in S} X_t \geq |S|\mu/2$. With the union bound, since $\mu \geq \lambda c\alpha(\gamma+ \lambda\beta)$,

We find that in the first $s$ slots of an epoch the chain grows by $\tau s$ blocks with the probability given in (2). Now consider the chain growth from slot $sl_{u+1}$ to $sl_{v+1}$. We know that the chain grows by at least $\tau s -1$ blocks between $sl_{u+1}$ and $sl_v$. So, the chain grows one more block for sure if $sl_{v+1}$ is $\D$- or $2\D$-right isolated, which happens with probability at least $\lambda c\alpha(\gamma+\lambda\beta)$.
If we apply the same for each $sl > sl_u$, we obtain the claimed bound, given that $|S| = s-4\D$ and, if $s \geq 6\D$, $|S| \geq \frac{s}{3}$ (so $\tau s = \frac{\lambda c\alpha(\gamma+ \lambda \beta)}{6}s \leq \frac{\lambda c\alpha(\gamma+ \lambda \beta)}{2}|S|$).

Theorem 2 (CQ): Let $k,\D \in \mathbb{N}$ and $\epsilon \in (0,1)$. Let $\alpha(\gamma+(1-c)^\D\beta)(1-c)^\D \geq (1+\epsilon)/2$, where $\alpha = \alpha_S+\alpha_L = \gamma\alpha + \beta\alpha$ is the relative stake of honest parties. Then, the probability that an adversary $\A$ whose relative stake is at most $1-\alpha$ violates the chain quality property (Definition 2) with parameters $k$ and $\mu = 1/k$ in $R$ slots is at most $Re^{-\Omega(k)}$.

Proof (sketch): The proof is very similar to the proof in [2]. It is based on the fact that the number of $2\D$- and $\D$-right isolated slots exceeds the number of other non-empty slots because of the assumption $\alpha(\gamma+(1-c)^\D\beta)(1-c)^\D \geq (1+\epsilon)/2$. Remark that the probability of having a $2\D$-right isolated slot is at least $\alpha\beta(1-c)^{2\D}$, the probability of having a $\D$-right isolated slot is at least $\alpha\gamma(1-c)^{\D}$, and their sum is greater than $1/2$ because of the assumption.

Theorem 3 (SCP): Let $k,\D \in \mathbb{N}$ and $\epsilon \in (0,1)$. Let $\alpha(\gamma+(1-c)^\D\beta)(1-c)^\D \geq (1+\epsilon)/2$, where $\alpha = \alpha_S+\alpha_L = \gamma\alpha + \beta\alpha$ is the relative stake of honest parties. Assuming that the GRANDPA finality gadget finalizes a block at most $\kappa$ slots later with probability $\theta$, the probability that an adversary $\A$ whose relative stake is at most $1-(\alpha_L+\alpha_S)$ violates the strong common prefix property with parameter $k$ in $R$ slots is at most $(\theta Rc^{k+1}(1-c)^{\kappa - k} + (1-\theta))\exp(\ln R − \Omega(k-2\D))$.
Proof sketch: First of all, we need to show that the common prefix property is satisfied under the honest relative-stake assumption. With a proof similar to Theorem 5 in [2], we can conclude that the common prefix property can be violated with probability at most $\exp(\ln R − \Omega(k-2\D))$.

The SCP property is violated if there are no two chains $C_1,C_2$ at any slot number $sl_1$ such that $C_1^{\ulcorner k'} \leq C_2$ with $k'\leq k$, where $C_2$ is a chain of an honest party in slot $sl_2>sl_1$. If $\kappa$ slots later the chain has grown by more than $k$ blocks, then the probabilistic finality outpaces the GRANDPA finality gadget. So, the strong common prefix property is violated if, for all $\kappa$ slots after a non-empty slot, the chain grows by more than $k$ blocks, or the GRANDPA finality gadget finalizes a block only after $\kappa$ slots, given that the GRANDPA finality gadget finalizes a block at most $\kappa$ slots later with probability $\theta$. This happens with probability at least $\theta Rc^{k+1}(1-c)^{\kappa - k} + (1-\theta)$ in $R$ slots. Remark that even if $\theta = 0$, we still have the common prefix property as in Ouroboros Praos [2].

Theorem 4 (Persistence and Liveness): Fix parameters $k, R, \D, L \in \mathbb{N}$, $\epsilon \in (0,1)$ and $r$. Let $R \geq 24k/c(1+\epsilon)$ be the epoch length and $L$ the total lifetime of the system. Then BABE satisfies persistence [2] with parameter $k$ and liveness with parameter $s \geq 12k/c(1+\epsilon)$ with probability $1-\exp({\ln L\D c-\Omega(k-\ln tqk)})$, where $r= 8tqk/(1+\epsilon)$ is the resetting power of the adversary during the randomness generation.

Proof (sketch): The proof is very similar to Theorem 9 in [2]. The idea is as follows: the randomness for the next epoch is resettable until the slot number $R/2(1+\epsilon) > 12k/c(1+\epsilon)$.
Now let's check the chain growth in $s = 12k/c(1+\epsilon)$ slots with $\tau= \frac{\lambda c\alpha(\gamma+ \lambda \beta)}{6}$, where $\lambda = (1-c)^\D$. The stake distribution (for epoch $e_{j+2}$), which is updated until the end of epoch $e_j$, is finalized at latest in slot number $12k/c(1+\epsilon)$ of epoch $e_{j+1}$. So it is finalized before the randomness of the next epoch ($e_{j+2}$) is generated. In addition to this, the chain growth property shows that there will be at least one honest block in the first $12k/c(1+\epsilon)$ slots. These two imply that the adversary cannot adapt his stake according to the random number for the next epoch, and this random number provides good randomness for the next epoch even though the adversary has the capability of resetting $r = 8tkq/(1+\epsilon)$ times ($t$ is the number of corrupted parties and $q$ is the maximum number of random-oracle queries for a party). So, the common prefix property is still preserved with dynamic staking. Therefore, we can conclude that persistence is satisfied thanks to the common prefix property with dynamic stake, with the probability that comes from Theorem 1. If we use the assumptions, we can simplify this probability as $\exp({\ln L\D c-\Omega(k-\ln tqk)})$. Liveness is the result of the chain growth and chain quality properties.

These results are valid assuming that the signature scheme with the account key is EUF-CMA (Existential Unforgeability under Chosen Message Attack) secure, the signature scheme with the session key is forward secure, and the VRF realizes the functionality defined in [2].

Analysis With VDF: TODO. If we use a VDF in the randomness update for the next epoch, the $r = \log tkq$ term disappears in $p_{sec}$, because we have a completely random value which does not depend on the hashing power of the adversary.

## 6. Practical Results

In this section, we find parameters of BABE in order to achieve security in BABE.
In addition to this, we show the block time of BABE in the worst cases (big network delays, many malicious parties) and in the average case. We fix the lifetime of the protocol as $\mathcal{L}=2.5 \text{ years} = 15768000$ seconds. Then we find the lifetime of the protocol in slots as $L = \frac{\mathcal{L}}{T}$. We find the network delay in terms of slot number as $\lfloor \frac{D}{T}\rfloor$, where $D$ is the network delay in seconds. Assuming that parties send their block at the beginning of their slots, the $\lfloor\cdot\rfloor$ operation is enough to compute the delay in terms of slots.

The parameter $c$ is very critical, because it specifies the number of empty slots: the probability of having an empty slot is $1-c$. If $c$ is very small, we have a lot of empty slots and so a longer block time. If $c$ is big, we may not satisfy the condition $\alpha(\gamma+(1-c)^\D\beta)(1-c)^\D \geq (1+\epsilon)/2$ needed to apply the result of Theorem 4. So, we need a tradeoff between security and practicality.

We need to satisfy two conditions, to apply the result of Theorem 4 and the result of Lemma 1. Remark that the second condition implies the first one, so it is enough to satisfy the second condition. In order to find a $c$ value which provides resistance against maximum network delays, we let $\alpha = 0.65$ and $\gamma = 0.8$. Given this, if we want to be secure even if we have maximum delay $D$, we need the following $c$ values:

- $c = 0.278$ if $\D = \lfloor \frac{D}{T}\rfloor = 1$,
- $c = 0.034$ if $\D = \lfloor \frac{D}{T}\rfloor = 2$,
- $c = 0.018$ if $\D = \lfloor \frac{D}{T}\rfloor = 3$,
- $c = 0.0125$ if $\D = \lfloor \frac{D}{T}\rfloor = 4$,
- $c = 0.0094$ if $\D = \lfloor \frac{D}{T}\rfloor = 5$,
- $c = 0.0076$ if $\D = \lfloor \frac{D}{T}\rfloor = 6$.

We compute the average block time in the case that the network delay is on average 1 second and all validators behave honestly, with $\gamma = 0.8$.
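The security/practicality tradeoff on $c$ discussed above can be explored numerically. The sketch below (hypothetical helper names, not part of the protocol) bisects for the largest $c$ that still satisfies the condition $\alpha(\gamma+(1-c)^\D\beta)(1-c)^\D \geq (1+\epsilon)/2$, for chosen values of $\alpha$, $\gamma$, $\D$ and $\epsilon$.

```python
# Illustrative sketch: find the largest admissible c for the security
# condition alpha*(gamma + (1-c)^D * beta)*(1-c)^D >= (1+eps)/2, where
# beta = 1 - gamma. Helper names are assumptions made for this example.

def condition(c, alpha, gamma, delay, eps):
    """True if the honest-stake condition holds for this c."""
    beta = 1.0 - gamma
    lam = (1.0 - c) ** delay
    return alpha * (gamma + lam * beta) * lam >= (1.0 + eps) / 2.0

def max_c(alpha, gamma, delay, eps, tol=1e-6):
    """Bisect for the largest c in (0, 1) that still meets the condition.

    The left-hand side is decreasing in c, so bisection applies. Returns
    None if even c -> 0 cannot satisfy the bound.
    """
    if not condition(0.0, alpha, gamma, delay, eps):
        return None
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if condition(mid, alpha, gamma, delay, eps):
            lo = mid
        else:
            hi = mid
    return lo
```

A larger delay bound $\D$ forces a smaller $c$ (more empty slots and a longer block time), which is the tradeoff the listed $c$ values illustrate.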
In order to find the probability of an unsynchronized party's block being added to the best chain, we find the probability of having a $2\D$-right isolated slot, meaning that the leaders are all honest and late and the next $2\D-1$ slots are empty. Remark that this definition of a $2\D$-right isolated slot is more relaxed than the definition in the proof of Theorem 1, because here we do not care about the growth of other chains as we do in the security analysis. We compute the expected number of $2\D$-right isolated slots according to the average delay (1 second), even though we use the secure $c$ value to have maximum network resistance. So, $\D = \lfloor\frac{1}{T}\rfloor$ in the computations below.

Given a non-empty slot, the probability that this slot is a $2\D$-right isolated slot (denote it $p_{H_S}$) is

The expected number of non-empty slots in $L$ slots is $Lc$. So, the expected number of $2\D$-right isolated slots among these $Lc$ slots is $\mathbb{E} = Lcp_{H_S}$. Then, the block time is $T_{block} \leq \frac{LT}{\mathbb{E}} = \frac{T}{c p_{H_S}}$.

We give graphs of the slot time required to have a block time in the $(-1,+1)$-neighborhood of the time on the x-axis, for different maximum network delay resistances ($D = 1,2,3,4,5,6$ seconds). A slot time of 0 in the graphs means that it is not possible to have the corresponding block time.

If we decide to be resistant to 6 seconds of delay, we can choose $T = 3$ and have around a 14-second block time if the average network delay is 1 second. In this case, the epoch length has to be around 27 hours to make sure that we have good randomness, and $k = 54$. If GRANDPA works well, the epoch length can be around half of 27 hours.

If we decide to be resistant to 4 seconds of delay, we can choose $T = 2$ and have around a 10-second block time if the average network delay is 1 second. In this case, the epoch length has to be around 18 hours to make sure that we have good randomness, and $k = 55$.
If GRANDPA works well, the epoch length can be around half of 18 hours.

## References

[1] Kiayias, Aggelos, et al. "Ouroboros: A provably secure proof-of-stake blockchain protocol." Annual International Cryptology Conference. Springer, Cham, 2017.

[2] David, Bernardo, et al. "Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake blockchain." Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Cham, 2018.

[3] Badertscher, Christian, et al. "Ouroboros Genesis: Composable proof-of-stake blockchains with dynamic availability." Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018.

[4] Aggelos Kiayias and Giorgos Panagiotakos. "Speed-security tradeoffs in blockchain protocols." Cryptology ePrint Archive, Report 2015/1019, 2015. http://eprint.iacr.org/2015/1019
The Open Gardens NT website is currently in development.
Open Gardens NT is a not for profit organisation founded in 2018. It owes its success to the hard work and dedication of the many volunteers who made up the nationally-run Open Garden Scheme that ran in the Territory from 1987 to 2014 and the volunteers that have now launched the Northern Territory (NT) Open Garden Scheme.
Open Gardens NT's mission is to promote the knowledge and pleasure of gardens and gardening across the Northern Territory by opening inspiring private gardens to the public.
To facilitate the opening of gardens in the Northern Territory for public viewing.
To promote the enjoyment and benefits of gardening and gardens.
To encourage community engagement in gardening by promoting horticultural education, garden design, ecological sustainability and other activities.
To cooperate with, and to participate in activities with, other organisations which have similar objects.
Sign up to the mailing list to be kept up to date on the Open Gardens scheme for the Northern Territory.
Calories of Broken Wheat Upma, Is broken wheat (dalia) healthy?
How many calories does one serving of Broken Wheat Upma have?
One serving of Broken Wheat Upma gives 109 calories. Out of which carbohydrates comprise 74 calories, proteins account for 10 calories and remaining calories come from fat which is 25 calories. One serving of Broken Wheat Upma provides about 5 percent of the total daily calorie requirement of a standard adult diet of 2,000 calories.
See here for broken wheat (dalia) recipe.
Is Broken Wheat Upma healthy?
Yes, broken wheat upma is healthy. It is made from broken wheat, onions, carrots, green peas, oil and Indian spices.
Dalia (Broken Wheat): The high fibre in dalia aids in managing diabetes. The fibre further assists in controlling cholesterol levels, thus reducing the risk of strokes. Strong bones are the backbone of our body. We are aware that with age our bone mineral density decreases, and we need a good dose of calcium, phosphorus and magnesium to maintain the health of our bones; dalia provides that. See here for 8 amazing benefits of dalia in detail.
Carrots: 1/4 cup carrot is used. Carrots contain beta-carotene, a form of vitamin A that helps prevent deterioration of the eye as one gets older and prevents night blindness. Carrots are great for the eyes. They also relieve constipation, lower blood pressure and provide fibre. Read the 11 super benefits of carrots and why to include them in your daily diet.
Green Peas: Green peas are good for weight loss, are a good source of vegetarian protein, and have insoluble fibre to relieve constipation. See whether green peas are good for diabetics and the full benefits of green peas.
Can diabetics, heart patients and overweight individuals have Broken Wheat Upma?
Yes, they can. Broken wheat is high in fibre, and this recipe has a mix of vegetables, each with its own benefits. You are always better off using a mix of vegetables, as you get more nutrients.
Can healthy individuals have Broken Wheat Upma?
Yes, this is a perfect healthy breakfast recipe.
1. Vitamin B1: Vitamin B1 protects nerves, helps in carbohydrate metabolism, prevents heart diseases and helps produce red blood cells.
How to burn 109 calories that come from one serving of Broken Wheat Upma?
\subsection*{Abstract}
Many enterprise environments have databases running on
network-attached server-storage infrastructure (referred to as {\em
Storage Area Networks} or {\em SANs}). Both the database and the SAN
are complex systems that need their own separate administrative teams.
This paper puts forth the vision of an innovative management framework
to simplify administrative tasks that require an in-depth
understanding of both the database and the SAN. As a concrete
instance, we consider the task of diagnosing the slowdown in
performance of a database query that is executed multiple times (e.g.,
in a periodic report-generation setting). This task is very
challenging because the space of possible causes includes problems
specific to the database, problems specific to the SAN, and problems
that arise due to interactions between the two systems. In addition,
the monitoring data available from these systems can be noisy.
We describe the design of
\dbtool{} which is an integrated diagnosis tool for database and SAN
administrators. \dbtool{} generates and uses a powerful abstraction
called {\em Annotated Plan Graphs (APGs)} that ties together the
execution path of queries in the database and the SAN. Using an
innovative workflow that combines domain-specific knowledge with
machine-learning techniques,~\dbtool{} was applied successfully to
diagnose query slowdowns caused by complex combinations of events
across a PostgreSQL database and a production SAN.
\eat{
We present {\sc DiaDS}, an integrated DIAgnosis tool for Databases and Storage area networks
(SANs). Existing diagnosis tools in this domain have a database-only (e.g., ~\cite{tune-oracle})
or SAN-only (e.g., ~\cite{shen05perf}) focus.
\dbtool{} is a first-of-a-kind framework
based on a careful integration of information
from the database and SAN layers; and is not a simple concatenation of
database-only and SAN-only modules. This approach
not only increases the accuracy of diagnosis, but also leads to significant improvements in efficiency.
\dbtool{} uses a novel combination of non-intrusive machine learning techniques (e.g., Kernel Density Estimation)
and domain knowledge encoded in a new symptoms database design. The machine learning part
provides core techniques for problem diagnosis from monitoring data, and domain knowledge
acts as checks-and-balances to guide the diagnosis in the right direction. This unique system design enables
\dbtool{} to function effectively even in the
presence of multiple concurrent problems as well as
noisy data prevalent in production environments.
We demonstrate the efficacy of our approach
through a detailed experimental evaluation of~\dbtool{} implemented on a real data center testbed with PostgreSQL databases and an enterprise SAN.
}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we presented our vision for an integrated database and
SAN management framework. This framework is aimed at assisting
administrators in management tasks that require an understanding of
both database and SAN environments. As an example of this vision, we
described a diagnostic tool, called \dbtool{}, that supports root
cause analysis for problems that span databases and SANs. This
integrated diagnosis is based on a novel information abstraction
called Annotated Plan Graph (APG) that captures the end-to-end mapping
of database operators and their dependencies on various SAN
components, including performance and configuration information.
Using a novel interplay of machine learning and domain knowledge
(e.g., symptoms databases), \dbtool{} progressively drills down from
the SQL query to execution plans, operators, and eventually to
performance and configuration characteristics of the SAN components.
It can then associate the impact of potential problems with the actual
symptoms to identify the root cause of the problem. We also described
some experimental scenarios of \dbtool{} diagnosis for root cause
problems occurring in database and SAN layers.
We contend that the integrated management framework and the APG
abstraction presented in this paper enable a key capability in
enterprise data center management. By providing visibility into the
SAN to database administrators and vice versa, it allows for smarter
resource planning and improved efficiencies in the data center.
\section{Introduction}
\label{sec:intro}
Database deployments in enterprise environments are typically business
critical and support high transaction rates. These deployments run on
enterprise-class storage subsystems with terabyte-scale data mapped to
the database either through a file system (referred to as {\em System
Managed Storage}) or raw volumes (referred to as {\em Database Managed
Storage}). Traditionally, storage was attached directly to high-end
database servers to meet their capacity, throughput, and bandwidth
requirements. However, economic realities of high administration costs
for islands of disconnected resources, combined with under-utilization
of statically-provisioned server and storage hardware, have
transformed the direct-attached architectures into a network-attached
setup with multiple application servers (including databases)
connected to a consolidated and virtualized storage pool; an
architecture known popularly as a {\em Storage Area Network (SAN)}.
SANs are very complex systems. A typical SAN has a hierarchy of {\em
core} and {\em edge} fibre-channel switches with {\em zoning}
configuration that controls the connectivity of server ports with one
or more heterogeneous storage controllers. The storage controllers
manage a large number of raw disks by aggregating them into logical
entities like pools and volumes. Given this complexity, database
administrators are forced to treat the SAN as a black-box, entrusting
SAN administrators to configure the required CPU, network, and storage
resources for meeting their database's performance requirements.
Such a {\em silo-based} approach for database and SAN management is
the state-of-art today. In a typical real-world scenario, database
administrators open problem tickets for the SAN administrator to
analyze and fix issues related to query slowdowns: {\em ``Queries to
the RepDB database used for report generation have a 30\% slow down in
response time, compared to performance two weeks back.''} Unless there
is an obvious failure or degradation in the storage hardware or the
connectivity fabric, the SAN administrator's response to this problem
ticket could be: {\em ``The I/O rate for RepDB tablespace volumes has
increased 40\%, with increased sequential reads, but the response time
is within normal bounds.''} This ``blame game'' may continue for
several weeks before the problem is actually fixed. In reality, the
query slowdown problem could be due to any number of causes including
suboptimal plan selection by the database due to incorrect cost
models, lock contention for the database tables, CPU saturation of a
database server, congestion in the controller ports, and others. The
lack of consistent end-to-end information may lead to either {\em
throwing iron at the problem} and creating islands of underutilized
resources, or employing highly paid consultants who understand both
databases and SANs to solve the original problem tickets.
Our vision in this paper is an integrated database and SAN management
framework. This framework combines details of both database operations
as well as SAN configuration and performance into a novel data
structure referred to as an {\em Annotated Plan Graph (APG)}. The
framework uses a combination of machine learning algorithms and domain
knowledge to help administrators with key day-to-day tasks such as
optimized allocation of SAN resources for varying database workload
characteristics, diagnosis of database performance slowdowns, and
what-if analysis related to workload or configuration changes. As a
concrete instance of our vision, this paper focuses on integrated
diagnosis of query performance slowdown in databases running over
SANs.
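As a rough, hypothetical illustration only (names, layers, and metrics below are invented, not the paper's actual APG design), an Annotated Plan Graph can be thought of as a dependency graph linking plan operators to the logical and physical SAN components on their I/O path, annotated with monitoring data:

```python
# Toy sketch of an Annotated Plan Graph (APG) node: each node carries
# monitoring annotations and dependency edges toward the SAN layers.
from dataclasses import dataclass, field

@dataclass
class APGNode:
    name: str                    # e.g. "SeqScan(orders)" or "volume-7"
    layer: str                   # "database" | "logical-storage" | "physical-storage"
    metrics: dict = field(default_factory=dict)    # performance annotations
    depends_on: list = field(default_factory=list) # edges toward the SAN

scan = APGNode("SeqScan(orders)", "database", {"runtime_s": 42.0})
volume = APGNode("volume-7", "logical-storage", {"read_iops": 1800})
controller = APGNode("controller-A", "physical-storage", {"port_util": 0.93})
volume.depends_on.append(controller)
scan.depends_on.append(volume)

def io_path(node):
    """Walk depends_on edges to recover an operator's end-to-end I/O path."""
    path = [node.name]
    for dep in node.depends_on:
        path.extend(io_path(dep))
    return path

print(io_path(scan))  # ['SeqScan(orders)', 'volume-7', 'controller-A']
```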
\subsection{Challenges in Integrated Diagnosis}
Enterprise environments are constantly evolving with changes in the
SAN configuration, the mix of database queries, as well as the
workload characteristics of other applications sharing the SAN. In
such an environment, the key challenges for diagnosis are as follows:
\vspace{1mm}
\squishlist
\item {\em Cascading of events}: Analyzing the impact of an event
across multiple layers of a system is a nontrivial problem. The cause
and effect of a problem may not be contained within a single layer,
but manifested across multiple layers (typically referred to as {\em
event flooding}).
\vspace{1mm}
\item {\em Inaccuracies in monitoring data}: Monitoring in production
environments is configured to minimize the impact on the foreground
applications. Typically, the monitoring intervals are large (5
minutes or higher), which may lead to inaccuracies (referred to as
{\em noisy} data) because the instantaneous effects of spikes and
other bursty behavior can get averaged out.
\vspace{1mm}
\item {\em High dimensional search space with complex correlations}:
An integrated analysis involves a large number of entities including
database operators, physical SAN devices, logical volumes and pools in
a SAN, and workload. Pure machine learning techniques that aim to find
correlations or regression functions in the raw monitoring data, which
otherwise may have been effective within a single layer, can be
ineffective in the integrated scenario. Existing diagnosis tools for
some commercial databases~\cite{tune-oracle} use a rule-based approach
where a root-cause taxonomy is created and then complemented with
rules to map observed symptoms to possible root causes. While this
approach has the merit of encoding valuable domain knowledge for
diagnosis purposes, it may become complex to maintain and customize.
\squishend
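A minimal numeric illustration of the second challenge above (all latency values are made up): a short burst vanishes entirely when the monitoring tool reports only a coarse-interval average.

```python
# 300 one-second latency samples (ms): a healthy 5 ms baseline with a
# 10-second spike to 200 ms, as a transient contention burst might produce.
samples = [5.0] * 290 + [200.0] * 10

five_min_average = sum(samples) / len(samples)

print(f"reported 5-minute average: {five_min_average:.1f} ms")  # 11.5 ms
print(f"actual peak latency:       {max(samples):.1f} ms")      # 200.0 ms
```

The 5-minute average looks almost healthy even though a third of a minute saw 40x latency, which is exactly the kind of noise an integrated diagnosis workflow has to tolerate.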
\subsection{Contributions}
Our vision is to leverage the existing monitoring tools for SANs and
databases to develop an integrated database and SAN management
platform. This platform will simplify the subset of administrative
tasks that require an understanding of both databases and SANs, e.g.,
problem diagnosis, resource provisioning, what-if analysis, and
disaster recovery planning. As a concrete instance of the integrated
functionality, the paper describes our prototype of an integrated
diagnosis tool (referred to as~\dbtool{}) that spans the database and
the underlying SAN that consists of end-to-end I/O paths with servers,
interconnecting network switches and fabric, and storage controllers.
Figure~\ref{fig:apgs} shows an integrated database and SAN taxonomy
with various logical (e.g., sort and scan operators) and physical
components (e.g., server, switch, and storage subsystem).
To the best of our knowledge,~\dbtool{} is the first diagnosis tool
that analyzes both SAN and database events in an integrated fashion.
The key contributions of this paper are:
\vspace{1mm}
\squishlist
\item A novel canonical representation of database query operations
combined with physical and logical entities from the SAN environment
(referred to as {\em Annotated Plan Graphs}). This representation
captures the information required for end-to-end diagnosis, and is
created using monitoring data from available database and SAN tools.
\vspace{1mm}
\item An innovative diagnosis workflow that {\em drills down}
progressively from the level of the query to database plans and to
operators, and then uses configuration dependency analysis and {\em
symptom signatures} to further drill down to the level of performance
metrics and events in components. It then {\em rolls up} using impact
analysis to tie potential root causes back to their impact on the
query slowdown. The diagnosis is accomplished using a combination of
machine learning and domain knowledge.
\vspace{1mm}
\item An empirical evaluation of~\dbtool{} on a real-world testbed
with a PostgreSQL database running on an enterprise-class storage
controller. We describe (and demonstrate) problem injection scenarios
including combinations of events at the database and SAN layers, along
with a drill-down into intermediate internal results generated
by~\dbtool{}.
\squishend
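As a toy illustration of the drill-down step in the second contribution (all numbers and the 10% cutoff are invented), the workflow can compare per-operator runtimes between a baseline and a slow execution of the same plan and keep only the operators that explain a meaningful share of the query-level regression:

```python
# Per-operator runtimes (seconds) for a baseline and a slow run of the
# same plan; only operators explaining >10% of the regression survive.
baseline = {"SeqScan(orders)": 10.0, "HashJoin": 4.0, "Sort": 2.0}
slow_run = {"SeqScan(orders)": 31.0, "HashJoin": 4.5, "Sort": 2.5}

query_regression = sum(slow_run.values()) - sum(baseline.values())  # 22.0 s

suspects = {
    op: (slow_run[op] - baseline[op]) / query_regression
    for op in baseline
    if (slow_run[op] - baseline[op]) / query_regression > 0.10
}
print(suspects)  # only SeqScan(orders) survives; drill into its SAN path next
```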
\section{The Potential of Integrated Database and SAN Tools}
\label{sec:potential}
While integrated diagnosis using \dbtool{} solves an important practical
problem, the proposed system and techniques have the potential to enable even
broader functionality. In this section, we present a few instances of these
capabilities.
\squishlist
\vspace{1mm}
\item {\it \bfseries What-if analysis}: Often database and storage
administrators have to apply changes within their respective configurations. In
typical enterprises, this either proceeds without regard to impact on the other
layer or requires extensive collaboration between the two teams. In contrast,
using techniques developed in our work, it is easy to conceive an integrated
database and SAN tool that allows administrators to proactively assess the
impact of their planned changes on the other layer. In fact, the impact analysis
component of~\dbtool{} seems to be a promising approach for developing such a
feature. While it may not completely identify all possible problems, it will
serve as a valuable check which can then lead to quicker and more focused
discussions between the teams.
\vspace{1mm}
\item {\it \bfseries Proactive diagnosis and self-healing}: Another useful
extension for~\dbtool{} is to provide proactive diagnosis and importantly,
self-healing capability. The current symptoms database design can be extended to
include, along with symptoms, possible fixes for the root cause of the problem.
Once the tool identifies a root cause, it can then apply the fix to self-heal
the environment. It is important to note that as in real life, the fix may be
required within the database or storage or a combination of both layers. An
integrated approach like ours will be crucial in identifying the right fix and
then applying it in any one layer.
\vspace{1mm}
\item {\it \bfseries Integrated Database and SAN Planning}: Along with
diagnosis, we believe that annotated plan graphs, by capturing information from
the database and SAN layers into a single construct, can lead to smarter
planning and optimization for database deployments over a SAN. For example,
decisions like the choice of storage required for given database workloads or
choice of DB query plan given the storage infrastructure can be intelligently
made using these techniques. An early work by Salem et al.~\cite{salem} presented
a similar approach for such integrated planning, though it uses a concatenation
of independent database and storage analysis components. In contrast, annotated
plan graphs provide a much tighter integration with information flow between the
two layers aiding in analysis.
\vspace{1mm}
\item {\it \bfseries Machine Learning and Domain Knowledge Interplay}: One of
the important aspects of our work is the coupling of machine learning and domain
knowledge techniques towards diagnosis. Use of domain knowledge through a
symptoms database serves as a guiding tool to the machine learning algorithms
preventing spurious correlations due to noisy data or event propagation. An
interesting course of future work is to enhance this relationship with machine
learning techniques contributing towards identifying potential symptoms which
can be checked by an expert and added to the symptoms database. Considering that
a symptoms database may never be complete, this provides a self-evolving
mechanism towards bettering the quality of the symptoms databases.
\vspace{1mm}
\item {\it \bfseries Synergy between~\dbtool{} and ADDM~\cite{tune-oracle}}: A
possible deployment of \dbtool{} is along with a more fine-grained diagnosis
tool like Oracle ADDM~\cite{tune-oracle, tune-oracle2} which uses instrumented
code to get operator level timing information. Both use a similar mechanism of
finding symptoms and then mapping them to a root cause. However, our use of
historic performance data helps in answering questions like {\it why did my
query slow down?} while ADDM helps answering questions like {\it why is my query
slow?}. A combination of the tools provides a stronger analysis engine.
\squishend
\section{Related Work} \label{section:related}
There has been much prior research for performance diagnosis in
databases~\cite{tune-oracle,automatic-hp} as well as enterprise
storage systems~\cite{genesis,shen05perf}. However, most of these
techniques perform diagnosis in an isolated manner attempting to
identify root cause(s) of a performance problem in individual database
or storage silos. Since the performance problem may lie in any one or
a combination of database (DB) and SAN layers, an integrated system
like \dbtool{} would be a useful and more efficient approach.
Recent studies that have looked at the interdependence between
database and storage systems highlight the importance of such an
integrated analysis. Reference \cite{reiss-sigmod} described how an
inaccurate storage cost model in the database query optimizer can
significantly impact the choice of query execution plans. Reference
\cite{salem} proposed an end-to-end database and storage planning
technique by characterizing the storage I/O workload of a given
database workload using an independent combination of database and
storage analysis. While sharing the same spirit, our work brings a
much tighter coupling of database-level and storage-level information
as well as capturing their interdependence using a novel Annotated
Plan Graph abstraction described in Section \ref{sec:apgs}.
\dbtool{} can be a good complement to fine-grained database diagnosis
and tuning tools like Oracle's Automatic Database Diagnostic Monitor
(ADDM)~\cite{tune-oracle}. ADDM is a database profiling and diagnosis
tool that uses expert knowledge about the database to identify
problems as well to recommend possible fixes to the problems.
Reference \cite{sqlcm} describes a server-side monitoring and analysis
system for Microsoft SQL Server that is useful during manual
diagnosis. Our work complements this research by providing a
non-intrusive and low-overhead mode of analysis that uses historic
performance data to diagnose {\it changes} in query performance. We
discuss this synergy further in Section~\ref{sec:potential}.
There has also been significant work in diagnosing performance
problems within the systems research community~\cite{peerpressure,
symptom-db}. Broadly, these techniques can be split into two
categories: (a) systems using machine learning techniques, and (b)
systems using domain knowledge. Reference \cite{peerpressure, pc-slow}
uses statistical techniques to develop models for a healthy machine,
and uses the models to identify {\it sick} machines. On the other
hand, systems like~\cite{codebook, symptom-db, symptom-format, chilukuri06} use
domain knowledge to create a {\it symptoms} database that associates
performance symptoms with underlying root causes. Such databases are
often created manually and require a high level of expertise and
resources to maintain.
We believe that for a diagnosis tool to be practically useful, a mix
of machine learning and domain knowledge will be required. Pure
machine learning techniques can be misled due to spurious correlations
in data resulting from noisy data collection or event flooding (where
a problem in one component causes another component to be
impacted). In \dbtool{}, we counterbalance this effect using suitable
domain knowledge like component dependencies, symptoms databases, and
knowledge of query plan and operator relationships.
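As a toy sketch of this interplay (the density model, thresholds, metric names, and symptom signatures below are all invented for illustration), a kernel density estimate over historical metric values can flag anomalies, and a symptoms database can then keep only the anomalies that some known root cause explains:

```python
import math

def kde_likelihood(history, value, bandwidth=1.0):
    """Average Gaussian-kernel density of `value` under historical samples."""
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(
        math.exp(-0.5 * ((value - h) / bandwidth) ** 2) / norm for h in history
    ) / len(history)

# Healthy historical samples per metric (invented numbers).
history = {
    "read_latency_ms": [10.2, 9.8, 10.5, 10.1, 9.9, 10.3],
    "port_util_pct":   [30.0, 35.0, 32.0, 28.0, 33.0, 31.0],
}
observed = {"read_latency_ms": 18.0, "port_util_pct": 33.5}

# Machine-learning step: flag metrics that are unlikely given history.
anomalies = {
    metric for metric, value in observed.items()
    if kde_likelihood(history[metric], value) < 1e-3
}

# Domain-knowledge step: a toy symptoms database maps root causes to the
# metrics they are known to disturb; unexplained anomalies are noise.
symptoms_db = {
    "volume contention":       {"read_latency_ms", "read_iops"},
    "switch misconfiguration": {"link_errors"},
}
candidates = [cause for cause, sig in symptoms_db.items() if anomalies & sig]
print(candidates)  # ['volume contention']
```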
Next, we describe Annotated Plan Graphs that capture database and storage component behavior in a single integrated abstraction.
Tak Bai (in ) is a district (amphoe) of Narathiwat Province, Thailand, with a population of 66,579 and an area of 253.45 km².
Composition
The district is subdivided into 8 subdistricts (tambon), which are further subdivided into 56 villages (muban).
References
Amphoe of Narathiwat Province
"redpajama_set_name": "RedPajamaWikipedia"
} | 576 |
POLICE DIGEST
SEIZED ITEMS: Cranston police said they seized this loaded .45 caliber handgun and a bag of pills following a traffic stop. Police said the bag contained 49 counterfeit pills.
(Courtesy of Cranston Police)
Posted Wednesday, November 3, 2021 12:56 pm
Gun, fentanyl seized in traffic stop; two charged
A loaded handgun and counterfeit pills containing fentanyl were among the illegal items uncovered during a traffic stop last week that has led to charges against two men, according to Cranston Police.
The stop occurred at approximately 4:15 p.m. on Oct. 27 in the Cranston Street area based on officers observing "various motor vehicle violations," a statement from police reads.
"A search of the vehicle resulted in the seizure of forty-nine (49) counterfeit pills stamped with the numerical identifier E-404, associated with an amphetamine commonly prescribed to treat ADHT and narcolepsy," according to the statement. "Later, a field test revealed the pills contained fentanyl, a potent pain killer responsible for multiple deadly opioid overdoses. Also discovered inside the vehicle was a quantity of crack cocaine, a digital scale, and a loaded .45 caliber handgun."
LASSITER
The operator of the vehicle, identified as Jarred Alba, 25, of 101 Oaklawn Ave. in Cranston, attempted to flee on foot during the stop but was "quickly apprehended," police say.
Alba is charged with Carrying a Firearm While Committing a Crime of Violence, License Required for Carrying a Pistol, Possession with Intent to Deliver fentanyl, Possession of crack cocaine, Disorderly Conduct and Resisting Arrest.
The sole passenger in the vehicle, identified as Troy Lassiter, 49, of 26 John St. in Johnston, is charged with possession of crack cocaine. He was also taken into custody on an active Third Division District Court bench warrant.
Alba was ordered held without bail at his arraignment, according to police, while Lassiter's bail was set at $50,000 with surety.
"The public should be warned that there is an influx of counterfeit prescription pills being sold on the black market here in Rhode Island," Chief of Police Michael Winquist said in the release. "These pills contain high purity levels of fentanyl or methamphetamine, resulting in overdose deaths of unsuspecting users in Cranston and throughout the country. These counterfeit pills are being produced to resemble prescription pills such as OxyContin, Adderall, and Xanax. These drugs are popular among high school and college students and are referred to as study drugs. Only use prescription pills prescribed to you and purchased from a legitimate pharmacy. Buying pills on the street can have deadly consequences."
Four charged in 'large-scale' Warwick marijuana operation
Four people have been charged in connection with a "large-scale illegal marijuana cultivation operation" in Warwick near the Cranston line, according to Warwick Police.
A search warrant for the buildings at 1700 and 1708 Elmwood Ave. was executed Oct. 29, police said in a press release, and "active illegal marijuana grows were confirmed" at the location. The warrants were sought as a result of an investigation that began in September based on "information that a large marijuana grow and possible fentanyl pill press" were being operated at the site.
"Investigative intelligence was developed that area juveniles had or were being solicited to work in the cultivating and harvesting of illegal marijuana at the location," the release states.
During last week's search, police uncovered "marijuana plants in various stages of growth, lighting, and irrigation and ventilation systems," according to police.
"Also located during the search were bags of packaged marijuana, buckets of recently harvested marijuana and equipment and packaging material commonly used for the sale and distribution of illegal narcotics," the release continues. "In total 368 marijuana plants in various stages of growth and over 27 pounds of harvested marijuana were seized."
Four people were arrested at the scene and charged with one count each of manufacturing/possession of marijuana and conspiracy, police said. They are Arman Matevosyan, 39, of 61 Fairfax Drive in Warwick; Gagik Davtian, 62, of 101 Sefton Ave. in Warwick; Raymond Renzi, 53, of 630 Greenville Ave. in Johnston; and Artak Ghazaryan, 43, of 61 Fairfax Drive in Warwick.
Both of the buildings involved were condemned and their electricity was cut, according to police. The release also indicates that such operations are often found "nestled in residential neighborhoods," creating risks for residents and generating complaints.
"These illegal operations are often a serious fire hazard and exhibit many building code violations," the release states. "Lack of enforcement is simply not an option particularly when intelligence suggested area juveniles were being recruited and more serious and deadly narcotics may also be manufactured on site. This particular site was just steps away from a City of Warwick recreation facility where youth sports are played."
Prosecutors: ACI inmate arranged sale of gun, drugs
An Adult Correctional Institutions inmate was sentenced to four years in federal prison last week for his role in arranging for the sale of a firearm and methamphetamine while incarcerated, according to the U.S. Attorney's office.
Tyler Bagley, 29, pleaded guilty to charges of being in possession of a firearm as a felon and conspiracy to distribute methamphetamine in U.S. District Court in July. As part of the sentence issued last week, he will serve three years of supervised release following his prison term.
Prosecutors say that in October 2020, Bagley "telephoned his then girlfriend, Bernice Chase, 39, of Providence, and, using coded language, instructed her to call a phone number he provided to her to arrange for the sale of a firearm that he had previously obtained."
"Chase called the number and arranged to meet the next day with the buyer to provide him with a Glock9mm pistol in exchange for $450," a statement from the U.S. Attorney's office continues. "About an hour after the transaction was completed, Bagley telephoned Chase and instructed her to deposit $200 into his prison account and for her to keep the remainder of the proceeds. Unbeknownst to Bagley and Chase, the individual that purchased the firearm was an undercover agent with the Bureau of Alcohol, Tobacco, Firearms, ad Explosives."
It adds: "About a month later, Bagley contacted Chase by telephone from inside the prison and, using coded language, told Chase to again contact the person that purchased the firearm and to sell him 28 grams of methamphetamine. A day later, Chase and the ATF undercover agent met, and she provided the agent with 14 grams of meth in exchange for $800. Chase told Bagley that she could not get the full 28 grams, but she was able to get 15 and made 300 dollars profit. Bagley instructed Chase to keep half of the proceeds and to deposit half into his prison account. Analysis at a DEA laboratory established that the methamphetamine sold to the undercover agent weighed 14.058 grams and was 97% pure."
According to the U.S. Attorney's office, Chase pleaded guilty to the same charges as Bagley in September of this year. Her sentencing is scheduled for January.
Warwick man convicted of federal firearms charges
A Warwick man has been convicted of illegally buying and selling 16 firearms without a federal license, according to the U.S. Attorney's office.
Ademola Kayode, Jr., 30, was found guilty of possessing a firearm as an unlawful user of controlled substances, making a false statement during the purchase of firearms, and two counts of making false statements to federal agents, a statement from prosecutors reads. The verdict came Oct. 20 following a three-day jury trial in U.S. District Court.
The case against Kayode stems from a 16-month investigation on the part of ATF agents and other law enforcement officials.
"According to the government's evidence presented at trial, an investigation by ATF agents determined that between March 25, 2015, and July 16, 2016, Kayode falsely asserted on ATF background forms required for gun purchases that he was not a user of controlled substances, when in fact he was," the U.S. Attorney's statement reads. "In total, Kayode purchased sixteen firearms in sixteen months from federally licensed firearms dealers in Rhode Island and Georgia during this period, in addition to others on the Internet. Kayode came to the attention of ATF agents because of his repeated purchases of firearms in a relatively short period of time, often the same or similar model. An investigation determined that Kayode repeatedly sold firearms without a federal firearms license to do so, and at least five of those firearms ended up in the hands of individuals who were legally prohibited from possessing them."
The statement continues: "To date, five of the sixteen firearms purchased by Kayode between March 2015 and July 2016 have been recovered by law enforcement. Three of the guns were recovered in Rhode Island, one in Atlanta, and one in Queens, New York. All were in the possession of individuals who are legally prohibited from possessing firearms."
Kayode's sentencing is scheduled for Feb. 8.
-- Daniel Kittredge
package com.googlecode.objectify.impl;
import com.google.cloud.datastore.StructuredQuery;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.LoadResult;
import com.googlecode.objectify.ObjectifyFactory;
import com.googlecode.objectify.cmd.Filter;
import com.googlecode.objectify.cmd.LoadIds;
import com.googlecode.objectify.cmd.LoadType;
import com.googlecode.objectify.cmd.Query;
import com.googlecode.objectify.util.ResultCache;
import com.googlecode.objectify.util.ResultProxy;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
/**
* Implementation of the LoadType interface.
*
* @author Jeff Schnitzer <jeff@infohazard.org>
*/
class LoadTypeImpl<T> extends Queryable<T> implements LoadType<T>
{
/** */
private final String kind;
/** Might be null; perhaps we specified a raw kind only */
private final Class<T> type;
/** Possible parent */
private final Key<T> parent;
/**
*/
LoadTypeImpl(LoaderImpl loader, String kind, Class<T> type) {
this(loader, kind, type, null);
}
/** */
LoadTypeImpl(LoaderImpl loader, String kind, Class<T> type, Key<T> parent) {
super(loader);
this.kind = kind;
this.type = type;
this.parent = parent;
}
/* (non-Javadoc)
* @see com.googlecode.objectify.impl.cmd.QueryCommonImpl#createQuery()
*/
@Override
QueryImpl<T> createQuery() {
return new QueryImpl<>(loader, kind, type);
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.Query#filter(java.lang.String, java.lang.Object)
*/
@Override
public Query<T> filter(String condition, Object value) {
QueryImpl<T> q = createQuery();
q.addFilter(condition, value);
return q;
}
/* */
@Override
public Query<T> filter(final StructuredQuery.Filter filter) {
final QueryImpl<T> q = createQuery();
q.addFilter(filter);
return q;
}
@Override
public Query<T> filter(final Filter filter) {
final QueryImpl<T> q = createQuery();
q.addFilter(filter);
return q;
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.Query#order(java.lang.String)
*/
@Override
public Query<T> order(String condition) {
QueryImpl<T> q = createQuery();
q.addOrder(condition);
return q;
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadIds#id(long)
*/
@Override
public LoadResult<T> id(final long id) {
return loader.key(this.makeKey(id));
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadIds#id(java.lang.String)
*/
@Override
public LoadResult<T> id(final String id) {
return loader.key(this.makeKey(id));
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadIds#ids(Long[])
*/
@Override
public Map<Long, T> ids(final Long... ids) {
return ids(Arrays.asList(ids));
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadIds#ids(java.lang.String[])
*/
@Override
public Map<String, T> ids(final String... ids) {
return ids(Arrays.asList(ids));
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadIds#ids(java.lang.Iterable)
*/
@Override
public <S> Map<S, T> ids(final Iterable<S> ids) {
final Map<Key<T>, S> keymap = new LinkedHashMap<>();
for (final S id: ids)
keymap.put(this.makeKey(id), id);
final Map<Key<T>, T> loaded = loader.keys(keymap.keySet());
return ResultProxy.create(Map.class, new ResultCache<Map<S, T>>() {
@Override
protected Map<S, T> nowUncached() {
final Map<S, T> proper = new LinkedHashMap<>(loaded.size() * 2);
for (final Map.Entry<Key<T>, T> entry: loaded.entrySet())
proper.put(keymap.get(entry.getKey()), entry.getValue());
return proper;
}
});
}
/**
* Make a key for the given id, which could be either string or long
*/
private <T> Key<T> makeKey(final Object id) {
final com.google.cloud.datastore.Key key = factory().keys().createRawAny(
loader.ofy.getOptions().getNamespace(),
Keys.raw(this.parent),
kind,
id);
return Key.create(key);
}
/* (non-Javadoc)
* @see com.googlecode.objectify.cmd.LoadType#parent(java.lang.Object)
*/
@Override
public LoadIds<T> parent(final Object keyOrEntity) {
final Key<T> parentKey = factory().keys().anythingToKey(keyOrEntity, loader.ofy.getOptions().getNamespace());
return new LoadTypeImpl<>(loader, kind, type, parentKey);
}
/** */
private ObjectifyFactory factory() {
return loader.getObjectifyImpl().factory();
}
}
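The `ids(Iterable)` implementation above preserves the caller's requested order by pairing each raw key with its original id in a `LinkedHashMap`, then re-keying the loaded entities. The following is a self-contained sketch of that remapping step only — `makeKey`, `loadByIds`, the `Car` kind, and the `String` stand-ins for `Key` and entity types are illustrative, not Objectify API:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IdRemapSketch {

    // Stand-in for Objectify raw-key creation: kind plus id, illustrative only.
    static String makeKey(String kind, Object id) {
        return kind + "(" + id + ")";
    }

    // Mirrors ids(Iterable): remember raw-key -> requested-id pairs in order,
    // "load" entities by raw key, then re-key the result by the original ids.
    static Map<Long, String> loadByIds(List<Long> ids) {
        Map<String, Long> keymap = new LinkedHashMap<>();
        for (Long id : ids)
            keymap.put(makeKey("Car", id), id);

        // Pretend the datastore returned one entity per raw key.
        Map<String, String> loaded = new LinkedHashMap<>();
        for (String key : keymap.keySet())
            loaded.put(key, "entity-for-" + key);

        // Same remapping as nowUncached(): id -> entity, insertion-ordered.
        Map<Long, String> proper = new LinkedHashMap<>(loaded.size() * 2);
        for (Map.Entry<String, String> entry : loaded.entrySet())
            proper.put(keymap.get(entry.getKey()), entry.getValue());
        return proper;
    }

    public static void main(String[] args) {
        // Requested order 3, 1, 2 survives the round trip.
        System.out.println(loadByIds(Arrays.asList(3L, 1L, 2L)).keySet());
    }
}
```

Because both maps are `LinkedHashMap`s, the returned map iterates in the exact order the ids were requested, which is the property the real `ids()` relies on.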
Q: Deleted records are returned from the database. Here is my NHibernate configuration, where I specify that records with "IsDeleted = true" in the database should not be returned:
public abstract class NHSessionProviderBase {
private ISessionFactory _factory;
private readonly object SyncObject = new object();
private readonly Assembly _entitiesAssembly;
private readonly bool _schemaUpdate;
private readonly FilterDefinition SoftDeleteDefinition;
public string ConnectionString { get; }
public NHSessionProviderBase(string connectionString, Assembly entitiesAssembly, bool schemaUpdate = false) {
SoftDeleteDefinition = new FilterDefinition(
"softdelete",
string.Format("(IsDeleted = :{0})", "isDeleted"),
new Dictionary<string, IType> { { "isDeleted", NHibernateUtil.Boolean } }, true);
ConnectionString = connectionString;
_entitiesAssembly = entitiesAssembly;
_schemaUpdate = schemaUpdate;
}
public ISession OpenSession() {
var session = GetSessionFactory().OpenSession();
ApplyFilters(session);
return session;
}
public IStatelessSession OpenStatelessSession() {
return GetSessionFactory().OpenStatelessSession();
}
private void ApplyFilters(ISession session) {
var type = SoftDeleteDefinition.ParameterTypes.First().Value as PrimitiveType;
session
.EnableFilter(SoftDeleteDefinition.FilterName)
.SetParameter(SoftDeleteDefinition.ParameterNames.First(), type.DefaultValue);
}
private ISessionFactory GetSessionFactory() {
if (_factory == null) {
lock (SyncObject) {
if (_factory == null) {
var configuration = PostgreSQLConfiguration.PostgreSQL82.ConnectionString(ConnectionString)
.IsolationLevel(IsolationLevel.ReadUncommitted);
var mapping = AutoMap.Assembly(_entitiesAssembly)
.Where(x => x.GetInterfaces().Contains(typeof(IBaseEntity)))
.UseOverridesFromAssembly(_entitiesAssembly)
.Conventions.AddAssembly(_entitiesAssembly)
.Conventions.Add<EnumConvention>()
.Conventions.Add<TableNameConvention>()
.Conventions.Add<BinaryColumnLengthConvention>()
.Conventions.Add<CustomForeignKeyConvention>();
AddIgnoredBase(mapping);
AddConventions(mapping.Conventions);
var cfg = Fluently.Configure()
.Database(configuration//.ShowSql()
.UseReflectionOptimizer())
.Mappings(c => c.AutoMappings.Add(mapping))
.ExposeConfiguration(x => x.SetListener(ListenerType.Delete, new SoftDeleteEventListener()))
.ExposeConfiguration(x => x.AddFilterDefinition(SoftDeleteDefinition));
if (_schemaUpdate) {
cfg.ExposeConfiguration(x => new SchemaUpdate(x).Execute(false, true));
}
var build = cfg.BuildConfiguration();
foreach (var classMap in build.ClassMappings) {
if (typeof(IDeletableEntity).IsAssignableFrom(classMap.MappedClass)) {
classMap.AddFilter(SoftDeleteDefinition.FilterName, SoftDeleteDefinition.DefaultFilterCondition);
}
}
_factory = build.BuildSessionFactory();
}
}
}
return _factory;
}
private void AddIgnoredBase(AutoPersistenceModel mapping) {
foreach (var ignored in this.IgnoredBaseClasses()) {
mapping.IncludeBase(ignored);
}
}
private void AddConventions(SetupConventionFinder<AutoPersistenceModel> conventions) {
foreach (var convention in this.CustomConventions()) {
conventions.Add(convention);
}
}
protected virtual Type[] CustomConventions() { return new Type[0]; }
protected virtual Type[] IgnoredBaseClasses() { return new Type[0]; }
}
Here it works:
var query = _session.Query<Store>(x => x.Organization.Xin == xin).Select(x => new {
x.Name,
x.IsDeleted
});
But when you use SelectMany, it returns deleted records:
var q = _session.Query<Organization>(x => x.Xin == xin).SelectMany(x => x.Stores).Select(x => new {
x.IsDeleted,
x.Name
});
A: Maybe this will help someone:
public class OrganizationOverride : IAutoMappingOverride<Organization> {
public void Override(AutoMapping<Organization> mapping) {
mapping.HasMany(x => x.Stores).ApplyFilter<SoftDeleteDefinition>();
}
}
and the filter itself:
public class SoftDeleteDefinition : FilterDefinition {
public SoftDeleteDefinition() {
this.WithName("mappingSoftDelete");
this.WithCondition(string.Format("(IsDeleted = :{0})", "isDeleted"));
this.AddParameter("isDeleted", NHibernateUtil.Boolean);
}
}
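With that mapping override in place, the collection-level filter still has to be enabled on the session before querying. A hedged sketch of the query side — the filter name `mappingSoftDelete` and `SoftDeleteDefinition` come from the answer above, `Store`/`Organization`/`xin` from the question, and `provider` is an assumed `NHSessionProviderBase` instance; this is not verified against a live NHibernate setup:

```csharp
// Sketch: enable the collection filter, so that SelectMany over
// Organization.Stores also excludes soft-deleted rows.
using (var session = provider.OpenSession()) {
    session.EnableFilter("mappingSoftDelete")
           .SetParameter("isDeleted", false);

    var stores = session.Query<Organization>()
                        .Where(x => x.Xin == xin)
                        .SelectMany(x => x.Stores)
                        .Select(x => new { x.IsDeleted, x.Name })
                        .ToList();
}
```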
1978 VX8 (asteroid 32741) is a main-belt asteroid. It has an orbital eccentricity of 0.17400790 and an inclination of 1.42722°.
This asteroid was discovered on 7 November 1978 by Eleanor F. Helin and Schelte J. Bus at Palomar.
See also
List of asteroids
Main-belt asteroid
External links
Main-belt asteroids
Astronomical objects discovered in 1978
For cold, windy days when you don't want to miss a run, the Nike Therma-Sphere Element Hybrid Men's Running Hoodie is equipped to keep you warm and comfortable. It combines the everyday look of a half-zip sweatshirt with built-in mittens and a hood for added warmth.
Soft, fleece-like Nike Therma-Sphere fabric throughout helps you stay warm by holding in your body's natural heat. On the chest and back are woven overlays to help block wind.
A zipped chest pocket lets you keep your cash and cards close and secure. Two more pockets on the sides let you quickly stash small items.
Fabric: Body: 93% polyester/7% elastane. Overlays: 100% polyester.
Waterford, Farmington Hills, Lyon Township, MI -For the third time in less than a month, an incident today involving a careless driver put Road Commission for Oakland County (RCOC) employees at risk while working near a busy road.
Fortunately, no one was hurt, but a car drove into the barreled-off work zone, scattering barrels but luckily missing RCOC staff and equipment. Today's incident took place on M-59 east of Williams Lake Road in Waterford Township.
This follows the incident Wednesday this week in which a driver slammed into a Road Commission pickup truck on M-5 in Farmington Hills causing the truck to flip over and sending two RCOC employees to the hospital (one remains hospitalized).
And, on Aug. 20, a semi plowed into the back of an RCOC truck in a work zone on I-96 near Milford Road in Lyon Twp.
"We are imploring motorists to slow down and be aware that there are men and women in the work zones," stated RCOC Deputy Managing Director/County Highway Engineer Gary Piotrowicz. "When driving, it is imperative that you watch your surroundings and slow down in work zones. Working near traffic is a difficult job, and the last thing we want to see is road workers put at risk by careless drivers," Piotrowicz added.
For more on the Road Commission for Oakland County, visit http://www.rcocweb.org/.
Q: Are expletives (cursing, swear words or vulgar language) allowed on SE sites? Can I use salty, expletive-laden language on Stack Exchange sites, like Q*Bert?
For more information, see "What kind of behavior is expected of users?" in the Help Center.
Return to FAQ index
A: No.
Using expletives is not acceptable behavior on any Stack Exchange site and is a violation of the Code of Conduct, even on Meta. There are a very small handful of exceptions (such as if you were talking about the word itself on a language site), but in general you should not use expletives anywhere, under any circumstances. If you can't effectively communicate what you need to say without resorting to lowest common denominator cursing, then keep it to yourself.
If you use expletives, you will likely get a warning. Any language that becomes a source of disruption is subject to removal through editing. If you use even what one might consider the mildest of expletives for style and someone removes them, leave them out.
If you continue to use expletives, you will be placed on timed suspension.
Saw Scaled Viper
(Echis Carinatus)
Description: The saw-scaled viper is a very venomous snake and is common throughout India. The body is short; adults measure between 300 and 500 mm (12-20 in.). Scales are strongly keeled and rough in appearance. The head is broader than the neck; the scales on the upper surface of the head are small and strongly keeled. The large eye has a vertical pupil. The tail is very short and thin. The back is light or dark brown, brick-red, gray or sand-colored with zigzag patterns. The top of the head usually has a distinct arrowhead mark. The underside is white, speckled with brown. Several different color forms exist. This snake is called the saw-scaled viper because it rubs the sides of its body together, producing a rasping sound. It is a very ill-tempered snake and will attack any intruder. Its venom is highly hemotoxic and quite potent; many deaths are attributed to this species. Found in a variety of environments, it is common in rural settlements, cultivated fields, arid regions, barns and rock walls, and it is also found in deserts. This snake is very well camouflaged and, owing to its small size, is rarely noticed.
Scalation of Saw-scaled viper
Reproduction: Male combat has been observed. Females bear 4-8 living young between April and August and may produce two clutches a year. In Maharashtra (Ratnagiri Dist.) over 2,000 saw-scaled vipers were recorded in one week (July). The same area was visited in December and not a single snake could be found. Hibernation or aestivation in laterite crevices may account for this dramatic disappearance.
Distribution: Saw-scaled vipers are found throughout India except West Bengal and the Northeast, and also in Pakistan and Sri Lanka. Found up to 1,500 m (4,920 ft).
Look-alikes: Common Cat Snake, Sand Boas, Russell's Kukri Snake, Sind Awl-headed Snake.
Images of look-alike snakes:
Common Cat Snake
Russell's Kukri Snake
Posted by Dhanashri at 10:45 PM No comments:
Indian Rat Snake
(Ptyas Mucosa)
General Information: Snakes fascinate us more than any other creature on earth. Because people don't know much about them, snakes are misunderstood and feared. In India most of our snakes are absolutely harmless to humans, while only four species are responsible for thousands of deaths each year. Indian snakes range in size from a few centimeters to almost ten meters in length. Snakes live in scorching deserts, humid forests, cool hill ranges, in lakes, streams, and even in the sea. The variety of colors and patterns rivals the butterflies, while their grace and fluidity are unmatched in nature. Snake behaviour and adaptations are endlessly exciting, but the first step is to be able to identify them. So let us get a bit closer to them. Let us get a little friendly with such fascinating creatures.
Scalation of Indian Rat Snake
Description: Rat snakes are non-venomous. They are large, fast-moving snakes which grow to a length of 2½ meters or more. Color varies from pale yellow, olive, brown, gray or black. The body is lightly or strongly marked with black; markings are usually distinct on the tail. Lip scales are usually separated by vertical black lines. The underside often has prominent dark cross-bars. Scales are smooth or keeled (upper rows). The head is broader than the neck. The large eye has a round pupil. Rat snakes are found wherever rats and frogs/toads are prevalent, so of course they are often found in rice fields and in human habitation. As hill forests are cleared and agriculture spreads to the slopes, rat snakes too are spreading "upwards". Recent records report them 2,000 meters up; formerly they were rarely seen above 1,000 meters. The rat snake is active during the day, hunting for rodents, frogs, toads and birds along fields and in bushes. Large rat snakes can give a painful bite and are quick to defend themselves.
Reproduction: The female lays about 8 to 16 eggs. At hatching, the young measure between 320 and 470 mm (13-19 in.) and start their diet on frogs and toads. During the breeding season, male rat snakes perform a combat dance. This is actually their way of protecting the area they live in and preventing other male snakes from entering their territory; the dance has nothing to do with mating, as people claim.
Distribution: This snake inhabits a wide range of habitats - coastal, arid, wet, mountainous, open fields as well as forests. Found throughout South and Southeast Asia, from sea level to 4,000 m (13,120 ft).
Look-alikes: The Indian Rat Snake's look-alikes are Cobras, the Banded Racer, the Indo-Chinese Rat Snake and the King Cobra.
Images of look-alike snakes:
Banded Racer
Posted by Dhanashri at 11:29 PM 5 comments:
Labels: Indianratsnake, Ratsnake, Snakes
Plain Tiger
(Danaus chrysippus)
The Plain Tiger is one of the commonest butterflies you come across in the city. A beautiful butterfly with a black border and white spots, its wingspan is between 70 and 80 mm. This is a tawny, medium-sized butterfly.
-Description-
Male Upperside: Reddish brown with black borders on both wings and a black apex on the fore wing. Fore wing with a variable number of white spots in the costal area and at the apex. Hind wing with 4 small black spots around the cell in the male. The fourth spot in the male is a cluster of scent-scales that attract females.
Male Underside: Dull orange. Fore wing dark brown in the upper half with white spots in the black area and hind wings with six black spots.
Female Upperside and underside: The coloration and markings of the forewing are similar to the male's. The hind wing has 3 black spots around the cell instead of the 4 in the male. The underside is the same as the male's.
Also known as the African Monarch, the African Queen, the Lesser Wanderer and the AK Butterfly, it is the commonest of all Indian butterflies and the strongest flier of the genus Danaus. It is found throughout the country, including the deserts, and in the hills up to 3,000 m. It flies in an undulating fashion and generally remains on the wing for considerably long periods. The female of the Danaid Eggfly, Hypolimnas misippus; the Leopard Lacewing, Cethosia cyane; and the Indian Fritillary, Argyreus hyperbius hybrida, mimic this butterfly.
Distribution: Clearings and edges in open forests, scrubs and savannas, neglected corners and gardens in human habitations, and riversides are the best places to look for this butterfly. Though it breeds throughout the year, it is most commonly seen during the monsoons or just after, but it persists even in summer.
Habits: The Plain Tiger is protected against attacks from avian and reptilian predators by virtue of the unpalatable alkaloids it ingests during its larval stage. Its bright colors advertise its unpalatability. Its flight is slow and laborious, which gives predators sufficient time to recognize it. It flies straight and close to the ground with few vertical deviations. When at rest, the wings are closed over the back. However, a newly emerged specimen, still too wet and soft to fly, flaps them slowly to reveal the brighter colors of the upperside. While basking, it rests close to the ground, on small bushes, etc., and spreads its wings with its back towards the sun, so that the wings are completely exposed to the sun's rays.
Reproduction: The male courts the female by hovering over her with light wing-beats. To lay eggs, the female perches at the edge of a leaf, curls her abdomen to reach the lower surface and lays a single egg at a time. The female may lay well over half a dozen eggs on the same plant, especially on a large bush of Calotropis, but never more than one on a leaf. The egg is silvery-white and shiny; it is tall with an apical point and ribbed sides. After the caterpillar hatches, its first meal is the eggshell itself. The caterpillar is cylindrical and of almost uniform width from the head to the abdominal tip. Its most striking characteristics are a banded body and three pairs of long black tentacles. Initially the caterpillar is yellowish with black bands, but later it turns dark chocolate-brown or black with alternate narrow whitish and yellowish bands and a series of dorsolateral, rather long yellow spots.
Larval Host Plants: The caterpillars feed on "milkweed" plants. These, in our region, include a large bush, Calotropis (Sanskrit - Arka; Marathi & Hindi - Arka), and Asclepias curassavica (Sanskrit - Kakatundi).
Posted by Dhanashri at 9:57 PM 2 comments:
Grass Jewel
(Family Lycaenidae: Blues)
After introducing you all to two very common birds found around human habitation, it is now time to get familiar with the tiniest of the butterflies (wingspan 15-22 mm) - the Grass Jewel.
Male Upperside - Brown, but the coloration varies between dry and humid areas; in dry areas it is much paler than in humid areas. Forewings are uniform, with a very ill-defined anticiliary dark line in some specimens. Hindwings have a subterminal series of round black spots crowned with pale ochraceous (pale yellow or orange), the posterior four spots generally well defined and outwardly edged with white.
Male Underside - Pale silky brown. The forewing has the following white markings: a short line on the inner and outer sides of the discocellulars; a transverse, slightly curved discal series of small, more or less incomplete rings; a transverse postdiscal series of disconnected slender lunules; a subterminal series of similar but more regular lunules; and a terminal broken line, followed by a dark unbroken anticiliary line. The ground-colour between the two short discocellular lines, that enclosed within each ring of the discal markings, and that between the subterminal lunules and the terminal line is slightly darker than on the rest of the wing.
Female upperside and underside: Ground-colour and markings as in the male, but the latter larger and more clearly defined; on the hind wing, the yellow crowning the black spots on the tornal area on the upperside, and surrounding the same on the underside, is wider and more prominent. Antennae, head, thorax and abdomen as in the male.
Habits - A unique habit which at once distinguishes this species from all other Blues is the way in which it moves its wings. As soon as it settles after a flight, it sways all four wings from side to side, then slows down and finally sits still. Its flight is weak, fluttering and in short bouts; it remains within half a meter of the ground and settles often. Males occasionally bask with their wings half open. Besides small herbs and flowers, males also feed on wet soil, where they may assemble in small groups.
Reproduction - The female lays her eggs singly among the bracts of flower-buds, bending her abdomen to reach deep into the base of the bract (please refer to the image given above). The egg is disc-shaped, glassy green with a blue tinge, and has fine, smooth, microscopic reticulations that form irregular polygons. The caterpillar stays hidden among the bracts and buds and feeds on them. It is green or brown with dorsal and subdorsal longitudinal lines on the body. Pupation takes place close to where the caterpillar fed, as the dense bracts provide good shelter.
Larval Host Plants - The host plants are varied, and since this is a wide-ranging butterfly, there are likely many more as yet unreported species. The recorded host plants include Hygrophila auriculata (Sanskrit - Kokilaksha; Hindi - Talimakhana) and Lantana camara (Sanskrit - Chaturangi; Marathi - Ghaneri; Hindi - Khaneri).
Posted by Dhanashri at 9:10 PM 1 comment:
About Red Vented Bulbul
Red Vented Bulbul
(Pycnonotus Cafer)
Bulbul... we must all have heard of this bird, one of the frequent visitors to our gardens. Bulbul is a group name with almost 15-20 sub-species, but the commonest of them all is the Red Vented Bulbul. This bird is frequently seen in our gardens as well as in scrub jungle. These birds gather in large flocks on Peepul or Banyan trees to eat the fruit, but they also have a varied diet of insects, vegetables and flower nectar.
Description - The Red-vented Bulbul is easily identified by its short crest giving the head a squarish appearance. The body is dark brown with a scaly pattern while the head is darker or black. The rump is white while the vent is red. The black tail is tipped in white. Sexes are similar in plumage, but young birds are duller than adults.
Nesting - The nest is built in a bush at a height of around 2-3 m; sometimes in lamp shades, lofts, wire bundles, electric housings and similar places, and occasionally inside houses or in a hole in a mud bank. Nests are made from grass, twigs, rootlets, paper, plastic, cobwebs, foil, etc. Male and female share parental responsibilities equally. Two or three eggs is a typical clutch. The breeding season is from February to July. The eggs are pale pinkish with spots of darker red, denser at the broad end.
Ecological Note - Bulbuls are good pollinators and also insect controllers.
Cultural Note - Bulbul is a Persian name for the nightingale, which featured extensively in Persian poetry. The name was given to the Red Vented Bulbul of Bengal and the actual bird was forgotten! It now features extensively in our poetry. In 19th-century India these birds were frequently kept as cage pets and for fighting, especially in the Carnatic region.
Related Species - Yellow Throated Bulbul (rare; only individual sightings recorded in South India). Red Whiskered Bulbul (found in the western coastal region of India and in North East India).
Red Vented Bulbul Call - http://www.indiabirds.com//birdsounds/redwishkeredbulbul%20(2).mp3
About House Sparrow...
House Sparrow
(Passer Domesticus)
Today we will get familiar with a very familiar bird - House Sparrow.
We must all have seen this cute little bird around our houses and gardens since childhood, but how much do we know about it?
Here in Maharashtra we know this bird as "chimnee". This bold bird is closely linked to human beings and is one of the first birds to visit a bird-feeder. Sparrows freely mix with Bulbuls, White-eyes, Munias, etc.
Male - White cheeks. Black throat and chest. Back of head chestnut, extending to eye. Gray cap. Bill black. Broad, white upper wingbar. Back feathers edged with chestnut. Underparts whitish gray. In winter, the black bib is hidden by pale tips to the breast feathers that eventually wear off and reveal the black.
Female - Dingy brown all over. Unstriped gray brown chest and underparts. Large pale yellowish eyestripe. Black and straw-colored stripes on back. Bill yellowish. Eyes black. Crown plain gray brown.
Nesting - Nesting sites include wall holes, spaces under the roof and any place in the house where nesting material can be placed and eggs can be laid. They nest throughout the year and parental responsibilities are shared equally by male and female. About 3-5 greenish-white eggs with brown spots are laid at a time.
Cultural Notes - Sparrows feature extensively in nursery rhymes. In the Rigveda, a reference is made to a sparrow injured by a wolf, which was treated by the Ashwinikumar twins, the physicians of the gods.
Status - Though commonly seen around human habitation, the number of this little bird is currently declining. Help save this beautiful species from vanishing.
House Sparrow Call - A familiar chirping call when feeding or roosting. The breeding male sings tsi-tsi, chip, chip, chew, cheer while displaying with flapping wings.
********************************************************************************************************************************************************
Posted by Dhanashri at 12:22 AM 3 comments:
Labels: Birding, Birds, Nature
\section{Introduction \label{intro}}
Strong electron correlations lead to a wide variety of exceptional phenomena. In high-$T_\mathrm c$ superconductors strong correlations play an important role. For instance, they induce the Mott insulating state in the undoped parent compounds, despite the fact that the electronic bands are half filled. A rich phase diagram appears upon the introduction of charge carriers in these Mott insulators. Under certain circumstances charge carriers spontaneously order along lines, called stripes, which separate undoped antiferromagnetic (AF) regions \cite{Zaanen1989cmd}. Diffraction experiments have demonstrated (static) charge and spin modulation in La$_{1.6-x}$Nd$_{0.4}$Sr$_x$CuO$_4$ (Nd-LSCO) \cite{tranquada1995esc} and La$_{2-x}$Ba$_x$CuO$_4$ (LBCO) \cite{Fujita2002cbc,abbamonte2005smm},
leaving no doubt that stripes exist in these compounds. Two conditions need to be satisfied for stripe ordering to occur: (1) A doping near $x$ = 1/8, corresponding to a filling factor of 1/2 for Cu sites along the stripe. This condition relates stripes to the 1/8 anomaly \cite{Moodenbaugh1988spl}, a strong suppression of superconductivity at this doping. (2) A structural phase transition from the low-temperature orthorhombic (LTO) to the low-temperature tetragonal (LTT) phase, which is believed to provide a pinning potential for stripes, through the specific rotations of oxygen octahedra surrounding the Cu atoms \cite{tranquada1995esc}.
In a wider variety of compounds, among which La$_{2-x}$Sr$_x$CuO$_4$ (LSCO), incommensurate spin ordering is observed but no evidence for charge ordering \cite{Yamada1998dds,Lee1999nss,Arai1999isp,mook1999charge}. In LSCO such spin ordering can be observed throughout the doping range $x$ = 0.02--0.25 \cite{Yamada1998dds}. Peaks in neutron diffraction data (either at zero or finite energy) resemble those due to stripes, and it is therefore reasonable to propose the presence of a fluctuating stripe phase when conditions (1) and (2) for a static stripe phase are not fulfilled \cite{Kivelson2003hdf}.
While static stripe ordering in Nd-LSCO and LBCO has a pronounced effect on superconducting and transport properties, such as $T_\mathrm c$, the thermopower and the Hall coefficient $R_\mathrm H$ \cite{Nakamura1992atp,Hucker1998cso,Adachi2001cgt}, the consequences of fluctuating stripes\slash incommensurate spin correlations in LSCO and YBCO remain elusive. An interesting hypothesis is that \emph{if} fluctuating stripes are conducting \cite{kivelson1998elc} and fluctuate along some preferential direction, an anisotropy occurs in the macroscopic conductivity of the host material. Ando \emph{et al.}~\cite{Ando2002era} have investigated conductance anisotropy in LSCO in the lightly hole-doped ($x$ = 0.02--0.04) region and in underdoped YBCO, finding the lowest resistance in the direction along the spin stripes. In addition to conductance anisotropy, several other fingerprints of stripes have been investigated. Anisotropic magnetoresistance (MR) was reported for underdoped YBCO and related to stripes \cite{Ando1999maa}. Lavrov \emph{et al.}~\cite{Lavrov2003nsc} have searched for nonlinear current-voltage effects related to stripe motion induced by applied electric fields. Their negative result implies that if charged stripes exist in thin films, they should be pinned strongly.
In our work we proceed to investigate conductance anisotropy in LSCO thin films (0.10 $<x<$ 0.25) structured into Hall bridges oriented in various directions with respect to the LSCO Cu-O-Cu direction with 5$^\circ$ resolution. Furthermore, we investigate the transverse in-plane ($I$\,$\perp$\,$B$, $B$\,$\parallel$\,$c$) MR, motivated by the observation of linear transverse MR in LSCO single crystals for doping $x=$ 0.12--0.13 by Kimura \emph{et al.}~\cite{Kimura1996ipo}, which might well be a signature of a fluctuating stripe phase. We observe a sensitivity of the conductance anisotropy for lattice symmetry and we find indications for inhomogeneity on a small length scale. We carefully consider whether these could be due to the presence of stripes, discussing alternative explanations as well. In particular, we discuss the role of structural antiphase boundaries, which will be shown to be nucleated from substrate terrace edges.
\section{Experimental details\label{sec:exp}}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig1a}
\includegraphics[scale=\schaal]{Fig1b}
\caption{\label{FIG1} (a) Sample structure consisting of 36 LSCO Hall bars (one of them shown in the inset) covering $\alpha =$ 0--175$^\circ$ with 5$^\circ$ resolution. Bonding pads and wiring leads are covered by Ti/Au. The STO [100] axis aligns with the long side of the sample. Hall bar dimensions are shown in the inset (in $\mu$m). (b) $R(T)$ curves for three different LSCO compositions.}
\end{figure}
LSCO thin films (thicknesses $d$ in the range 30--60 nm) were grown by pulsed laser ablation from sintered LSCO targets on SrTiO$_3$ (001) (STO), (La$_{0.3}$Sr$_{0.7}$)(Al$_{0.65}$Ta$_{0.35}$)O$_3$ (100) (LSAT), and NdGaO$_3$ (110) (NGO) substrates. All STO substrates except one were chemically etched \cite{koster1998qis}. NGO and STO substrates were annealed for at least two hours at 950~$^\circ$C in an oxygen environment, LSAT substrates for 10 hrs at 1050~$^\circ$C. Atomic force microscopy (AFM) confirmed atomically flat substrate surfaces with unit-cell-height substrate steps. The miscut angle typically was 0.1--0.2$^\circ$.
Films were deposited in 0.13~mbar oxygen at a temperature of 700~$^\circ$C. The laser fluence was 1.2~J\,cm$^{-2}$. The film growth was monitored by reflection high-energy electron diffraction, which showed intensity oscillations, indicative of layer-by-layer growth. The thin films were annealed for 15 min at the deposition pressure and temperature, after which the oxygen pressure was increased to 1~atm, in which the films were annealed for 15 min at 600~$^\circ$C and 30 min at 450~$^\circ$C, and subsequently cooled down to room temperature.
$c$-Axis oriented epitaxial growth was confirmed by x-ray diffraction. Lattice mismatches result in tensile strain values of 3.2\%, 2.4\%, and 2.0\% for STO, LSAT, and NGO, respectively.
Hall bars in various orientations [figure \ref{FIG1}(a)] were defined by photolithography and Ar-ion milling. The STO [100] axis aligns with the long side of the sample. For each experiment, insulating behavior of the substrate was confirmed. Electrical contacts were made by wire bonding to sputtered Ti/Au contact pads, defined by lift-off. Resistance and Hall measurements were performed in a commercial cryostat (Quantum Design, PPMS) with magnetic fields applied perpendicular to the thin film. Resistance measurements were independent of applied current (typically 1--100 $\mu$A) and Hall measurements were linear over the entire magnetic field range ($B=$ -9~T to +9~T). No significant changes in resistivity $\rho$ or $T_\mathrm c$ were observed as a result of thermal cycling.
Figure \ref{FIG1}(b) shows $R(T)$ plots for samples with different Sr contents. We verified that the target stoichiometry ($x$ = 0.10, 0.12, and 0.25) was transferred 1:1 to the thin film by comparing Hall coefficients obtained for our thin films with bulk values \cite{Ando2004ehc}. For the compositions $x$ = 0.10 and $x$ = 0.12, the Hall angle $\rho/R_\mathrm H$ showed a $T^2$-dependence over 50--300~K, whereas for $x$ = 0.25 $\rho/R_\mathrm H$ linearly depends on temperature. This behavior is in perfect agreement with reported high-quality single-crystal and thin-film data on LSCO \cite{Xiao1992uhe,Hwang1994std,Ando2004ehc}.
\section{Results and Discussion}
\subsection{Conductance anisotropy \label{anisotropy}}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig2}
\caption{\label{FIG2} Conductance anisotropy measured by the transverse resistance for an LSCO thin film ($x=0.12$) for different orientations $\alpha$. Arrows indicate anomalies which will be discussed in Sec.~\ref{anomalies}. }
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig3}
\caption{\label{FIG3} (a) Schematic representation of a step-edge-induced antiphase boundary (dashed line) in LSCO on STO. Substrate (b--e) and LSCO thin film (f,g) surfaces. Films shown in (f) and (g) were grown on the substrates in (d) and (e), respectively. Scale bars denote 1~$\mu$m.}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig4}
\caption{\label{FIG4} Room temperature transverse resistance ($B$ = 0~T) versus Hall bar orientation with respect to step-edge direction for different substrates and $x$ = 0.10 (red, open symbols), $x$ = 0.12 (black, solid), and $x$ = 0.25 (blue crosses). $\alpha_\mathrm{se}$ varies randomly between 10 and 140$^\circ$. }
\end{figure}
Conductance anisotropy, a smoking gun for the presence of fluctuating stripes \cite{Ando2002era}, is most effectively examined by measuring the transverse resistance $R_{xy} = U_y/I_x(\alpha)$ ($x$ and $y$ orthogonal directions) for $B$ = 0~T, since the angle dependence of the longitudinal resistance $R(\alpha)$ is easily affected by small inhomogeneities in the sample. We find a relatively large signal for $R_{xy}$ for all substrates and doping levels (figure \ref{FIG2}), which cannot be attributed to misalignment of the voltage contacts, given the resolution of the applied photolithography technique.
One possible explanation for the anisotropy is the stepped substrate surface, which might induce structural antiphase boundaries in the film. Such antiphase boundaries have been observed experimentally in YBa$_2$Cu$_3$O$_7$ on STO using high-resolution electron microscopy \cite{Wen1993ssy}. Figure \ref{FIG3}(a) shows schematically what an antiphase boundary would look like for LSCO. The CuO$_2$ planes are interrupted at the structural antiphase boundary. Typical surfaces of our substrates as measured by AFM are shown in figures \ref{FIG3}(b--e). In (f,g) it can be seen that the film surface reflects the morphology of the substrate. We have determined the step-edge orientation $\alpha_\mathrm{se}$ of all our substrates from AFM data obtained before deposition of the LSCO thin films.
Figure \ref{FIG4} shows that the sign of $R_{xy}$ can be predicted with high certainty from the orientation of the Hall bar with respect to the step-edge orientation ($\alpha - \alpha_\mathrm{se}$) for all our samples. This provides evidence that antiphase boundaries in LSCO thin films are dominantly nucleated from substrate step edges. The large spread in $R_{xy}$ reflects the randomness exhibited by step edges. For $x$ = 0.10--0.12, we estimate an antiphase-boundary resistivity of $\rho_\mathrm{AB} \approx 10^{-9}$~$\Omega$\,cm$^2$ at room temperature, which is in line with typical interface resistances involving high-$T_\mathrm c$ cuprates \cite{Beck1996lbg,Hilgenkamp2002gbh}.
From a typical critical current density value ($J_\mathrm c \approx$~10$^6$~A\,cm$^{-2}$) we estimate an $I_\mathrm c R_\mathrm n$ product of about 1~mV, which is a reasonable value \cite{Hilgenkamp2002gbh}.
For $x$ = 0.25, $\rho_\mathrm{AB}$ is about 10 times smaller, which is in agreement with an expected decrease in thickness of the depletion region \cite{Hilgenkamp2002gbh}.
\subsection{Conductance anisotropy anomaly at 105~K \label{anomalies}}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig5}
\caption{\label{FIG5} (a) Numerical derivative of the transverse resistance $R_{xy}$ showing a clear jump $\Delta$ at 105~K, coincident with the cubic-tetragonal transition in the STO substrate at 105~K. (b) These effects are absent on LSAT, providing evidence that $\Delta$ is indeed related to the STO phase transition. On LSAT weak instabilities are found in the range 60--80~K predominantly for orientations close to the Cu-Cu direction (the curves showing such an instability are plotted by solid symbols). (c) Orientational dependence of $\Delta$, measured at 105 K for LSCO on STO. (d) $R_{xy}$ for $x=0.25$, in which the anomaly can be observed without differentiation because of the small value of the antiphase-boundary-induced background. The arrow denotes the resistance change estimated from the stress developing in the LSCO layer upon the STO phase transition, using the pressure-dependent resistivity data from Nakamura \emph{et al.}~\cite{Nakamura1999tpl}. (e) As (a) and (b) but for $x=0.10$ and $x = 0.25$ (both on STO) and for $x= 0.12$ on NGO. The latter shows an instability at $T=78$~K for $\alpha = 45^\circ$.}
\end{figure}
In figure \ref{FIG2}, anomalies can be observed in $R_{xy}(T)$ at 105 K. These are most clearly revealed upon numerical differentiation. Figure \ref{FIG5} shows that discontinuities in $\mathrm d R_{xy}/\mathrm d T$ are present for all doping levels, but only for STO substrates. By defining $\Delta \equiv (\mathrm d R_{xy}/\mathrm d T)_{T\downarrow \mathrm{105 K}} - (\mathrm d R_{xy}/\mathrm d T)_{T\uparrow \mathrm{105 K}}$ we demonstrate in figure \ref{FIG5}(c) that $\Delta$ depends on the Hall bar orientation $\alpha$. The largest $\Delta$ is observed for $\alpha = 45^\circ$, whereas for $\alpha = 90^\circ$ $\Delta$ hardly exceeds the noise level. We do not observe anomalies in the longitudinal resistance.
The sudden change in $\mathrm d R_{xy}/\mathrm d T$ at 105~K coincides with a cubic-tetragonal phase transition in STO \cite{Courtens1972bsp}. The fact that such behavior is only observed for STO substrates proves that it is in fact induced by this structural transition. Noise in $\mathrm d R_{xy}/\mathrm d T$ below 105~K can then be attributed to rearrangement of domains in the substrate, since the $c$ axis can align along three orthogonal directions. The deviation from the cubic unit cell in the tetragonal phase ($T<105$~K) is small ($c/a$ = 1.00056 at 56~K \cite{Lytle1964xrd}). Since the LSCO film is epitaxially connected to the substrate, we expect the change of the substrate's lattice to be fully passed on to the LSCO film.
The lattice parameter changes associated with the LTO-LTT transition in LBCO ($a_\mathrm {LTT}/a_\mathrm {LTO}$ = 1.0017 and $b_\mathrm{LTO}/a_\mathrm{LTO}$ = 1.0036 \cite{Katano1993css}) are a few times larger than the structural changes induced by the STO substrate. Yet, for LBCO these small modifications represent a significant change in the tilting direction of the oxygen octahedra, providing the necessary pinning potential to stabilize a static stripe phase \cite{tranquada1995esc}. Pinning of the fluctuating stripe phase present in LSCO, as a result of the lattice asymmetry induced by the STO phase transition, would naturally lead to the observed change in conductance anisotropy. There are, however, a few difficulties with this stripe pinning scenario. First, one might expect a stronger doping dependence, as a static stripe phase appears in single crystals of Nd-LSCO and LBCO only around $x = 1/8$. Second, the appearance of static stripes in these compounds coincides with discontinuities in transport properties, in particular in $R_\mathrm H$ \cite{Nakamura1992atp, Adachi2001cgt}. We do not observe any peculiarity in $R_\mathrm H$ around 105~K.
Transport properties in LSCO and other high-$T_\mathrm c$ compounds are generally sensitive to applied pressure, pointing toward a delicate dependence of electronic structure on crystal structure \cite{Yamada1992pes,Nakamura1999tpl}. The observed conductance anisotropy anomalies might therefore be a manifestation of pressure effects on transport properties. We estimate the stress developing in the LSCO layer due to the strain change at 105~K from the Young's modulus of 10$^{11}$--10$^{12}$ Pa \cite{Nakamura1999tpl,Sarrao1994cem} to be 0.06--0.6 GPa. Using data from Nakamura \emph{et al.}~\cite{Nakamura1999tpl} we estimate for $x$ = 0.25 at 105~K a maximum resistivity change induced by such stress of 10$^{-7}$~$\Omega$\,cm, leading to $\Delta R_{xy} \approx 20$~m$\Omega$ for our structure. This value compares well to the measured $\Delta R_{xy}$ for this doping; see the arrow in figure \ref{FIG5}(d). For lower $x$, the pressure dependence of LSCO is stronger and $\Delta R_{xy}$ will likely be larger. This is consistent with our observations, although a quantitative comparison is difficult because antiphase boundaries induce a stronger background in $R_{xy}$. The expected stress effect in the longitudinal resistance ($\Delta R \approx 4 \Delta R_{xy}$) is smaller than the noise we measure in $R$, which explains why we do not observe anomalies in $R$. Only the differential measurement of $R_{xy}$ is sensitive enough to reveal the STO cubic-tetragonal phase transition through a resistivity measurement.
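The order-of-magnitude arithmetic behind these estimates follows from elementary elasticity; as a sketch (taking the STO tetragonality quoted above as the strain scale and a film thickness $d \approx 50$~nm, both assumptions for this rough estimate):

```latex
\begin{align*}
\epsilon &\approx c/a - 1 = 5.6\times10^{-4},\\
\sigma &= E\,\epsilon \approx \left(10^{11}\text{--}10^{12}~\mathrm{Pa}\right)\times 5.6\times10^{-4}
        \approx 0.06\text{--}0.6~\mathrm{GPa},\\
\Delta R_{xy} &\approx \frac{\Delta\rho}{d} \approx \frac{10^{-7}~\Omega\,\mathrm{cm}}{5\times10^{-6}~\mathrm{cm}} \approx 20~\mathrm{m}\Omega.
\end{align*}
```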
For NGO, no structural phase transitions are reported in the temperature range 50--200~K \cite{Senyshyn2004tep}. For LSAT there might be a small distortion from cubic symmetry at and below 150~K \cite{chakoumakos1998tel}. We do not observe a transition near 150~K in figure \ref{FIG5}(b). Weak fluctuations for $T>$~150~K could be traced to variations in the temperature sweep rate. Both on LSAT and NGO [figures \ref{FIG5}(b) and (e), bottom panel] we observe instabilities in $\mathrm d R_{xy}/\mathrm d T$ in the temperature range 60--80~K, predominantly for Hall bar orientations close to $\alpha$ = 45$^\circ$ and 135$^\circ$. These instabilities are only observed upon cooling down, and not for increasing temperature, unlike the effects on STO. Although speculative, they could be explained by a structural transition in the film, rather than in the substrate. Perhaps the high-temperature tetragonal (HTT) structure is sufficiently clamped by the substrate to lower the transition to the low-temperature orthorhombic (LTO) phase to 60--80~K.
\subsection{Magnetoresistance}
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig6}
\caption{\label{FIG6} (a) Temperature evolution of MR in LSCO on STO ($x$ = 0.12). Solid lines are parabolic fits to data for 50, 70, and 80~K. At 90~K, a crossover to linear MR is observed. (b) The onset of linear MR between 85 and 110~K is observed for various substrates and $x$. The zero-field resistances at 50~K are shown in the graphs. (c) MR for several Hall bars on two different samples. We do not observe sample-to-sample variations. }
\end{figure}
Magnetoresistive properties of LSCO and of high-$T_\mathrm c$ cuprates in general have been investigated widely, both in the superconducting ($T<T_\mathrm c$) regime \cite{suzuki1991rtm,Ando1995ldb,Xiang2009fsa} and in the normal state \cite{Kimura1996ipo,Balakirev1998oml,Harris1995vkr,Ando2003aml,Vanacken2005hfm}. Many studies have focused on the violation of Kohler's rule \cite{Harris1995vkr}, the anisotropy of MR in relation to stripes \cite{Ando2003aml}, and high magnetic fields \cite{Vanacken2005hfm}. Most work has been done with single crystals. Here we show that the low-field magnetoresistance $\Delta \rho/\rho_0$ of LSCO thin films shows intriguing non-monotonic behavior as a function of temperature, with a crossover from quadratic to linear MR at 90~K. Such behavior [figure \ref{FIG6}(a,b)] is observed for all doping values and all substrates that were used for this research. The literature reports \cite{Balakirev1998oml} quadratic MR without a linear component for much thicker LSCO films on LaSrAlO$_3$, which puts the LSCO under \emph{compressive} strain (with a moderate lattice mismatch of 0.5\%).
The linear MR ($T>$ 90~K) in our thin films is weakly dependent on $x$ and substrate type, and comparable in magnitude to the linear MR reported in single crystals ($x$ = 0.12--0.13) by Kimura \emph{et al.}~\cite{Kimura1996ipo}. In both cases, the linear MR weakly decreases with increasing temperature over 90--300~K. The quadratic component ($T<$ 90~K) in our data is suppressed rapidly between 50 and 85~K. This behavior is similar for single crystals. The crossover that we observe at 90~K might therefore be interpreted as a sudden onset of a linear term above 90~K in combination with a gradual suppression of quadratic MR with increasing temperature. Interestingly, the linear MR appearing in single crystals ($x$ = 0.12--0.13) is present down to 50~K, and as a result, the MR decreases monotonically with temperature.
\begin{figure}
\centering
\includegraphics[scale=\schaal]{Fig7}
\caption{\label{FIG7} The magnetoresistance plotted versus the Hall resistivity. We observe scaling as $\Delta \rho/\rho_0 \propto |\rho_{xy}(B)|$, with the constant of proportionality independent of temperature. Film thicknesses and measurement currents are specified in the graphs.}
\end{figure}
The doping dependence of the linear MR observed in single crystals strongly suggests a relation to the $1/8$ anomaly. Kimura \emph{et al.}~\cite{Kimura1996ipo} propose that it results from magnetic-field-enhanced fluctuations towards the stripe phase. An alternative explanation in terms of a van Hove singularity crossing the Fermi energy at $x \approx$ 0.13 has been rendered obsolete by more recent angle-resolved photoemission spectroscopy \cite{Ino2002dde}. The absence of a doping dependence for the linear MR in thin films makes an explanation in terms of fluctuating stripes less likely. Dynamical incommensurate spin correlations, which might be indicative of fluctuating stripes, have been observed for the entire doping range $x$ = 0.05--0.25 \cite{Yamada1998dds}. Nevertheless, one would expect singular behavior near $x=1/8$ because many properties related to spin fluctuations are anomalous at this doping, such as the peak width in inelastic neutron scattering data \cite{Yamada1998dds} and the magnetic correlation length \cite{Kimura1999nss}. Moreover, it is not clear whether relatively low magnetic fields can affect stripes, since $B> 70$~T is required to meet the energy scale typical for dynamical spin correlations ($\mu_\mathrm B B> 4$~meV). Linearity of the MR holds down to roughly 1~T in single crystals as well as in thin films. Lastly, if the linear MR in LSCO were related to stripes, it would be unclear why thin films have a different doping dependence than single crystals. The same reasoning holds for all intrinsic explanations for linear MR, e.g., spin-mediated mechanisms. An obvious difference between single crystals and thin films is the presence of antiphase boundaries in the latter, as discussed in section \ref{anisotropy}.
Recently, large linear and non-saturating MR was reported for non-magnetic silver chalcogenides \cite{Xu1997lmn,Husmann2002mgs} and InSb \cite{Hu2008cqr}. It was argued by Parish and Littlewood \cite{Parish2003nmh} that the observed low-field MR can arise from sample inhomogeneity, present in the form of nanowires of excess Ag in the silver chalcogenides and Sb droplets in InSb polycrystals \cite{Hu2008cqr}. The linearity originates from a misalignment between applied voltage and current paths, which results in the mixing of Hall and longitudinal voltages. Since our samples exhibit mobilities of $\mu \approx$ 5--7 cm$^2$/Vs (at 50~K), all our measurements are taken in the low-field ($\mu B \ll 1$) regime. For Ag$_{2+\delta}$Se it was shown \cite{Husmann2002mgs} that the magnetoresistance follows a modified Kohler's rule: $b(T)\Delta \rho/\rho_0 = f(\rho_{xy}/\rho_0)$, with $B$ and the carrier density $n$ entering implicitly through $\rho_{xy}/d = R_{xy}(B,n)$. In our case we find a surprisingly simple non-Kohler-type scaling: $\Delta \rho/\rho_0 \propto |\rho_{xy}(B)|$, with the constant of proportionality being independent of temperature; see figure \ref{FIG7}. This suggests that the linear term in the MR has the same origin as the Hall resistivity. Clearly, the mixing of the Hall and longitudinal resistances would provide a straightforward explanation for this behavior.
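The mixing mechanism can be made explicit in a minimal sketch. Suppose a fraction $\varepsilon$ of the Hall voltage is admixed into the measured longitudinal voltage by deflected current paths ($\varepsilon$ is a geometry-dependent parameter assumed here, not determined by our measurements); then at fixed temperature

```latex
\begin{equation*}
\rho^{\mathrm{meas}}_{xx}(B) = \rho_{xx}(B) + \varepsilon\,\bigl|\rho_{xy}(B)\bigr|
\quad\Rightarrow\quad
\frac{\Delta\rho}{\rho_0} \approx \varepsilon\,\frac{\bigl|\rho_{xy}(B)\bigr|}{\rho_0},
\end{equation*}
```

which is linear in $B$ in the low-field regime where $\rho_{xy} \propto B$. Note that the temperature independence of the measured proportionality constant in $\Delta \rho/\rho_0 \propto |\rho_{xy}(B)|$ would then additionally require $\varepsilon$ to scale with $\rho_0(T)$, a constraint this sketch does not by itself explain.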
One might wonder why the linear MR for $x$ = 0.25 is slightly larger in magnitude than the linear MR for $x$ = 0.10 and $x$ = 0.12, despite the fact that the antiphase-boundary resistivity is significantly smaller for $x$ = 0.25. It should be noted that the resistivity of LSCO itself is also much smaller for this doping, and we expect the ratio between the two to determine the strength of the linear MR. The inhomogeneity scenario also provides a natural explanation for the absence of linear MR in much thicker LSCO films \cite{Balakirev1998oml}: the effects of the structural antiphase boundaries might be washed out toward the thin-film surface by the introduction of other types of defects, giving rise to more isotropic disorder. Moreover, the Hall voltage is smaller for thicker films, as it is inversely proportional to the film thickness.
Some questions remain concerning the inhomogeneity scenario. First, it is unclear why the linear MR vanishes below 90 K. Neither the longitudinal nor the Hall resistivity shows an apparent change of behavior around 90 K. Second, we do not observe a strong sample-to-sample variation in the magnitude of the linear MR [see figure \ref{FIG6}(c)], which might be expected if inhomogeneity is the underlying cause. Third, there is no dependence of the linear MR on the Hall bar angle ($\alpha - \alpha_\mathrm{se}$). The answers to the last two questions may reside in the exact identification of the inhomogeneity in our samples. Perfectly straight and parallel antiphase boundaries with a homogeneous $\rho_\mathrm{AB}$ might not give rise to linear MR, as the current would be homogeneously distributed and flowing parallel to the Hall bar. Deviations from this perfect picture more likely cause linear MR and do not necessarily depend on $\alpha - \alpha_\mathrm{se}$. The small length scales of such imperfections might provide enough averaging to prevent sample-to-sample variations. Numerical calculations will have to corroborate the proposed scenario. If the mechanism fails to account for our observations, an electronic origin (e.g., stripes) of the linear MR will have to be reconsidered.
\section{Conclusion}
The transverse resistance $R_{xy}$ in zero magnetic field, usually treated as background in a Hall measurement, provides valuable information about the microstructure of the material under study. We have used it to demonstrate that unit-cell-high substrate step edges are the dominant source of structural antiphase boundaries in LSCO thin films. The antiphase-boundary resistivity was estimated to be $\rho_\mathrm{AB} \approx 10^{-9}$~$\Omega$\,cm$^2$ (room temperature). In addition, we show that for LSCO $R_{xy}$ can reveal structural phase transitions of the substrate on which the films are grown. Such transitions are usually difficult to detect and require advanced spectroscopic analysis equipment.
For the detection of stripes, conductance anisotropy is an important observable. We have shown that in LSCO thin films conductance anisotropy is dominantly caused by antiphase boundaries, which mask possible stripe effects. Future experiments in this direction will therefore require substrates with an extremely small vicinal angle, and Hall bars at sub-micron scale.
The silver chalcogenides have recently attracted interest because of their non-saturating linear MR, which makes them suitable for use as magnetic field sensors \cite{Xu1997lmn,Husmann2002mgs}. The MR is linear down to surprisingly low magnetic fields in these materials. This has been explained by the presence of disorder, giving rise to the mixing of longitudinal and Hall resistances \cite{Parish2003nmh}. Our LSCO thin films show linear low-field MR in the entire doping range 0.10 $<x<$ 0.25. We have found the MR to scale with the Hall resistivity as $\Delta \rho/\rho_0 \propto |\rho_{xy}(B)|$ with a temperature-independent constant of proportionality. This suggests that the linear MR of LSCO thin films is related to disorder as well. Structural antiphase boundaries generated from substrate steps are a likely source of disorder. However, linear MR also appears in single crystals of LSCO, although in a narrower doping range ($x$ = 0.12--0.13) \cite{Kimura1996ipo}. It is unclear why these crystals in particular would contain many antiphase boundaries. If the presence of such defects can be excluded experimentally, the linear MR must have a different origin, at least in single crystals. In that case it will be worth reconsidering the role of stripes, which might similarly deflect the current from the longitudinal direction, causing the mixing of longitudinal and Hall resistances.
\ack
We gratefully acknowledge Jan Zaanen for fruitful discussions. This work is financially supported by the Dutch Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO) through VIDI (A.B.) and VICI (H.H.) grants, and the NanoNed program.
\section*{References}
\bibliographystyle{iopart-num}
package org.wso2.carbon.osgi;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.ExamFactory;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.spi.reactors.ExamReactorStrategy;
import org.ops4j.pax.exam.spi.reactors.PerClass;
import org.ops4j.pax.exam.testng.listener.PaxExam;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.testng.Assert;
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;
import org.wso2.carbon.container.CarbonContainerFactory;
import org.wso2.carbon.kernel.utils.CarbonServerInfo;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.inject.Inject;
import static org.wso2.carbon.container.options.CarbonDistributionOption.copyFile;
/**
* Base OSGi class to test the OSGi status of the org.wso2.carbon.core bundle.
*
* @since 5.0.0
*/
@Listeners(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
@ExamFactory(CarbonContainerFactory.class)
public class BaseOSGiTest {
private static final Logger logger = LoggerFactory.getLogger(BaseOSGiTest.class);
@Inject
private BundleContext bundleContext;
@Inject
private CarbonServerInfo carbonServerInfo;
@Configuration
public Option[] createConfiguration() {
return new Option[] { copyCarbonYAMLOption() };
}
@Test
public void testBundleContextStatus() {
Assert.assertNotNull(bundleContext, "Bundle Context is null");
}
@Test
public void testCarbonCoreBundleStatus() {
Bundle coreBundle = null;
for (Bundle bundle : bundleContext.getBundles()) {
if (bundle.getSymbolicName().equals("org.wso2.carbon.core")) {
coreBundle = bundle;
break;
}
}
Assert.assertNotNull(coreBundle, "Carbon Core bundle not found");
Assert.assertEquals(coreBundle.getState(), Bundle.ACTIVE, "Carbon Core Bundle is not activated");
}
/**
* Replace the existing deployment.yaml file with populated deployment.yaml file.
*/
private Option copyCarbonYAMLOption() {
Path carbonYmlFilePath;
String basedir = System.getProperty("basedir");
if (basedir == null) {
basedir = Paths.get(".").toString();
}
carbonYmlFilePath = Paths.get(basedir, "src", "test", "resources", "runtime", "deployment.yaml");
return copyFile(carbonYmlFilePath, Paths.get("conf", "deployment.yaml"));
}
}
Kuwamachi Station is a passenger railway station located in the city of Iga, Mie Prefecture, Japan, operated by the private railway operator Iga Railway.
Lines
Kuwamachi Station is served by the Iga Line, and is located 5.8 rail kilometers from the starting point of the line at Iga-Ueno Station.
Station layout
The station consists of a single side platform serving bidirectional traffic. The station is unattended. The platform is short and can only handle trains of two cars in length.
Platform
Adjacent stations
History
Kuwamachi Station was opened on July 18, 1922. Through a series of mergers, the Iga Line became part of the Kintetsu network by June 1, 1944, but was spun out as an independent company in October 2007. The station has been unattended since 1977.
Passenger statistics
In fiscal 2019, the station was used by an average of 109 passengers daily (boarding passengers only).
Surrounding area
Mie Prefectural Iga Hakuho High School
Ueno Kuwamachi Post Office
Okanami General Hospital
See also
List of railway stations in Japan
References
External links
Railway stations in Japan opened in 1922
Railway stations in Mie Prefecture
Iga, Mie
/* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.flowable.camel;
import static org.assertj.core.api.Assertions.assertThat;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.Route;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.flowable.engine.test.Deployment;
import org.flowable.spring.impl.test.SpringFlowableTestCase;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
@Tag("camel")
@ContextConfiguration("classpath:generic-camel-flowable-context.xml")
public class SimpleProcessTest extends SpringFlowableTestCase {
@Autowired
protected CamelContext camelContext;
protected MockEndpoint service1;
protected MockEndpoint service2;
@BeforeEach
public void setUp() throws Exception {
service1 = (MockEndpoint) camelContext.getEndpoint("mock:service1");
service1.reset();
service2 = (MockEndpoint) camelContext.getEndpoint("mock:service2");
service2.reset();
camelContext.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
from("direct:start").to("flowable:camelProcess");
from("flowable:camelProcess:serviceTask1").setBody().exchangeProperty("var1").to("mock:service1").setProperty("var2").constant("var2").setBody().exchangeProperties();
from("direct:receive").to("flowable:camelProcess:receive");
from("flowable:camelProcess:serviceTask2?copyVariablesToBodyAsMap=true").to("mock:service2");
}
});
}
@AfterEach
public void tearDown() throws Exception {
List<Route> routes = camelContext.getRoutes();
for (Route r : routes) {
camelContext.stopRoute(r.getId());
camelContext.removeRoute(r.getId());
}
}
@Test
@Deployment(resources = { "process/example.bpmn20.xml" })
public void testRunProcess() throws Exception {
CamelContext ctx = applicationContext.getBean(CamelContext.class);
ProducerTemplate tpl = ctx.createProducerTemplate();
service1.expectedBodiesReceived("ala");
Exchange exchange = ctx.getEndpoint("direct:start").createExchange();
exchange.getIn().setBody(Collections.singletonMap("var1", "ala"));
tpl.send("direct:start", exchange);
String instanceId = (String) exchange.getProperty("PROCESS_ID_PROPERTY");
tpl.sendBodyAndProperty("direct:receive", null, FlowableProducer.PROCESS_ID_PROPERTY, instanceId);
assertProcessEnded(instanceId);
service1.assertIsSatisfied();
Map<?, ?> m = service2.getExchanges().get(0).getIn().getBody(Map.class);
assertThat(m.get("var1")).isEqualTo("ala");
assertThat(m.get("var2")).isEqualTo("var2");
}
@Test
@Deployment(resources = { "process/example.bpmn20.xml" })
public void testRunProcessByKey() throws Exception {
CamelContext ctx = applicationContext.getBean(CamelContext.class);
ProducerTemplate tpl = ctx.createProducerTemplate();
MockEndpoint me = (MockEndpoint) ctx.getEndpoint("mock:service1");
me.expectedBodiesReceived("ala");
tpl.sendBodyAndProperty("direct:start", Collections.singletonMap("var1", "ala"), FlowableProducer.PROCESS_KEY_PROPERTY, "key1");
String instanceId = runtimeService.createProcessInstanceQuery().processInstanceBusinessKey("key1").singleResult().getProcessInstanceId();
tpl.sendBodyAndProperty("direct:receive", null, FlowableProducer.PROCESS_KEY_PROPERTY, "key1");
assertProcessEnded(instanceId);
me.assertIsSatisfied();
}
}
Leicester City announced on Sunday that they were parting ways with manager Claude Puel following the club's fourth straight home defeat, this time at the hands of Roy Hodgson and Crystal Palace.
It's been a long time coming for the former Southampton boss as questions over his future have been going on for months, but there isn't one reason why everything started to fall apart for Puel in the East Midlands.
It's only natural for fans to look for that one game, that one result, that one substitution even, which turned a recently departed manager's fortunes on their head, but for Puel at Leicester City it was a cocktail of things which saw him leave less than 16 months after his first game in charge.
Firstly, there was a game quite early into his reign that affected the squad. But it wasn't losing to Manchester City on 10 February last year that caused the mood to drop at Leicester, it was the manner of their 5-1 thrashing which cut the deepest.
Puel's side went into half time at the Etihad level with Manchester City, but four second-half goals from Sergio Agüero ensured the Foxes would return to the King Power Stadium with their tails between their legs.
Following that result, Leicester City's consistency went out the window and they failed to go five games unbeaten in the Premier League until midway through this season, with defeats against the likes of Cardiff, Burnley and Bournemouth littering their results.
On top of their inability to maintain any sort of form, Leicester City haven't been playing a brand of football - just calling it that seems like a push in itself - which can get the fans on board, even when you aren't getting the rub of the green on the pitch.
Supporters aren't demanding another Premier League title to add to their collection. Heck, they're not even wanting to be in Europe, but fans at the King Power Stadium are rightly fed up at just making up the numbers in the top flight.
The only excitement fans have had aside from individual brilliance from their star players is when Leicester City clawed their way out of a relegation fight which they shouldn't have been in in the first place.
Puel's style not only failed to get fans excited, but it also didn't suit the nucleus of his squad.
What makes that worse is that the 57-year-old has had plenty of time to sign players who do suit his system, but instead Leicester City's squad is now flooded with players who are in limbo over their future.
During his first summer transfer window, Puel splashed out £45.4m on three players who together have racked up just 13 appearances across all competitions this season.
Çağlar Söyüncü, Danny Ward and Filip Benković simply haven't fit in with what Leicester City were trying to do under Puel, so much so that the latter has gone on to join Celtic on loan for the remainder of the season.
It hasn't completely been his own fault, however, as earlier this season Puel was tasked with steering Leicester City through an impossible and unimaginable situation following the death of the club's owner, Vichai Srivaddhanaprabha.
The tragedy which the club had to go through has left them needing to wipe the slate clean with this season, with nothing more than Premier League survival the target for everyone at the King Power Stadium.
They know they have to get back on track next season, so with Leicester City currently in very little danger of dropping back into the Championship, now is the perfect time for the club to build for the future and bring a new long-term head coach in.
https://math.stackexchange.com/questions/1814549/a-hausdorff-space-with-all-proper-closed-subspaces-being-compact-is-a-compact-sp

# A Hausdorff space with all proper closed subspaces being compact is a compact space.

If $(X,T)$ is a Hausdorff space such that every proper closed subspace is compact, prove that $(X,T)$ is compact.

I know I have to show that every open cover of $X$ has a finite subcover, but I'm not sure how to do this. I thought to write $X = \bigcup_{A \subset X} A$ with each $A$ a closed compact subset, but I don't think that an infinite union of compact sets will always be compact.

Answer 1. Suppose $x,y$ are distinct points of $X$ (if they don't exist we're done anyway). Let $U$ and $V$ be disjoint neighbourhoods of $x$ resp. $y$. Then $X = (X \setminus U) \cup (X \setminus V)$ (why?) and both of these are proper (why?) closed subsets. Now use that a finite union of compact subsets is compact.

Answer 2 (hint). Given an open cover ${\mathscr T}$ of $X$ and a nonempty $O\in{\mathscr T}$, consider the induced cover of $X\setminus O$ by ${\mathscr T}\setminus\{O\}$.

Answer 3. A cover of a set $X$ is a family of subsets $\mathcal{F}=\{A_i : i\in I\}$ such that $X=\bigcup_{i\in I} A_i$. A cover of a topological space is open if every set in it is open. A topological space $X$ is compact if every open cover has a finite subcover.

Let $X$ be a topological space such that every proper closed subspace is compact, and let $\mathcal{F}$ be an open cover of $X$. We may assume $X$ is nonempty. If $X\in \mathcal{F}$ then $\{X\}\subseteq \mathcal{F}$ is a finite subcover. Otherwise there must be some $A_0\in\mathcal{F}$ which is an open, nonempty, proper subset of $X$. Then $B=X\setminus A_0$ is a closed proper subset of $X$, so by hypothesis it is compact, and $\mathcal{F}_B=\{A \cap B : A\in \mathcal{F}\setminus \{A_0\}\}$ is an open cover of $B$, so there exists a finite subcover $\{A_1 \cap B, \ldots, A_n \cap B\}$. Now show that $\{A_0, A_1,\ldots, A_n\}$ is a cover of $X$. Hence we conclude $X$ is compact.

Comment: I do not see why we need the fact that $X$ is Hausdorff here. Also, an arbitrary union of compact sets need not be compact; note, for example, that a point is compact in any topological space.
God gets so angry He says He will not go with them to the Promised Land. He'll send a messenger, an angel, to be their GPS.
What happens next is an eye-opening conversation between a fed-up God and Moses who very much needs this fed-up God.
I grew up singing the Fanny Crosby song (1890), "He hideth my soul in the cleft of the rock that shadows a dry thirsty land. He hideth my life in the depths of his love, and covers me there with his hand, and covers me there with his hand!"
Whoa! The only thing I don't like about Fanny's song is that I didn't write those amazing words and have them put to music. What a beautiful tribute to the story of God putting Moses into a crevice in a rocky place and passing by and showing His amazing backside!
Though we don't have a direct story about how Fanny wrote this song, it's clear it was inspired by Exodus 33. Crosby's life is an inspiration. She must have been reading it, or someone else was reading and told her about Exodus 33. We do know she ministered among the poor and forgotten, vulnerable people in the late 1800s, ones needing the hand of God to hide them from the horrors of their existence.
Lord, it seems in this story that there's a contradiction of communion with You. On one hand, we see Moses in his tent, and You would "speak to Moses face to face, as a man speaks with his friend." But when Moses asks You to show Your glory, You only show Your backside?! We can't handle taking in Your full-on glory and live to tell the tale.
As we follow You, God, we see You ahead, and we see Your glory, but we also think we want to see Your face--You have shown us Your face in Jesus.
Trou aux Cerfs (also known as Murr's Volcano) is a dormant volcano within the town of Curepipe.

Geography

The volcano is located in the central part of the island of Mauritius, on the Curepipe plateau. Its crater has a distinct conical shape, with a diameter of 300-350 m and a depth of 80 m. It was formed about 2 million years ago, during the second phase of the volcanic activity that built the island. At the bottom of the crater lies the small lake Grand Bassin, to which local Mauritian Hindu families come to bring offerings to the goddess Ganga. The crater can be reached only by a steep embankment, which is considered dangerous. Water and mud have plugged the crater, making it even less accessible.

At the summit of the volcano there is a large viewing platform with a panoramic view over Curepipe, as well as a fairly large car park. The site is one of the country's main tourist attractions.

Climate

The climate is temperate. The average temperature is 19 °C.

Activity

The volcano is currently dormant, but according to experts it could become active again at any time within the next thousand years.

References

Mountains of Mauritius
Dormant volcanoes
I thought that finding some external work would help me upskill while also giving me a bit of a confidence boost through socializing with different individuals working towards the same goal. After looking into what projects were available out there, I was able to find a group who were looking for someone to help with 3D modelling, someone able to help them model their project and bring it to life a little bit.
Without jumping too much into the project, it is basically a fantasy RPG game with a twist. Talking with one of the individuals on the project, I was tasked to have a play around with modelling a character one of the guys had drawn. This was my first proper attempt at modelling a character but here is the result so far.
I found it really helpful and I managed to learn a new thing or two, so I really recommend checking it out. Now to continue with the project; I'll keep updating once I'm at a turning point with my character to show my progress at different points. Feel free to post any comments or suggestions on where I can improve or what I could look at to help. As I said, character modelling is new to me, so advice is welcomed!
'The system is the one asking them what they did' - this is to avoid cheating, as it's a personal game for self-improvement. For the app to be effective, though, the user must be honest in order to really get the 'reward' from the system, which is to find a job. Considering flow for my app, I need to match an individual's skill level to what they want to get out of the game, as each individual will be working at a different level, e.g. someone who is looking for their first job as opposed to someone with work experience who has been job searching for 6 months. The reason for this is to make sure the app continues being engaging, rather than the user using the app for 2 months and getting bored. I could keep player engagement by creating daily activities or sending out motivational text alerts to anyone who has the app installed, congratulating them on their achievements.
I need to research how people go about looking for a job so that other tasks can apply within my app, rather than the user having to rely solely on the app. This means I will need to look into how someone typically goes about applying for jobs, and I should look at a range of individuals so that I can cater to most people's needs. I have researched a few job websites, but their apps are pretty much the websites in a more concise form, which is expected from an application. Hopefully, with the use of the share option in my possible app idea, the app will have the ability to become popular and will help more people.
So, now for a more simplified version of my app breakdown.
The point of my application is to use game mechanics commonly associated with RPGs and apply these to individuals who want to search for a job and for those who may find the task a bore.
Although my app is aimed at this market, that doesn't mean it wouldn't be useful in other markets; for example, someone who is looking for a second job may use the app to help them find another one, so it would still have its uses. When presenting my idea earlier today, it was mentioned that I should maybe consider the app for people who are still in work after they have got the job. I think I would add a small feature for this, maybe titled 'Months of service', and then give out small rewards, for example every year they keep the job, such as discounts at particular stores, or sending out a congratulations email when they achieve the job.
From what I know, it's the only job search app which has gamification applied with RPG elements.
The only one I could find being directed at hull.
The only one which has customizable characters.
The only things I could find about gamification within work/job area was gamification being applied once you have a job, rather than helping you find a job.
This will be a free app as those who are looking for a job will most likely be living on a limited source of income.
To sum up this moodboard: I wanted to make individuals feel motivated, so using a colour scheme which creates that kind of mood would only benefit the app. By looking at rewards and ways to show them, it will help the individual gain a sense of achievement, and having a way of showing their success to others, using media such as Facebook or Twitter, could help them feel more motivated and help others feel inspired to keep trying for work.
I have been using Paletton to help me develop my apps colour scheme but have also been looking at colour psychology to help me work out which colours would be more effective and to make people respond a particular way to my application. On this note, I have decided to work with the colours green/blue.
GREEN, NATURAL, FRESH – New start?
I thought the first thing to do was to mind map some possible ideas. Rather than doing this by writing down my ideas or physically drawing them out, I thought I would look into a new piece of software to give my mind map a more professional look. By using a dedicated tool to design it rather than the suites or software I currently have, I thought it would not only give me nicer template designs but would also optimise my results, as the program would have been developed with one purpose in mind.
Habit RPG is an online application which allows you to gamify your habits; in effect, everything you do in life. It is effective because it not only offers rewards but also puts consequences in place, so that you feel like you need to keep checking the page, updating it with the correct information, and trying your hardest to achieve your goals. You are rewarded with experience points, which are given to you whenever you complete a task, and as a result you level up over time. You are able to customise your character, which really does help when applying yourself to the application, as it made it more personalised for me. Being able to use it on both my mobile device and my computer meant that I had no excuse not to use it, and allowed me to use it while out and about; once a task had been completed I was able to check it off.
This app allows you to do many things, such as seeing your progress and sharing it with the world. It is aimed at achievers and socializers through the use of particular mechanics, such as achievements and utilising social media to share scores. You can see how many days you have been smoke free, track how many cigarettes you have avoided, and see the amount of time and money you have saved. This can be very motivational, and I think the app would be very effective for those wanting to quit. I am currently quitting smoking myself and follow a very similar process, which works for me and is also shown in the app: for example, I calculate how many cigarettes I have skipped and sum up the amount of money I have saved as a result. So maybe this app would help gamify the process for someone like myself to follow through.
Pain Squad is an app aimed at young patients who are suffering from cancer and going through treatment. It gives children the opportunity to voice their experience by giving them a way of monitoring their pain; it's simple, easy to navigate and seems to do its job well. It gives the individuals videos to help keep them motivated in beating the pain, and lets them upgrade to a higher rank, which keeps them motivated to keep posting in the app. The app is being used in many hospitals across Canada and has proven to be an engaging and effective solution to what may have been an issue for many. It allows doctors to see the issue from the child's point of view, letting them respond more effectively.
The next building in the works is located down Hull's Silver Street and is what I believe is currently the KFM building, right next to Garbos Bar and Grill. I have yet to take any photographs of my own of this building, but I will be taking these tomorrow, so they will be uploaded within my next post.
I have tried to gather my own research using the internet to see what was previously within this venue, but sadly I have been unable to find any information about it, so tomorrow I hope to visit the history museum to help clear this up. For now, I thought I would begin designing the basics of the building; these have been around since before the 1960s, so they would feature very similarly in the building's design.
This is where I currently stand with the modelling for my building. After visiting the history centre, I hope to have a better idea of what the shop face will look like; if not, I will have to improvise by using its current appearance as a guide. The building at the moment is around 1,200 polys, and I hope I can keep the figure low so it works better in engine. The design is rather simple, similar to the Kardomah building I did for Whitefriargate, so the techniques are all the same; the only thing I did differently was save a lot of time by copying the windows and scaling them accordingly. The model is snapped to grid with the pivot set correctly, and is the correct height and width to my current knowledge. I hope to have the modelling of the building finished by the end of the week; then I can spend reading week having it textured.
Industry standard – What is expected and how can I improve?
\section{Introduction}
The supersymmetric extension of the standard model with an additional gauge
singlet superfield, the so called (M+1)SSM
\cite{UMC,NMSSM1,NMSSM2,Higgs,walls,neu2,RGE1,Steph,last}, solves naturally the
$\mu$-problem of the MSSM: Even for a scale invariant superpotential -- with a
coupling $\lambda S H_1 H_2$ among the Higgs superfields and the singlet
superfield $S$ -- an effective $\mu$-term $\mu = \lambda \langle S \rangle$ is
generated, if the scalar component of $S$ has a non vanishing vev. Such a vev
of the order of the weak scale can be generated through the standard soft
supersymmetry breaking terms, thus the weak scale appears exclusively in the
form of the supersymmetry breaking scale. Moreover, assuming universal soft
terms at a large (GUT) scale, the (M+1)SSM has the same number of free
parameters as the MSSM. Previous analyses of the parameter space of the model
\cite{NMSSM2,UMC,last} have shown that, as in the case of the MSSM, a large
region is consistent with the present experimental bounds on sparticle and
Higgs masses.
The particle content of the (M+1)SSM differs from the MSSM in the form of
additional gauge singlet states in the Higgs sector (1 neutral CP-even and 1
CP-odd state) and in the neutralino sector (a two component Weyl fermion).
These states mix with the corresponding ones of the MSSM, with a mixing angle
which is proportional to the coupling $\lambda$ above. Accordingly, the
phenomenology of the (M+1)SSM depends to a large extent on the magnitude of
$\lambda$:
For $\lambda \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; O(10^{-2})$ the masses and couplings, notably in the CP-even
Higgs sector, can deviate visibly from the ones of the MSSM \cite{Higgs};
however, in this region of the parameter space of the (M+1)SSM some fine-tuning
among the parameters is required in order to meet all the phenomenological
constraints \cite{UMC}.
For $\lambda \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; O(10^{-2})$ the mixing angles involving the singlet states
are quite small. Therefore, the Higgs and sparticle masses and couplings of the
(M+1)SSM are very close to the MSSM ones (for corresponding values of $\mu$ and
$B$ \cite{NMSSM2,UMC}), with additional quasi singlet states which have small
couplings to the gauge bosons and the MSSM sparticles. Accordingly, they have
small production cross sections, and they will not appear in sparticle decays
unless they represent the only kinematically allowed decay channel.
Assuming R parity conservation, this latter situation is realized if the quasi
singlet Weyl fermion (the singlino) is the LSP. Then the singlino will appear
in the final state of each sparticle decay, and the phenomenology of the
(M+1)SSM with a singlino LSP differs considerably from the one of the MSSM.
In a previous paper \cite{last} we have shown that this situation appears
naturally in the case of a gaugino dominated supersymmetry breaking: $M_{1/2}
\gg A_0,m_0$. Then, within the parameter space accessible at LEP2, the NLSP is
mostly a bino-like state. Hence all the processes involving sparticle
productions and decays will end up with a bino to singlino transition, and we
have studied the corresponding decay widths in \cite{last}. An important result
was that, for small values of $\lambda$ or for singlino masses close to the
bino mass, the bino lifetime can be so large that the bino to singlino cascade
appears at macroscopic distances from the production point, or even out of the
detector.
In the present paper we study the possible signals of the (M+1)SSM with a
singlino LSP at LEP2 in the various regions of the parameter space. First we
identify those regions which are ruled out by negative results of sparticle
searches in the context of the MSSM. In the remaining kinematically allowed
regions we present total event rates for various topologies, like 4 charged
fermion final states and missing energy, with or without displaced vertices.
Such topologies, with microscopic vertices, have been looked for at LEP2 in the
context of the MSSM or models with R parity violation. However the
corresponding efficiencies do not apply to the (M+1)SSM with a singlino LSP.
With estimated efficiencies, we find that considerable kinematically allowed
regions of the parameter space have not been tested at present, especially in
the case of macroscopically displaced vertices. The main purpose of the present
paper is to identify those topologies, for which further studies -- i.e.
estimation of efficiencies -- are required in order to interpret the available
or incoming data from LEP2 in the context of the (M+1)SSM with a singlino LSP.
It is a priori not clear whether negative results of sparticle searches would
constrain the (M+1)SSM with a singlino LSP more or less than the MSSM: The
final states associated with the pair production of a given sparticle (like the
selectron or chargino) will often be more involved in the (M+1)SSM as compared
to the MSSM, and the corresponding constraints on the cross sections are often
much weaker. On the other hand, the (M+1)SSM with a singlino LSP allows for a
process to be observable, which is invisible within the MSSM: the production of
a pair of binos. If the binos decay into singlinos plus additional observable
particles, LEP2 is sensitive to light binos, which would, however, escape
detection within the associated MSSM. (Here and below the associated MSSM
denotes the MSSM obtained after "freezing" the singlet vev, which generates
effective $\mu$ and B terms, and after dropping the gauge singlet states in
the neutralino and Higgs sectors.)
Thus the application of the LEP2 results to the (M+1)SSM
requires a case by case analysis, depending on the different regions of the
parameter space, which will be performed below.
In order to scan the complete parameter space of the (M+1)SSM we proceed as in
\cite{UMC,last}: First we assume universal scalar masses $m_0$, gaugino masses
$M_{1/2}$ and trilinear couplings $A_0$ at the GUT scale. Thus we scan over the
ratios $m_0/M_{1/2}$, $A_0/M_{1/2}$ and the Yukawa couplings at the GUT scale,
the absolute scale being determined at the end by requiring the correct value
of $M_Z$. For each point in the parameter space we integrate the
renormalization group equations \cite{RGE1} down to the weak scale, and
minimize the low energy effective potential including the full one loop
radiative corrections \cite{Higgs}. We check whether squarks or sleptons do not
assume vevs, diagonalize numerically the mass matrices and verify whether
applicable bounds on sparticle and Higgs masses are satisfied.
In contrast to \cite{UMC,last}, however, we have included as matter Yukawa
couplings not just the top Yukawa coupling $h_t$, but all the couplings of the
third generation $h_t$, $h_b$ and $h_\tau$. First, this makes our results more
reliable in the large $\tan(\beta)$ regime, and second this reveals a new
phenomenon: Within the (M+1)SSM with a singlino LSP and sparticle masses in the
reach of LEP2, the NLSP could possibly be the lightest stau $\widetilde\tau_1$.
(In the associated MSSM the lightest stau $\widetilde\tau_1$ would then be the
true LSP, i.e. a stable charged particle; this situation has been discussed in
\cite{mura}.)
The paper is organized as follows: In the next section we present the
lagrangian and discuss the different regions in the parameter space which are
relevant for the present investigations. In section three we study the
sparticle production processes which are kinematically allowed at LEP2, the
topologies relevant for searches in the context of the (M+1)SSM with a singlino
LSP, and the constraints on its parameters which could already be inferred from
available data. The total number of events expected in those regions of
parameter space is given, for which the efficiencies still remain to be
determined. Conclusions are presented in section four.
\section{Parameter space of the (M+1)SSM with a singlino LSP} \label{secparam}
The superpotential of the (M+1)SSM is given by
\begin{eqnarray}
W & = & \lambda SH_1H_2 + \frac{1}{3}\kappa S^3 + h_t Q_3H_1U_{3R}^c \nonumber \\
& & + h_b Q_3H_2D_{3R}^c + h_\tau L_3H_2E_{3R}^c + \ldots \label{spot}
\end{eqnarray}
where $Q_3$ denotes the left handed doublet of quarks of the third generation,
$U_{3R}^c$ and $D_{3R}^c$ the (charge conjugate) right handed top and bottom
quarks, $L_3$ the left handed doublet of leptons of the third generation,
$E_{3R}^c$ the (charge conjugate) right handed tau. The ellipses in
(\ref{spot}) denote Yukawa couplings involving quarks and leptons of the first
two generations. The only dimensionful parameters of the model are the
supersymmetry breaking parameters (for simplicity, we do not display the terms
involving squarks and sleptons):
\begin{eqnarray}
{\cal L}_{soft} & = & \frac{1}{2} \left( M_3\lambda_3^a\lambda_3^a +
M_2\lambda_2^i\lambda_2^i + M_1\lambda_1\lambda_1 \right) + \mbox{h.c.} \nonumber
\\
& & - m_1^2|H_1|^2 - m_2^2|H_2|^2 - m_S^2|S|^2 \nonumber \\
& & - \lambda A_\lambda SH_1H_2 - \frac{1}{3}\kappa A_\kappa S^3 + \mbox{h.c.}
\label{Lsoft}
\end{eqnarray}
where $\lambda_3$, $\lambda_2$ and $\lambda_1$ (the 'bino') are the gauginos of
the $SU(3)_c$, $SU(2)_L$ and $U(1)_Y$ gauge groups respectively. The scalar
components of the Higgs in (\ref{Lsoft}) are denoted by the same letters as the
corresponding chiral superfields. These supersymmetry breaking terms are
constrained in the present version of the model by universality at the scale
$M_{GUT} \sim 10^{16}$~GeV. Thus, the independent parameters are: Universal
gaugino masses $M_{1/2}$ (always positive in our convention); universal masses
$m_0^2$ for the scalars; universal trilinear couplings $A_0$ (either positive
or negative); the Yukawa couplings $h_{t0}$, $h_{b0}$, $h_{\tau 0}$,
$\lambda_0$, $\kappa_0$ of the superpotential (\ref{spot}) at the scale
$M_{GUT}$.
The parameters at the weak scale are obtained by integrating numerically the
one loop renormalization group equations \cite{RGE1}. The Coleman-Weinberg
radiative corrections to the effective potential involving top/stop,
bottom/sbottom and tau/stau loops (beyond the leading log approximation)
\cite{Higgs} are taken into account. The results for the mass matrices, after
minimization of the effective potential, can be found in
\cite{NMSSM1,RGE1,Higgs,NMSSM2,UMC,last} and will not be repeated here. Mixing
terms are considered in the stop, sbottom and stau mass matrices.
Let us now discuss the parameter space of the (M+1)SSM with a singlino LSP
which is relevant for sparticle searches at LEP2. Since here the Yukawa
couplings $\lambda$ and $\kappa$ are quite small ($\lambda , \kappa \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
O(10^{-2})$) and hence the singlet sector mixes only weakly to the non singlet
sector, it is possible to understand the gross features of the parameter space
with the help of analytic approximations to the integrated renormalization
group equations, the minimization of the effective potential and the mass
matrices \cite{RGE2,NMSSM2,UMC,last}. (The results in section \ref{sectop}, on
the other hand, are based on 'exact' numerical computations for $\sim 10^4$
points in the parameter space.)
First, we consider the neutralino sector. In our convention, the (symmetric)
neutralino mass matrix is given by \cite{HK}
\begin{eqnarray}
M^0 = \left( \begin{array}{ccccc}
M_2 & 0 & \displaystyle{\frac{-g_2h_1}{\sqrt{2}}} &
\displaystyle{\frac{g_2h_2}{\sqrt{2}}} & 0 \\
& M_1 & \displaystyle{\frac{g_1h_1}{\sqrt{2}}} &
\displaystyle{\frac{-g_1h_2}{\sqrt{2}}} & 0 \\
& & 0 & -\mu & -\lambda h_2 \\
& & & 0 & -\lambda h_1 \\
& & & & 2\kappa s \end{array} \right) . \label{masneu}
\end{eqnarray}
For small $\lambda$, the singlino is thus an almost pure state of mass
\begin{eqnarray}
M_{\widetilde S} \simeq 2 \kappa s .
\end{eqnarray}
and the vev $s$ of the scalar singlet can be estimated from the tree level
scalar potential:
\begin{eqnarray}
s \simeq -\frac{A_\kappa}{4\kappa} \left( 1 + \sqrt{1-\frac{8m_S^2}
{A_\kappa^2}} \right) . \label{svev}
\end{eqnarray}
Since $A_\kappa$ and $m_S$ are only slightly renormalized between $M_{GUT}$ and
the weak scale for small $\lambda$ and $\kappa$, $M_{\widetilde S}$ can be
written in terms of the universal soft terms at $M_{GUT}$:
\begin{eqnarray}
M_{\widetilde S} \simeq -\frac{A_0}{2}\left( 1 + \sqrt{1-\frac{8m_0^2}{A_0^2}}
\right) . \label{msing}
\end{eqnarray}
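As a numerical illustration (the values are chosen arbitrarily, subject only to the constraint discussed below), $A_0 = -120$~GeV and $m_0 = 30$~GeV give
\begin{eqnarray}
M_{\widetilde S} & \simeq & 60~\mbox{GeV}\left(1+\sqrt{1-\frac{1}{2}}\right)
\simeq 102~\mbox{GeV} . \nonumber
\end{eqnarray}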
The condition for the minimum (\ref{svev}) to be deeper than the trivial one
reads at tree level
\begin{eqnarray}
|A_0| > 3m_0 \label{A0m0}
\end{eqnarray}
so that
\begin{eqnarray}
\frac{2}{3}|A_0| \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; |M_{\widetilde S}| \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; |A_0| .
\end{eqnarray}
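For illustration, evaluating (\ref{msing}) at the two boundaries allowed by (\ref{A0m0}) makes these bounds explicit:
\begin{eqnarray}
m_0 = 0 & : & |M_{\widetilde S}| = \frac{|A_0|}{2}\left(1+1\right) = |A_0| ,
\nonumber \\
|A_0| = 3m_0 & : & |M_{\widetilde S}| = \frac{|A_0|}{2}\left(1+\frac{1}{3}\right)
= \frac{2}{3}|A_0| . \nonumber
\end{eqnarray}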
Since the effective $\mu$ parameter turns out to be quite large,
\begin{eqnarray}
\mu^2 = \lambda^2 s^2 \simeq 2.5 M_{1/2}^2 -.5 M_Z^2 \label{mu},
\end{eqnarray}
the lightest non singlet neutralino is the (nearly pure) bino $\widetilde B$
with mass $M_{\widetilde B}$. From the approximate analytic diagonalization of
(\ref{masneu}) for $\tan(\beta) \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 5$ (which, from our numerical results, is
always the case for a singlino LSP), one obtains $M_{\widetilde B}$ in terms of
the universal gaugino mass $M_{1/2}$ as
\begin{eqnarray}
M_{\widetilde B} & \simeq & M_1 + \frac{\sin^2\theta_W M_Z^2 M_1}{M_1^2-\mu^2} \nonumber
\\
& \simeq & 0.41\, M_{1/2} - \frac{4\cdot 10^{-2}\, M_Z^2 M_{1/2}}{M_{1/2}^2 - 0.2\, M_Z^2}
\label{mbino}
\end{eqnarray}
where we have used (\ref{mu}) and $M_1 = 0.41\, M_{1/2}$. The second term in
(\ref{mbino}) is due to the bino/higgsino mixing. From (\ref{msing}) and
(\ref{mbino}) one finds that the necessary (resp. sufficient) conditions on the
universal terms for a singlino LSP are
\begin{eqnarray}
|A_0| \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; .6 M_{1/2} \quad ( \mbox{resp. } |A_0| \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; .4 M_{1/2} ) .
\label{A0M1/2}
\end{eqnarray}
The Yukawa couplings $\lambda$ and $\kappa$ of the (M+1)SSM are, in general,
constrained by the ratio $A_0/M_{1/2}$. From the absence of a deeper unphysical
minimum of the Higgs potential with $h_2=s=0$ the following inequality can be
derived:
\begin{eqnarray}
\kappa \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 4\cdot 10^{-2}\,\frac{A_0^2}{M_{1/2}^2} .
\end{eqnarray}
Since the singlet vev $s$ increases with decreasing $\kappa$ (cf.
(\ref{svev})), but the effective $\mu$ term should be of the order of the weak
scale, $\lambda$ and $\kappa$ should be of the same order of magnitude. From
our numerical analysis we find that the bare parameters $A_0$, $M_{1/2}$ and
$\lambda_0$ satisfy the (not very stringent) relation
\begin{eqnarray}
\frac{|A_0|}{M_{1/2}} \sim 4 \lambda_0^{0.5\pm 0.3} ;
\end{eqnarray}
thus light singlinos are generally related to small values of $\lambda$ and
$\kappa$. Since the mixing angle of the singlino to the non singlet sector is
proportional to $\lambda$, all decay widths of sparticles into a singlino LSP
are at least proportional to $\lambda^2$. Furthermore, $\lambda$ can be
extremely small; then the NLSP lifetime is very large. This phenomenon,
already investigated in \cite{last}, will play an important role in the next
section.
Now we turn to the slepton sector. The lightest states are the 'right handed'
charged sleptons $\widetilde l_R$ and the sneutrinos $\widetilde\nu$. Since the
bare scalar mass $m_0$ is quite small (cf. (\ref{A0m0}) and (\ref{A0M1/2})) the
corresponding mass terms at the weak scale are determined, from the integrated
renormalization group equations, by $M_{1/2}$. Neglecting the mixing between
the right handed and the left handed sleptons, and using the known numerical
values of the electroweak gauge couplings appearing in the D terms, their
masses are (for medium or large $\tan(\beta)$)
\begin{eqnarray}
m_{\widetilde l_R}^2 = & m_E^2 - \sin^2\theta_W M_Z^2 \cos 2\beta & \simeq 0.15
M_{1/2}^2 + 0.23 M_Z^2 , \label{msel} \\
m_{\widetilde\nu}^2 = & m_L^2 + \frac{1}{2}M_Z^2 \cos 2\beta & \simeq 0.52
M_{1/2}^2 - 0.5 M_Z^2 . \label{msneu}
The limit on the sneutrino mass obtained from the $Z$ width, $m_{\widetilde\nu}
\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; M_Z/2$ \cite{Griv}, combined with (\ref{msneu}) gives a lower limit on
$M_{1/2}$:
\begin{eqnarray}
M_{1/2} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 100\mbox{ GeV} . \label{M1/2min}
\end{eqnarray}
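The limit (\ref{M1/2min}) follows from (\ref{msneu}) by simple arithmetic: requiring $0.52\,M_{1/2}^2 - 0.5\,M_Z^2 \geq M_Z^2/4$ gives $M_{1/2} \geq \sqrt{0.75/0.52}\,M_Z \approx 110$~GeV, consistent with (\ref{M1/2min}). A minimal Python sketch (reading the RGE coefficients of (\ref{msneu}) as multiplying $M_{1/2}^2$):

```python
import math

MZ = 91.19  # GeV

def m_sneutrino(M12):
    # Approximate sneutrino mass from Eq. (msneu), in GeV:
    return math.sqrt(0.52 * M12**2 - 0.5 * MZ**2)

# Lower limit on M_{1/2} from the Z-width bound m_sneutrino >= MZ/2:
M12_min = math.sqrt((0.25 + 0.5) / 0.52) * MZ
assert m_sneutrino(M12_min) >= MZ / 2 - 1e-9
assert 100.0 < M12_min < 120.0  # ~110 GeV, i.e. M_{1/2} >~ 100 GeV
```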
From (\ref{msel}) together with (\ref{mbino}) it follows that the sleptons
$\widetilde l_R$ are heavier than the bino for $M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 320$~GeV.
However, this result holds only for the charged sleptons of the first two
generations. For the third generation, the soft masses at low energy can be
smaller than the ones given in (\ref{msel}) and (\ref{msneu}) (depending on
$h_\tau$). Furthermore, the off-diagonal term in the stau mass matrix is given
by $h_\tau(\mu h_1 - A_\tau h_2)$, which is not necessarily negligible compared
to the smallest diagonal term. Thus, the lightest eigenstate $\widetilde\tau_1$
will be lighter than the right handed sleptons of the first two generations
$\widetilde l_R$ and can well be lighter than the bino even for $M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
320$~GeV (hence for sparticle masses within the reach of LEP2).
In the chargino sector, within the present region of the parameter space, the
lightest eigenstate is essentially a wino of mass $M_2$ given in terms of
$M_{1/2}$ by
\begin{eqnarray}
M_2 = 0.82 M_{1/2} . \label{mcharg}
\end{eqnarray}
In the Higgs sector we can again make use of the fact that the non singlet and
singlet sectors are quasi decoupled. The direct search for Higgs scalars thus
proceeds as in the MSSM, and the present negative results do not impose more
stringent constraints on $M_{1/2}$ than (\ref{M1/2min}). (For large values of
$\lambda$, without singlino LSP, the Higgs phenomenology of the (M+1)SSM could,
however, differ substantially from the one of the MSSM \cite{Higgs}.)
Since the scalar Higgs quasi singlet state can possibly be produced in bino
decays in the (M+1)SSM, its mass $M_S$ will be of interest. From the tree level
part of the Higgs potential one finds for small Yukawa couplings
\begin{eqnarray}
M_S^2 \simeq \frac{1}{4} \sqrt{A_0^2-8m_0^2} \left( |A_0| + \sqrt{A_0^2-8m_0^2}
\right) ,
\end{eqnarray}
hence
\begin{eqnarray}
M_S \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \frac{|A_0|}{\sqrt 2} .
\end{eqnarray}
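Numerically, the bound is saturated for $m_0 = 0$ and becomes stronger for $m_0 > 0$; a quick sketch with hypothetical soft terms (GeV):

```python
import math

def higgs_singlet_mass(A0, m0):
    # Tree-level quasi-singlet Higgs mass:
    #   M_S^2 = (1/4) sqrt(A0^2 - 8 m0^2) * (|A0| + sqrt(A0^2 - 8 m0^2))
    r = math.sqrt(A0**2 - 8.0 * m0**2)
    return math.sqrt(0.25 * r * (abs(A0) + r))

for A0, m0 in [(-300.0, 0.0), (-300.0, 50.0), (-120.0, 30.0)]:
    # Check M_S <~ |A0| / sqrt(2):
    assert higgs_singlet_mass(A0, m0) <= abs(A0) / math.sqrt(2.0) + 1e-9
```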
For later use we note that the Higgs singlet--bino--singlino coupling is
proportional to $\lambda^2$; thus the production of the Higgs singlet state in
bino decays will only occur for $\lambda$ not too small.
To summarize, the parameter space of the (M+1)SSM with a singlino LSP is
characterized by the universal gaugino mass $M_{1/2}$ being the dominant soft
supersymmetry breaking term. Both $A_0$ and, consequently, $m_0$ are bounded
from above in terms of $M_{1/2}$ by (\ref{A0M1/2}) and (\ref{A0m0}),
respectively. The Yukawa couplings $\kappa$ and $\lambda$ also have upper
limits of $O(10^{-2})$, and are possibly tiny.
The non singlet sparticles (with sizeable production cross sections) within the
reach of LEP2 are: The second lightest neutralino, essentially a bino
$\widetilde B$; the right handed sleptons $\widetilde l_R$ with masses given by
(\ref{msel}) and the lightest stau $\widetilde\tau_1$ which could be
substantially lighter; sneutrinos with masses given by (\ref{msneu}); and the
lightest chargino with a mass given by (\ref{mcharg}). Note that, for a value
of $M_{1/2}$ corresponding to a bino in the reach of LEP2, the bino is always
lighter than these sparticles, with the possible exception of the lightest stau
$\widetilde\tau_1$.
In the next section, we will discuss the different decays of these particles,
and compare the respective final states to sparticle searches at LEP2. This
will allow us to find out which parameter ranges of the (M+1)SSM have been
already ruled out, and which require further study.
\section{Topologies for sparticle searches at LEP2} \label{sectop}
\subsection{Bino decays with a singlino LSP} \label{3.1}
Sparticle searches in the (M+1)SSM with a singlino LSP differ in several
respects from sparticle searches in the MSSM: First, the presence of the
singlino LSP usually gives rise to additional cascades in sparticle decays. For
instance, pair production of binos is usually an observable process, whereas
for an equivalent MSSM (with comparable soft supersymmetry breaking terms), the
bino would correspond to the LSP, and this process would be invisible. Thus,
areas in the soft SUSY breaking parameter space accessible at LEP2 are larger
in the (M+1)SSM than in the MSSM, provided an adapted experimental analysis is
done. Second, the decay of the NLSP (the bino or the lightest stau) into the
singlino LSP is always proportional to a power of $\lambda$, which can be tiny.
In this case (or if the singlino LSP happens to be close in mass to the NLSP,
which is feasible in the (M+1)SSM with universal soft terms in contrast to the
MSSM) the NLSP to LSP transition can be rather slow, leading to macroscopically
displaced vertices.
In the following we can make use of the fact that the masses of most sparticles
in the (M+1)SSM with a singlino LSP depend essentially on just one parameter,
the universal gaugino mass $M_{1/2}$: For $M_{1/2}$ not too large ($M_{1/2}
\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 180$~GeV) $\widetilde B$, $\widetilde l_R$, $\widetilde\nu$ and
$\widetilde\chi_1^\pm$ can be light enough for pair production to be
kinematically allowed at LEP2, cf. the dependence of their mass on $M_{1/2}$ in
section \ref{secparam}. On the other hand, for 180~GeV$\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
220$~GeV, only $\widetilde B$ pair production is kinematically feasible (with
the possible exception of staus).
Since all the sparticle decays in the (M+1)SSM with a singlino LSP proceed via
the decay of the bino $\widetilde B$ into the singlino $\widetilde S$ (with the
possible exception of the stau $\widetilde\tau_1$, see below), we will briefly
discuss the possible final states of this transition, using the results of
\cite{last}:
a) $\widetilde B\to \widetilde S \nu\bar\nu$: This invisible process is
mediated dominantly by sneutrino exchange. Since the sneutrino mass, as the
mass of $\widetilde B$, is essentially fixed by $M_{1/2}$ (cf. (\ref{msneu})),
the associated branching ratio varies in a predictable way with $M_{\widetilde
B}$: It can reach 90\% for $M_{\widetilde B} \sim 30$~GeV, but decreases
with $M_{\widetilde B}$ and is at most 10\% for $M_{\widetilde B} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
65$~GeV.
b) $\widetilde B \to \widetilde S l^+l^-$: This process is mediated dominantly
by the exchange of a charged slepton in the s-channel. If the lightest stau
$\widetilde\tau_1$ is considerably lighter than the sleptons of the first two
generations, the percentage of taus among the charged leptons can well exceed
$\frac{1}{3}$. If $\widetilde\tau_1$ is lighter than $\widetilde B$, it is
produced on-shell, and the process becomes $\widetilde B \to \widetilde\tau_1
\tau \to \widetilde S \tau^+ \tau^-$. Hence we can have up to 100\% taus among
the charged leptons, and the branching ratio of this channel can reach
100\%.
c) $\widetilde B\to \widetilde S S$: This two-body decay is kinematically
allowed if both $\widetilde S$ and $S$ are sufficiently light. (A light $S$ is
not excluded by Higgs searches at LEP1 \cite{LEP1h,LEP2h}, if its coupling to
the $Z$ is sufficiently small \cite{Higgs}). However, the coupling $\widetilde B
\widetilde S S$ is proportional to $\lambda^2$, whereas the couplings appearing
in the decays a) and b) are only of $O(\lambda)$. Thus this decay can only be
important for $\lambda$ not too small. In \cite{last}, we found that its
branching ratio can reach 100\% in a window $10^{-3} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \lambda \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
10^{-2}$; hence, whenever this decay is important, the length of flight of
$\widetilde B$ is not macroscopic. Of course, $S$
will decay immediately into $b\bar b$ or $\tau^+ \tau^-$, depending on its
mass. (If the branching ratio $Br(\widetilde B\to \widetilde S S)$ is
substantial, $S$ is never lighter than $\sim 5$~GeV.) If the singlet is heavy
enough, its $b\bar b$ decay gives rise to 2 jets with $B$ mesons, which are
easily detected with $b$-tagging. (However, if the singlet mass is just above
the $b\bar b$ threshold -- typically, if $m_\Upsilon < M_S \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 15$~GeV -- $S$
could decay hadronically without $B$ mesons.) In any case, the hadronic system
-- or the $\tau^+ \tau^-$ system -- would have an invariant mass peaked at
$M_S$, making this signature easy to search for.
d) $\widetilde B\to \widetilde S \gamma$: This branching ratio can be important
if the mass difference $\Delta M \equiv M_{\widetilde B} - M_{\widetilde S}$ is
small ($\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 5$~GeV).
Further possible final states like $\widetilde B \to \widetilde S q\bar q$ via
$Z$ exchange always have branching ratios below 10\% and will not be considered
here.
\subsection{Constraints from MSSM-like selectron searches} \label{3.2}
Let us first consider the region in the parameter space where the invisible
decay a) of $\widetilde B$ dominates, which occurs for $M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 140$~GeV.
Then, right handed selectrons $\widetilde e_R$ are light enough for being pair
produced, and they decay as in the MSSM into an electron and a bino, which is
invisible regardless of its lifetime. Results of searches for selectrons with
MSSM-like decays have been published by Aleph \cite{2eA}, Delphi \cite{2eD}, L3
\cite{2eL} and Opal \cite{2eO}\footnote{In this paper, we use the results from
the LEP2 run at $\sqrt{s} = 181-184$~GeV. For recent updates at $\sqrt{s} =
189$~GeV, see Refs.~\cite{LEP189}}. Here, however, the analysis of the results
differs from the situation in the MSSM in two respects:
First, for a given mass of the selectron, the mass difference $m_{\widetilde
e_R}-M_{\widetilde B}$ is essentially known: for $m_{\widetilde e_R} = 65$~GeV,
e.g., we have $m_{\widetilde e_R}-M_{\widetilde B} \sim 20-30$~GeV. It turns
out that for the mass differences given in the present model, the experimental
efficiencies are always $\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 50\%$.
Second, the branching ratio associated with the invisible decay of
$\widetilde B$ is never 100\%. (Even for $M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 140$~GeV, $\widetilde
B$ could still decay dominantly into $\widetilde S S$, if $\lambda$ happens to
be in the window $10^{-3} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \lambda \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-2}$.) Thus, for each point
in the parameter space, we have to calculate the expected number of MSSM-like
events (2 electrons and missing energy) taking the corresponding branching
ratio into account.
The most detailed information on the efficiencies and the numbers of background
and observed events, as a function of $m_{\widetilde e_R}$ and $M_{\widetilde
B}$, is given by Opal \cite{2eO}. From these results, we find that points in
the parameter space leading to $N \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10$ expected events with 2 acoplanar
electrons in the final state are excluded. This occurs in the region
\begin{eqnarray}
M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{125 GeV or } M_{\widetilde B} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{43 GeV} .
\label{liminv}
\end{eqnarray}
However, this region is not totally excluded by acoplanar electron searches: As
mentioned above, $\widetilde B$ could still decay dominantly into $\widetilde S
S$, if $\lambda$ happens to be in the window $10^{-3} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \lambda \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
10^{-2}$.
Further MSSM-like processes associated with 2 leptons and missing energy in the
final state do not lead to additional constraints on the parameter space.
\subsection{Higher multiplicity final states without displaced vertices}
\label{3.3}
Next, we have to take into account visible $\widetilde B$ cascade decays,
leading to events with higher multiplicity. First we treat the case where all
sparticle decays take place within at most 1~cm around the primary vertex,
i.e. $\lambda$ and $\Delta M$ not too small. The following pair production
processes have to be considered:
\begin{eqnarray}
\begin{array}{llclclcl}
\mbox{p.1:} & e^+ e^- & \to & \widetilde B \widetilde B , \\
\mbox{p.2:} & e^+ e^- & \to & \widetilde l_R \widetilde l_R^* & \to & l^+
\widetilde B l^- \widetilde B , \\
\mbox{p.3:} & e^+ e^- & \to & \widetilde\nu \widetilde\nu^* & \to & \nu
\widetilde B \bar\nu \widetilde B , \\
\mbox{p.4:} & e^+ e^- & \to & \chi_1^+ \chi_1^- & \to & l^+ \widetilde\nu l'^-
\widetilde\nu^* & \to & l^+ \nu \widetilde B l'^- \bar\nu \widetilde B .
\end{array} \label{proc}
\end{eqnarray}
Taking the bino decays a) -- c) in section \ref{3.1} into account, the possible final
states are those listed in Table 1. (The radiative decay $\widetilde B \to
\widetilde S \gamma$ will be discussed below.)
Let us first consider processes with 4 visible fermions and missing energy. The
appropriate cascade decays of the binos leading to 4 charged fermions in the
final state are: visible decays b) $\widetilde B \to \widetilde S l^+ l^-$ or
c) $\widetilde B \to \widetilde S S \to \widetilde S b \bar b \mbox { or }
\widetilde S \tau^+\tau^-$ for the 2 binos in p.1 and p.3 (the final states
(i.1) and (i.3), i = 5 \dots 9, in Table 1); one bino decaying invisibly
through channel a) $\widetilde B\to \widetilde S \nu\bar\nu$, the other
decaying into $l^+ l^-$ or $b \bar b$ and missing energy through channels b) or
c) for p.2 and p.4 (the final states (i.2) and (i.4), i = 2,3,4, in Table 1).
In the case of the process p.4 we have used the fact that sneutrinos are
always lighter than the lightest chargino in the (M+1)SSM with a singlino LSP,
thus the latter decays exclusively into an on-shell sneutrino and a charged
lepton.
According to the discussion of the decay channel b) above, the charged leptons
$l^\pm$ in the final state can be the leptons of any generation. In the case of
light staus, the percentage of taus among the charged leptons can reach
100\%. If the lightest stau $\widetilde\tau_1$ is the NLSP, p.2 and p.4 give 6
charged leptons plus missing energy in the final state. Only p.1 and p.3 lead
to 4 charged leptons (taus) plus missing energy, since, in this case, the only
decay channel for the bino is $\widetilde B \to \tau \widetilde\tau_1 \to
\widetilde S \tau^+ \tau^-$.
Thus, the final states of interest are $l^+ l^- l^+ l^-$, $l^+ l^- b \bar b$
and $b \bar b b \bar b$ plus missing energy. Since the $b$ quarks can arise solely
from the decay c) $\widetilde B \to \widetilde S S \to \widetilde S b \bar b$,
the invariant mass of a $b \bar b$ system would always be peaked at $M_S$, cf.
the discussion above. However, for a given value of $M_{1/2}$, we cannot
predict the different branching ratios of $\widetilde B$ ($\lambda$ may or may
not be in the window where the decay into $\widetilde S S$ is dominant), hence
we cannot predict the ratios of the different final states associated to a
given process in (\ref{proc}). On the other hand, for a given value of
$M_{1/2}$ we know, with small errors, the masses $M_{\widetilde B}$,
$m_{\widetilde l_R}$, $m_{\widetilde\nu}$ and $M_{\chi_1^\pm}$ and the
corresponding production cross sections. For each point in the parameter space
obtained from the scanning described in the previous section, we have
calculated numerically the production cross sections of the processes p.1-4,
taking into account possible interference terms between s-, t- and u-channels
\cite{cross}, for $e^+ e^-$ collisions at 183~GeV c.m. energy. In
Fig.~\ref{fig1} we show, for each point in the parameter space, the total
number of events with 4 charged fermions plus missing energy in the final state
as a function of $M_{1/2}$, assuming an integrated luminosity of 55~pb$^{-1}$.
We have already removed those points in the parameter space where $\widetilde
B$ decays dominantly invisibly through channel a), and which are excluded by
the negative results of selectron searches in the MSSM, see the discussion
above. Moreover, we have not shown the points where $\widetilde B$ decays
dominantly into channel d) $\widetilde B \to \widetilde S \gamma$ which will be
discussed separately below.
In Fig.~\ref{fig1} we observe a large number of events for $M_{1/2} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;
150$~GeV, which are due to the process p.3: If kinematically allowed, its cross
section is typically larger than the ones of p.1, p.2 or p.4. For $M_{1/2}
\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 150$~GeV, on the other hand, the number of events is essentially given by
the number of pair-produced binos (p.1).
Events with 4 charged fermions plus missing energy in the final state have been
searched for at LEP2. The underlying processes were assumed to be: $\widetilde
t_1$ pair production with $\widetilde t_1 \to b l \widetilde\nu$
\cite{stop3b,2eD} and heavy neutralinos decaying via the Multi Lepton channel
\cite{neuML} in the MSSM; lightest neutralino pair production in models with
gauge mediated supersymmetry breaking (i.e. a gravitino LSP) and a stau NLSP
\cite{GMSBstau}; or any sparticle pair production process in the context of
models with R parity violation \cite{Rp}.
Standard backgrounds with 4 charged fermions and missing energy are small and
typically, after imposing appropriate cuts, the number of background events in
a given channel varies from 0 to 4, with a comparable number of observed events.
No excess has been observed. The given efficiencies vary roughly between 20\%
and 60\% depending, e.g. in ${/ \hskip - 3 truemm R_p}$ models, on the mass of
the unstable (intermediate) neutralino.
Of course we cannot apply these efficiencies to the processes listed in
(\ref{proc}). The kinematics of these processes is often very different from
the kinematics of the assumed underlying processes, and also various branching
ratios into different final states would have to be considered. (In particular,
in the case of a small mass difference $\Delta M$, the efficiencies for the
processes p.1-p.4 could be quite low.)
From Fig.~\ref{fig1} we can only deduce which range of values of $M_{1/2}$
could be excluded. For instance, assuming a minimal efficiency of 20\% for all
processes listed in (\ref{proc}), and assuming that a total of 4 expected
events is excluded, we would conclude that the total number of produced events
has to be smaller than 20, implying a lower limit on $M_{1/2}$ or
$M_{\widetilde B}$ of
\begin{eqnarray}
M_{1/2} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{190 GeV or } M_{\widetilde B} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{75 GeV} .
\end{eqnarray}
(In Fig.~\ref{fig1} we have indicated this example by a horizontal line.)
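The arithmetic behind this estimate is simply $N_{\rm produced}^{\rm max} = N_{\rm excluded}/\epsilon$; with the assumed numbers:

```python
# Assumed minimal efficiency and excluded number of detected events:
efficiency = 0.20
n_expected_excluded = 4
# Maximum number of produced 4-fermion events compatible with the data:
n_produced_max = n_expected_excluded / efficiency
assert abs(n_produced_max - 20.0) < 1e-9
```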
Events with 6 charged fermions in the final state can also appear in slepton or
chargino pair production (processes p.2 and p.4, the final states (i.2) and
(i.4), i = 5 \dots 9, in Table 1). However, the bino is always lighter than
these sparticles (with the possible exception of the stau, see below), and the
regime in the parameter space covered by $\widetilde B$ pair production (and 4
charged fermions in the final state) is always larger.
Next, we comment briefly on the case d) where $\widetilde B$ decays dominantly
into $\widetilde S \gamma$. Note that this branching ratio can only be
important for a small mass difference $\Delta M = M_{\widetilde B} -
M_{\widetilde S} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 5$~GeV. This decay could lead to final states with just
2 isolated photons and missing energy (via p.1 and p.3) or 2 leptons plus 2
isolated photons and missing energy (via p.2 and p.4). In the first case,
however, detection efficiencies are always very small due to the small mass
difference $\Delta M$ \cite{2gA,2gD,2gL,2gO}. Final states of the form $l^+ l^-
\gamma \gamma + {/ \hskip - 3 truemm E_T}$ have been searched for in
\cite{2eA,GMSB,chargg}, where gauge mediated supersymmetry breaking (i.e. a
gravitino LSP) was assumed. Again, however, the efficiencies corresponding to
the assumed underlying process do not apply to the present case due to the
small value of $\Delta M$. On the other hand, if the photons are soft enough to
be accepted as low energy neutral clusters in acoplanar lepton searches, the
MSSM lower bound on the selectron mass, $m_{\widetilde e_R} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 80$~GeV
\cite{2eA,2eD,2eL,2eO}, applies, leading to a lower limit on $M_{1/2}$
($M_{\widetilde B}$) of
\begin{eqnarray}
M_{1/2} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{175 GeV or } M_{\widetilde B} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \mbox{67 GeV}
\label{limsel}.
\end{eqnarray}
Clearly this case requires a dedicated analysis depending on the various
detectors.
\subsection{Final states with neutral displaced vertices} \label{3.4}
Up to now, we have considered the case of a microscopic lifetime of $\widetilde
B$. For a small Yukawa coupling $\lambda$ or a small $\Delta M$, however, the
length of flight of $\widetilde B$ can become large, leading to macroscopically
displaced vertices \cite{last}. Let us first remark that, in this case, the
decay channel c) $\widetilde B \to \widetilde S S$ is impossible: If the decay
length of $\widetilde B$ is large, either $\lambda$ is very small and thus
outside the window $10^{-3} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \lambda \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{-2}$, or $\Delta M$ is
small such that the quasi-singlet Higgs scalar $S$ can no longer be produced on
shell. Furthermore, the region of the parameter space where the invisible decay
channel a) $\widetilde B \to \widetilde S \nu \bar\nu$ dominates has already
been treated above, regardless of the $\widetilde B$ lifetime: In this case,
selectron pair production (p.2 in (\ref{proc})) looks like in the MSSM. Taking
into account the dependence of this branching ratio on $M_{1/2}$, the
corresponding efficiencies and numbers of background/observed events, one finds
that the region (\ref{liminv}) can be completely excluded. (As a matter of
fact, since the decay channel c) plays no role for displaced vertices, the bino
always decays invisibly in this region of the parameter space.) Therefore, the
remaining decay channels for a bino with $M_{\widetilde B} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 43$~GeV are:
b) $\widetilde B \to \widetilde S l^+ l^-$ and d) $\widetilde B \to \widetilde
S \gamma$. In the situation of a macroscopic length of flight, the cases of a
$\widetilde B$ decay inside or outside the detector have to be treated
separately.
If $\widetilde B$ decays inside the detector ('mesoscopic' decay length: 1~cm$
\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; l_{\widetilde B} \;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; $3~m, where $l_{\widetilde B}$ denotes the decay
length in the lab. system), the following topologies are possible:
$\bullet$ The processes p.2 (charged slepton pair production) and p.4 (chargino
pair production) give rise to 2 acoplanar leptons from the primary vertex plus
neutral displaced clusters (lepton pairs or photons) due to delayed $\widetilde
B$ decay. Searches for events with neutral clusters have not been published up
to now, due to vetoes against such clusters in order to remove the background
from radiative events \cite{2eA,2eD,2eL,2eO}. However, for small values of
$\Delta M$ (mainly when $\widetilde B$ decays dominantly into $\widetilde S
\gamma$) such neutral clusters could be soft enough not to be vetoed (cf. the
discussion above on photons in the final state). In this case, the limit on the
selectron mass in the MSSM leads to the lower limit (\ref{limsel}) on $M_{1/2}$
($M_{\widetilde B}$).
$\bullet$ The process p.1 (bino pair production) leads to events with just
neutral displaced vertices and no activity at the primary vertex. Since, in
this case, $\widetilde B$ is the lightest visible particle of the model, this
process would allow one to test a larger region in the parameter space than the
processes p.2 and p.4 discussed above. The expected event rates are as in
Fig.~\ref{fig1} for $M_{1/2} \;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 150$~GeV. If $\Delta M$ is not too small,
the decay product of $\widetilde B$ would be charged leptons with at least 33\%
taus. Clearly this topology is the most difficult one to detect (since triggers
around the primary vertex will not be active\footnote{One could use, however,
an initial state radiative photon to trigger the event.}), and no constraints
on such processes have been published. On the other hand, for small $\Delta M$,
the photonic decay channel d) dominates. Searches have been performed within
the MSSM for $\chi_2^0$ pair production followed by a delayed $\chi_2^0 \to
\chi_1^0 \gamma$ decay \cite{2gD}. However, the efficiency for small mass
differences is tiny and this channel cannot be used. In the region of the
parameter space where this decay channel dominates, the relevant topology is 2
acoplanar electrons arising from selectron pair production, the photons being
soft enough for being accepted as extra neutral clusters in this search (cf.
above).
If $\widetilde B$ decays outside the detector ('macroscopic' decay length:
$l_{\widetilde B} >$3~m), the situation in the (M+1)SSM with a singlino LSP is
clearly the same as in the corresponding MSSM with $\widetilde B$ being the
true LSP. In particular, the MSSM constraint on the selectron mass can be
applied directly with the additional benefit that $m_{\widetilde e_R} -
M_{\widetilde B}$ is known in terms of $m_{\widetilde e_R}$. Hence, the lower
limit on $M_{1/2}$ ($M_{\widetilde B}$) is given by (\ref{limsel}).
The present constraints for the various ranges of $M_{1/2}$ (or $M_{\widetilde
B}$) and the various $\widetilde B$ lifetimes can be summarized in
Fig.~\ref{fig2}. On the bottom horizontal line of Fig.~\ref{fig2}, we plot
$M_{1/2}$ in the range of interest, and on the top horizontal line we indicate
the corresponding values of $M_{\widetilde B}$ (with $\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;$5~GeV accuracy). On
the vertical axis, we plot the $\widetilde B$ decay length in the laboratory
system. In this plane we have indicated in grey those regions (for
$l_{\widetilde B} >$1~cm) which are excluded by negative results from
acoplanar electron searches. For $l_{\widetilde B} <$1~cm the total number of
events with 4 charged fermions and missing energy in the final state exceeds 20
in the striped region.
As mentioned above, in the (M+1)SSM with a singlino LSP, the NLSP could
possibly be a stau. Then, limits from MSSM stau searches can be applied.
Again, if $\lambda$ (or $m_{\widetilde\tau_1} - M_{\widetilde S}$) is
sufficiently small, the $\widetilde\tau_1$ lifetime can become large and give
rise to displaced vertices. Medium or long-lived charged scalars have been
searched for at LEP2 \cite{2eA,longL,GMSBstau}, and the corresponding
constraints can also be applied here. However, the lower limit on stau masses
does not correspond to a definite region in the $(l_{\widetilde B},M_{1/2})$
plane of Fig.~\ref{fig2} which is or is not excluded, since even for large values
of $M_{1/2}$, $m_{\widetilde\tau_1}$ can still be relatively small. (Of course,
$\widetilde B$ pair production can still be used, where now the $\widetilde B$
decays always through the cascade $\widetilde B \to \widetilde\tau_1 \tau \to
\widetilde S \tau \tau$. Hence, the $\widetilde B$ lifetime is always very
small. If, in addition, the stau lifetime is also small, processes p.1 and p.3
in (\ref{proc}) give rise to the same topology as in the case of a bino NLSP: 4
charged leptons (taus) plus missing energy. As discussed before, this case is
included in Fig.~\ref{fig1}.)
\section{Summary and outlook}
We have seen that the final state topologies of the (M+1)SSM with a singlino
LSP can differ considerably from the MSSM, due to the additional $\widetilde B
\to \widetilde S X$ cascade. Since these topologies can be the first sign of
sparticle production at LEP2, it is very important to identify and to look
carefully for them.
In the present paper we have identified these topologies, and studied the
parameter space of the model in order to check whether there are regions not
excluded by negative results from MSSM-like sparticle searches, though
accessible at LEP2 (i.e. with a reasonable expected number of events).
Indeed we found several such regions, and the associated topologies have been
listed in Table 1: First, we can have 4 charged fermions of various kinds and
missing energy in the final state. Such final states have been looked for in
the context of the MSSM, e.g. in stop and neutralino searches, or in models
with R parity violation. However, the corresponding efficiencies within the
present model are not yet known.
In Fig.~\ref{fig1} we have shown the total number of events which can be
expected within the present model as a function of $M_{1/2}$ (which can be
translated into $M_{\widetilde B}$ using (\ref{mbino})). Clearly, assuming a
small but non-vanishing efficiency for the topologies of the present model, the
region $M_{1/2} \lesssim 140$~GeV, corresponding to $> O(10^2)$ total events,
could already be excluded from searches for 4 charged fermions. Of particular
interest is, however, the region $M_{1/2} \gtrsim 150$~GeV, where only $\widetilde
B$ pair production contributes to this topology; this process allows one to test
the largest region in the parameter space. With the corresponding efficiencies
at hand, one could expect, e.g., a sensitivity to a total number $N > 20$ of
4-charged-fermion events plus missing energy, which would allow one to test the
region up to $M_{1/2} \lesssim 190$~GeV (or $M_{\widetilde B} \lesssim 75$~GeV), as
indicated by the horizontal line in Fig.~\ref{fig1}, or the striped region in
Fig.~\ref{fig2}. Note that final states with 6 charged fermions can only appear
after slepton or chargino pair production (processes p.2 and p.4 in
(\ref{proc})). The accessible parameter space is thus smaller than the one
covered by $\widetilde B$ pair production.
If the decay length of $\widetilde B$ is mesoscopic (1~cm $\lesssim l_{\widetilde
B} \lesssim$ 3~m) and $\widetilde B$ decays visibly, new topologies appear: either
two leptons at the primary vertex (from slepton or chargino pair production)
plus neutral displaced clusters due to the delayed $\widetilde B$ decay, or
just neutral displaced clusters from $\widetilde B$ pair production. The latter
process is even more promising, since it allows one to test a larger region of the
parameter space, although it is certainly the most difficult to trigger on.
Again, the total number of expected events, as a function of $M_{1/2}$ (or
$M_{\widetilde B}$), can be deduced from Fig.~\ref{fig1}. Now, however, the
estimation of the corresponding efficiencies is much more delicate. On the
other hand, the decay channel c) $\widetilde B \to \widetilde S S$ never
appears in this range of the decay length $l_{\widetilde B}$, and the number of
possible final states is reduced. (Now, the region $M_{1/2} \lesssim 125$~GeV can
already be excluded: Here the bino decays nearly always invisibly, and the
negative results from acoplanar leptons plus missing energy searches --
associated with the process p.2 in (\ref{proc}) -- can be applied. This is
indicated in Fig.~\ref{fig2} in the form of the grey region for 1~cm $\lesssim
l_{\widetilde B} \lesssim$ 3~m.)
Herewith we would like to encourage searches for these unconventional
topologies, in order to cover the entire parameter space of the (M+1)SSM with a
singlino LSP. If no excesses are observed at LEP2, we will have to turn to
larger c.m. energies at the Tevatron (Run II), the LHC or -- hopefully -- the
NLC. Again, the (M+1)SSM with a singlino LSP predicts unconventional signals
for these machines, like additional decay cascades (as compared to the MSSM) or
displaced vertices. The details of these topologies and the expected event
rates as a function of the parameters of the (M+1)SSM will have to be
considered in the near future.
\vspace{1cm}
\noindent{\Large\bf Acknowledgments}
\vspace{.5cm}
It is a pleasure to thank L. Duflot for helpful comments. Many useful
discussions in the framework of the French workshop ``GDR Supersym\'etrie'' are
also acknowledged.
\newpage
package org.apache.ignite.testsuites;

import org.apache.ignite.IgniteSystemProperties;
import org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsPageReplacementTest;
import org.junit.Test;

/**
 * Lightweight variant of the page replacement test for native direct IO
 * (wastes real IOPs on agents).
 */
public class IgnitePdsReplacementNativeIoTest extends IgnitePdsPageReplacementTest {
    /** {@inheritDoc} */
    @Override protected long getTestTimeout() {
        return 15 * 60 * 1000;
    }

    /** {@inheritDoc} */
    @Override protected int getPagesNum() {
        // 1k - passed, 20k - passed, 64k - failed.
        return 20 * 1024;
    }

    /** {@inheritDoc} */
    @Test
    @Override public void testPageReplacement() throws Exception {
        // Forces the synchronous file IO factory; note that the property is
        // set JVM-wide and is not restored after the test.
        System.setProperty(IgniteSystemProperties.IGNITE_USE_ASYNC_FILE_IO_FACTORY, "false");

        super.testPageReplacement();
    }
}
Adam Levin Guest Posts On The Jewish Influence In The Instructions
Jewcy has asked me to describe "the Jewish influence" on THE INSTRUCTIONS and I'm not finding it very easy to do. Even after all kinds of narrowing and qualifying, it seems impossible to get it down to just one person. I tried to pretend, for example, that by "the Jewish influence" on THE INSTRUCTIONS, what Jewcy really meant was, "the TWO BIGGEST Jewish-AMERICAN influenceS" on THE INSTRUCTIONS, and I still wound up with a four-way tie of influence between Sean Connery, Kate Moss, this guy Patrick who used to mow our lawn, and–obviously–Rutger Hauer.
Probably best to start out describing Rutger Hauer, since his influence is the one that gets me the most anxious. So. Although we look alike in the face, Rutger Hauer and I have different taste in clothing. I, for example, pretty much exclusively wear the standard Chicago fuck-you-I-don't-have-to-dress-up-you-New York/Hollywood-pussies-who-are-always-trying-to-get-me-to-dress-up-and-plus-you-can't-tell-if-I've-got-muscles-under-here-or-am-skinny-or-even-maybe-kinda-fat-or-deformed uniform of blue hoodie and blue jeans and brown sneakers with some occasional variance re. hoodie color, whereas Rutger Hauer's is a no-hoodies-outside-the-gym policy. Also, he played a replicant in BLADE RUNNER, the screen adaptation of Philip K. Dick's DO ANDROIDS DREAM OF ELECTRIC SHEEP, a book I've never read, but–who knows–one day might, though I have seen the movie, enjoyed the movie, and also played a replicant in the movie.
Next up is Patrick who used to mow our lawn, aka "Patrick who used to mow our lawn in Buffalo Grove, IL and then in Highland Park, IL, which is roughly 20 minutes by car from Buffalo Grove, IL": This guy, Patrick, moved all the way from Buffalo Grove to Highland Park so that he could continue to mow our lawn. He had a dog called Pony and a certain way about him, and Pony and the way both deeply affected me.
Sean Connery couldn't get a date to prom, I once read, whereas I could, and did, though, like Patrick–and unlike Connery who, because he couldn't get a date to prom didn't have a prom-date who smoked crack–my prom-date smoked crack. True story. Kind of. The part you think isn't, I mean. Unless the part you think isn't is the part about Patrick. Or you want to get really dubiously technical about the distinction between crack and freebase, but we're talking about Connery, who played Tony in Who's The Boss, which I saw a couple times, but only a couple since it wasn't as good as the book, which got me sad.
As for Kate Moss, I was totally fucken kidding. Kate Moss's influence on THE INSTRUCTIONS was minimal.
# Are real numbers also hyperreal? Are there hyperreal $\epsilon$ between $-a$ and $a$ for any positive real $a$?

The set of all hyperreal numbers is denoted by $R^*$. Every real number is a member of $R^*$, but $R^*$ has other elements too. The infinitesimals in $R^*$ are of three kinds: positive, negative and zero.

I think zero is not an extension in the set of real numbers.

Question 1: Can we call any real number a hyperreal number, too? For example, $2$ is a real number; can we say that $2$ is a hyperreal number?

Question 2: Does the set of hyperreal numbers $R^*$ include infinitesimals $\epsilon$ such that $-a<\epsilon<a$ for every positive real number $a$?

Addition: Is it true that if $\epsilon$ is a positive infinitesimal, then $\epsilon>0$, while $-\epsilon$, which is a negative infinitesimal, is less than zero, yet $0$, $\epsilon$ and $-\epsilon$ are all greater than any negative real number and less than any positive real number?

Comments:

- Q1: yes (second statement); Q2: yes (zero is included and is real) – kaine Jun 12 '13 at 19:22
- In the future, please try to make the title of your question more informative (I've done it for you now). E.g., *Why does $a<b$ imply $a+c<b+c$?* is much more useful for other users than *A question about inequality*. From *How can I ask a good question?*: make your title as descriptive as possible. In many cases one can actually phrase the title as the question, at least in such a way as to be comprehensible to an expert reader. – Lord_Farin Jun 12 '13 at 19:27

Answer 1:

Yes to both questions.

Note that in your definition (second statement) the reals are among ("members of" and so included in) the hyperreals:

> Every real number is a member of $R^*$, but $R^*$ has other elements too.

And since zero is a real number, it is also in the hyperreals. And $-a < 0 < a$ for all positive $a$.

Comments:

- I've extended my question. Please can you answer what I've added. Thanks. – Samama Fahim Jun 12 '13 at 20:43
- The hyperreals contain numbers that are greater than any real number and less than any real number; the inverses of these numbers are infinitesimal numbers, also hyperreals. Hence there are many infinitesimal numbers $\epsilon$ such that for positive real $a$, $-a\lt \epsilon \lt a$. – amWhy Jun 12 '13 at 21:07
- @amWhy: Nice way to handle a moving target posting ... :-) +1 – Amzoti Jun 13 '13 at 0:30

Answer 2:

Just like every natural number is also an integer, every integer is also rational, and every rational is also real, every real is also a hyperreal. So yes, $2$ is a hyperreal number.

The system of hyperreal numbers contains many infinitesimal numbers $\epsilon$ that satisfy $-a<\epsilon<a$ for all positive real $a$.
For instance, the hyperreal represented by $(1, 1/2, 1/3, 1/4, \cdots)$.
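The closing example, the hyperreal represented by $(1, 1/2, 1/3, 1/4, \cdots)$, can be checked concretely. In the ultrapower construction, a sequence represents a hyperreal below a positive real $a$ precisely when the index set $\{n : s_n < a\}$ belongs to the chosen nonprincipal ultrafilter, and every cofinite set does belong. The sketch below (a finite-horizon illustration, not a proof; the helper names are mine) counts the finitely many exceptional indices:

```python
def unit_infinitesimal(n):
    """n-th term of the sequence (1, 1/2, 1/3, ...) that represents an
    infinitesimal hyperreal in the ultrapower construction."""
    return 1.0 / n

def exceptional_indices(seq, a, horizon=100_000):
    """Count indices n in [1, horizon] where seq(n) fails to lie below a.
    If this count stays finite as the horizon grows, {n : seq(n) < a} is
    cofinite, which is the ultrapower criterion for seq < a."""
    return sum(1 for n in range(1, horizon + 1) if seq(n) >= a)

# Every term is positive, so the represented hyperreal eps satisfies eps > 0.
assert all(unit_infinitesimal(n) > 0 for n in range(1, 1000))

# Only the first floor(1/a) terms reach a; all later terms lie below it,
# so 0 < eps < a holds for every positive real a.
for a, expected in ((1.0, 1), (0.01, 100), (0.0001, 10_000)):
    assert exceptional_indices(unit_infinitesimal, a) == expected
```

The same check succeeds for any positive real threshold, which is exactly why the represented hyperreal is a positive infinitesimal.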
Growing plants, flowers and herbs indoors is, for a variety of reasons, time well spent. Gardening of any sort, whether outdoors if you happen to have a garden or backyard or indoors if you don't, offers a relaxing way to take time out of day-to-day life and try something different. The plants themselves offer beauty, a sense of achievement and, in the case of herbs, a variety of extra ingredients for the kitchen. They are also really good for our health; having plants around the home has been shown to be of real value for relaxation and keeping healthy. Plants of all types naturally moderate humidity levels and air temperatures, and keep air hydrated to a good level. They have also been found to keep dust and pollutant levels down, acting as a first defence against all sorts of unseen factors blowing around.
Finding stylish ways to keep plants indoors is not always easy, especially when you want your interiors to look smart and not like a tool shed! Thankfully, though, there are a good deal of options. Here at Dotmaison we love to pay attention to how best to house plants indoors. A product such as the Nesta White Speckle Planter by Umbra (above) allows plants to be hung or placed around the home using a clever base and frame. The attractive speckled ceramic bowl provides a visible, attractive way to showcase gardening adventures indoors.
Another sophisticated method of display is offered by the Greenhouse on Stand with Glass Dome by Sagaform. Capable of making the best environment for growing cuttings and herbs, the glass dome protects plants in their early stages. Once they begin to get a little stronger it can be removed to allow even more space. Created from stoneware and glass, the Greenhouse will look fantastic in any room of the home, and is brilliant for getting the best out of plant cultivation.
The Plant Holders by Ferm Living act as a masterclass in how to house plants indoors. Made from an anti-rust treated metal, the holders are sure to offer a long life in keeping plants secure and healthy. The holders work brilliantly with Ferm Living's hexagonal pots and are a stylish and sensible answer to keeping interior plants.
Eva Solo's self-watering pot makes growing plants indoors even easier. The design of the pot acts as an extra root, meaning that the plant can easily use any water that is in the pot's base. This sensible and ingenious system means correctly planted herbs and plants will always be able to get enough water, allowing them to grow healthy and strong.
Fresh Italian bread halves topped with garlic sauce, fresh chopped tomatoes, creamy mozzarella, imported Parmesan cheese, olive oil & oregano.
Oven baked breadsticks glazed with a seasoned garlic sauce, dusted with herbs, Parmesan cheese & served with our signature pizza dipping sauce.
Fresh dough covered with garlic white sauce, topped with mozzarella, and cut into 10 pieces. Served with a side of marinara.
Our Fresh homemade dough tied into knots, baked & covered with garlic, butter, parmesan, oregano, & a side of marinara.
Romaine Lettuce topped with vine ripe tomatoes, white onions, crisp green peppers, black olives, mozzarella, croutons & oregano.
Romaine lettuce topped with pepperoni, Genoa salami, premium turkey, vine ripe tomatoes, white onions, crispy green peppers, black olives, mozzarella, croutons, Parmesan & oregano.
Romaine lettuce topped with Grilled Chicken, vine ripe tomatoes, white onions, crispy green peppers, black olives, mozzarella, croutons, parmesan & oregano.
Romaine lettuce topped with premium turkey, vine ripe tomatoes, white onions, crispy green peppers, black olives, mozzarella, croutons, Parmesan & oregano.
Romaine lettuce topped with vine ripe tomatoes, white onions, crispy green peppers, black olives, green olives, pepperoncinis, feta cheese, croutons & oregano.
Genoa salami, Pepperoni, Tender Sliced Ham topped with Tomatoes, Lettuce, Mozzarella & a swipe of Mayo.
Tender Ham topped with Cheese, Lettuce, Tomato & a swipe of Mayo.
Tender Rib Eye Steak topped with Mozzarella, Lettuce, Onion, Tomato, & Mayo.
Grilled Chicken, green peppers, onions, mushrooms, mayo & mozzarella.
Select cuts of turkey topped with mozzarella, lettuce, ripe tomatoes & mayo.
Fresh mushrooms, onions, green peppers, black olives, ripe tomatoes, topped with mozzarella & a side of mayo.
Grilled Chicken Smothered in BBQ Sauce, Topped with Onions & Mozzarella.
Generous Cuts of White Turkey, Premium Ham Topped with Lettuce, Tomato, Mayo & Cheese.
Marinated grilled chicken smothered in mozzarella topped with onions, ripe tomatoes, lettuce & a swipe of mayo.
Marinated Grilled Chicken, smothered in Pesto Sauce, topped with Mozzarella & Mayo.
Marinated Grilled Chicken smothered in Buffalo Sauce, topped with Onions, Jalapenos, Mozzarella & Mayo.
Crispy Chicken Tenders topped with Mayo, Lettuce, Tomato & Mozzarella.
Tender white chicken smothered with BBQ sauce, crispy bacon, onions, pineapple & mozzarella cheese.
Mozzarella cheese topped with ricotta, provolone, Romano cheese & cheddar.
Our homemade pesto sauce topped with grilled chicken, tomato, mozzarella & Parmesan cheese.
Marinated chicken breast, ripe tomatoes, onions, fresh jalapeno & banana peppers.
Calzone: Fresh homemade dough folded over and generously stuffed with mozzarella and ricotta cheese. Glazed with seasoned garlic sauce, sprinkled with cheese, oregano & served with our signature marinara sauce.
Stromboli: Fresh dough folded over & generously stuffed with our homemade tomato sauce and mozzarella cheese. Glazed with seasoned garlic sauce, sprinkled with cheese, oregano & served with our signature marinara.
Pepperoni, sausage, ham, bacon & meatball.
Spinach & Tomatoes, add Ham & Salami.
Select 3 toppings from the pizza toppings.
Rockford, IL Star Furniture Co Factory Collapse, Mar 1890
NARROW ESCAPE OF WORKMEN.
ROCKFORD, Ill., March 25.---Special Telegram---About 8 o'clock this morning the new factory of the Star Furniture Company collapsed, ruining it utterly. Twenty men at work on the structure barely escaped with their lives. The building was four stories high and the roof had just been put on. It was a frame building and was to have been veneered over with brick. There was a very high wind this morning and the windows not yet being in the building went down. The twenty men at work inside were warned by a sudden bulging of the walls and barely escaped before the structure went down. The loss is from $2,000 to $5,000, according to the value of the material in the wreck. C. E. Carison was the contractor, but the company furnished its own material. Some think the catastrophe was due to faulty construction.
The Sunday Inter Ocean, Chicago, IL 26 Mar 1890
Kuša (Jap. 倶舎宗) is one of the six Buddhist schools, the so-called Nara schools (Kuša, Sanron, Džódžicu, Hossó, Kegon and Ricu), which came to Japan from China in the 7th and 8th centuries. The teaching of the Kuša school, brought to Japan in 658 by the monks Čicu and Džitacu, was, however, regarded practically from its beginnings as part of the Hossó school. The Hossó school has survived to the present day, whereas the last mentions of the Kuša school date from the 9th century.
The Kuša school derived from the Indian Sarvástiváda, whose teaching was based on the treatise Abhidharmakóša, written by the scholar Vasubandhu. The treatise summarized the doctrines concerning dharmas and presented the idea that everything exists at present: the past, the present and the future.
Q: Automatically call library function on application startup

Say I have some library library.aar and it has a function called initialize. I also have an app, and the Gradle file has been modified to include library.aar as a dependency.
Rather than calling library.initialize(); from within the application code, I want library.initialize() to be called automatically when the application starts up.
Is this possible? Perhaps there is a way for my library to listen for application startup and then call initialize on itself?
The determinant of the coefficient matrix of the homogeneous system is $(-\delta^2 - a)$, which is negative. We therefore know immediately that the steady-state equilibrium is a saddle point.

By theorem 24.2, the solutions to the system of differential equations in (25.7) and (25.8) are sums of exponential terms in $e^{r_1 t}$ and $e^{r_2 t}$ plus the steady-state values, where $r_1$ and $r_2$ are the eigenvalues or roots of the coefficient matrix in equation (25.9), $C_1$ and $C_2$ are arbitrary constants of integration, and $\bar\lambda$ and $\bar K$ are the steady-state values of the system, which serve as particular solutions in finding the complete solutions.

If $A$ denotes the coefficient matrix in equation (25.9), its characteristic roots (eigenvalues) are given by the equation

$$r_1, r_2 = \frac{\operatorname{tr}(A)}{2} \pm \sqrt{\left(\frac{\operatorname{tr}(A)}{2}\right)^{2} - \det(A)}$$

where $\operatorname{tr}(A)$ denotes the trace of $A$ (the sum of the diagonal elements). The roots of equation (25.9) then are

$$r_1, r_2 = \pm\sqrt{\delta^2 + a}$$

The steady-state values of $\lambda$ and $K$ are found by setting $\dot\lambda = 0$ and $\dot K = 0$. Doing this, simplifying, and solving for $\lambda$ and $K$ gives the steady-state values $\bar\lambda$ and $\bar K$.

Because the steady state is a saddle point, it can be reached only along the saddle path and only if the exogenously specified time horizon, $T$, is large enough to permit it to be reached.

This leaves only the values of the arbitrary constants of integration to be determined. As usual, they are determined using the boundary conditions $K(0) = K_0$ and $\lambda(T) = 0$. First, requiring the solution for $K(t)$ to satisfy its initial condition gives, after simplifying, one equation relating $C_1$ and $C_2$. Next, requiring the solution for $\lambda(t)$ to satisfy its terminal condition gives

$$0 = C_1 e^{r_1 T} + C_2 e^{r_2 T} + \bar\lambda$$

from which we get an equation for $C_2$ in terms of $C_1$. Substituting this into the expression for $C_1$ and simplifying gives the solution for $C_1$; substituting that solution into the equation for $C_2$ and simplifying then gives the explicit solution for $C_2$. This completes the solution.

The optimal path of investment is obtained using equation (25.6). If we denote the solution for $\lambda(t)$ in equation (25.10) as $\lambda^*(t)$, the corresponding solution for investment is denoted $I^*(t)$.

This solution gives the path of investment that maximizes total profits over the planning horizon. Figure 25.2 shows two possible solution paths: path $I_1(t)$ for investment when $K_0 < \bar K$, and path $I_2(t)$ for investment when $K_0 > \bar K$. When $K_0 < \bar K$, the solution is a path like $I_1(t)$ that starts high and declines monotonically to 0 at time $T$. When $K_0 > \bar K$, the solution is a path of disinvestment like $I_2(t)$ that stays negative from zero to $T$.

### An Economic Interpretation of $\lambda$ and the Hamiltonian

We introduced $\lambda(t)$ as a sequence or path of Lagrange multipliers. It turns out that there is a natural economic interpretation of this co-state variable. Intuitively, $\lambda(t)$ can be interpreted as the marginal (imputed) value or shadow price of the state variable $x(t)$. This interpretation follows informally from the Lagrange multiplier analogy. But it also follows more formally from a result that is proved in the appendix to the chapter. There it is shown that $\lambda(0)$ is the amount by which $J^*$ (the maximum value function) would increase if $x(0)$ (the initial value of the state variable) were to increase by a small amount. Therefore $\lambda(0)$ is the value of a marginal increase in the state variable at time $t = 0$ and can be interpreted as the most we would be willing to pay (the shadow price) to acquire a bit more of it at time $t = 0$. By extension, $\lambda(t)$ can be interpreted as the shadow price or imputed value of the state variable at any time $t$.

In the investment problem just examined, $\lambda(t)$ gives the marginal (imputed) value or shadow price of the firm's capital stock at time $t$. Armed with this interpretation, the first-order condition (25.5) makes economic sense: it says that at each moment of time, the firm should carry out the amount of investment that satisfies the equality

$$2I(t) = \lambda(t)$$

The left-hand side is the marginal cost of investment; the right-hand side is the marginal (imputed) value of capital and, as such, gives the marginal benefit of investment. Thus the first-order condition of the maximum principle leads to a very simple investment rule: invest up to the point at which marginal cost equals marginal benefit.

The Hamiltonian function too can be given an economic interpretation. In general, $H$ measures the instantaneous total economic contribution made by the control variable toward the integral objective function. In the context of the investment problem, $H$ is the sum of the total profits earned at a point in time and the accrual of capital at that point in time valued at its shadow price. Therefore $H$ is the instantaneous total contribution made by the control variable to the integral of profits, $J$. It makes sense, then, to choose the control variable so as to maximize $H$ at each point in time. This, of course, is what the maximum principle requires.
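The saddle-point argument above rests on a general property of $2\times 2$ systems: the characteristic equation is $r^2 - \operatorname{tr}(A)\,r + \det(A) = 0$, so a negative determinant forces two real roots of opposite sign. A brief runnable sketch (the numerical values of $\delta$ and $a$ are illustrative, and the matrix entries are my reconstruction of a system with trace zero and determinant $-(\delta^2+a)$, not necessarily the text's exact coefficients):

```python
import math

def eigenvalues_2x2(a11, a12, a21, a22):
    """Real eigenvalues of [[a11, a12], [a21, a22]], solving the
    characteristic equation r**2 - tr*r + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    root = math.sqrt(disc)
    return (tr - root) / 2.0, (tr + root) / 2.0

# A matrix with trace delta + (-delta) = 0 and determinant -(delta**2 + a):
delta, a = 0.05, 0.3
r1, r2 = eigenvalues_2x2(delta, 2.0 * a, 0.5, -delta)

# det < 0 gives one negative and one positive root (a saddle point),
# here r1, r2 = -+sqrt(delta**2 + a).
assert r1 < 0 < r2
assert math.isclose(r1, -math.sqrt(delta**2 + a))
assert math.isclose(r2, math.sqrt(delta**2 + a))
```

The sign split is what makes the boundary-value logic of the text work: the stable (negative) root carries the system along the saddle path, while the coefficient on the unstable root is pinned down by the terminal condition.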
A Brief Introduction on the Essence of Shurangama Sutra
Rescuing Ānanda: The Conceptual Framework and Instructions for Practice in the Śūraṅgama Sūtra
By David Rounds

1. General Characteristics of the Śūraṅgama
2. The Conceptual Framework of the Discourse
3. Instructions in Practice
4. Guidelines for Advancement
5. Conclusion
Footnotes
About the Author: David Rounds
1. General Characteristics of the Śūraṅgama
Of the world's religious masterpieces, the Buddha's discourse known as the Śūraṅgama Sūtra[1] is
perhaps the least familiar to Western readers. Unlike such discourses as the Lotus Sutra, the Heart
Sutra, and the Diamond Sutra, which survive in their original Sanskrit versions, and which have been
studied in the West for well over a century, the Śūraṅgama is no longer extant in an Indic original,
and the text is preserved only in an eighth-century translation into Chinese. The text as we have it,
consisting of some 63,000 characters in ten rolls, is tersely expressed, densely argued, and subtly
allusive, resorting often to rare characters and to transliterations of Sanskrit terms, such that even
devout and erudite Chinese readers are not infrequently puzzled as to the meaning. It is perhaps not
surprising, then, that there have been few attempts at translation of this text into the European
languages.[2]

Buddhists in the Chinese tradition, however, have long considered the Śūraṅgama to be of central
importance. The Sutra is valued, first, for a unified sequence of teachings that elucidate a series of
fundamental religious questions. The text first develops a philosophy of mind and of perception, and
then bases on that foundation a series of instructions in spiritual practice, in particular in the deep
mental absorption known as samādhi. To this end the sutra offers in great detail a precise prescription
for the systematic withdrawal of the sense-faculties from engagement with the physical world. Such a
disengagement, when carried out correctly, can result in lasting illumination -- a teaching that is
central to Buddhism and that appears in some form in the esoteric traditions of all the other great
religions.[3]
The philosophical density of the Śūraṅgama discourse is lightened by the manner of its general
presentation. The first two thirds of the Sutra consist of a dramatic dialogue between the Buddha and
his young cousin and attendant Ānanda. Other interlocutors intervene at critical moments, each
speaking in a distinct voice as they give testimony to their experience with spiritual practice, or as
they pose a question that Ānanda does not yet have the spiritual depth to ask. But most of the
philosophical argument, and the spiritual guidance that follows it, are conveyed to the reader through
the drama of Ānanda's personal story. During the long hours of his conversation with the Buddha, we
see the young monk seesaw between impertinence and remorse, between astonishment and gratitude,
between bewilderment and enlightened understanding. His plucky earnestness adds to the discourse
the unexpected element of charm. Despite the focus on Ānanda, however, his story is explicitly
presented merely as an example; it is itself a parable. From the beginning, the Buddha makes it clear
that his instructions are meant not only for Ānanda but for beings of the future -- that is, for us.
Accordingly, Ānanda's gradual and successful struggle to understand and to awaken brings to
dramatic life the struggle that can still be expected by anyone who sets foot on a spiritual path.
To the Western reader, the Śūraṅgama's format suggests a similarity to the dialogues of Plato. But
Plato's manner of uncovering truth through Socrates' sly cross-examinations of his hapless
interlocutors is in fact very different from the pattern we encounter in the Śūraṅgama. The Buddha
and Ānanda engage for much of the Sutra in formal debate, according to the rules of what is now
called Buddhist logic. In the monastic universities of classical India, Buddhist monks were trained in
logical debate in order to sharpen their minds and also to win over adherents from other schools and
sects. To some extent it was an intellectual sport. Buddhist debate was transmitted to Tibet, where
young monks are still trained in it. In China it was championed by the great seventh-century
translator Xuan Zang, who brought back several texts on logic from India.[4]
Briefly, Buddhist logic (Skt. hetu-vidyā; Ch. yin ming, literally, "the clarification of causes"),
originally followed five steps:
1. A proposition that one is undertaking to prove;
2. The reason that the proposition is claimed to be true;
3. One or more instances of the proposition at work in ordinary experience;
4. Application of the instances to the proposition;
5. Conclusion, usually by restatement of the proposition, now demonstrated.[5]
These five were later reduced to three, in effect leaving out the last two of the five:
1. Thesis
2. Reason
3. Examples
a) positive instance
b) negative instance [6]
The Buddha uses elements of both these procedures in the Śūraṅgama.
It was permissible, when using the five-step sequence, to omit the first step, and the Buddha does so
often in the Sutra. He merely hints where he is headed by raising an issue in terms of a question. The
result is that the last two of the five steps -- the application of the examples and the conclusion --
frequently come to Ānanda and the others in the Buddha's audience as a surprise, which may be
accompanied by astonishment, confusion, or delight in finally understanding. Here is a summary of
one of many instances of the clarification of causes in the Śūraṅgama:
1. Proposition: it is the mind, not the eyes, that sees (in the text this step is implicit rather than stated);
2. Reason: our visual awareness is active even if nothing is being seen;
3. Instance found in ordinary life: in the Buddha's words, "If you asked a blind man on the street,
'Do you see anything?' he would no doubt answer, 'All I see is darkness.'"
4. Application of the instance: reflect upon what that might mean. Although the blind man sees only
darkness, his visual awareness is intact.
5. Conclusion: the eyes themselves simply reveal visible objects; it is the mind that sees, not the
eyes.
This sequence comes early in the discussion, and the Buddha does not at this point explain its
implications. Later he will point out that since seeing is actually a function of the mind, not the eyes,
our visual awareness is fundamentally independent of the presence of visible objects. Therefore it
must be possible to withdraw our visual awareness from the grip of visible objects, and the same must
be possible for all the other senses as well. Awareness, thus freed, may become purified, and may
eventually be transmuted into illumination.
One aspect of Buddhist logic in particular lends to the Śūraṅgama much of its distinctive style. This
is the reliance on positive and negative instances as proofs of a thesis. While the positive instances,
like the encounter with the blind man in the street, serve to prove an assertion by showing how it is at
work in daily life, the negative instances are given as proof that any assertion contrary to one being
defended would result in absurdities. Most important here is that the rules of logic required that the
instances be drawn not from doctrine or theory, which one's opponent in debate might find
unpersuasive, but rather from the experiences of ordinary life -- experiences which an opponent could
not plausibly discount. It is to this requirement that we owe many of the glimpses that the Sutra gives
us into the daily routines of the monastic community and of the citizens of the nearby city of Śrāvastī.
[7] We read of the monks seated with their alms-bowls, busy rolling up their food into balls to be
eaten with the fingers, in the Indian manner. We hear of householders digging wells for new
dwellings and local healers holding up bowls to the full moon to collect dew that they will mix into
their herbal potions. We meet a monk who has spent his life repairing potholes in the public roads and
a king who despairs because he is growing old. However abstract or subtle the discourse may
frequently seem, then, it is deeply colored with a sense of time and place, with the sights and sounds
and people of Northern India in its early classical era. This underlying Indian tint keeps seeping up
into the Chinese surface of the translated text -- reminding the reader that the nearby river, often
invoked in the flow of argument, is not the Yangzi but the Ganges.
Considered as a whole, the entire edifice of the Buddhas discourse is a masterwork in the
architecture of logical argumentation. Each level of theory established by the Buddha elegantly
supports the next level.
The entire discourse is girded together by cross-references, parallelisms, reiterations of previous
assertions, and anticipatory summaries of what is to come. The present essay is a preliminary attempt
to read the plans of this exemplary scriptural edifice.
Many generations of Chinese readers, Buddhist and otherwise, have admired and esteemed one
further aspect of the Śūraṅgama Sutra, namely, the virtuosic literary elegance of the translation itself.
Except for a few verses, the entire Chinese text unrolls in a sequence of four-character phrases, which
are in effect a metered prose. This four-character pattern imbues the discourse with a vigorous and
stately majesty. For all these reasons, then, in China the Śūraṅgama Sutra has for many centuries
been the subject of written commentaries by illustrious monastic scholars,[8] a topic for public
lectures and spoken exegeses, and the object of devout private study, recitation, and memorization.
It should be noted, before proceeding further, that the authenticity of the Śūraṅgama has been
challenged by some modern scholars, who have held that, since no Indic original is extant, the text is
no translation at all, but rather an original composition in the Chinese.[9] The timing of the translation
has been questioned, and textual anomalies that might suggest the interpolation of purely Chinese
cultural elements have been identified. There are strong reasons to believe, however, that the original
text can only be Indian. It is true, for example, that during Buddhism's earliest centuries in China,
spurious or corrupted Buddhist texts were circulated; but by the early eighth century, when
the Śūraṅgama translation appeared,[10] Chinese monastic scholars had become sufficiently vigilant
to denounce inauthentic texts.[11] Further, while some details in the text do seem to arise from a
Chinese context, these could merely represent choices made by the translators to substitute Chinese
equivalents or analogues for unfamiliar Indian elements that were present in the original.[12] There
are, besides, at least an equal number of details that point to an Indian substratum beneath the Chinese
surface.[13]
The most persuasive internal trace of a South Asian original, however, is the presence of two
indisputably Indian elements that play leading roles in the text. One of these, already mentioned here,
is the clarification of causes in Indian Buddhist logic. The other is the Śūraṅgama Mantra, which
the Chinese text leaves untranslated, and which lies at the heart of the Sutra's instructions for spiritual
practice. Last, and most important of all in this dispute as to the Sutra's authenticity, is the fact that
the Śūraṅgama has been widely accepted in China as canonical for well over a thousand years. Such
acceptance reflects the view that a religious text's authenticity must be measured by its effectiveness
as a guide to spiritual and moral practice. From this orthopraxic point of view, the Śūraṅgama may
be correctly deemed to be authoritative simply because generations of advanced practitioners have
revered it, have followed its instructions, and have explained it to others as a reliable prescription for
moral purification and spiritual advancement, even as far as enlightenment.[14] To
the Śūraṅgama's many admirers, then, the history of the text is, in the end, of no great importance,
and the dispute surrounding its origin is irrelevant.[15]
2. The Conceptual Framework of the Discourse
As the Sutra opens, Ānanda, alone on the road, falls under a spell that is recited by a courtesan, and
he is on the brink of breaking his vow of celibacy. The Buddha senses from a distance his young
cousin's distress, and, having made a transfiguration of a Buddha appear above his head, he recites
through this transfiguration a mantra[16] -- the Śūraṅgama Mantra -- which defeats the courtesan's
spell. The Buddha then sends a senior member of his assembly, the Bodhisattva Mañjuśrī, who in the
Mahayana tradition embodies wisdom, to rescue Ānanda and to bring both monk and courtesan
before the Buddha. Amidst the assembly of monks and a throng of lay adherents, Ānanda now finds
himself face to face with his teacher. Deeply mortified, he requests instruction so that he can avoid
further error. This is the request for Dharma with which most of the Buddha's discourses begin.
In his answer, the Buddha makes clear that Ānanda's error was not entirely that he permitted sexual
desire almost to overwhelm his monastic resolve. Equally in error was his laxity in his practice of
mental concentration, a laxity that left him vulnerable to enticement by the courtesan's spell. What
Ānanda lacked was samādhi, a concentration firm enough to resist disturbance and intrusion. The
Buddha proposes, then, to answer Ānanda's request for instruction by teaching him how to perfect his
samādhi. The Sutra says,
Then the World-Honored One, before the great assembly, extended his golden-hued arm, [comforted
Ānanda by] passing his hand over the crown of Ānanda's head, and said to Ānanda and to all
gathered there, "There is a samādhi called 'The Great and Royal Śūraṅgama that is Proclaimed from
Above the Buddha's Head and is the Perfection of the Myriad Practices.' It is a wondrous and
magnificent Path, the unique portal through which the Buddhas in all ten directions have passed in
order to transcend the conditioned world."[17]
The Buddha then launches a dialogue which continues on throughout most of the Sutra. He begins by
asking Ānanda to consider where his mind is located. Ānanda offers the evident answer that his mind
is to be found in his body. The Buddha, however, with his superior command of logic, quickly
disposes of this widely held supposition, and of six more possibilities that Ānanda offers. The young
monk is left with the bewildering conclusion that his mind is neither inside his body, nor outside it,
nor somewhere between, nor anywhere else. The Buddha then compounds his cousin's confusion by
stating that there are fundamentally two kinds of mind: first, the ordinary quotidian mind of which
we are aware and which is entangled, lifetime after lifetime, in the snare of illusory perceptions and
random thoughts; and, second, the everlasting true mind, which is our real nature, and which is the
state of the Buddha.
Ānanda, what are the two fundamentals? The first is the mind that is the basis of death and rebirth
and that has continued for the entirety of time, which has no beginning. This mind is dependent on
perceived objects, and it is this that you and all beings make use of and that each of you considers to
be your own nature.
The second fundamental is enlightenment, which has no beginning; it is the original and pure
essence of nirvana. It is the original understanding, the real nature of consciousness. All conditioned
phenomena arise from it, and yet it is among those phenomena that beings lose track of it. They have
lost track of this fundamental understanding, though it is active in them all day long, and because
they remain unaware of it, they make the mistake of entering the various destinies.
Ānanda will at first have none of this. Here he speaks for all who pride themselves, as he does, on
their discursive intelligence. When the Buddha, seeing Ānanda's skepticism, asks him what he takes
to be his mind, Ānanda answers,
"The [Buddha] has just now been asking me about my mind's location, and my mind is what I have
been using to determine where it might be. My mind is that which has the capability of making such
determinations."
The Buddha exclaimed, "Ānanda! That is not your mind!"
Ānanda protests:
If this activity of comprehension is not the mind, then I have no mind, and I am the same as a clod
of earth or a piece of wood.
The Buddha reassures his cousin that there is indeed another kind of awareness. It is the true mind --
that original and pure essence of Nirvana just mentioned. It is given a number of other designations
in this discourse, among them the Buddha-nature, the Matrix of the Thus-Come One (that is, of the
Buddha), and the Suchness of Reality. This ultimate reality is also the enlightened mind inherent in
all beings. However, it has remained outside of Ānanda's calculations. He has simply not been aware
that he is endowed with it, because he is immersed in the disordered activities of his mundane
thoughts and is distracted by his unrelenting interactions with the sense-data that he takes to be the
world. Absorbed in his own drama, he cannot enter samādhi, much less proceed through samādhi to
awakening. In this he stands in for us.
Since Ānanda is someone who proudly identifies himself with his ability to think, he is at a
disadvantage in his search to awaken to his true mind. One cannot think one's way into
enlightenment, because the awakened mind is beyond thought. Put otherwise, what is most worth
understanding is beyond conception; what most needs describing cannot be described. Nevertheless,
the Buddha in the Śūraṅgama discourse is willing to take exception to this generally recognized
conundrum. Since words are what Ānanda trusts and understands, and since thinking is what Ānanda
relies on, the Buddha is willing to make use of words and thought to wake his cousin up. The Buddha
concludes this first section of the Sutra with this promise:
I now will raise for all of you a great banner of Dharma, so that all beings throughout all ten
directions can gain access to what is wondrous, subtle, and hidden: the pure mind that understands.
The Buddha proceeds by bringing to our attention the simple fact that we are aware. Taking visual
awareness as the paradigm, he examines awareness through a series of illustrative vignettes, several
of them involving other speakers. He demonstrates that, while things move in the field of our visual
awareness, our awareness itself remains still. The body ages, but our awareness endures. What lies
within our field of awareness may be in daylight or in darkness, may be obscured or clear, but our
awareness itself is unchanged, no matter what conditions appear within its scope. Our field of visual
awareness encompasses visible objects but is not itself an object; it lacks both shape and extension.
Still, it cannot be said to be clearly distinct from the things which are in it, at the same time that it
cannot be said to be identical to them. Our visual awareness is active even amidst total darkness, as
the Buddha shows with the already cited example of the blind man who is nevertheless aware of the
darkness around him.
All this applies not only to visual awareness but also to the five other kinds of awareness: our
awareness of sounds, of smells, of tastes, of tangible objects, and of the thoughts in our minds. A
plenitude of sense-objects exists within each of these fields of awareness, but the awareness itself is
independent of them. As the Buddha says later, "The essential capacity to hear is never absent, no
matter whether there is sound or silence."
Gradually during this investigation, the Buddha gives hints that our capacity for awareness is more
central than we may have supposed. Having shown Ānanda that his visual awareness is independent
of the phenomena of which it is aware, the Buddha concludes that this awareness must be at least an
aspect of that true mind that Ānanda has lost track of. "What is not affected by conditions must be
what you fundamentally are. If that is not you, what is?" For one thing, as he gently reassures the
aging King Prasenajit, who is frightened at the prospect of his mortality, since our visual awareness is
independent of conditions, it must belong to what survives the traumas of death and rebirth.
Ānanda, surrendering to the force of the Buddha's logic, grants, "My awareness is indeed my
wondrous true nature." But, of course, it is not that simple. It is the essential nature of awareness that
is the enlightened mind, not our ordinary awareness that we experience day after day. This ordinary
awareness has been distorted; it is diseased. The Buddha compares us to a man who sees circles of
color around a lamp because he suffers from a disease of the eyes. Yet it is possible for the man's
eye-disease to be corrected, so that he can see clearly, and it is possible for our distorted awareness to
be purified, so that we see the world as it really is.
It is probably worth noting at this point that the radical skepticism expressed in this ancient document
is, in modern times, no longer radical at all. The speculations of David Hume and Immanuel Kant --
not to mention the invention of the microscope and telescope and every other device that sees and
hears better than we do -- have long made it clear that the world we see with the naked eye and hear
with the unaided ear is not the world as it really is. We do not hear the high and low frequencies and
do not see beyond the visual spectrum. Our visual and aural fields are distorted by the limitations to
our visual and aural awareness. These limits are not merely those of scope, but of value and of
perspective. As the Buddha will explain to Ānanda later on in the Śūraṅgama discourse, we also
allow ourselves to be fooled in that we interpret everything we perceive as good, bad, or indifferent,
assigning it a value measured by what it will mean to us. In this way, we distort the world by
responding to it with desire or revulsion. Further, we divide the contents of our experience into what
is us and what is not us -- what is self and what is other. Everything is colored by the perspective of
self, just as the lamp is distorted by the eye-disease of the viewer in the Buddha's analogy.
What, then, is this enlightened mind that is the purified essence of our awareness, and what might it
mean to purify it? To prepare Ānanda for an answer to these questions, the Buddha turns next to one
of the doctrines that is most distinctive to Buddhism -- the doctrine of emptiness. All the things that
we perceive in this world are impermanent, and, furthermore, all things are merely concatenations of
ingredients and circumstances. They exist, but they have no reality that is their own. They are
constructs; ontologically they are empty. Our very selves are merely mental constructs, which -- as
modern developmental psychology confirms -- we put together as children in order to navigate among
our experiences. In the Śūraṅgama Sutra, the Buddha approaches this doctrine through our
sense-apparatus. In a series of arguments that follow the pattern of the clarification of causes, he
briefly examines in turn our six senses (eyes, ears, nose, tongue, skin, and mind), then the six kinds of
sense-objects (visual objects, sounds, smells, tastes, tactile objects, and thoughts -- that is, objects of
mind); also the six kinds of sense-consciousnesses that arise when the sense-faculties encounter
sense-objects (that is, seeing, hearing, smelling, tasting, touching, and awareness of the mental
contents); and finally the elemental qualities that characterize the world of sense-objects (fire, or heat;
earth, or solidity; water, or liquidity; wind, or movement; space; living beings' ordinary
sense-awareness; and beings' consciousness). All these, the Buddha says, are illusory and unreal.
They do not exist as independent natures with their own being. Thus the universe of objects that we
perceive is, in this sense, empty.
And yet, at the same time that mind and world are empty, in that they are unreal and have no
independent being, one cannot say that they do not exist. The universe is empty, but this emptiness is
not the same thing as a void. It is not only that the universe teems with all manner of sentient beings
and insentient things, contingent and impermanent though they may be. It is also that, at a more
profound level, this empty universe, and all beings in it, are suffused with an ultimate reality. In
the Śūraṅgama Sutra and elsewhere, the Buddha calls this ultimate reality the Matrix of the
Thus-Come One -- in Sanskrit, the Tathāgata-garbha, literally, "the womb of the Buddha." It is from
this Matrix that the world and the mind come forth.[18]
Enlightenment, then, brings not only an awakening of the fundamental Buddha-nature within one's
own mind -- that original true mind which we have forgotten. It brings also an apprehension of
ultimate reality, because these two -- the true mind and the reality of the universe, otherwise named
the Buddha-nature and the Matrix of the Thus-Come One -- are one and the same. Thus in
the Śūraṅgama, numerous terms of praise apply equally to the true mind and to the reality of the
universe: wondrous (miao), original (yuan), fundamental (ben), true (zhen), genuine (shi),
enlightened (jie), illuminative understanding (ming), luminous (guang), unchanging (chang), pure
(qing jing), the suchness of reality (zhen ru); and -- most important to the Śūraṅgama discourse --
the enlightened nature of our awareness (jian xing). The Buddha tells Ānanda:
In the Matrix of the Thus-Come One, the nature of your visual awareness is your enlightened
understanding, and the essence of enlightenment is your awareness that understands. Fundamentally
pure, it extends throughout the Dharma Realm. The extent to which beings are aware of its real
nature depends on the capacities of their minds. Just as the awareness of one sense-faculty, the eye,
extends throughout the Dharma Realm, so also do the wondrous, resplendent powers of hearing,
smelling, tasting, tactile awareness, and mental awareness extend throughout the Dharma Realm.
They fill up the entirety of empty space. [19]
How, then, is it that we have lost sight of our origins in this Matrix -- which is to say, how is it that we
have forgotten our true minds? Another of the Buddha's senior disciples, Pūrṇa-maitrāyaṇī-putra,
now rises to ask this question.
World-Honored One, if in fact the skandhas,[20] as well as the twelve sites, which consist of the
sense-faculties and their objects, and also the eighteen capacities [for perception] and the rest, are
all the Matrix of the Thus-Come One, which is itself fundamentally pure, then how is it that the
mountains, the rivers, the earth, and everything else in the world of perceived objects -- all
conditioned phenomena suddenly come into being?
The Buddha's answer focuses on one essential event: the separation of the perceiver from the
perceived, of the self from the other. He describes it as adding an understanding to an already existing
understanding.
Although nothing need be added to enlightenment, once an understanding is added, that
understanding must understand something. Once the category of "something understood" is
mistakenly established, the category of "that which understands" is mistakenly established as well.
As a result, the original, enlightened awareness, which had been a single unity, divides into six
sense-faculties and their six objects. What is now a perceived universe is further divided into the desirable,
the undesirable, and the neutral, and as a result there is disgust and desire. Karma is now in play, and
with it emerge the various levels of being and the cycle of death and rebirth.
This account, summarized here, of the separation of ultimate reality into subject and object is all the
Buddha is willing to offer as the original impetus for the coming into being of mind and world. He
gives us no Creation myth and imagines no Creator. If the primordial separation into subject and
object is our fall, it is a psychological one, a rift at the foundation of consciousness, not a moral fall.
There is no original sin; there is only the original loss.
While the Buddha is willing to explain how this loss comes about, he does not offer a similar
explanation as to why it comes about. Pūrṇa-maitrāyaṇī-putra requests that the Buddha shed light on
this mystery:
I venture to ask the Tathāgata why all beings are deluded.
The Buddha replies with a parable. In nearby Śrāvastī, a neighborhood idiot named Yajñadatta thinks
that the head he sees in the mirror one morning is not his own. In a panic he concludes that the head
he does see in the mirror belongs to a ghost. He runs madly out into the street. The Buddha asks
Pūrṇa:
What do you think? What caused this man to run madly about for no good reason?
Pūrṇa replied, "He was clearly insane. That and nothing else was the cause."
The Buddha said, "Wondrous enlightenment is perfect understanding, a wondrous understanding
that is fundamentally perfect. How then could it be the basis of the delusions we have been
discussing? This, then, is the cause of delusion: it arises from delusion. Merely realize that your
delusion has no ultimate basis, and the basis of your delusion will disappear.... The same may be
said even more emphatically of Yajñadatta. What he experienced that day in the city had no basis and
so was fundamentally unreal. Was there any reason for him to have become afraid that he had lost
his head and to have rushed madly about?... The same is true of delusion. How could it be based in
anything?"
There can be no reason for our slipping from enlightenment into delusion. First, it contradicts logic to
suppose that perfect understanding could be the cause of delusion; and, second, delusion would not
be delusion if it had a rational basis. And that is all the Buddha will give us here. The Sutra does not
offer an elaborate account of cosmological origin. The Buddhist view is that the universe is created
not by a deity but by karma. Unlike the Western religions, Buddhism does not see the universe in
terms of a finite and linear narrative guided by the purposes of an ultimate purposer. The impetus of
the Buddha's ministry was simply the plain fact that we are caught up in an unsatisfactory existence,
and he offers nothing more nor less than a way out. Thus in the Śūraṅgama, the Buddha concludes
his parable of Yajñadatta with a prescription:
All that is needed is for you not to follow after the distinctions you make concerning the world,
beings, and retribution in accord with karma.
3. Instructions in Practice
"Not to follow": this prescription to declare our independence from sense-objects leads us back to the
occasion for the Sutra itself. Ānanda's lack of samādhi left him vulnerable to disaster because he
permitted his hearing to take in the courtesan's spell. He shouldn't have been listening to the spell.
He should have been focusing his hearing inward. The Śūraṅgama discourse has now reached its
second stage. The Buddha has erected a conceptual framework, and he is ready to clothe the
framework with instructions for practice. Ānanda does not immediately perceive this, and he now
speaks up to challenge the Buddha's argument that no cause can be ascribed to the primordial
division between self and other. Briefly the Buddha revisits the parable of the foolish Yajñadatta, and
then he gives Ānanda a sound scolding:
Despite your many eons of accumulated learning, you were not able to escape your difficulty with
the young Matanga woman [the courtesan]. Why did you need me to recite the Śūraṅgama Mantra
for you? In the young Matanga woman the fires of lust have been extinguished, and instantly she has
become a Sage who must return only once.[21]
In other words, the courtesan has surpassed Ānanda in her level of enlightenment. Listening to the
Śūraṅgama discourse has lifted her quickly to the third level of the Sage. Ānanda has long been stuck
at the first stage.[22] This is sufficiently mortifying to sap the young monk's appetite for
argumentation. For the remainder of the Sutra, he largely confines his remarks to requests for further
teaching. The young intellectual is now ready to be taught how to practice the Śūraṅgama Samādhi.
He tearfully confesses:
The Thus-Come One has also admonished me for pursuing erudition at the expense of spiritual
practice. Now, therefore, I am like a wanderer who unexpectedly meets a celestial king, who bestows
upon him a magnificent house. The house is his, yet in order to go in, he will still need to find a door.
I only hope that the Thus-Come One will not withhold his compassion from all of us in this assembly
who are covered in darkness, and that he will show us the road that leads from our original resolve
to certain attainment of the Thus-Come One's complete nirvana.
The Buddha responds by returning to the subject of sense-perception. He points out that some of the
six senses have a wider range than others; for example, we cannot see behind us, but our hearing is
omni-directional. He then reassures Ānanda that he needs to concentrate in his practice on only one
of the senses, since when one of them is purified, all of them will be purified. The work of samādhi,
he repeats, is "not to follow." That is, it is to learn not to allow one's attention to be distracted by
sense-objects, not to allow one's mind to evaluate these objects as good, bad, or neutral, and not to
allow one's emotions to respond with desire or revulsion. He says:
Extricate one sense-faculty by detaching it from its sense-objects, and redirect it inward so that it
may return to what is original and true. Then it will radiate with the light of your original
understanding. This brilliant light will shine forth and detach the other five sense-faculties until they
too are completely free of sense-objects.
The Buddha then asks his son Rāhula, who has joined his monastic retinue, to strike the temple bell,
in order to demonstrate once again that hearing continues whether there is sound or silence. Finally,
he asks the enlightened sages in the assembly to tell Ānanda how they became enlightened.
When you first made the commitment to realize enlightenment, which one of the eighteen constituent
elements of perception did you waken in order to break through all obstructions? By what method did
you enter samādhi?
Thus begins what is perhaps the most celebrated passage in the Sutra: the testimony of the
twenty-five sages, of whom the last is the Bodhisattva Avalokiteśvara. Each sage identifies as his avenue to
enlightenment one of the six sense-faculties, six sense-objects, six sense-consciousnesses, or seven
elemental qualities (thus a total of twenty-five). Each explains how, by contemplating their awareness
as ultimately independent of sense-faculties and sense-objects, they broke their attachment to the
world of the senses. The sage Gavāṃpati, for example, testifies in part:
My contemplation was that the knowledge of flavors does not come from the tongue-faculty and
does not come from any object of taste.... Within, I let go of my mind and body, and without, I took my
leave of this world. I left the three realms of existence far behind, like a bird escaping from its cage.
I departed from all impurity and was done with all sense-objects, and my Dharma-eye became clear.
So it was that I became a Sage. The Thus-Come One himself verified that I needed no further
instruction. The Buddha has asked us how we broke through all obstructions. I believe that turning
our awareness of flavor around to reflect upon itself is the best method.
As for the seven sages who focused on one of the seven elemental qualities, their contemplations
were of the identity of body, mind, and world, and therefore the ultimate unreality of the division into
subject and object. The youth Moonlight testifies:
I saw too that the water inside my body was no different from the water outside of my body. Even as
far away as the fragrant seas of the Royal-Floating-Banner Buddha-land, the fundamental nature of
water is one and the same. My body vanished. Then the fundamental nature of the water in my
body and of all the waters of the fragrant seas in worlds throughout the ten directions merged into
true emptiness.
Finally, the Bodhisattva Avalokiteśvara describes his practice for purification of the ear-faculty. He says:
First I redirected my hearing inward, as if to go against a stream, and then external sounds
disappeared. With its direction reversed and with sounds stilled, both sounds and silence cease to
arise. So it was that, as I gradually progressed, what I heard and my awareness of what I heard came
to an end. Even when that state of mind in which everything had come to an end disappeared, I did
not rest. My awareness and the objects of my awareness were emptied, and when that process of
emptying my awareness was wholly complete, then even that emptying and what had been emptied
vanished. Coming into being and ceasing to be themselves ceased to be. Then the ultimate stillness
was revealed. All of a sudden I transcended the worlds of ordinary beings, and I also transcended the
worlds of beings who have transcended the ordinary worlds. Everything in the ten directions was
fully illuminated.
Here we have the esoteric teaching that is central to the Śūraṅgama: freeing the sense-faculties from
the world of sense-objects leads first to everything disappearing, and then to illumination. As the
Bodhisattva Avalokiteśvara puts it:
Once perceived objects had disappeared from my mind as I turned the light of my understanding
inward, my body and mind and the entire Dharma-Realm [that is, the universe] were as bright and
translucent and flawless as crystal.[23]
When his hearing, as he says, became all-pervasive, he was able to hear the cries of beings
everywhere: thus one Chinese translation of his name is Guan Shi Yin (The One Who Hears
the Voices of the World). He is Buddhism's exemplar of compassion, and when Buddhism traveled to
East Asia, he became an object of worship and petition in his feminine form. He tells
the Śūraṅgama assembly that he is able to appear before various classes of beings in their own forms,
to grant their wishes, and to deliver them safely from fearsome situations. The celebrated twenty-fifth
chapter of the Lotus Sutra (On the Universal Doorway) similarly celebrates the Bodhisattva
Avalokiteśvara's salvific powers; the difference is that in
the Śūraṅgama, he explains how those powers were gained.
Because I did not listen to sounds, I was able to contemplate the listener within. Now I can hear the
cries of suffering beings throughout the ten directions, and I can bring about their liberation.
Once sounds were so purified that they ceased being perceived objects, the sense-faculty and its
objects were completely interfused, so that there was nothing that perceived and nothing that was
perceived. Therefore, I can cause beings burdened by anger and hatred to be free of their
enmity.[24]
When Avalokiteśvara has finished speaking, the Buddha asks the Bodhisattva Mañjuśrī to recommend
one of the methods described by the twenty-five sages, so that Ānanda and beings of the future can
know which method will lead them most easily to success. Mañjuśrī responds with a 250-line
verse in summary of the Śūraṅgama teaching and the sages' twenty-five methods. He concludes by
recommending the hearing-practice of Avalokiteśvara:
I now respectfully say this to the World-Honored One,
Who is the Buddha that appeared here in this Sahā World
In order to transmit the essence of true teaching
Meant for this place: it is that purity is found through hearing.
Those who will wish to gain a mastery of samādhi
Will surely find that hearing is the way to enter.
And a little further on:
Great Assembly! Ānanda! Halt the shadow-play
Of your distorted hearing! Turn your hearing round and listen
To your true nature. Then you'll realize the unsurpassed
Enlightenment. This is the genuine way
To break through all obstructions.
The instruction of Ānanda is not yet quite complete, however. One crucial point remains to be made.
How is it possible, after all, to turn the hearing around and listen to one's true nature? How can one
stake out an independence from the data of the senses? The Buddha is now ready to offer a frank
answer to this question.
No matter how much you may practice in order to transcend the stress of experiencing sense-objects, you will never transcend that stress until you have freed yourself from sexual desire.
In the Chinese the word is yin, literally, lust: lust not only for sexual experience, but for sense-experience in general. What keeps us tied to sense-objects -- what prevents us from being able to
liberate our awareness and gain illumination -- is simply our desire for sense-experiences, of which
sexual experience is the most compelling of all. Thus the Buddha returns to his initial teaching, the
Four Noble Truths, which he transmitted to the five ascetics in the Deer Park soon after his own
enlightenment. These truths tell us that this life of ours is unsatisfactory and that the root of our
dissatisfaction is our habit of craving sensation and experience. Only by extinguishing this craving
can one gain freedom.
You must purge yourself of the most subtle promptings of lust, both physical and mental. Then
there will be hope that you may realize the enlightenment of the Buddhas.
To this the Buddha adds three other prerequisites necessary for success in one's spiritual practice.
First, one must free oneself of violence, both in one's actions and in one's thoughts (included in this
instruction is an injunction against eating meat). Second, there must be no coveting and no theft; and,
lastly, one must never make prideful false claims to spiritual accomplishment.
Finally, the Buddha offers an expedient for hastening the practitioner's entry into the Śūraṅgama
Samādhi. This expedient is recitation of the Śūraṅgama Mantra, that same counter-spell by which, at
the outset of the Sutra, the Buddha rescued Ānanda from what he delicately refers to as your
difficulty with the Mātaṅga woman. The Buddha himself calls the mantra The Great White
Canopy, because it takes those who recite it under its protection. At Ānanda's request, the Buddha
now proclaims the mantra for a second time, and this time its 544 lines are given in the text. The
Buddha giv
Edited by An Eternal Now 07 Jun `09, 1:22AM
4. Guidelines for Advancement
With the teaching concerning the Śūraṅgama Mantra, the Buddha has now completed his instructions
for entering the Śūraṅgama Samādhi. The goal of the narrative has been gained: Ānanda has been
granted what he needs to avoid further error. But the Sutra itself is not complete. Confident that he
can at last make spiritual progress, Ānanda asks for an explanation of the various levels of
enlightenment through which he hopes to soon begin his advance. In answer, the Buddha first backs
up to briefly describe twelve levels of unenlightened beings, which he categorizes according to the
manner of their being born. Next, in a passage of daunting terseness, he summarizes fifty-five stages
of the Bodhisattva's progress toward Buddhahood, a topic treated expansively in the Avatamsaka
Sutra. Next, having heard about the highest levels of being, Ānanda, ever curious, asks about the
lowest levels, that is, about the hells, of which, he says, he and his fellow monks are ignorant. The
Buddha complies by launching into a hair-raising description of the tortures of the nether regions,
which beings experience as the karma incurred by their misdeeds in their lives as people. The Buddha
continues by summarizing the gradual ascent through other levels of rebirth, as ghosts, animals,
people, adepts, gods, and beings addicted to violence (asuras).
Finally, in a lengthy and important section that can stand alone,[25] the Buddha warns against fifty
negative states of mind in which serious practitioners can become trapped if they fall prey to greed,
pride, or confusion. This last teaching serves as a warning that the Śūraṅgama practice of entering
samādhi through redirecting the senses inward carries with it some degree of peril. It is essential to
recognize that, to avoid not only error but serious mental endangerment, any intense spiritual practice
must be pursued in the context of a proper spiritual community, surrounded by others devoted to pure
conduct, under the guidance of a wise and skillful teacher.
It is perhaps appropriate for me to conclude on a personal note by lamenting the obscurity into which
this magnificent scripture has fallen, while, at the same time, many lesser works have been translated
by Western scholars and widely circulated. The charge that the Śūraṅgama Sutra is inauthentic and
apocryphal has not been without its effect. The discourse is rarely if ever taught, for example, in
university courses on Buddhism, and it is probable that the great majority of Western Buddhists have
neither read it nor even heard of it. To speak frankly, it is unclear what legitimate grounds there can
be for scholars who are not themselves Buddhist masters, or even Buddhist practitioners, to
confidently present themselves as qualified to rule on the proper place that a scripture should hold in
the Buddhist canon. One can only be puzzled by the easy but unstated assumption that uncertainties
concerning textual history should be sufficient to diminish the immense religious and literary stature
that a scripture has maintained for 1,300 years in its own tradition. There is here a whiff of the
Western cultural arrogance that Edward Said famously characterized as orientalism.[26] I can only
venture to hope that the new translation being prepared by the team of which I am a member will serve
some small role in bringing the Śūraṅgama more into the light.
[1] The Sutra (T. 945) is generally known in Chinese as Da fo ding shou leng yan jing;
the complete title is Da fo ding ru lai mi yin xiu zheng liao yi zhu pu sa wan heng shou leng
yan jing. It is not to be confused with
the Śūraṅgamasamādhi-sūtra (T. 642, in two rolls), which has been translated by Étienne Lamotte.
[2] The only complete translation in English is that by the Buddhist Text Translation Society, with
commentary by the Ven. Master Hsüan Hua, in eight volumes, revised edition 2000. An earlier,
incomplete translation by Charles Luk, with excerpts from the commentary of Han Shan Deqing, was
published by Rider in 1966. There is also a Tibetan translation of the Chinese text. A group of
colleagues, including the author of the present paper, is engaged in a revised translation for the
Buddhist Text Translation Society, to be published in 2008 in tandem with a translation into Spanish.
[3] As admirably demonstrated by Frithjof Schuon in his De l'unité transcendante des
religions, translated as The Transcendent Unity of Religions, Wheaton IL: Theosophical Publishing
House, 1984, reissued 1993 under the Quest Books imprint.
[4] For example, The Nyāyamukha of Dignāga, translated by Xuanzang (T. 1628).
(The Chinese title means Treatise on the Illumination of Causes as a Method for Arriving at Correct
Principles.)
[5] See the Digital Dictionary of Buddhism, under the relevant entry.
[7] The capital city of the ancient kingdom of Kosala, on the Gangetic plain in northeastern India, in
what is now Uttar Pradesh.
[8] Including the late Ming master Han Shan Deqing, and more recently the Ven. Masters Xu Yun,
Yuan Ying, and Hsüan Hua. According to a study by Ronald B. Epstein, 127 commentaries exist in
Chinese, the earliest dating from 767 CE, the most recent being that delivered by the Ven. Hsüan Hua in
1968. (Ronald B. Epstein, The Surangama-Sutra (T. 945): A Reappraisal of its Authenticity,
unpublished ms., pp. 93ff). I am grateful to Professor Epstein for steering me to this and other
references and sources cited in the present article.
[9] For example, Peter Gregory, in Tsung-Mi and the Sinification of Buddhism, Univ. of Hawaii Press,
2002, p. 57.
[10] 705 CE.
[11] Beginning with Dao An in the fourth century. (Epstein, op. cit., p. 6ff.).
[12] Chinese concepts in the translation which seem to act as analogous representatives of
Indian concepts in the original may include the references to parhelic phenomena and other
atmospheric or celestial events considered to be malign astrological influences. In other cases, what
is Chinese is Indian as well. For example, the Buddha cites an owl that lays its eggs on the ground;
this could easily be the Grass Owl, which is found in both India and China.
[13] There is mention, for example, of seven categories of flavors, rather than the Chinese five. In
another place, the text says that a verse the Buddha has spoken was in two Sanskrit verse-forms
(geya and gāthā), but in the Chinese the verse is reduced to one form (all four sets of verses in the
Sutra are in five-character lines).
[14] So Han Shan De Qing, in his Nien Pu (autobiography), in the entry for his thirty-first year (1576-77): After my great awakening, having no one to confirm and testify to it, I opened the Śūraṅgama
Sutra to verify my experience. I had not listened previously to lectures on this sutra and so did not
know its meaning. Now by using the power of the direct reasoning of the non-discriminating mind
and without even the slightest use of its consciousness since there was no room for thinking, I gained
after eight months a complete comprehension of its profound meaning without having a single doubt
left. Tr. Charles Luk in his Practical Buddhism, Rider, 1971, p. 83. Also Ven. Hsüan Hua: Where
the Śūraṅgama Sutra exists, then the Proper Dharma exists. If the Śūraṅgama Sutra ceases to exist,
then the Proper Dharma will also vanish. If the Śūraṅgama Sutra is inauthentic, then I vow to fall
into the Hell of Pulling Tongues to undergo uninterrupted suffering. (On the Authenticity of
the Śūraṅgama Sutra, http://online.sfsu.edu/~rone/Buddhism/Shurangama/Shurangama%20Sutra
%20Is%20Definitely%20Authentic.htm
[15] Dogen, in his Nihon Shiso Taikei, says of the Śūraṅgama, Even if it were a forgery, if the
Buddhas and Bodhisattvas have taken it up, it is a true Buddha-sutra, a true Patriarch Sutra. You
should understand and realize that sentient beings, if they transcend and realize correct awakening,
are the Buddhas and Patriarchs. Quoted in Epstein, op. cit., p. 82.
[16] The mantra is given in full later on in the text in its transcription from the Sanskrit into Chinese
monosyllables. In this form the mantra is still recited every morning in certain monasteries in the
Chinese tradition.
[17] This and the excerpts that follow are from the Buddhist Text Translation Society's translation in
progress (forthcoming in 2008). The translations are still tentative and are not for reproduction.
[18] The Thus-Come One (Sanskrit Tathāgata, Chinese ru lai) is one of
ten epithets of the Buddha. The translation of the Chinese zang (storehouse; canon or collection;
viscera) given here is matrix (womb in Latin), in its proper English sense of a place or
enveloping element within which something originates, takes form, or develops (Merriam-Webster).
[19] This is as close as Buddhism comes to the idea of divine immanence as it is described in the
Abrahamic religions. But the Buddha never ascribes to the Tathāgata-garbha any of the qualities of a
personal God. The Tathāgata-garbha has no personality, no history, no intention, no self. It is, above
all, empty.
[20] That is, the five branches (Skt. skandhas) of the conditioned mind and world: the physical world,
perception, cognition, mental formations, and consciousness.
[21] Skt. Anāgāmin.
[22] The śrota-āpanna (Skt.), one who has entered the stream; this is the first level of the
enlightened Sage. An adept at this level can expect only seven more rebirths.
[23] Numerous passages in the Sutra make it clear that it is not merely a metaphor to say that spiritual
awakening is experienced in part as illumination. Frequently the assembly is treated to displays of
light, often from the Buddha himself. He emits light from his hands, from his chest, from his face,
and from a transfiguration of a Buddha which he makes appear above his head.
[24] Presumably by showing that anger and hatred require an object that is seen as real, whereas all
objects are ontologically empty, and the dualities of self and other, mind and world, us and them
are ultimately false.
[25] This is the section on the demonic states associated with the five skandhas.
[26] In his classic of post-colonial studies Orientalism (1978), available in a Penguin Books reprint,
New York, 2003. Said's argument is directed particularly at the West's distorted views of Arabs and
Islam, but it is no less applicable to India (and China) and Buddhism.
David Rounds, editor of Religion East and West, holds a B.A. from Harvard College and an M.A. in
Buddhist studies and translation from Dharma Realm Buddhist University. A disciple of Master
Hsüan Hua for over thirty years, he has authored five books and has collaborated in the translation of
several Mahayana texts, including the Shurangama Sutra.
\section{Introduction}
In recent years, our knowledge of the composition of the interstellar medium (ISM) and the process of star formation has entered a new era thanks to the powerful capabilities (in terms of sensitivity, spatial resolution, and new wavelength windows) of recent ground-based and space observatories (Spitzer, Herschel, ALMA, NOEMA, and the new EMIR receiver on the IRAM 30m). These new instruments have allowed for the detection of new molecules (in the gas-phase or in the ices), namely, either simple ones such as HCl$^+$ \citep{2012ApJ...751L..37D} or complex ones such as (NH$_2$)$_2$CO \citep{2019A&A...628A..10B}. In addition to revealing the richness of the ISM chemistry, they have revealed the non-uniform distribution of their abundances from object to object or even within a certain type of object (with similar physical conditions). One of the key questions astrochemists want to answer is how these molecules in the gas-phase or in the ices form and whether we can reproduce their variability in abundance.\\
The gas-phase composition of cold cores has been observed for a long time using ground-based telescopes in the mm and submm range. Spectral surveys have also revealed a diversity of relative abundances \citep{1992IAUS..150..171O,1997ApJ...486..862P,2000ApJ...542..870D,2016ApJS..225...25G}. Current chemical models do a good job at reproducing most of the observed abundances \citep{2006A&A...451..551W}, although there are still some molecules that continue to present challenges, even though their chemistry has been considered thoroughly \citep[e.g., CH$_3$CCH and S$_2$H, ][]{2016MolAs...3....1H,2017ApJ...851L..49F}. In recent years, however, the detection of complex organic molecules (such as HCOOCH$_3$, CH$_3$OCH$_3$, and CH$_3$CHO) in the cold environment of dark clouds has revived interest in cold core chemistry \citep{2012A&A...541L..12B,2014ApJ...795L...2V}. Until this discovery, these molecules were believed to be formed only in the vicinity of hot protostars, where the high dust temperature would effectively promote their formation on the grains \citep{2009ARA&A..47..427H}. Now both their formation at low temperature and the possibility to observe them in the cold gas-phase has again raised questions around their origin
\citep{2013ApJ...769...34V,2015MNRAS.449L..16B,2015MNRAS.447.4004R,2016ApJ...830L...6J,2017ApJ...842...33V} and established their appearance at much earlier phases during the formation of stars. While there is a debate on the chemical path (gas-phase or surface) forming these molecules, it remains clear that some of them or their precursors need to desorb from the grains at low temperatures. \\
Even for simple molecules commonly observed in the cold gas of dense cores, the physical conditions and the time scales produce a depletion of molecules onto the grains \citep[see, e.g., ][]{2007A&A...467.1103G}. Astrochemical models simulating such environments need to invoke non-thermal desorption mechanisms to bring back into the gas phase molecules that would otherwise have disappeared from it. Several mechanisms have been proposed, and some of them have been studied in the laboratory, but their inclusion in astrochemical models is far from perfect and their efficiencies are far from being well constrained.
We cite a number of them here, but this is not an exhaustive list. The effect of cosmic rays on the ice chemistry has recently gained a lot of attention. For instance, \citet{2018PCCP...20.5359S} and \citet{2018ApJ...861...20S} have proposed methods that include a cosmic-ray-driven radiation surface chemistry into astrochemical models \citep[see also][]{2019ApJ...876..140S,2020ApJ...888...52S}. Once a cosmic-ray particle collides with a grain, it produces radiolysis of species on the grains. Some products of this radiolysis can be supra-thermal and thus very reactive, increasing the production of complex organic molecules. Such chemistry would certainly benefit from more experimental measurements. \citet{2013A&A...554A.111K} have developed a model in which the bulk of the ices (separated from the surface) is made of cavities in which the diffusion of the species is more efficient. The sizes and locations of these cavities are changed by the cosmic-ray impacts. \citet{2015ApJ...805...59I} has theoretically revisited the effect of impulsive spot heating induced by cosmic-ray collisions. These last two papers discuss a non-thermal desorption mechanism called "explosive desorption." In this process, when energy is brought to an ice in which a fair amount of radicals is frozen, this energy allows the sudden diffusion of these radicals. A chain of exothermic reactions between these radicals then takes place, provoking an explosive desorption of the ices. The idea was studied experimentally by \citet{1982A&A...109L..12D} through the photolysis of ices. This process has been included in some models \citep[see, e.g.,][]{2004A&A...415..203S,2013MNRAS.430..264R}, but more experimental data on this process would be needed, as stated in those studies.
Even the very exothermic formation of H$_2$ on the grains has been proposed to be responsible for non-thermal desorption localized around the location of the H$_2$ formation \citep{1993MNRAS.260...37D,1994MNRAS.267..949W,2007MNRAS.382..733R}. Such a process does not, however, appear to be efficient in the laboratory \citep{2016A&A...585A..24M,2017MolAs...6...22W}. The effect of fast grain rotation on the species desorption rates has been investigated by \citet{2019ApJ...885..125H}. To explain some methanol gas-phase observations in a cold core, \citet{2020ApJ...895..101H} proposed grain-grain collisions as an efficient non-thermal desorption mechanism.\\
In this paper, we test the respective efficiencies of the different mechanisms commonly included in the models (photo-desorption, chemical desorption, and cosmic-ray-induced whole-grain heating) and test the addition of sputtering of grain ice mantles via collisions with cosmic rays in the electronic stopping power regime, leading to a localized thermal spike desorption (whose efficiency was recently measured).
We describe in Section 2 the chemical and physical model used to study the efficiency of each of these processes. The model results are given in Section 3 and then compared to methanol observations in Section 4. In Section 5, we discuss some of the model assumptions and we present our conclusions in the last section.
\section{Chemical model}\label{chem_model}
\subsection{General presentation}
To study the non-thermal desorption processes, we used the Nautilus gas-grain code. Nautilus is described in detail in \citet{2016MNRAS.459.3756R}, so we only briefly describe it here. It is a numerical model that simulates the chemistry under interstellar conditions. For simplicity, only one grain size of 0.1~$\mu$m is considered.
The chemical processes are described as chemical reactions and the model computes the efficiency of each process based on a number of parameters. For bimolecular gas-phase reactions, for instance, the efficiency of each reaction is computed with a modified Arrhenius temperature-dependent law with three parameters. In some cases, these parameters are determined by experiments or theoretical calculations. In many other cases, the rate coefficients are guessed based on similarities with other systems. The considered gas-phase processes are bimolecular reactions (involving neutral-neutral and ion-neutral reactions), direct cosmic-ray ionization or dissociation, ionization or dissociation by UV photons, ionization or dissociation produced by photons induced by cosmic-ray interactions with the medium \citep{1983ApJ...267..603P}, and electronic recombinations. Details on how the efficiency of each gas-phase process is computed are given in \citet{2012ApJS..199...21W}. \\
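For concreteness, the modified Arrhenius law mentioned above can be sketched in the standard KIDA-type parametrization with three tabulated parameters $\alpha$, $\beta$, and $\gamma$ (the notation here is illustrative, not a transcription from \citet{2012ApJS..199...21W}):
\begin{equation}
k(T) = \alpha \left( \frac{T}{300\,\mathrm{K}} \right)^{\beta} \exp\left( -\frac{\gamma}{T} \right) \quad \mathrm{cm^{3}\,s^{-1}},
\end{equation}
where $\alpha$ sets the magnitude of the rate coefficient at 300~K, $\beta$ its temperature dependence, and $\gamma$ an activation energy expressed in K.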
The chemical species in the gas-phase can be physisorbed onto dust surfaces when colliding with the grains, based on the equations of \citet{1992ApJS...82..167H}. We used a sticking probability of 1, except for H and H$_2$, for which we use temperature-dependent values \citep[based on laboratory experiments from][]{2010JChPh.133j4507M,2012A&A...538A.128C}. For species on the surfaces, we make a distinction between the species in the most external layers (2 monolayers here), which we call surface species, and the species below these layers, called mantle species. Species are first adsorbed onto the surface and become part of the mantle as the ices build up. Similarly, when desorption occurs, only surface species can desorb, and they are gradually replaced by mantle species.
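As a reminder of the underlying formalism, the accretion rate of a gas-phase species $i$ onto the grains follows the usual collisional form of \citet{1992ApJS...82..167H} (generic notation, not a transcription of their equations):
\begin{equation}
k_{\mathrm{acc},i} = S_i \, \pi r_{\mathrm{dust}}^2 \, \langle v_i \rangle \, n_{\mathrm{d}},
\qquad
\langle v_i \rangle = \sqrt{\frac{8 k_{\mathrm{B}} T_{\mathrm{gas}}}{\pi m_i}},
\end{equation}
with $S_i$ the sticking probability, $n_{\mathrm{d}}$ the grain number density, and $m_i$ the mass of the species.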
Once they are on the grains, these species can diffuse through thermal hopping or tunneling through diffusion barriers. For this particular model, we assumed that all species can diffuse through tunneling with an efficiency that depends on the mass of the species. The equations for diffusion through tunneling are based on Eq. 10 from \citet{1992ApJS...82..167H}. The thickness of the diffusion barrier is taken to be 2\AA \citep{Asgeirsson2017}. Both surface and mantle species are capable of diffusion but the process in the mantle is much less efficient. We assume a ratio of 0.4 for the surface species and 0.8 for the mantle species between the diffusion and binding energies. In addition, following \citet{2015PCCP...1711455G}, we assume that water drives the diffusion in the mantle, so that all mantle species with binding energy smaller than that of water are set to the water value. \\
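Schematically, the two diffusion channels discussed above can be written as (following the form of \citet{1992ApJS...82..167H}, with $a$ the 2~\AA\ barrier thickness adopted here):
\begin{align}
R_{\mathrm{hop}} &= \nu_0 \, \exp\left( -\frac{E_{\mathrm{b}}}{k_{\mathrm{B}} T_{\mathrm{d}}} \right), \\
R_{\mathrm{tun}} &= \nu_0 \, \exp\left( -\frac{2a}{\hbar} \sqrt{2 m E_{\mathrm{b}}} \right),
\end{align}
where $\nu_0$ is the characteristic vibration frequency of the adsorbed species, $E_{\mathrm{b}}$ its diffusion barrier, $T_{\mathrm{d}}$ the dust temperature, and $m$ the mass of the species; the mass dependence of the tunneling rate is what makes this channel most efficient for the lightest species.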
The surface reactions are assumed to proceed through the Langmuir-Hinshelwood mechanism. The probability of reaction is assumed to be one for exothermic and barrierless reactions. For reactions with barriers, the probability of reaction is computed taking into account the competition between diffusion and reaction, as explained by \citet{2016MNRAS.459.3756R}. This probability also depends on the efficiency with which the system tunnels through the chemical barrier and on the reduced mass of the system. The width of the chemical barrier is taken to be 1~\AA.
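The reaction-diffusion competition can be sketched as follows (illustrative notation in the spirit of \citet{2016MNRAS.459.3756R}, with $a_{\mathrm{r}}$ the 1~\AA\ chemical barrier width and $\mu$ the reduced mass of the reactants):
\begin{equation}
\kappa_{ij} = \frac{\nu_0 \, p_{ij}}{\nu_0 \, p_{ij} + R_{\mathrm{diff},i} + R_{\mathrm{diff},j}},
\qquad
p_{ij} = \max\left[ \exp\left( -\frac{E_{\mathrm{A}}}{k_{\mathrm{B}} T_{\mathrm{d}}} \right),\;
\exp\left( -\frac{2 a_{\mathrm{r}}}{\hbar} \sqrt{2 \mu E_{\mathrm{A}}} \right) \right],
\end{equation}
where $E_{\mathrm{A}}$ is the activation barrier of the reaction, $p_{ij}$ the probability of crossing it thermally or by tunneling, and $R_{\mathrm{diff},i}$ and $R_{\mathrm{diff},j}$ the rates at which the two reactants diffuse out of the binding site.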
In addition to surface reactions between two adsorbed species, the model includes the ionization or dissociation of the surface and mantle species by UV photons and photons induced by cosmic-ray particles interactions with the gas. The values of the rate coefficient for these processes are the same as in the gas-phase because of a lack of laboratory or theoretical data. \citet{2018MNRAS.478.2753K} proposed applying a scaling factor to the photodissociation rates to use them for surface species. By comparing his model results with ice mantle observations, he determined that the best agreement was obtained for a scaling factor of 0.3. Such a result would, however, be strongly model-dependent. \\
Thermal desorption is included in the model as well as a number of non-thermal desorption mechanisms that will be discussed below. In essence, the model described above is the same as in \citet{2016MNRAS.459.3756R}, except for the diffusion through tunneling for all species. The gas and ice chemical networks are the same as in \citet{2019MNRAS.486.4198W}.
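For completeness, the thermal desorption rate included in the model follows the classical expression of \citet{1992ApJS...82..167H}:
\begin{equation}
k_{\mathrm{evap}} = \nu_0 \, \exp\left( -\frac{E_{\mathrm{D}}}{k_{\mathrm{B}} T_{\mathrm{d}}} \right),
\qquad
\nu_0 = \sqrt{\frac{2 n_{\mathrm{s}} E_{\mathrm{D}}}{\pi^2 m}},
\end{equation}
with $E_{\mathrm{D}}$ the binding (desorption) energy of the species and $n_{\mathrm{s}}$ the surface density of binding sites ($\sim 1.5\times10^{15}$~cm$^{-2}$).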
\subsection{Non-thermal desorption processes}
The Nautilus model from \citet{2016MNRAS.459.3756R} includes three non-thermal desorption processes: photodesorption, chemical desorption, and cosmic-ray (CR) heating. We included in Nautilus a new process, which is the sputtering of the grains by the cosmic-ray particles (CR sputtering). Photo-desorption, CR heating, and chemical desorption are only allowed for surface species, while CR sputtering can occur for both surface and mantle species.
\subsubsection{Photo-desorption}
Photodesorption occurs when a single energetic UV photon absorbed close to the grain surface induces the desorption of molecules or radicals. Molecules at the surface can desorb immediately if the energy transferred by the UV photon overcomes their binding energy. The energy can also be transferred to a neighboring atom or molecule, inducing an indirect desorption process. Molecules absorbing photons below the surface can also interact with the upper layers of molecules or diffuse toward the surface and desorb.
Another possibility is called co-desorption, whereby a species below the surface takes with it the species above it. The process can in fact be even more complicated when photodissociation is included. For instance, \citet{2016ApJ...817L..12B} and \citet{2016A&A...592A..68C} showed experimentally that methanol breaks upon photon absorption and that its fragments desorb. Depending on the energy brought to the system, the photo-fragments can also stay close to the surface and recombine. For simplicity, we considered the dissociation (see Section \ref{chem_model}) and the desorption independently. In other words, photo-desorption leaves the product intact, but photo-dissociation can occur on the surface and the products can then photo-desorb. \\
For the photo-desorption, we have used the simplified formalism for all species described in \citet{2016MNRAS.459.3756R}:
\begin{equation}\label{eq1}
\rm k_{des,UV} = F_{UV}S_{UV}exp(-2A_V)Y_{pd}\frac{\pi r_{dust}^2}{N_{site}}
,\end{equation}
\begin{equation}
\rm k_{des,UV-CR} = F_{UV-CR} \frac{\zeta}{10^{-17}} Y_{pd}\frac{\pi r_{dust}^2}{N_{site}} \\
,\end{equation}
for the desorption rates induced by direct $\rm k_{des,UV}$ and indirect $\rm k_{des,UV-CR}$ UV photons (in s$^{-1}$). The strength of the direct UV field is $\rm F_{UV} =1.0\times 10^8$ photons cm$^{-2}$s$^{-1}$ \citep{2007ApJ...662L..23O}.
We adopt a secondary (cosmic-ray-induced) UV field of $\rm F_{UV-CR} =10^3$ photons cm$^{-2}$s$^{-1}$, scaled to an ionisation rate of $10^{-17}$s$^{-1}$. This value is adapted from the $\sim$3100 photons cm$^{-2}$s$^{-1}$ obtained for an ionisation rate of $3\times 10^{-17}$s$^{-1}$ by \citet{2004A&A...415..203S}; \citet{1983ApJ...267..603P} gave about 2380 photons cm$^{-2}$s$^{-1}$ for the same ionisation rate. In addition,
$\rm S_{UV}$ is the scaling factor for the UV radiation field and $\zeta$ the cosmic ray ionisation rate in s$^{-1}$.
The yield ($\rm Y_{pd}$) is assumed to be $10^{-4}$ molecules per photon for all species \citep{2008A&A...491..907A}. Furthermore, $\rm N_{site}$ is the total number of surface sites on a grain, while $\rm r_{dust}$ is the radius of the grains. In our case, $\rm N_{site} \sim 1.2\times 10^6$ and $\rm r_{dust}$ = 0.1~$\mu$m.
The strength of the UV radiation field induced by the cosmic-rays is scaled with the local value of the CR ionization rate. The factor of 2 in the exponential of the photo-desorption rate by direct UV photons (Eq.~\ref{eq1}) is taken from \citet{1991ApJS...77..287R}; it takes into account the higher extinction of the grains in the UV wavelength range as compared to the visible (used to compute the Av), and is a mean value over the distribution of photons. A more robust approach would be to compute this scaling factor for each molecule of the ice by convolving the photo-desorption spectrum of each molecule with the extinction curve of the grains. Such a calculation is, however, outside the scope of this paper, but should be considered in the future.
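The two photo-desorption rates above can be sketched numerically using the parameter values quoted in the text (the function names are ours):

```python
import math

# Parameter values quoted in the text
F_UV = 1.0e8      # photons cm^-2 s^-1, direct interstellar UV field
F_UV_CR = 1.0e3   # photons cm^-2 s^-1, CR-induced field for zeta = 1e-17 s^-1
Y_PD = 1.0e-4     # photo-desorption yield, molecules per photon
R_DUST = 1.0e-5   # grain radius in cm (0.1 micron)
N_SITE = 1.2e6    # number of surface sites per grain

def k_des_uv(av, s_uv=1.0):
    """Photo-desorption rate (s^-1) by direct UV photons, Eq. (1)."""
    return F_UV * s_uv * math.exp(-2.0 * av) * Y_PD * math.pi * R_DUST**2 / N_SITE

def k_des_uv_cr(zeta):
    """Photo-desorption rate (s^-1) by CR-induced UV photons, Eq. (2)."""
    return F_UV_CR * (zeta / 1.0e-17) * Y_PD * math.pi * R_DUST**2 / N_SITE
```

At Av = 10 the direct rate is suppressed by a factor exp(-20), so CR-induced photons dominate the photo-desorption deep inside the cloud.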
\subsubsection{Grain heating by cosmic rays}
The energy released by the collision between high velocity cosmic-ray particles and grains can produce a global heating of the grain or a localized one \citep{1985A&A...144..147L,2004A&A...415..203S}, which then cools down to its initial temperature. To include this process in Nautilus, we followed the simple formalism proposed by \citet{1993MNRAS.261...83H} for the whole grain heating: $\rm k_{des,cosmic} = f_{CR} \times k_{des,therm}(T_{peak}),$ with $\rm f_{CR}$ being the ratio between the cooling time of the grain and the time between two collisions, while T$_{\rm peak}$ is the peak temperature reached by the grain. The $\rm f_{CR}$ and T$_{\rm peak}$ parameters critically depend on the size of the grains as well as their nature and their coverage, as the cooling of the grains occurs mostly by molecular evaporation \citep{2006PNAS..10312257H}. In the current model, we kept the prescription by \citet{1993MNRAS.261...83H}: $\rm f_{CR} = 3.16\times 10^{-19}$ and T$_{\rm peak}$ = 70~K.
This process is stochastic and our simple approach does not fully reproduce its complex nature. \citet{2019MNRAS.486.2050K,2020A&A...633A..97K,2020A&A...641A..49K}, for instance, studied this process in detail and proposed other ways to include it in astrochemical models. One limitation of our approach is that we used only one representative grain size, in order to limit the number of free parameters. \citet{2006PNAS..10312257H} performed a theoretical study of the efficiency of whole-grain heating as a function of the size and composition of the grains. They showed that for silicate grains, the peak temperature reached upon cosmic-ray collision increases as the grain radius decreases. \citet{2018A&A...615A..20I} showed that considering a grain size distribution in these complex astrochemical models strongly increases the importance of this process.
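The whole-grain-heating rate above can be sketched numerically. The harmonic-oscillator estimate of the characteristic vibration frequency and the surface site density are standard assumptions of ours, not stated in the text:

```python
import math

KB = 1.380649e-23     # J / K
AMU = 1.66053907e-27  # kg
F_CR = 3.16e-19       # duty-cycle factor from Hasegawa & Herbst (1993)
T_PEAK = 70.0         # K, peak grain temperature after a CR impact

def nu0(e_bind_K, mass_amu, n_s_cm2=1.5e15):
    """Characteristic vibration frequency (s^-1) of an adsorbed species
    (harmonic-oscillator estimate; n_s_cm2 is the site density in cm^-2)."""
    e_j = e_bind_K * KB
    n_s = n_s_cm2 * 1.0e4  # convert cm^-2 to m^-2
    return math.sqrt(2.0 * n_s * e_j / (math.pi**2 * mass_amu * AMU))

def k_des_cr_heating(e_bind_K, mass_amu):
    """CR whole-grain-heating desorption rate (s^-1):
    k_des,cosmic = f_CR * k_des,therm(T_peak)."""
    return F_CR * nu0(e_bind_K, mass_amu) * math.exp(-e_bind_K / T_PEAK)

# Illustrative binding energies: CO (~1150 K) desorbs far more easily than
# H2O (~5700 K) during the transient 70 K spike.
k_co = k_des_cr_heating(1150.0, 28.0)
k_h2o = k_des_cr_heating(5700.0, 18.0)
```

The exponential in the binding energy is what makes this process selective: volatile species such as CO are efficiently removed while water ice is essentially unaffected.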
\subsubsection{Chemical desorption}
The energy released by exothermic surface reactions can produce a partial evaporation of the products. We call this process chemical desorption. The exact description of the process is not yet fully understood. As a result, several models have been proposed to include this process in kinetic models. \citet{2019MNRAS.490..709Y} is the latest published model and compares its results with the two previously published ones: \citet{2007A&A...467.1103G} and \citet{2016A&A...585A..24M}. The model proposed by \citet{2016A&A...585A..24M} is the only one based on experimental measurements, but only for a few small systems (O + H, OH + H, and N + N). The three proposed models all rely on a number of unknown parameters. The model from \citet{2007A&A...467.1103G} depends on the fraction of the energy released by the reaction that is transferred to the produced species. The model from \citet{2016A&A...585A..24M} depends on the effective mass of the surface (which also defines the fraction of the energy retained by the product). The recent model from \citet{2019MNRAS.490..709Y} is based on a different theory: it does not assume that the products retain some energy, which is then transformed into kinetic energy, but rather that the released energy heats the surface. Their model therefore depends on the thermal diffusivity and specific heat of the surface. \citet{2019MNRAS.490..709Y} compared the efficiency of the three models for bare grains and showed that they give similar results. \citet{2016A&A...585A..24M} showed experimentally that chemical desorption is much less efficient on water ice surfaces than on bare grains. In the simulations presented here, the grains are covered by water ices (with a significant fraction of CO$_2$). Since
the model of Garrod does not explicitly depend on the nature of the surface (except by decreasing the $a$ parameter to an unknown value) and the model of Yamamoto does not provide the parameters to be used for water ices, we included only the model from \citet{2016A&A...585A..24M}. \\
Similarly to what we did in \citet{2017MolAs...6...22W}, the fraction of evaporation for singly produced species is computed following:
\begin{equation}
\rm f = e^{-\frac{E_D}{\epsilon E_{reac}/N}}
,\end{equation}
with N the number of degrees of freedom of the produced species (N = 3n, with n the number of atoms in the produced species) and $\epsilon = \frac{(M-m)^2}{(M+m)^2}$ the fraction of energy kept by the product of mass m. M is the effective mass of the surface. We first compute the fraction for bare grains, using an effective mass of 120 amu, and divide this fraction by 10 to obtain the values for water ices. For the three measured systems, we used the measured values ($\rm f_{O+H} = 30$\%, $\rm f_{OH+H}$ = 25\%, and $\rm f_{N+N}$ = 50\%). For channels producing more than one product, we set the chemical desorption efficiency to zero.
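The fraction above can be sketched as follows; the binding and reaction energies in the example are illustrative values of ours, and the factor of 10 for icy surfaces follows the prescription described in the text:

```python
import math

def chem_desorption_fraction(e_bind_K, e_reac_K, n_atoms,
                             m_prod_amu, m_surf_amu=120.0, icy=True):
    """Fraction of a singly-produced species that desorbs upon formation
    (Minissale et al. 2016 formalism).

    e_bind_K   : binding energy E_D of the product (K),
    e_reac_K   : exothermicity E_reac of the reaction (K),
    n_atoms    : atoms in the product (N = 3 * n_atoms degrees of freedom),
    m_surf_amu : effective mass of the surface (120 amu for bare grains).
    """
    dof = 3 * n_atoms
    eps = (m_surf_amu - m_prod_amu)**2 / (m_surf_amu + m_prod_amu)**2
    f_bare = math.exp(-e_bind_K / (eps * e_reac_K / dof))
    # Water-ice value: bare-grain fraction divided by 10
    return f_bare / 10.0 if icy else f_bare

# Illustrative example: CH3OH (6 atoms, 32 amu) formed in a strongly
# exothermic hydrogenation step (both energies below are assumptions).
f_ice = chem_desorption_fraction(5000.0, 52000.0, 6, 32.0)
```

Note how the fraction falls exponentially with the binding energy: strongly bound products stay on the grain regardless of the reaction exothermicity.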
\subsubsection{Sputtering by cosmic-rays}
When CRs impact solids, many excited electronic states are created. In less than a picosecond, these excited states relax into atomic motions and produce a thermal spike \citep{2000NIMPB.166..903T}. As a result of this hot spot, some of the material is sputtered. The sputtering yield scales as the square of the stopping power, that is, the energy loss per unit path length dE/dx \citep{2015NIMPB.365..472D,2015NIMPB.365..477M}. This makes heavy, low-energy CRs the main contributors to this process.
We added this new type of process into Nautilus assuming that both surface and mantle species can desorb simultaneously. For simplicity, we assume that species desorb without breaking but this may not be the case all the time (as for photo-desorption).
The rate coefficients of these reactions are computed as follows:
\begin{equation}
\rm k_{des,CR}(i) = (\zeta/3\times10^{-17}) \times Y_{eff} \times \pi \times r_{dust}^2/N_{site}
,\end{equation}
with
\begin{equation}
\rm Y_{eff} = Y^{\infty} \times (1-e^{-(n_{layers}/\beta)^\gamma})
,\end{equation}
where $\zeta$ is the cosmic-ray ionisation rate of H$_2$ (in s$^{-1}$), $\rm r_{dust}$ the radius of a grain, and $\rm N_{site}$ the number of sites on one grain. $\rm Y_{eff}$ is the efficiency of desorption integrated over a cosmic-ray spectral distribution, which is a function of the number of layers of ices ($\rm n_{layers}$) \citep{2018A&A...618A.173D}.
$Y^{\infty}$ is the sputtering yield for thick ices, and $\beta$ and $\gamma$ are two parameters associated with the nature of the ice. For the physical structure adopted here, water ice remains the dominant ice component, although CO$_2$ is also very abundant in the ice. This is due to the high dust temperature assumed (see discussion in Section 3.1). Sputtering on CO$_2$ ice is much more efficient than on water ice. We used the sputtering parameters for water ices: $Y^{\infty} = 3.63$, $\beta = 3.25$, and $\gamma = 0.57$ \citep{2018A&A...618A.173D}, and we test the values for CO$_2$ ices in Section 4.
In the experiments, the diameter of the craters made by heavy (heavier than C) cosmic-ray particles in the ices is on the order of a few nanometers. Thus, for grains with sizes above 10 nm, the measured yield should apply; the influence of the ice mantle thickness on the yield is already included in the formalism. For smaller grains, the sputtering yield should be higher and would add to the whole-grain heating process.
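A minimal sketch of the sputtering rate with the water-ice parameters quoted above (the function names are ours):

```python
import math

R_DUST = 1.0e-5  # grain radius in cm
N_SITE = 1.2e6   # surface sites per grain

# Sputtering parameters for water ice (Dartois et al. 2018)
Y_INF, BETA, GAMMA = 3.63, 3.25, 0.57

def y_eff(n_layers):
    """Effective sputtering yield as a function of ice thickness (monolayers)."""
    return Y_INF * (1.0 - math.exp(-(n_layers / BETA)**GAMMA))

def k_des_sputtering(zeta, n_layers):
    """CR-sputtering desorption rate (s^-1), linear in the CR ionization rate."""
    return (zeta / 3.0e-17) * y_eff(n_layers) * math.pi * R_DUST**2 / N_SITE
```

The yield saturates at $Y^{\infty}$ for thick ices, so in well-shielded regions the rate is essentially set by the local CR ionization rate alone.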
\subsection{1D physical structure}
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{density_TMC1.pdf}
\includegraphics[width=0.46\linewidth]{av_TMC1.pdf}
\includegraphics[width=0.46\linewidth]{T_TMC1.pdf}
\includegraphics[width=0.46\linewidth]{zeta_TMC1.pdf}
\caption{1D physical structure used for TMC1.\label{1D_structure}}
\end{figure*}
To study the effect of the non-thermal desorption processes, we used the 1D cold core physical structure determined by \citet{2020A&A...637A..39N} from observations. These authors derived the density and gas temperature at several positions in the TMC1-C and TMC1-CP cores using CS and its isotopologs and the RADEX radiative transfer code \citep{2007A&A...468..627V}, together with the Markov Chain Monte Carlo (MCMC) approach \citep[see, e.g., ][]{2019A&A...628A..16R}. The density structure of TMC1 was then obtained by fitting these densities to a Plummer-like analytical density profile, a widely used density profile for prestellar cores \citep{2002ApJ...569..815T, 2018AJ....156...51P}. The visual extinction profiles at several positions in the cloud were obtained from the Herschel extinction maps in \citet{2020A&A...637A..39N}. Assuming spherical symmetry and isotropic UV illumination, the UV shielding at every point inside the cloud corresponds to an extinction that was taken as half of that measured in the extinction maps. These extinction values at each position were then interpolated using cubic splines. Finally, the dust and gas temperature profiles were determined by a cubic spline interpolation of the dust temperatures from the Herschel maps and the gas temperatures from the MCMC approach. The density, gas and dust temperatures, and visual extinctions are shown in Fig.~\ref{1D_structure} as a function of radius from the center of the core. The density starts around $5\times 10^3$~cm$^{-3}$ at $3.4\times 10^4$ au, increases up to $6\times 10^4$~cm$^{-3}$ at 5000 au, and then remains flat. The visual extinction is small outside (around 2) and increases up to 10 inside. The gas and dust temperatures decrease toward the center. They are between 9 and 14.5~K, with the dust temperature always slightly higher than the gas temperature.
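For illustration, a Plummer-like profile of the kind used for the density fit can be sketched as follows; the central density, flat radius, and exponent below are our own illustrative choices tuned to mimic the quoted values, not the fitted parameters of \citet{2020A&A...637A..39N}:

```python
def plummer_density(r_au, n_flat=6.0e4, r_flat=5000.0, p=1.3):
    """Plummer-like density profile n(r) in cm^-3:
    flat at n_flat inside r_flat, power-law decline (index p) outside."""
    return n_flat / (1.0 + (r_au / r_flat)**2)**(p / 2.0)

# Roughly reproduces the quoted structure: ~6e4 cm^-3 in the center,
# a few 1e3 cm^-3 at 3.4e4 au.
n_center = plummer_density(0.0)
n_edge = plummer_density(3.4e4)
```

The flat inner plateau and the power-law envelope are the two features that make this family of profiles a standard choice for prestellar cores.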
We added to this structure a radius-dependent cosmic-ray ionization rate. Indeed, CRs coming from outside the cloud are slowed down by the gas, so the cosmic-ray ionization rate decreases with penetration \citep{2009A&A...501..619P,2016A&A...585A..15C}. Observations of cosmic-ray ionization rates using molecular ions show a correlation with the column density \citep{2017ApJ...845..163N,2012ApJ...745...91I}. The H$_2$ cosmic-ray ionization rate (in s$^{-1}$) was computed following the dashed red line fit presented in the lower panel of Fig.~6 of \cite{2017ApJ...845..163N}:
\begin{equation}
\zeta (Av) = 10^{-1.05\times \log_{10}(Av) - 15.69} \rm \;for\;Av > 0.5.
\end{equation}
For low Av ($\leq$ 0.5), we adopted a constant CR ionization rate corresponding to an unattenuated CR flux of:
\begin{equation}
\zeta (Av) = 10^{-1.05 \times \log_{10}(0.5)-15.69} \sim 4.2\times 10^{-16}.
\end{equation}
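The piecewise prescription of the two equations above can be written compactly (the function name is ours):

```python
import math

def zeta_cr(av):
    """H2 cosmic-ray ionization rate (s^-1) as a function of visual extinction,
    following the Neufeld & Wolfire (2017) fit; held constant below Av = 0.5."""
    av_eff = max(av, 0.5)
    return 10.0**(-1.05 * math.log10(av_eff) - 15.69)
```

Because the power-law index is close to $-1$, the ionization rate drops by roughly an order of magnitude from the cloud edge to the Av $\sim$ 10 interior.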
We note that we are using a static 1D physical structure that does not evolve with time. This is clearly a simplification of the model, but we do not yet have a good constraint on the dynamical evolution of the studied cores. In addition, we are interested in comparing the efficiency of several chemical processes. Such a comparison would be more difficult to make if we added a time dependency on the physical conditions and moving structures.
\subsection{Other model parameters}
Starting from a mix of atoms (with abundances listed in Table \ref{ab_ini} and apart from H, which is assumed to be in its molecular form), we ran the chemical model using the 1D physical structure and for a time span of $10^7$~yr. The impact of the initial atomic hydrogen abundance is discussed in Section \ref{initial_H}. The external incident UV flux is assumed to be five times Draine's field, as suggested by the observations \citep{2019A&A...624A.105F}. To study the effect of the non-thermal desorptions independently, we switched all of them off (the "no desorption" model) and then turned them on one at a time. In the "no desorption" model, thermal desorption is still active, but we checked that, in the conditions used here, the results do not change if no desorption at all is assumed. The exceptions are H$_2$ and He, which have to be allowed to thermally desorb, otherwise most of the gas would end up depleted.
\begin{table}
\caption{Initial abundances (with respect to the total proton density).}
\begin{center}
\begin{tabular}{c|c}
\hline
\hline
Species & Abundance\\
\hline
He & $9\times 10^{-2}$ \\
N & $6.2\times 10^{-5}$ \\
O & $2.4\times 10^{-4}$ \\
H$_2$ & 0.5 \\
C$^+$ & $1.7\times 10^{-4}$\\
S$^+$ & $1.5\times 10^{-5}$\\
Si$^+$ & $1.8\times 10^{-6}$ \\
Na$^+$ & $2.3\times 10^{-7}$\\
Mg$^+$ & $2.3\times 10^{-6}$ \\
P$^+$ & $7.8\times 10^{-8}$ \\
Fe$^+$ & $1\times 10^{-8}$\\
Cl$^+$ & $1\times 10^{-9}$\\
F & $6.68\times 10^{-9}$\\
\hline
\end{tabular}
\end{center}
\label{ab_ini}
\end{table}%
\section{Model results}\label{model_results}
The four non-thermal processes presented in the previous section do not depend on the same quantities. Chemical desorption depends on the abundance of the reactants and on the efficiency of diffusion (which depends on the dust temperature). Photo-desorption depends on both the visual extinction and the cosmic-ray ionisation rate, while the last two processes depend on the cosmic-ray ionization rate. In addition, not all species are affected in the same way. The effect on species essentially formed on the surface should be direct, while the effect on species formed in the gas-phase is more complex, as the desorption can impact their gas-phase precursors. We separated the molecules into several groups and present the impact of the different non-thermal desorption processes on each.
\subsection{Main ice constituents}\label{main_ice}
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{figs/KH2O_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/KCO_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/KCO2_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/KCH3OH_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/KCH4_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/KNH3_ab_models_6e5yr.pdf}
\caption{Abundance of the main ice components as a function of radius (and visual extinction) for a time of $6\times 10^5$~yr. The "no desorption" curve is almost the same as the "sputtering" one.}
\label{ice_fig}
\end{figure*}
Figure \ref{ice_fig} shows the abundance of the main constituents of the ices (H$_2$O, CO, CO$_2$, CH$_3$OH, CH$_4$, and NH$_3$) as a function of radius for an integration time of $6\times 10^5$~yr, which is the typical age of an evolved pre-stellar core. We chose this time to emphasize the differences between the models. Indeed, at later times the differences between the model results are much smaller, while at $2\times 10^5$~yr and earlier they are negligible because the interactions with the grains are less efficient.
The figures show that the abundance of the ice species increases toward the higher densities (smaller radii). All desorption mechanisms produce similar ice abundances at high density, close to those of the model without desorption. Here, H$_2$O and CO$_2$ dominate the ices for H densities larger than $10^4$~cm$^{-3}$ (Av = 4, inside 1500 au). Going outward, the CO$_2$ ice abundance drops and water clearly dominates until an Av of 4 (H density of about $(5-6)\times 10^3$~cm$^{-3}$). The amount of water then depends on the non-thermal mechanism considered: chemical desorption is the most efficient at decreasing it, while sputtering is the least efficient.
The large abundance of CO$_2$ relative to CO reflects the grain temperature, which is slightly above 10~K in the entire structure. The dust temperature used in these simulations comes from Herschel data \citep{2012A&A...544A..50M} and was derived by fitting the SED with gray-body emission \citep{2020A&A...637A..39N}. This procedure is known to overestimate (by about 1-2 K) the dust temperature in the center of the cores because of the contribution of the warmer grains in the external layers of the core that lie along the line of sight. The effect of the dust temperature is discussed in Section~\ref{sect_dust_t}.
At an Av lower than 3, hydrogenated species such as CH$_4$, NH$_3$, and CH$_3$OH ices can be more abundant than CO$_2$ because there is more free hydrogen thanks to H$_2$ photo-dissociation.
\subsection{Simple abundant species}\label{simple_species}
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{figs/CO_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/CN_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/CS_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/SO_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/H2O_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/HCN_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/HC3N_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/HCOp_ab_models_6e5yr.pdf}
\caption{Abundance of simple gas-phase molecules as a function of radius (and visual extinction) for a time of $6\times 10^5$~yr.}
\label{simplemol_fig}
\end{figure*}
In Fig.~\ref{simplemol_fig}, we show the model results for simple gas-phase molecules often observed in cold cores. Except for water, these molecules are not directly formed on the grains, but their gas-phase abundances are strongly sensitive to the non-thermal desorption processes because: 1) some of their precursors can be desorbed from the grains; and 2) after being formed in the gas-phase, they deplete onto the grains and non-thermal desorption brings them back into the gas-phase. \\
The CO gas-phase abundance is not strongly sensitive to the non-thermal desorptions.
The other molecules (except for water) are less sensitive in the outer parts, where the densities, and thus the depletion, are smaller. In the inner, denser regions, the larger gas-phase abundances are not produced by the same processes for all species. For CN, HCN, HC$_3$N, and HCO$^+$, cosmic-ray heating produces the largest abundances, followed by sputtering and then chemical desorption and photo-desorption (the last two being equally efficient for CN). Here,
CN, HCN (and HNC), and HC$_3$N are chemically linked. CN is mostly formed by N + CH and can then react with H$_3^+$ to form HCNH$^+$. Upon recombination with electrons, HCNH$^+$ produces HCN (and HNC); CN can also react with C$_2$H$_2$ to form HC$_3$N. The main effect of CR heating is to desorb CH$_4$ from the ices; removing this process leads to results similar to those of photo-desorption. Once in the gas-phase, CH$_4$ participates in the ion-neutral chemistry by providing CH radicals. CH$_4$, for instance, reacts with H$_3^+$ to form CH$_5^+$, which recombines with electrons to produce CH$_3$. CH$_3$ then reacts with atomic carbon to produce the C$_2$H$_2$ involved in the formation of HC$_3$N. The increase of the HCO$^+$ abundance is also due to the CR-heating-induced desorption of CH$_4$, since one of the paths to form HCO$^+$ is the reaction CO + CH$_5^+$.
The gas-phase SO abundance is enhanced by both chemical desorption and sputtering in the inner region. This molecule is formed by the neutral-neutral reactions S + OH and O + HS. In the case of chemical desorption, the desorption of both HS and OH (during the hydrogenation of S and O on the surfaces) is at the origin of the SO increase; both need to be removed to decrease the SO gas-phase abundance. In the case of sputtering, the OH gas-phase abundance is strongly increased. The CS gas-phase abundance is mostly enhanced by chemical desorption and CR heating. For CR heating, it is again the desorption of CH$_4$ ice that is responsible for this increase. In the CR heating model, gas-phase CS is mostly produced by three reactions: HCS$^+$ + e$^-$, H + HCS, and H$_3$CS$^+$ + e$^-$. All three precursors, HCS, HCS$^+$, and H$_3$CS$^+$, are formed by reactions of neutral or ionized atomic sulfur with CH$_2$ or directly with CH$_4$. As for SO, chemical desorption increases the gas-phase CS abundance the most, owing to the larger HS abundance, since CS is formed by C + HS in the chemical desorption model.
\subsection{Complex organic molecules observed in cold cores}\label{COMs}
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{figs/CH3OH_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/CH3CHO_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/CH3OCH3_ab_models_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{figs/HCOOCH3_ab_models_6e5yr.pdf}
\caption{Abundance of the complex organic molecules as a function of radius (and visual extinction) for a time of $6\times 10^5$~yr.\label{COM_fig}}
\end{figure*}
Figure~\ref{COM_fig} shows the abundance of CH$_3$OH, CH$_3$CHO, HCOOCH$_3$, and CH$_3$OCH$_3$ as a function of radius and for the same time as previously. Only three models are visible on the figure (except for CH$_3$CHO) because the two other models (i.e., no desorption and CR heating) produce negligible gas-phase abundances. These molecules are produced on the grains, or from precursors formed on the grains, and are only efficiently evaporated by sputtering, chemical desorption, or photo-desorption. \\
The model with chemical desorption is the one producing the largest gas-phase abundance of HCOOCH$_3$ at all radii (although smaller than $10^{-13}$).
In our model, HCOOCH$_3$ is formed in the gas-phase by the reaction CH$_3$OCH$_2$ + O $\rightarrow$ HCOOCH$_3$ + H. CH$_3$OCH$_2$ is formed on the grain surfaces by the reaction s-C...CH$_3$OH + s-H, followed by chemical desorption of the produced CH$_3$OCH$_2$. s-C...CH$_3$OH is a van der Waals complex included in Nautilus by \citet{2015MNRAS.447.4004R}. \\
The gas-phase abundances of CH$_3$OH and CH$_3$CHO are larger with the model including sputtering at high density, while chemical desorption produces the largest abundances at radii larger than 5000 au. Finally, photo-desorption is the least efficient of the three models at high density, but becomes much more efficient than sputtering at radii larger than 15000 au and more efficient than chemical desorption at radii larger than 27000 au. Methanol, CH$_3$OH, is formed on the surfaces by successive hydrogenations of CO. The efficiencies of sputtering and photo-desorption are proportional to the abundance of CH$_3$OH on the grains, whereas chemical desorption depends on the abundance of the reactants. We note that in our model the photo-desorption of methanol is not destructive, while the experiments by \citet{2016ApJ...817L..12B} showed that it should be partly destructive. In addition, sputtering of the entire mantle along with the surface is allowed, whereas chemical desorption and photo-desorption are only possible for species on the surface. Although the cosmic-ray ionization rate decreases inside the cloud, the production flux of CH$_3$OH at the innermost point is about ten times larger in the sputtering model than in the chemical desorption model. Photo-desorption is more efficient at low visual extinction (i.e., in the outer part of the cloud).
Although CH$_3$CHO shows a similar sensitivity to the different desorption processes, its formation path is different. In the chemical desorption model (at all radii), it is the gas-phase reaction O + C$_2$H$_5$ that produces gas-phase CH$_3$CHO; C$_2$H$_5$ is produced mostly by H + C$_3$H$_7$, while C$_3$H$_7$ is desorbed at the end of a long chain of surface hydrogenation reactions starting from C$_3$. In the sputtering model, the O + C$_2$H$_5$ reaction also plays a role (C$_2$H$_5$ being evaporated from the surfaces by sputtering), especially in the outer parts, but the large increase of gas-phase CH$_3$CHO in the inner region is due to the dissociative recombination of CH$_3$CHOH$^+$, itself produced by the reaction CH$_3$OCH$_3$ + H$^+$. The CH$_3$OCH$_3$ abundance at the highest density is indeed quite large in this model, as seen in Fig.~\ref{COM_fig}. \\
For CH$_3$OCH$_3$, chemical desorption is the least efficient mechanism at all radii, while sputtering is the most efficient for radii inside 22000 au and photo-desorption outside this radius. CH$_3$OCH$_3$ is formed on the surfaces via reactions such as s-H + s-CH$_3$OCH$_2$ and s-CH$_3$ + s-CH$_3$O. The surface abundance of this species is similar in all the models. The binding energy of the species results in a very small chemical desorption fraction, while the other mechanisms are more efficient. \\
The main production reactions discussed here were found by looking at the fluxes of the reactions producing these species. While doing various tests, we found that these COMs are particularly sensitive to the chemistry of van der Waals complexes from \citet{2015MNRAS.447.4004R} that we introduced into the network. By removing these processes, we obtain much lower abundances of these species, both in the gas-phase and on the grains. This is particularly true for CH$_3$OCH$_3$, whose surface precursor CH$_3$OCH$_2$ is formed by s-C...CH$_3$OH + s-H $\rightarrow$ s-CH$_3$OCH$_2$. In addition, s-CH$_3$CHO is also impacted because one of the channels producing this species on the surface involves C + s-CO $\rightarrow$ s-CCO and C + s-H$_2$CO $\rightarrow$ s-H$_2$CCO. The species presented in the two previous sections (Sections~\ref{main_ice} and \ref{simple_species}) are not sensitive to this chemistry. We note that our model predictions for COMs cannot be compared to those of \citet{2020ApJS..249...26J} because we have used a different physical model, which is much less dense than theirs. Their chemical model includes new non-diffusive surface mechanisms that enhance the production of COMs and does not include the chemistry of van der Waals complexes from \citet{2015MNRAS.447.4004R}. They found that chemical desorption was the most efficient process for desorbing COMs from the grains, mostly because H-abstraction from grain-surface COMs, followed by recombination, amplifies the chemical desorption. We include these reactions as well, and we find that sputtering could play a major role at the highest densities of our model.
\section{Observational constraints: Case of methanol}\label{obs_methanol}
\subsection{Considering abundances}
Methanol is an interesting species, as it forms almost entirely on the grains, as can be seen from the very small gas-phase abundance obtained with the model without any non-thermal desorption (Fig.~\ref{COM_fig}). The different non-thermal desorption processes studied in the previous section can all operate simultaneously, although for methanol, sputtering, chemical desorption, and possibly photo-desorption dominate. We therefore ran an additional model including all the non-thermal desorption mechanisms with the same 1D physical structure. In Fig.~\ref{CH3OH_allprocess}, we show the gas-phase abundance of methanol computed by the model for different times (between $10^5$ and $10^6$~yr) as a function of density (rather than radius, as in the previous section). We show the model results when each of the important non-thermal desorption mechanisms is added individually, as well as all of them together. In the same figure, we also plot the observed methanol abundances. After $10^6$~yr, the gas-phase CH$_3$OH abundance evolves more slowly.
The observed methanol abundances were derived from observations of four rotational transitions of methanol toward TMC-1C, TMC-1(CP), and TMC-1(NH3) at 3~mm as part of the "Gas phase Elemental abundances in Molecular Clouds" (GEMS) IRAM 30m large program \citep{2019A&A...624A.105F}. The observed lines were modeled with the software CASSIS\footnote{http://cassis.irap.omp.eu} \citep[developed by IRAP-UPS/CNRS,][]{2015sf2a.conf..313V} and the RADEX\footnote{http://www.strw.leidenuniv.nl/$\sim$moldata/radex.html} non-LTE radiative transfer code \citep{2007A&A...468..627V}, using the Markov Chain Monte Carlo (MCMC) method; more details will be presented in an upcoming paper (Spezzano et al. in prep). The observed column densities at each cloud position, together with the H$_2$ column densities, are given in Table~\ref{obs_CH3OH}.
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{CH3OH_1e5yr.pdf}
\includegraphics[width=0.46\linewidth]{CH3OH_2e5yr.pdf}
\includegraphics[width=0.46\linewidth]{CH3OH_3e5yr.pdf}
\includegraphics[width=0.46\linewidth]{CH3OH_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{CH3OH_1e6yr.pdf}
\caption{Abundance of gas-phase methanol as a function of H density (in cm$^{-3}$). The different graphs represent different times. The lines in each box represent the models in which one of the non-thermal desorption process has been added or all of them. The points are the observed abundances, as described in the text.}
\label{CH3OH_allprocess}
\end{figure*}
The modeled gas-phase methanol abundance is high at high density at $10^5$~yr and decreases with time. At low density, on the contrary, the abundance is small and increases with time. The observations show a rather flat methanol abundance with density, with a small increase toward the smallest densities. The model with all desorption mechanisms, at times between $3.6\times 10^5$ and $10^6$~yr, best reproduces the observations. Around $3.6\times 10^5$~yr, the observations are best reproduced at high density thanks to the cosmic-ray sputtering, while at smaller densities, the observations are best reproduced at later times thanks to the chemical desorption. At $3.6\times 10^5$~yr, the modeled and observed abundances agree within a factor of 10. \\
An underproduction of gas-phase methanol at small densities ($< 2\times 10^4$~cm$^{-3}$) could indicate a more efficient chemical desorption under these conditions, as would be the case if the water ice coverage of the grains were small. Experiments by \citet{2016A&A...585A..24M} showed a much more efficient chemical desorption for bare grains, with the efficiency decreasing as the grains become covered with water. \citet{2020A&A...637A..39N} found that the observed H$_2$S gas-phase abundance in the same regions, at densities smaller than $2\times 10^4$~cm$^{-3}$, could only be reproduced by the chemical model assuming the high chemical desorption values of bare grains. They suggested that this density marks a change in the chemical composition of the surface of the grains. Our results on methanol go in the same direction.
Using the current prescription of \citet{2016A&A...585A..24M} for water ice surfaces, the fraction of produced CH$_3$OH that desorbs into the gas-phase is 0.06\%. We changed this fraction in the model and found that the observations at low density can be reproduced with an efficiency ten times larger (0.6\%). The experiments of \citet{2016A&A...585A..24M} provide upper limits on the CH$_3$OH chemical desorption efficiency on water ice or bare surfaces of 8\%, which is much greater than what we would need.
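The gas-phase methanol supplied by chemical desorption scales linearly with this ejection fraction, so the three efficiencies quoted above can be compared directly. The sketch below reproduces only that arithmetic; the surface formation rate is an arbitrary placeholder.

```python
# Linear scaling of the gas-phase CH3OH supply with the chemical
# desorption fraction. The fractions are those quoted in the text;
# the surface formation rate is an arbitrary placeholder.

surface_formation_rate = 1.0   # arbitrary units (CH3OH formed on grains)

f_water_ice = 0.06e-2   # Minissale et al. (2016) prescription, water ice
f_needed    = 0.6e-2    # efficiency required to match low-density observations
f_bare_max  = 8e-2      # experimental upper limit (bare grains)

for label, f in [("water ice", f_water_ice),
                 ("required", f_needed),
                 ("bare-grain limit", f_bare_max)]:
    supply = surface_formation_rate * f
    print(f"{label:16s}: fraction {f:.4%} -> gas-phase supply {supply:.2e}")

# The required efficiency is 10x the water-ice value, yet still
# more than a factor of 10 below the bare-grain upper limit:
print(f_needed / f_water_ice)   # 10.0
print(f_bare_max / f_needed)    # ~13.3
```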
Alternatively, laboratory experiments have shown CR sputtering to be more efficient for CO or CO$_2$ ices than for pure water ices, and our model indeed predicts an ice CO$_2$ abundance as high as that of water, and even larger under some conditions. \\
In summary, to reproduce the flat CH$_3$OH abundance as a function of density at a single time, we need to change the efficiency of the chemical desorption or of the CR sputtering with radius, which would be consistent with a change in the ice composition. There are too many uncertain parameters to constrain the efficiency of these processes quantitatively in this way, but to illustrate the model sensitivity, we have run three additional models representing extreme cases. The first one uses the bare-grain prescription of \citet{2016A&A...585A..24M} for the chemical desorption, and the second uses the sputtering parameters for CO$_2$ ices \citep[$Y^{\infty} = 21.9$, $\beta = 56.3$, and $\gamma = 0.6$,][]{2021A&A...647A.177D}. For the third model, we have considered all processes with the prescriptions for bare grains and CO$_2$ ices. These three models are shown in Fig.~\ref{CH3OH_allprocess_alternate} for two different times ($3.6\times 10^5$~yr and $1\times 10^6$~yr) and are denoted "chem des high", "sputtering high", and "all desorptions high". In the same figure, we also report the previous models. The sputtering from an ice mostly composed of CO$_2$ is much more efficient than from water ice, producing at high density a CH$_3$OH gas-phase abundance almost ten times larger. The enhanced chemical desorption produces a larger CH$_3$OH gas-phase abundance than the enhanced sputtering at all densities. At small densities ($<2\times 10^4$~cm$^{-3}$), a highly efficient chemical desorption, as would be expected on bare grains, seems necessary to reproduce the high observed gas-phase methanol abundance. At larger densities ($>2\times 10^4$~cm$^{-3}$), however, the enhanced chemical desorption seems overly efficient, while the enhanced sputtering appears better suited. \\
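To give a sense of how a saturated sputtering yield enters the model, the sketch below converts the asymptotic CO$_2$-ice yield $Y^{\infty}$ quoted above into a schematic per-grain desorption rate. The cosmic-ray flux and grain size are assumed values for illustration only, and this simple product is not the full Dartois et al. (2021) prescription (which also involves $\beta$ and $\gamma$).

```python
import math

# Schematic cosmic-ray sputtering rate per grain.
# Y_inf is the asymptotic yield quoted in the text for CO2 ices;
# the CR flux and grain radius below are ASSUMED illustrative values,
# and the product Y * flux * cross-section is a simplification of the
# published prescription.

Y_inf = 21.9                 # asymptotic yield (molecules per impact), CO2 ice
phi_cr = 1e-3                # assumed effective heavy-ion CR flux (cm^-2 s^-1)
a_grain = 0.1e-4             # assumed grain radius: 0.1 micron, in cm
sigma_grain = math.pi * a_grain ** 2   # geometric cross-section (cm^2)

# Sputtered molecules per grain per second:
rate_per_grain = Y_inf * phi_cr * sigma_grain
print(f"{rate_per_grain:.3e} sputtered molecules per grain per second")
```

Because the rate scales linearly with $Y^{\infty}$, switching from a water-ice to a CO$_2$-ice yield directly boosts the gas-phase supply, in line with the factor-of-ten methanol enhancement discussed above.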
The modeling results are strongly dependent on the physical conditions. The external UV field, for instance, was set to 5 in Draine units based on observational constraints.
Decreasing this value increases the methanol abundance. The 1D physical model used here does not cover the points observed at the highest densities (n$_{\rm H}$ = $8\times 10^4$~cm$^{-3}$ and higher). With these more efficient prescriptions, the density above which sputtering produces a larger methanol abundance than chemical desorption is higher than with the less efficient ones (above n$_{\rm H}$ = $6\times 10^4$~cm$^{-3}$ instead of $4\times 10^4$~cm$^{-3}$). We checked that this tendency, seen in the curves, is confirmed by running higher-density models.
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{CH3OH_alternative_3e5yr.pdf}
\includegraphics[width=0.46\linewidth]{CH3OH_alternative_1e6yr.pdf}
\caption{Abundance of gas-phase methanol as a function of H density (in cm$^{-3}$), for two different times (left $3.6\times 10^5$~yr and right $1\times 10^6$~yr). The points are the observed abundances with error bars.}
\label{CH3OH_allprocess_alternate}
\end{figure*}
\subsection{Considering column densities}
In order to compare the model to the observations in a different way, we reconstructed the theoretical column densities predicted by our model. To do so, we assumed a spherical geometry and integrated the CH$_3$OH density along each line of sight \citep[see][for more details on the method]{Navarro2021}. We then obtained the column densities as a function of distance from the centre of the clouds, shown in Fig.~\ref{coldens_CH3OH_allprocess}. The observed radii have been computed for a source distance of 140~pc. We have four observed positions in three cores.
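The line-of-sight integration through a spherically symmetric cloud can be sketched as follows. The power-law density profile and flat methanol abundance below are placeholders, not the actual 1D structure of the paper; only the geometry of the calculation is illustrated.

```python
import numpy as np

# Column density through a spherical cloud: at impact parameter b,
# N = 2 * integral_0^zmax n_H(r) * x(r) dz, with r = sqrt(b^2 + z^2).
# The density profile and abundance are ASSUMED placeholders.

AU = 1.496e13  # cm

def column_density(b, r_out, n_of_r, x_of_r, nz=2000):
    """Trapezoidal integration along half the chord, doubled by symmetry."""
    z_max = np.sqrt(r_out**2 - b**2)
    z = np.linspace(0.0, z_max, nz)
    r = np.sqrt(b**2 + z**2)
    f = n_of_r(r) * x_of_r(r)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

r_out = 3e4 * AU                                      # assumed outer radius
n_of_r = lambda r: 1e5 * (r / (1e3 * AU)) ** -1.5     # assumed n_H profile (cm^-3)
x_of_r = lambda r: 3e-9                               # assumed flat CH3OH abundance

for b_au in (3000, 10000, 16800):
    N = column_density(b_au * AU, r_out, n_of_r, x_of_r)
    print(f"b = {b_au:6d} au  ->  N(CH3OH) = {N:.2e} cm^-2")
```

With a centrally peaked density profile, the predicted column density falls with impact parameter, which is the behavior compared against the rather flat observed values in the text.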
\begin{figure*}
\centering
\includegraphics[width=0.46\linewidth]{cold_dens_CH3OH_1e5yr.pdf}
\includegraphics[width=0.46\linewidth]{cold_dens_CH3OH_2e5yr.pdf}
\includegraphics[width=0.46\linewidth]{cold_dens_CH3OH_3e5yr.pdf}
\includegraphics[width=0.46\linewidth]{cold_dens_CH3OH_6e5yr.pdf}
\includegraphics[width=0.46\linewidth]{cold_dens_CH3OH_1e6yr.pdf}
\caption{Column densities of gas-phase methanol as a function of distance (in au) to the centre of the source. The different graphs represent different times. The lines in each box represent the models in which one of the non-thermal desorption processes has been added or all of them. The points are the observed column densities as described in the text and listed in Table~\ref{obs_CH3OH}.}
\label{coldens_CH3OH_allprocess}
\end{figure*}
The theoretical column densities decrease rapidly as we move away from the centre of the clouds. The conclusions on the most efficient processes as a function of radius (or density) discussed in the previous section hold here. Our model still underestimates the observed column densities at the largest radii (smallest densities). At early times (1-2$\times 10^5$~yr), the inner column densities are better reproduced, while the outer column densities (at 16800~au) are strongly underestimated. At later times, the predicted inner column density is smaller but the outer ones are larger; the predicted column density profile is then flatter. The overall observed column densities (which are rather constant across the clouds) are better reproduced for times around $3.6 - 6\times 10^{5}$~yr. In that case, the predicted column densities are below the observed ones by less than a factor of 10.\\
From the observational point of view, the column densities are what is measured (with various assumptions in the radiative transfer analysis). The observed abundances used in the previous section were obtained by dividing these column densities by H$_2$ column densities computed from Herschel dust observations at the same positions and with a similar beam size. When computing the observed abundances with respect to H$_2$, the main hypothesis is that the methanol lines and the dust observations probe the same column of material. When we compared the observed and predicted abundances, we used the local densities measured at each position (deduced from the molecular excitation). Since, at the positions of interest, the 1D structure plays a minor role in our model (the visual extinction is high enough to prevent direct UV photons from playing a major role), the comparison does not depend on the assumed 1D structure, which may not represent the three sources equally well. When comparing the column densities, however, we are much more dependent on the assumed density profile.
\section{Discussion}
\subsection{Effect of the initial atomic abundance}\label{initial_H}
Starting with no atomic initial abundance is a very common simplification in cold core chemical modeling. Molecular hydrogen is the first molecule to form in the interstellar medium. Its formation involves several micro-physical processes on top of interstellar grains, from low-energy processes (sticking, diffusion, and the Langmuir-Hinshelwood mechanism) to more energetic ones (Eley-Rideal and "hot atom" mechanisms). The efficiency of H$_2$ formation depends strongly on the nature of the surfaces and their shape \citep[see][for a review of H$_2$ formation in the ISM]{2017MolAs...9....1W}. In astrochemical models of cold cores with large networks, namely, models that are not dedicated to the formation of H$_2$ itself \citep[such as][]{2001ApJ...553..595B,2010MNRAS.406L..11C,2011A&A...535A..27C,2014A&A...569A.100B}, only the Langmuir-Hinshelwood mechanism is considered for the formation of H$_2$, which may underestimate the H$_2$ formation efficiency at moderate density. From an observational point of view, H$_2$ can only be observed directly at UV, near-IR, and mid-IR wavelengths and is not accessible in dense cold regions. In such regions, the residual atomic hydrogen fraction, and hence the H$_2$ abundance, can be estimated via HI Narrow Self-Absorption (HINSA) observations. Using this method, \citet{2003ApJ...585..823L} and \citet{2005ApJ...622..938G} estimated the ratio between H and H$_2$ to be between $10^{-4}$ and a few $10^{-3}$. An H/H$_2$ ratio of $10^{-3}$ would correspond to an H abundance of $5\times 10^{-4}$ with respect to the total proton density. We ran our models for various values of the initial atomic hydrogen abundance ($5\times 10^{-4}$, $10^{-3}$, and $10^{-2}$). Initial atomic hydrogen abundances of $5\times 10^{-4}$ and $10^{-3}$ produce minor differences as compared to the results presented in the previous sections. 
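The conversion between the HINSA ratio n(H)/n(H$_2$) and the atomic hydrogen abundance relative to the total proton density n$_{\rm H}$ = n(H) + 2n(H$_2$) is simple arithmetic, sketched below.

```python
# Convert an observed n(H)/n(H2) ratio into an atomic hydrogen abundance
# relative to the total proton density n_H = n(H) + 2 n(H2).

def x_atomic(h_over_h2):
    # Normalise to n(H2) = 1, so n(H) = h_over_h2.
    return h_over_h2 / (h_over_h2 + 2.0)

# H/H2 = 1e-3 gives x(H) ~ 5e-4, the value quoted in the text:
print(x_atomic(1e-3))
# Conversely, x(H) = 1e-2 (the extreme case tested) corresponds to
# H/H2 ~ 2e-2:
print(x_atomic(2e-2))
```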
Starting with a higher abundance of $10^{-2}$ (which may not be realistic) would strongly affect the methanol abundance (although much less than the other species discussed in this paper) at early times and for densities larger than a few $10^3$~cm$^{-3}$. At a H density of $2\times 10^{4}$~cm$^{-3}$, the methanol gas-phase abundance could be increased by more than one order of magnitude with respect to the one computed by our "all desorption" model at $10^5$~yr. This difference fades with time and disappears at $10^6$~yr. For the present physical structure, a nonzero initial H/H$_2$ ratio could help increase the modeled abundances at intermediate spatial points, but only if the initial H abundance is high and only at early times (see Fig.~\ref{CH3OH_initialH}, to be compared with Fig.~\ref{CH3OH_allprocess}).
\subsection{Effect of the dust temperature}\label{sect_dust_t}
For the results presented here, we used the dust temperature obtained from Herschel observations \citep[following][]{2019A&A...624A.105F,2020A&A...637A..39N}. The resulting 1D dust temperature profile is always above 10~K at all radii (see Fig.~\ref{1D_structure}) and above the gas temperature. The surface chemistry can be very sensitive to this parameter. As such, our predicted CO ice abundance is low, while the CO$_2$ ice contains a large fraction of the oxygen (as does water). This has consequences for the predicted ice and gas methanol. We redid all our simulations with a dust temperature equal to that of the gas, a change that matters mostly in the high-density regions ($\ge 2\times 10^4$~cm$^{-3}$). In Figs.~\ref{ice_fig_lowT} to \ref{COM_fig_lowT}, we show the abundances of the same species as in Section~\ref{model_results} (Figs.~\ref{ice_fig} to \ref{COM_fig}) computed with a dust temperature equal to the gas temperature at each radius of the 1D structure. There are a number of differences between the two sets of models at high density, so the following discussion focuses on these regions. We refer to the standard model as the one in which the dust temperature is determined by Herschel observations (and shown in the remainder of the paper). \\
The most obvious difference is the larger CO ice abundance, which propagates to the CH$_3$OH ice (and the H$_2$CO ice, not shown in the figures). As a consequence, the CO$_2$ ice abundance is predicted to be much smaller going inward in the cold core. The smaller grain temperature decreases the efficiency of CO$_2$ ice formation (which possesses a barrier), so more CO is available for H$_2$CO and CH$_3$OH. At $6\times 10^5$~yr, the inner CH$_3$OH ice abundance is two times larger with the smaller grain temperature (whichever model is shown in the figures, as they all give the same abundance). Another consequence of the colder grains is an H$_2$O ice abundance higher by a factor of two, produced by a higher H$_2$ abundance on the grains (forming water by reacting with OH on the surface). The higher H$_2$ abundance in the ices is due to less efficient thermal desorption. Indeed, in our models, H$_2$ and He are both allowed to thermally desorb and, considering their low binding energies, this process is efficient in our standard model, whereas it is negligible for H$_2$ in this lower dust temperature model. The CH$_4$ and NH$_3$ ices are not changed. \\
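The sensitivity of H$_2$ thermal desorption to a few Kelvin of dust temperature follows from the exponential form of a first-order desorption rate, $k = \nu\,e^{-E_b/T_{\rm dust}}$. The sketch below uses typical assumed values for the pre-exponential frequency and the H$_2$ binding energy, not the exact Nautilus inputs.

```python
import math

# First-order thermal desorption rate k = nu * exp(-Eb / T_dust).
# nu and Eb below are typical ASSUMED values for H2 on water ice,
# not the exact parameters of the Nautilus network.

nu = 1e12        # s^-1, assumed characteristic vibration frequency
E_b = 440.0      # K, assumed H2 binding energy

def k_des(T_dust):
    return nu * math.exp(-E_b / T_dust)

for T in (8.0, 10.0, 12.0):
    print(f"T_dust = {T:4.1f} K  ->  k = {k_des(T):.3e} s^-1")

# Lowering the dust temperature by a few K suppresses H2 desorption by
# many orders of magnitude, leaving more H2 on the grains to react
# with OH and form water.
print(k_des(12.0) / k_des(8.0))
```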
The lower dust temperature produces some significant changes in the simple molecules commonly observed in the gas-phase (comparing Fig.~\ref{simplemol_fig} to Fig.~\ref{simplemol_fig_lowT}). While CO, CN, CS, and HC$_3$N do not show much difference, SO, H$_2$O, HCN, and HCO$^+$ are strongly increased at high density in all models except the CR heating model. The increase in SO, H$_2$O, and HCO$^+$ can be explained by a higher gas-phase abundance of OH and O$_2$. SO forms in the gas-phase by the reaction S + OH. Gas-phase H$_2$O is a product of the dissociative recombination of H$_3$O$^+$, itself formed by the reactions OH + H$^+$ $\rightarrow$ H + OH$^+$, H$_2$ + OH$^+$ $\rightarrow$ H + H$_2$O$^+$, and H$_2$ + H$_2$O$^+$ $\rightarrow$ H + H$_3$O$^+$. The higher OH gas-phase abundance results from a more efficient production of the surface complex s-O...CO, owing to a higher grain coverage by CO. The reaction of s-O...CO with s-H on the surface partly leads to desorbing OH, while interactions with UV photons lead to desorbing atomic oxygen \citep[see][]{2015MNRAS.447.4004R}. Overall, a higher grain coverage by CO slightly increases the amount of oxygen in the gas-phase, increasing the gas-phase abundances of SO, H$_2$O, and HCO$^+$. The higher abundance of HCN in the "photodesorption" and "no desorption" models is also due to a greater abundance of oxygen in the gas-phase, but less directly. In our standard model, part of the HCN is destroyed by Si$^+$ (to form SiNC$^+$). In the colder dust model, the increase of the OH abundance in the gas-phase reduces the Si$^+$ abundance through their reaction forming SiO$^+$. To identify these complex chemical effects, we tested the model by switching the processes on and off one at a time. \\
The increased methanol abundance on the grains propagates to the gas-phase (comparing Fig.~\ref{COM_fig} to Fig.~\ref{COM_fig_lowT}). The other three COMs discussed in this paper (CH$_3$CHO, CH$_3$OCH$_3$, and HCOOCH$_3$) are not significantly affected by the smaller grain temperature. The overall effects of the various non-thermal desorption processes previously discussed still hold. The results presented in Section~\ref{obs_methanol} are not significantly changed either; the methanol gas-phase abundance is just slightly higher at high density (see Fig.~\ref{CH3OH_allprocess_lowT}).\\
\section{Conclusions}
In this work, we used the full gas-grain model Nautilus and included a new non-thermal desorption process: the sputtering of ices by cosmic-ray particles, studied experimentally in the laboratory by \citet{2018A&A...618A.173D}. We tested its efficiency with respect to the non-thermal desorption mechanisms commonly included in astrochemical models: chemical desorption, grain heating by cosmic-rays, and photo-desorption. We also tested our model predictions against methanol observations in the TMC-1C, TMC-1(CP), and TMC-1(NH3) cold cores. We focused the discussion on the main ice components, on simple molecules usually observed in cold cores (CO, CN, CS, SO, HCN, HC$_3$N, and HCO$^+$), and on complex organic molecules (COMs, such as CH$_3$OH, CH$_3$CHO, CH$_3$OCH$_3$, and HCOOCH$_3$). Our conclusions are as follows:
\begin{itemize}
\item [-] We found that not all species are sensitive in the same way to the non-thermal desorption mechanisms, and the sensitivity also depends on the physical conditions. It is therefore mandatory to include all of them. Grain heating by cosmic rays appears to significantly impact some of the species formed in the gas-phase (such as CN, HCN, HC$_3$N, and HCO$^+$) through the desorption of CH$_4$ ices at high density. For molecules formed on the grains (such as H$_2$O, CH$_3$OH, and CH$_3$OCH$_3$), the sputtering of the ices induced by cosmic-ray collisions dominates the desorption at high density, while photo-desorption dominates at low density (although the resulting gas-phase abundances are much lower there, as the ice abundances are smaller).
\item [-] A direct comparison of the gas-phase methanol in the TMC-1C, TMC-1(CP), and TMC-1(NH3) cold cores with our model predictions shows that our models can reproduce the observations within a factor of 10 at $3.6\times 10^5$~yr for all densities. Chemical desorption seems essential to reproduce the observations for H densities smaller than $4\times 10^4$~cm$^{-3}$, while sputtering is essential above this density.
\item [-] The models are, however, systematically below the observed abundances. Considering a more efficient chemical desorption, as measured on bare grains by \citet{2016A&A...585A..24M}, goes in the right direction at low densities, although it may be extreme. Using a higher efficiency for the sputtering, as measured for CO$_2$ ices, increases the methanol abundance in the gas at high density.
\item [-] Comparing observed and modeled CH$_3$OH column densities, rather than abundances, leads to similar conclusions, except that the inner column densities are fully reproduced at early times (1-2$\times 10^5$~yr); in that case, however, the outermost column densities are strongly underestimated by our models. The best agreement at all radii is obtained for times between 3.6$\times 10^5$ and 6$\times 10^5$~yr. In that case, the predicted column density profile is flatter with radius but remains below the observed column densities by less than a factor of 10.
\item [-] Considering that a small fraction of the hydrogen is not in H$_2$ at the beginning of the chemical calculation only impacts the methanol abundance if this fraction is as high as H/H$_2$ = $2\times 10^{-2}$, which would be unrealistic.
\item [-] In our simulations, we used the dust temperature observed with Herschel, which is a little above the gas temperature determined from the molecular excitation. Setting the dust temperature equal to the gas temperature decreases the dust temperature in the inner regions of the cores by a few Kelvin. Such a change affects the ice reservoir by producing less CO$_2$ and increasing the CO and CH$_3$OH abundances. This higher CH$_3$OH ice abundance propagates to the gas-phase, increasing the gas-phase abundance by a factor of a few. The change in CO ice coverage slightly impacts the oxygen quantity in the gas phase because, in our model, atomic oxygen physisorbed on CO ice partly releases O and OH into the gas phase, either by interaction with direct and secondary UV photons or by hydrogenation.
\end{itemize}
In conclusion, the sputtering of ices by cosmic-ray collisions may be the most efficient desorption mechanism at high density (above a few $10^4$~cm$^{-3}$ under the conditions studied here) in cold cores, while chemical desorption is still needed at smaller densities. Additional studies of both chemical desorption and CR sputtering are needed to assess their efficiency as a function of the main ice composition (especially in the presence of mixtures).
\section{Acknowledgements}
The authors acknowledge the CNRS program "Physique et Chimie du Milieu Interstellaire" (PCMI) co-funded by the Centre National d'Etudes Spatiales (CNES). DNA and AF thank the Spanish MICIU for funding support from AYA2016-75066-C2-2-P and PID2019-106235GB-I00.
\bibliographystyle{aa}
Bata Kindai Amgoza ibn LoBagola (1877–1947) was an early 20th-century American impostor and entertainer who presented an exoticized identity as a native of Africa, when in reality he was born Joseph Howard Lee in Baltimore, Maryland. Despite an impoverished start in life, a lack of education, and a series of scandalous arrests related to homosexual activity, mainly involving underage individuals, LoBagola maintained a long and colorful career posing as an African "savage", during which he lectured at many institutions and took part in public debates.
LoBagola; an African Savage's Own Story
LoBagola published some articles in Scribner's Magazine in 1929, and the publisher A.A. Knopf decided to produce a book version, to be titled LoBagola; an African Savage's Own Story, in an attempt to capitalise on the then-current vogue for the "exotic customs" of "places untouched by Europe". Knopf made much of LoBagola being a "savage" from a region of Africa supposedly never visited by white people, though LoBagola described himself as a "Black Jew", claiming descent from people who had fled the Holy Land following the destruction of Herod's Temple.
The book was virtually unedited and came across as a picaresque pseudo-biography, studded with LoBagola's observations of "West African" ways and his adventures in many lands.
Death
LoBagola died of a pulmonary edema in Attica Prison in 1947, with eighteen months of his sentence remaining. He was buried in the prison cemetery.
Popular culture
LoBagola was the subject of a 2016 episode of the Futility Closet Podcast.
External links
Brochure for Speaking Engagements by LoBagola