Q: Do I need to use an older version of Aspose.Cells.dll? I am "Asposizing" a legacy app because users complained about the slowness of its Excel Interop code.
I have added a reference to Aspose.Cells.dll and added this using:
using Aspose.Cells;
...but get this compile error:
The type or namespace name 'Aspose' could not be found (are you missing a using directive or an assembly reference?)
Is it because the project's Target framework is set to ".NET Framework 2.0"?
If so, is there an older version of the Aspose.Cells.dll that I can use? Or will I need to increment the target to 4.5?
So is the problem a mismatch between .NET Frameworks targeted and the Aspose.Cells.dll version?
A: This seems to be a mismatch between .NET versions. Ensure that the version of Aspose.Cells.dll you reference was compiled for .NET 2.0;
otherwise, you may need to find an older version of Aspose.Cells that targets that framework.
A: Your assumption is correct. The problem is a mismatch between .NET Frameworks.
FIRE has destroyed a high school library in Perth causing $1 million worth of damage.
Firefighters were called to Atwell College about 2.45am on Monday.
Crews took just over an hour to bring the blaze, which is believed to have started in an air-conditioner, under control.
A Department of Education spokeswoman told AAP that other buildings were not damaged and that students returned to school as normal on Monday, while the Arson Squad investigates.
Q: How to format the individual columns in the example. I am starting to tinker with C#. This is my first grassroots project.
I have been working through the MVC example for a week and overcome most of my learning challenges. What I can't get the hang of is how to apply a CSS formatting strategy for the data returned from a DB call (other than to write very messy call-specific HTML).
I am going to use a sample from w3schools.com (https://www.w3schools.com/css/tryit.asp?filename=trycss_table_border_divider) as the sample because it is nicely contained.
Using this example, I want to left-align the first column, center the second column, and right-align the value column with a "# ##0.00" number format.
I assume there is something I need to do in this code to distinguish between Column 1, 2 & 3 (but can't find what it is):
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
}
The complete listing of the code from w3schools.com is as follows:
<!DOCTYPE html>
<html>
<head>
<style>
table {
border-collapse: collapse;
width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
}
</style>
</head>
<body>
<h2>Bordered Table Dividers</h2>
<p>Add the border-bottom property to th and td for horizontal dividers:</p>
<table>
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Savings</th>
</tr>
<tr>
<td>Peter</td>
<td>Griffin</td>
<td>$100</td>
</tr>
<tr>
<td>Lois</td>
<td>Griffin</td>
<td>$150</td>
</tr>
<tr>
<td>Joe</td>
<td>Swanson</td>
<td>$300</td>
</tr>
<tr>
<td>Cleveland</td>
<td>Brown</td>
<td>$250</td>
</tr>
</table>
</body>
</html>
The tutorial I am working through is this one, but I have resolved what I could within this example (so it should probably be ignored)
https://learn.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/adding-model?view=aspnetcore-3.1&tabs=visual-studio-mac
A: I have added some demonstration CSS to show how to tackle this column-referencing problem.
Although you could add classes onto each <td> tag, it is easier to specify which child of the tr you want. To do this, you can use pseudo-classes first-child, nth-child, and last-child. I have added some random formatting to your example - I will leave the specific formatting definitions to you.
You could of course use nth-child for all of these. Often, enumerations in programming start from zero ("zero-indexed") but this class is one-indexed. Thus, for first-child I could have used nth-child(1) instead. In a similar way, last-child could be written here as nth-child(3), but that may be less maintainable in some cases, since if you want to insert a column, you would have to adjust the CSS for the rightmost one too.
<!DOCTYPE html>
<html>
<head>
<style>
table {
border-collapse: collapse;
width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
}
table tr td:first-child {
font-weight: bold;
color: red;
}
table tr td:nth-child(2) {
font-size: 0.8em;
color: purple;
}
table tr td:last-child {
font-style: italic;
color: green;
}
</style>
</head>
<body>
<h2>Bordered Table Dividers</h2>
<p>Add the border-bottom property to th and td for horizontal dividers:</p>
<table>
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Savings</th>
</tr>
<tr>
<td>Peter</td>
<td>Griffin</td>
<td>$100</td>
</tr>
<tr>
<td>Lois</td>
<td>Griffin</td>
<td>$150</td>
</tr>
<tr>
<td>Joe</td>
<td>Swanson</td>
<td>$300</td>
</tr>
<tr>
<td>Cleveland</td>
<td>Brown</td>
<td>$250</td>
</tr>
</table>
</body>
</html>
Further reading
Classes used:

* https://developer.mozilla.org/en-US/docs/Web/CSS/:first-child
* https://developer.mozilla.org/en-US/docs/Web/CSS/:last-child
* https://developer.mozilla.org/en-US/docs/Web/CSS/:nth-child

These are all pseudo-classes that have a special meaning. You can learn more about them here:

* https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-classes
\section{Introduction}
The theory of branching processes was born out of Galton's famous
family extinction problem. Later, interest turned to populations not
dying out and their growth and stabilisation. In more recent years,
extinction has retaken a place in the foreground, for reasons from
both conservation and evolutionary biology. The time and path to
extinction of subcritical general populations was studied in
\cite{pnas}. Here, time structure is crucial, and life spans and
varying bearing ages cannot be condensed into simple,
generation-counting Bienaym\'e-Galton-Watson processes. Thus, the question
arises whether (non-critical) general branching populations bound
for extinction must behave like subcritical populations.
We answer this in the affirmative: a general, multitype branching
process conditioned to die out, remains a branching process, but one
almost surely dying out. If the original process was supercritical
but with a strictly positive risk of extinction, the new process is
subcritical.
Formulated in such a loose manner, this fact belongs to the folklore
of branching, but actually it has been proved only for
Bienaym\'e-Galton-Watson processes, \cite{AthreyaNey:1972}, p.~52. A
moment's thought tells us that it remains true for
age-dependent branching processes of the Bellman-Harris type, where
individuals have i.i.d. life spans, and split into i.i.d. random
numbers of children, independently of life span, the time structure
thus not being affected by the conditioning.
But what if the flow of time is no longer independent of
reproduction? Even the simplest case, that of a splitting
reproduction at death, but not independently of age at
death/splitting, would seem to offer difficulties, and the same
certainly applies to the more realistic general processes where
reproduction occurs as a point process during life, thus mimicking
the yearly births of wildlife, or the even more erratic reproduction
pattern of humans.
The conceptual framework is intuitive. Starting from an Eve,
individuals live and give birth independently of one another. At
birth each individual inherits a {\em type} from her mother. The
type, in its turn, determines the probability measure over all
possible life careers, including a life span and a marked point
process which reports the successive ages at bearing, and the types
of children at the various bearings. Note that multiple births are
not excluded. The branching property can be summarised into the fact
that given her type and birth time, the daughter process of any
individual born is independent of all individuals not in her progeny
(into which she herself is included).
We set out to prove that this branching property remains true for
processes conditioned to die out. Initially, we shall not mention
supercriticality, and only ask that the probability of extinction is
non-zero for any starting type. (If that probability is one, the
conditioning does not change anything!) Largely, the proof is a
matter of conceptual clarity or discipline, which unfortunately
forces us into the somewhat burdensome notation of probabilities on
tree spaces, obscuring the essential simplicity of the matter.
The main idea behind the proof is, however, easily outlined. Indeed,
consider an individual, and condition upon her progeny ultimately
dying out. Her own life career is then affected precisely through
her only being able to have daughters whose progeny in their turn
must ultimately face extinction. In all other respects her life is
independent of all others, once her type is given. This
reestablishes the branching character, but with a suitably amended
probability measure over her life career, which clearly is
non-supercritical in the sense that the probability of ultimate
extinction is one, from any starting type that can be realised.
If the original process is, furthermore, supercritical, {\em i.e.}
has a positive Malthusian parameter, the conditioned process will
turn out to be subcritical, in the sense of the Malthusian parameter
being negative, if it exists.
\section{Notation}
\subsection{The Ulam-Harris family space}
We choose to work within the classical Ulam-Harris framework,
\cite{Jagers:1989}, identifying individuals with sequences of
natural numbers so that $x=(x_1,x_2,\dots,x_n)$ denotes the $x_n$th
child of the \dots\ of the $x_2$th child of the $x_1$th child of the
ancestor. The ancestor is denoted by an ``empty'' sequence $e$
(mnemonic for ``empty'' or ``Eve''), and the set of all possible
individuals is
$$
\mathbb{T} = \{e\}\cup\bigcup_{n\in\mathbb{N}}\mathbb{N}^n.
$$
The concatenation of $x,y\in\mathbb{T}$ is $xy$, and thus $ex=xe=x$ for all $x\in\mathbb{T}$.
For any $e\neq x = (x_1,x_2,\dots,x_n)$ $x$'s \emph{mother} is $mx =
(x_1,\dots,x_{n-1})$, her \emph{rank} in the sibship is $rx = x_n$,
and $x$'s \emph{generation} $g(x)=n$. We agree that $me=re=e$ and
$g(e)=0$. Hence, $mxrx=x$ for $x\in\mathbb{T}$, and $m$ can be iterated so
that $m^nx$ is $x$'s $n$th grandmother, provided $g(x)>n$.
Clearly $x$ \emph{stems from} $y$, usually written $x\succeq y$, if
$m^nx=y$ for some $n\in\mathbb{N}\cup\{0\}$, or equivalently if there exists
a $z\in\mathbb{T}:x=yz$. In this terminology, $x$ stems from herself,
$x\preceq x$. In other words, $(\mathbb{T},\preceq)$ is a partially ordered
set (a semilattice). We define $x\sim y$ if $x\succeq y$ or
$x\preceq y$, i.e.\ $x$ and $y$ are in direct line of descent.
($\sim$ is not an equivalence relation.)
For $A,B\subseteq\mathbb{T}, x\in\mathbb{T}$, we write $x\succeq A$ if there exists
a $y\in A$ such that $x\succeq y$, and $A\preceq B$ if $x\succeq A$
for all $x\in B$. The \emph{progeny} of $A\subseteq\mathbb{T}$ is defined as
$\mathrm{Pr}\,A=\{x\in\mathbb{T}:x\succeq A\}$.
We call a set $L\subset\mathbb{T}$ a stopping line, or \emph{line} for
short, if no two members of $L$ are in direct line of descent:
$x,y\in L,x\neq y\Rightarrow x\not\sim y$. We say that a line $L$ is
a \emph{covering line} if for all $x\in\mathbb{T}$ there exists a $y\in L$
such that $x\sim y$.
\subsection{Life space and population space}
Let $(\Omega_{\ell},\mathscr{A}_{\ell})$ be a \emph{life space} so
that $\omega\in\Omega_{\ell}$ is a possible life career of
individuals. Any individual property, such as mass at a certain age
or life span, is viewed as a measurable function (with respect to
the $\sigma$-algebra $\mathscr{A}_{\ell}$) on the life space. This
should be rich enough to support, at least, the functions
$\tau(k),\sigma(k)$ for $k\in\mathbb{N}$. Here
$\tau(k):\Omega_{\ell}\to\mathbb{R}_+\cup\{\infty\}$ is the mother's age at
the $k$th child's birth, $0\leq \tau(1)\leq\tau(2)\leq\cdots\leq
\infty$. If $\tau(k)=\infty$, then the $k$th child is never born.
$\sigma(k):\Omega_{\ell}\to\mathcal{S}$ is the child's type,
obtained at birth. The type space $\mathcal{S}$ has a (countably
generated) $\sigma$-algebra $\mathscr{S}$. The whole reproduction
process is then the marked point process $\xi$ with $\xi(A\times
B)=\#\{k:\sigma(k)\in A, \tau(k)\in B\}$ for
$A\in\mathscr{S},B\in\mathscr{B}$, the Borel algebra on $\mathbb{R}_+$.
The population space is defined as $(\Omega,
\mathscr{A})=(S\times\Omega_{\ell}^{\mathbb{T}},\mathscr{S}\times\mathscr{A}_{\ell}^{\mathbb{T}})$.
$U_M$ is the projection $\Omega\to \Omega_{\ell}^M$, for $M\subseteq
\mathbb{T}$. For simplicity $U_x=U_{\{x\}}$ and similarly $\mathrm{Pr}\,x
=\mathrm{Pr}\{x\}$. The following $\sigma$-algebras are important:
$$
\mathcal{F}_L=\mathscr{S}\times\sigma(U_x:x\not\succeq
L)=\mathscr{S}\times\sigma(U_x:x\notin\mathrm{Pr}\,L),
$$
for lines $L\subseteq\mathbb{T}$. Since $L\preceq M\Rightarrow
\mathrm{Pr}\,L\supseteq\mathrm{Pr}\,M\Rightarrow\mathcal{F}_L\subseteq\mathcal{F}_M$,
the family $(\mathcal{F}_L)$, indexed by lines, is a filtration under
$\preceq$. In the usual manner, the definition of the
$\sigma$-algebras $\mathcal{F}_L$ can be extended to
$\sigma$-algebras of events preceding random lines $\mathcal{L}$,
which are optional in the sense that the events $\{\mathcal{L}\preceq
L\} \in \mathcal{F}_L$ \cite{Jagers:1989}.
Functions $\xi,\tau(k)$ and $\sigma(k)$ were defined on the life
space but we want to be able to speak about these quantities
pertaining to a given $x\in\mathbb{T}$. We write $\xi_x=\xi\circ U_x$, $x$'s
reproduction process, $\tau_x = \tau(rx)\circ U_{mx}$, $x$'s
mother's age at $x$'s birth, and $x$'s type
$\sigma_x=\sigma(rx)\circ U_{mx}$. Note the difference between
$\tau(k)$ and $\tau_k$, $\sigma(k)$ and $\sigma_k$, for
$k\in\mathbb{N}\subset\mathbb{T}$.
Finally, the process is anchored in real time by taking Eve to be
born at time 0, and later birth times $t_x,x\in\mathbb{T}$ recursively
determined by $t_e=0$ and $t_x=t_{mx}+\tau_x$ for $e\neq x\in\mathbb{T}$.
The meaning of $t_x=\infty$ is that $x$ is never born, so that
$\mathscr{R}=\{x\in\mathbb{T}:t_x<\infty\}$ is the set of \emph{realised}
individuals. This set is optional, $\mathcal{F}_{L\cap\mathscr{R}}$
is well defined \cite{Jagers:1989}, and so is the $\sigma$-algebra
$\mathcal{F}_{\mathscr{R}}$ of events pertaining only to realised
individuals. The probability space restricted to such events is that
where a branching process really lives, {\em cf.} \cite{Neveu},
\cite{Chauvin}.
\subsection{The probability measure and branching property}
The setup is that for each $s\in\mathcal{S}$ there is a probability
measure $P(s,\cdot)$ on the life space
$(\Omega_{\ell},\mathscr{A}_{\ell})$, such that the function $s\to
P(s,A)$ is measurable for each $A\in\mathscr{A}_{\ell}$. For any
$s\in\mathcal{S}$ this kernel (the {\em life kernel}) defines a {\em
population probability} measure $\mathbb{P}_s$ on $(\Omega,\mathscr{A})$
with an ancestor of type $\sigma_e=s$ and such that given
$\sigma_x$, $x$'s life will follow the law $P(\sigma_x,\cdot)$
independently of the rest of the process, see \cite{Jagers:1989}.
Indeed, the basic branching property of the whole process can be
characterised by a generalisation of this in terms of the mappings
$S_x=(\sigma_x,U_{\mathrm{Pr}\,x}):\Omega\to
\mathcal{S}\times\Omega_{\ell}^{\mathrm{Pr}\,x}$, which renders $x$ the new Eve. Let
$T_x=S^{-1}_x$ and $\{A_x:x\in L\}\subseteq\mathscr{A}$. Then,
\begin{equation*}
\mathbb{P}_s\bigg(\bigcap_{x\in L}T_xA_x\bigg|\mathcal{F}_L\bigg)=
\prod_{x\in L}\mathbb{P}_{\sigma_x}(A_x)
\end{equation*}
for lines $L$. This remains true for optional lines and in particular
\begin{equation}\label{branch_prop}
\mathbb{P}_s\bigg(\bigcap_{x\in
L\cap\mathscr{R}}T_xA_x\bigg|\mathcal{F}_{L\cap\mathscr{R}}\bigg)=
\prod_{x\in L\cap\mathscr{R}}\mathbb{P}_{\sigma_x}(A_x) ,
\end{equation}
where the intersection over the empty set is taken to be
$\Omega$ and the empty product is ascribed the value one.
The interpretation is that the daughter processes of all realised
individuals $x$ in a line are independent given the prehistory of
the line with the population probability measure $\mathbb{P}_{\sigma_x}$,
the only dependence upon the past thus being channelled through the
type $\sigma_x$ and the placing in time $t_x$. This is the branching
property. We shall see that it remains true for processes, bound to
die out.
\section{Conditioning on extinction}
Denote by $E$ the event that the branching process starting from Eve
dies out, i.e. that $\mathscr{R}$ has only a finite number of
elements. Let $q_s=\mathbb{P}_s(E)$ and $E_x$ be the event that the
branching process starting from $x$ dies out, $E_x=T_xE$. Write
$\tilde{\mathbb{P}}_s(\cdot)=\mathbb{P}_s(\cdot | E)$, which clearly only
makes sense for $s\in \mathcal{S}$ such that $q_s>0$, and let $\tilde{\mathbb{E}}_s$
denote expectation with respect to $\tilde{\mathbb{P}}_s$.
\begin{theorem}
Any branching process conditioned on extinction remains a branching
process, but with extinction probability one. Its life kernel is
$\tilde{P}(s,A):=\tilde{\mathbb{P}}_s(\mathcal{S}\times
A\times\Omega_{\ell}^{\mathbb{T}\setminus\{e\}})$ for
$A\in\mathscr{A}_{\ell}$. Thus, for any covering line $L$ and
$\{A_x:x\in L\}\subseteq\mathscr{A}$
\begin{equation}\label{cond_branch_prop}
\tilde{\mathbb{P}}_s\bigg(\bigcap_{x\in
L\cap\mathscr{R}}T_xA_x\bigg|\mathcal{F}_{L\cap\mathscr{R}}\bigg) =
\prod_{x\in L\cap\mathscr{R}}\tilde{\mathbb{P}}_{\sigma_x}(A_x) .
\end{equation}
Furthermore, the Radon-Nikodym derivative $d\tilde{\mathbb{P}}_s/d\mathbb{P}_s$
with respect to the $\sigma$-algebra $\mathcal{F}_{L\cap\mathscr{R}}$
is given by
\begin{equation}\label{radon-nikodym}
\frac{d\tilde{\mathbb{P}}_s}{d\mathbb{P}_s}\bigg|_{\mathcal{F}_{L\cap\mathscr{R}}}=\frac{1}{q_s}\prod_{x\in L\cap\mathscr{R}}q_{\sigma_x}.
\end{equation}
\end{theorem}
\begin{proof}
First, note that
\begin{equation*}
E= \bigcap_{x\in L\cap\mathscr{R}}T_xE,
\end{equation*}
for covering lines $L$. Indeed, since
$\{L\cap\mathscr{R}=\emptyset\}\subseteq E$ and intersection over an
empty index set yields the full space,
\begin{eqnarray*}\label{E}
E &=& (E\cap \{L\cap\mathscr{R}=\emptyset\}) \cup (E\cap
\{L\cap\mathscr{R}\neq \emptyset\})\\
&=& \bigg(\{L\cap\mathscr{R}=\emptyset\}\cap\bigcap_{x\in
L\cap\mathscr{R}}T_xE \bigg) \cup\bigg(
\{L\cap\mathscr{R}\neq\emptyset\}\cap \bigcap_{x\in
L\cap\mathscr{R}}T_xE \bigg)\\
&=&\bigcap_{x\in L\cap\mathscr{R}}T_xE .
\end{eqnarray*}
The branching property \eqref{branch_prop} implies that
\begin{equation}\label{q_prod}
\mathbb{P}_s(E|\mathcal{F}_L)=\prod_{x\in L\cap\mathscr{R}}q_{\sigma_x}
=\mathbb{P}_s(E|\mathcal{F}_{L\cap\mathscr{R}}).
\end{equation}
Hence, for any covering line $L$ and $A\in\mathcal{F}_L$
$$
\tilde{\mathbb{P}}_s(A)=\mathbb{E}_s\bigg[\mathbb{P}_s(E|\mathcal{F}_L);A\bigg]/q_s
=\mathbb{E}_s\bigg[\prod_{x\in L\cap\mathscr{R}}q_{\sigma_x};A\bigg]/q_s,
$$
and thus \eqref{radon-nikodym} holds. Equations \eqref{branch_prop} and \eqref{q_prod} yield
\begin{align*}
\mathbb{P}_s\bigg(\bigcap_{x\in L\cap\mathscr{R}}T_xA_x\bigg|\mathcal{F}_{L\cap\mathscr{R}}\bigg) &=
\prod_{x\in L\cap\mathscr{R}}q_{\sigma_x}\tilde{\mathbb{P}}_{\sigma_x}(A_x) =
\mathbb{P}_s(E|\mathcal{F}_{L\cap\mathscr{R}})\prod_{x\in L\cap\mathscr{R}}\tilde{\mathbb{P}}_{\sigma_x}(A_x),
\end{align*}
and \eqref{cond_branch_prop} follows.
\end{proof}
\begin{remark} For $L=\mathbb{N}$, the first generation,
$$
\frac{d\tilde{\mathbb{P}}_s}{d\mathbb{P}_s}\bigg|_{\mathcal{F}_{\mathbb{N}\cap\mathscr{R}}}
=\frac{1}{q_s}\prod_{k\in\mathbb{N}:k\leq X}q_{\sigma_k},
$$
where $X:=\xi_e(\mathcal{S}\times\mathbb{R}_+)$ is Eve's total offspring. In
single-type processes with extinction probability $q$ therefore
\begin{equation}\label{single_rn}
\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}}\bigg|_{\mathcal{F}_{\mathbb{N}\cap\mathscr{R}}}=q^{X-1}
\end{equation}
and in the Bienaym\'e-Galton-Watson case
$$
\tilde{\mathbb{P}}(X=k)=\mathbb{E}[q^{X-1};X=k]=\mathbb{P}(X=k)q^{k-1},
$$
in perfect agreement with \cite[Theorem I.12.3]{AthreyaNey:1972}.
\end{remark}
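As a numerical sanity check on the relation $\tilde{\mathbb{P}}(X=k)=\mathbb{P}(X=k)q^{k-1}$, the following Python sketch uses an invented supercritical offspring law (not one from the paper), computes $q$ by fixed-point iteration of the offspring generating function, and compares the resulting conditioned law with a Monte Carlo estimate obtained by simulating the embedded Bienaym\'e-Galton-Watson population and conditioning on extinction (approximated by a generation and population cap).

```python
import random

# Invented supercritical offspring law: p_0 = 0.25, p_2 = 0.75 (mean 1.5).
p = {0: 0.25, 2: 0.75}

def f(s):
    """Offspring probability generating function."""
    return sum(pk * s ** k for k, pk in p.items())

# q is the smallest fixed point of f on [0, 1]; iterate f from 0.
q = 0.0
for _ in range(200):
    q = f(q)  # converges to 1/3 for this law

# Conditioned offspring law p_k q^(k-1); it is already normalised,
# since sum_k p_k q^(k-1) = f(q)/q = 1.
p_tilde = {k: pk * q ** (k - 1) for k, pk in p.items()}

# Monte Carlo cross-check: sample Eve's offspring count X, run the
# descendant population forward, and keep X only if the line dies out.
def offspring(rng):
    return 0 if rng.random() < p[0] else 2

def dies_out(rng, n, max_gen=40, cap=200):
    for _ in range(max_gen):
        if n == 0:
            return True
        if n > cap:  # survival is then essentially certain
            return False
        n = sum(offspring(rng) for _ in range(n))
    return n == 0

rng = random.Random(42)
kept = []
while len(kept) < 3000:
    x = offspring(rng)
    if dies_out(rng, x):
        kept.append(x)

freq0 = kept.count(0) / len(kept)
print(round(q, 6), round(p_tilde[0], 6), round(freq0, 3))
```

For this law $q=1/3$, so the conditioned probabilities become $\tilde p_0 = 0.75$ and $\tilde p_2 = 0.25$: the conditioned process is subcritical with mean offspring $0.5$, and the empirical frequency agrees.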
\begin{example}
Consider single-type Sevastyanov splitting processes, where
individuals have a life span distribution $G$ and at death split
into $k$ particles with probability $p_k(u)$, if the life span
$L=u$. By \eqref{single_rn} we conclude that
$$
\tilde{\mathbb{P}}(X=k, L\in du) =\mathbb{E}[q^{X-1};X=k, L\in du] =p_k(u)q^{k-1}G(du).
$$
Hence
$$
\tilde{G}(du)=\tilde{P}(L\in du)=\sum_{k=0}^\infty p_k(u)q^{k-1}G(du)
$$
and $\tilde{\mathbb{P}}(X=k, L\in du)=\tilde{p}_k(u)\tilde{G}(du)$, with
$$
\tilde{p}_k(u) = \frac{p_k(u)q^{k-1}}{\sum_{i=0}^{\infty}p_i(u)q^{i-1}},
$$
and the conditioned Sevastyanov process remains a Sevastyanov
process with life span distribution $\tilde{G}$ and splitting
probabilities $\tilde{p}_k(u)$.
\end{example}
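The transformation in the example is easy to tabulate numerically. The Python sketch below uses an invented age-dependent splitting law $p_k(u)$ and an assumed value of $q$ (neither taken from the paper) to compute $\tilde{p}_k(u)$, and checks that for each age it is a probability distribution with mass shifted away from splitting ($k=2$) towards childless death ($k=0$).

```python
import math

# Invented age-dependent splitting law for illustration: at death at
# age u an individual leaves 0 or 2 children, splitting being more
# likely at higher ages. Neither p_k(u) nor q below come from the paper.
def p(k, u):
    p2 = 0.5 + 0.4 * (1.0 - math.exp(-u))   # p_2(u), between 0.5 and 0.9
    return {0: 1.0 - p2, 2: p2}.get(k, 0.0)

q = 0.4  # assumed extinction probability of the unconditioned process

def p_tilde(k, u):
    """Conditioned law p~_k(u) = p_k(u) q^(k-1) / sum_i p_i(u) q^(i-1)."""
    norm = sum(p(i, u) * q ** (i - 1) for i in (0, 2))
    return p(k, u) * q ** (k - 1) / norm

for u in (0.5, 1.0, 5.0):
    total = p_tilde(0, u) + p_tilde(2, u)
    assert abs(total - 1.0) < 1e-12
    # Conditioning on extinction always inflates the k = 0 weight.
    assert p_tilde(0, u) > p(0, u)
    print(u, round(p_tilde(0, u), 3), round(p_tilde(2, u), 3))
```

The factor $q^{k-1}$ amplifies the childless outcome by $1/q$ and damps the splitting outcome by $q$, at every age $u$.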
Finally, we address the question of whether a supercritical process
conditioned on extinction is subcritical. It is clear that the
conditioned branching process has extinction probability one, for
any starting type, but this would also be the case if the process
were nontrivially critical.
In the single-type case it
follows from \eqref{single_rn} that the expected total number of children per
individual in the conditioned process satisfies
$$
\tilde{\mathbb{E}}[X]=\mathbb{E}[Xq^{X-1}]=f'(q),
$$
in terms of the offspring generating function $f$ of the embedded
Bienaym\'e-Galton-Watson process. It is well known that $f'(q)<1$ if
the original process was supercritical, see \cite{AthreyaNey:1972},
and we conclude that the conditioned process is subcritical.
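In Python, the fixed point $q$ and the conditioned mean $f'(q)$ can be computed directly; the offspring law below is an invented supercritical example, not one from the paper.

```python
# Invented supercritical offspring law: p_0 = 0.2, p_1 = 0.2, p_2 = 0.6
# (mean 1.4). Its generating function is f(s) = 0.2 + 0.2 s + 0.6 s^2.
p = [0.2, 0.2, 0.6]

def f(s):
    return sum(pk * s ** k for k, pk in enumerate(p))

def f_prime(s):
    return sum(k * pk * s ** (k - 1) for k, pk in enumerate(p) if k > 0)

# The extinction probability q is the smallest root of f(s) = s in
# [0, 1], obtained by iterating f from 0.
q = 0.0
for _ in range(500):
    q = f(q)

# E~[X] = f'(q) is the mean offspring number of the conditioned process.
print(round(q, 6), round(f_prime(q), 6))
assert f_prime(q) < 1  # subcritical, as claimed
```

Here $q=1/3$ and $f'(q)=0.6<1$, illustrating the claim for a concrete supercritical law.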
Also in the general case it is enough to consider the embedded
generation-counting process. If $X_n(A) = \mathrm{card}\{x\in\mathbb{N}^n\cap\mathscr{R}: \sigma_x\in A\}$ denotes the number of individuals
in the $n$th generation with type in $A$, and $q:=\sup_{s\in\mathcal{S}}q_s<1$, then
$$\tilde{\mathbb{E}}_s[X_n(\mathcal{S})]=\mathbb{E}_s\bigg[X_n(\mathcal{S}) e^{\int_{\mathcal{S}}\log q_rX_n(dr)}\bigg]/q_s\le
\mathbb{E}_s\bigg[X_n(\mathcal{S})q^{X_n(\mathcal{S})}\bigg]/q_s\to 0,
$$
as $n\to \infty$ (by dominated convergence, since $X_n(\mathcal{S})$ must either tend to zero or to infinity). But the expected size of the embedded process tending to zero means exactly that the process is subcritical.
\section{Introduction}
\label{sec_intro}
Many Galactic globular clusters (GGCs) located in the inner regions of
the Milky Way (within 3 kpc from the Galactic center) still lack a
proper determination of their physical parameters. The analysis of the
color-magnitude diagrams (CMDs), which is the most common tool to
extract this information, is severely hampered when applied to these
objects. To the usual problems faced when studying the globular
clusters of the outer Galaxy (e.g., high crowding and saturation by
bright stars), we must add two issues that are more common in the
inner parts of the Galaxy. The first issue is the presence of high
extinction and reddening, which usually change differentially over the
field of view of these GGCs. The second issue is the high density of
field stars that appear in the CMDs superimposed with the stellar
population of these GGCs, making it difficult to disentangle the field
from the cluster especially in the most poorly populated GGCs.
To diminish the effects of extinction, observations of these GGCs can
be carried out in the near-infrared. Extinction at these wavelengths
is significantly smaller than in the optical ($\mathcal{A}_K \sim 0.1
\mathcal{A}_V$; see Table~2 in \citealt{cat11}). To take full
advantage of this fact, the {\em VISTA Variables in the Via Lactea} (VVV)
survey \citep{min10,sai12b} observed the inner regions of the Galaxy
in the near-infrared in recent years. VVV is a European Southern
Observatory (ESO) public survey that was conducted between 2010 and
2016, covering 560 sq. degrees of the Galactic bulge and an adjacent
region of the inner disk. Observations in five near-infrared filters
$ZYJHK_s$ were performed, and observations in $K_s$ of the whole
region were taken in multiple epochs, aiming to explore the presence
of variable stars and other variable phenomena in this area of the
sky.
There are 36 GGCs in the area covered by the VVV survey, according to
the 2010 version of the \citet{har96} catalog (from now on the Harris
catalog), along with tens of new candidates (e.g.,
\citealt{min17b,cam19,min19,gra19,pal19}; for a recent update, see
\citealt{bic19}). In a series of papers, we are exploring the variable
stars present in these GGCs, aiming to better characterize the cluster
in which they reside. Among them, RR~Lyrae stars are fundamental for
our purposes. Not only are these stars quite common in (many) globular
clusters, but their period-luminosity (PL) relation, especially tight
in the near-infrared \citep{lon86,cat04}, makes them excellent
standard candles that allow us to accurately infer their distances and
extinctions; we showed this for 2MASS-GC~02 and Terzan~10 in
\citet{alo15}, which we refer to from now on as
\citetalias{alo15}. The GGCs are clumped into two main groups
\citep{oos39,cat09a,smi11}, according to the characteristics of the
fundamental-mode RR~Lyrae (RRab) that they contain: the Oosterhoff~I
group shows RRab stars with shorter periods ($\langle P_{\rm ab}
\rangle \sim 0.55$~days), while the Oosterhoff II have RRab stars with
longer periods ($\langle P_{\rm ab} \rangle \sim 0.64$~days). Between
these two groups, there is an almost empty region called the
Oosterhoff gap, at $\langle P_{\rm ab} \rangle \sim
0.60\pm0.02$~days. Oosterhoff~II GGCs also tend to be more metal-poor
than Oosterhoff~I GGCs and to have a higher ratio of first-overtone
RR~Lyrae (RRc) to RRab stars. A couple of GGCs whose $\langle
P_{\rm ab} \rangle$ values are too long for their high metallicities have been
classified as Oosterhoff~III GGCs \citep{pri00}.
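As a rough illustration, a cluster can be placed in one of these groups from its mean RRab period alone; the Python sketch below uses cut values derived from the indicative periods just quoted (they are a simplification chosen here for illustration, not a published calibration, and the metallicity-based Oosterhoff~III class is ignored).

```python
def oosterhoff_group(mean_p_ab):
    """Roughly classify a globular cluster from its mean RRab period
    (in days), using indicative boundaries around the Oosterhoff gap
    at ~0.60 +/- 0.02 days. Illustrative thresholds only."""
    if mean_p_ab < 0.58:
        return "Oosterhoff I"
    if mean_p_ab <= 0.62:
        return "Oosterhoff gap"
    return "Oosterhoff II"

# Mean RRab periods typical of each group (values quoted in the text).
for p_ab in (0.55, 0.60, 0.64):
    print(p_ab, oosterhoff_group(p_ab))
```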
\begin{table*}
\caption{Positions and physical parameters of the target clusters.}
\label{tab_gcinput}
\centering
\begin{tabular}{ccccccccc}
\hline\hline
Cluster & $\alpha$(J2000) & $\delta$(J2000) & $l$ & $b$ & $[{\rm Fe/H}]$ & $M_V$ & c & $r_{t}$ \\
& (h:m:s) & (d:m:s) & (deg) & (deg) & (dex)& (mag) & & (arcmin) \\
\hline
NGC~6441 & 17:50:13 & -37:03:04 & 353.53 & -5.01 & -0.46 & -9.63 & 1.74 & 7.14 \\
Terzan~10 & 18:02:58 & -26:04:00 & 4.42 & -1.86 & -1.59 & -6.35 & 0.75 & 5.06 \\
2MASS-GC~02 & 18:09:37 & -20:46:44 & 9.78 & -0.62 & -1.08 & -4.86 & 0.95 & 4.90 \\
NGC~6569 & 18:13:39 & -31:49:35 & 0.48 & -6.68 & -0.76 & -8.28 & 1.31 & 7.15 \\
NGC~6626 (M~28) & 18:24:33 & -24:52:12 & 7.80 & -5.58 & -1.32 & -8.16 & 1.67 & 11.22 \\
NGC~6656 (M~22) & 18:36:24 & -23:54:12 & 9.89 & -7.55 & -1.70 & -8.50 & 1.38 & 31.90 \\
\hline
\end{tabular}
\tablefoot{Equatorial coordinates are taken from \citet{bic19}. Iron
content $[{\rm Fe/H}]$, absolute integrated visual magnitude
$M_V$, concentration $c= \log(r_t/r_c)$, and tidal radii $r_{t}$
are according to the 2010 version of the \citet{har96} catalog,
except for Terzan~10, whose $[{\rm Fe/H}]$ value is provided by
Geisler et al. (in prep.), as detailed in Sect.~\ref{sec_gc2var}.}
\end{table*}
In this second paper of the series, we focus on several well-known
GGCs located in the VVV footprint: NGC~6441, NGC~6569,
NGC~6626~(M~28), and NGC~6656~(M~22). These GGCs show a significant
range in their metallicities (see Table~\ref{tab_gcinput}) and in
their Oosterhoff classification. They differ from those studied in
\citetalias{alo15} because they lie in regions in which extinction is
lower, although still high for outer GGCs standards. These GGCs are
also better populated than the GGCs studied in \citetalias{alo15}, and
they lie in fields in which the stellar background densities are
lower. Finally, they possess recent distance estimations inferred from
{\em Gaia} data \citep{bau19}, and they contain significant numbers of
variable stars in their fields present in the most recent version of
the \citet{cle01} catalog of variable stars in the GGCs (from now on,
the Clement catalog) and in the collection of variable stars in the
inner Milky Way by \citet{sos16,sos17,sos19} from the Optical
Gravitational Lensing Experiment (OGLE). Therefore, by drawing a
comparison with this previous literature, we aim to examine the
reliability of our methods to detect variable stars and to infer the
physical parameters (distance, extinction, and proper motion [PM])
for their GGCs. While achieving this, we also provide a look at the
variable stars of these GGCs from their innermost regions to their
outskirts at near-infrared wavelengths for the first
time. Additionally, we also revisit 2MASS-GC~02 and Terzan~10, the
GGCs from \citetalias{alo15}, to check the advantages and drawbacks of
our updated approach to detect variable sources.
The paper is divided into the following sections: In
Sect.~\ref{sec_obs}, we describe our database; in Sect.~\ref{sec_var}
we develop our improved method to detect variable stars; in
Sect.~\ref{sec_m22}, \ref{sec_gc1}, and \ref{sec_gc2} we implement
this method firstly on NGC~6656~(M~22), then on NGC~6441, NGC~6569,
NGC~6626~(M~28), and finally on 2MASS-GC~02 and Terzan~10, providing a
description of the variable star candidates we found in their fields;
we also select potential members of each cluster according to their PM
and their positions in the CMD, and use these sources to obtain the
distances and extinctions toward these GGCs; finally, in
Sect.~\ref{sec_con} we summarize our conclusions.
\section{Observations and datasets}
\label{sec_obs}
In our analysis, we used observations from VVV \citep{min10}, one of
the six original ESO public surveys conducted with the 4.1m VISTA
telescope in the Cerro Paranal Observatory. The camera installed in
the telescope provides wide-field images of 1.6 square degrees of the
sky, with gaps due to the separation of the 16 chips in the
camera. The VVV survey uses five near-infrared filters ($Z$, $Y$, $J$,
$H$, and $K_s$) and its footprint for the Galactic bulge region is
divided into 196 individual fields. The observing strategy for every
VVV field, detailed in \citet{sai12b}, consists of taking two
consecutive, slightly dithered images of the sky in a given filter
which, when later combined into a so-called stacked image, allow
the correction of cosmetic defects from the different chips. Along
with this pattern, a mosaic of six consecutive pointings is taken for
every field and filter to provide a contiguous coverage of the
observed field, covering the gaps among the chips in the camera. Every
field is observed at least twice in $Z$, $Y$, $J$, and $H$, and at
least 70 times in the $K_s$ filter. The $K_s$ observations in every
epoch have a median exposure time of 16 seconds \citep{sai12b} and
were taken in a nonuniform, space-varying cadence \citep{dek19}.
Our analysis is based on the VVV photometry and astrometry provided by
VIRAC2, an updated version of the VVV Infrared Astrometric Catalogue
\citep[VIRAC;][]{smi18}. A complete description of the new features of
VIRAC2 will be given in an upcoming dedicated paper (Smith et al., in
prep.). Suffice it to say here that this version of VIRAC uses DoPHOT
\citep{sch93,alo12} to extract the point-spread function photometry
from the VVV stacked images, significantly reducing the missing
sources and increasing the completeness of the sample, especially in
the highly crowded GGCs \citep{alo18}. Photometric zero points for
each observation were measured by globally minimizing the
error-normalized offsets between multiple detections of individual
sources and offsets from 2MASS (transformed to the VISTA photometric
system as per \citealt{gonfer18}) where available. A further
calibration was applied to remove spatially coherent residual
structure and match the photometric uncertainties to residual
scatter. This way the calibration offsets reported in \citet{haj20}
are effectively resolved. The VIRAC2 catalog provides us with the VVV
mean magnitudes, PMs, and near-infrared light curves for all the
detected sources.
\section{Variability analysis}
\label{sec_var}
We modified the variability analysis presented in \citetalias{alo15}
to optimize for its use with VIRAC2 on the detection and proper
characterization of classical pulsators. These are the variable stars
that we are more interested in detecting because they can be used as
standard candles. According to our experience, Cepheids and RRab stars
are the classical pulsators that stand out the most in the
near-infrared VVV observations. Even though their light curves are
more sinusoidal and their amplitudes are smaller in the near-infrared
than at optical wavelengths \citep{ang14,cat15}, the features of their
light curves still provide ways to properly identify them within the
VVV data, as shown in \citetalias{alo15}. The smaller amplitudes of
the RRc stars complicate their identification as variable
sources. Even when they are identified as variable stars, their almost
sinusoidal light curves make it difficult to conclusively assign them
to this class using only their VVV near-infrared
photometry. Therefore, the analysis described in this section aims to
mainly identify most of the Cepheids and RRab stars with good-quality
light curves in the VVV GGCs fields. Some RRc stars and eclipsing
binaries are also identified, as shown in the next sections.
The first step in our analysis was to take all sources from VIRAC2
that lie inside the tidal radius of the selected GGCs (see
Table~\ref{tab_gcinput}). After that, we selected stars that were
measured in at least half of the $K_s$ observations that were taken
for their region of sky. That way we disregarded spurious or very low
signal-to-noise detections. The next step in our analysis looked at
the distribution of the median absolute deviation (MAD) of the $K_s$
magnitudes over the different epochs for the stars in a given cluster,
as a function of their median $K_s$ magnitude. Because classical
pulsators show variations along their entire phased light curves, they
produce a higher MAD than a non-variable source at a given magnitude. For
other types of variable stars such as eclipsing binaries with short
and/or not well-sampled eclipses, the MAD may not be such a good
indicator of variability. We calculated the median and the standard
deviation of the MAD as a function of the median $K_s$ magnitude. For
the next step of our analysis we kept only sources that are more than
$1\sigma$ above the median of the MAD (see
Fig.~\ref{fig_mad}). Preliminary tests with previously known variables
in our sample of GGCs showed that, below this cutoff, we were unable
to detect classical pulsators with good-quality light curves. By
applying this cutoff, we reduced the number of stars to be analyzed to
$\sim 10\%$ of the original sample.
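As an illustration, this MAD-based pre-selection can be sketched in a
few lines of Python. The function names, the 0.5~mag bin width, and
the simple binned running statistics are assumptions made for this
example; they are not details of the VIRAC2 pipeline.

```python
import numpy as np

def mad(x):
    """Median absolute deviation of a 1D array."""
    return np.median(np.abs(x - np.median(x)))

def preselect_variables(ks_light_curves, bin_width=0.5):
    """Keep stars whose K_s MAD lies more than 1 sigma above the
    running median of the MAD at their magnitude.

    ks_light_curves : list of 1D arrays, one K_s light curve per star.
    Returns a boolean mask over the input stars.
    """
    med_ks = np.array([np.median(lc) for lc in ks_light_curves])
    mads = np.array([mad(lc) for lc in ks_light_curves])
    keep = np.zeros(len(ks_light_curves), dtype=bool)
    # Running statistics of the MAD in magnitude bins
    edges = np.arange(med_ks.min(), med_ks.max() + bin_width, bin_width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (med_ks >= lo) & (med_ks < hi)
        if in_bin.sum() < 2:
            continue
        cut = np.median(mads[in_bin]) + np.std(mads[in_bin])
        keep[in_bin] = mads[in_bin] > cut
    return keep
```

For real VVV data one would additionally need to handle sparsely
populated magnitude bins and the saturated bright end.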
\begin{figure}
\centering
\includegraphics[scale=0.65]{figm22_fig1.pdf}
\caption{MAD vs. median of the $K_s$ magnitudes for the different
epochs with VVV detections for the stars in M~22, serving as an
example of the preliminary selection to detect variable
candidates. The green dots show the stars we keep for the next
step of the analysis. They are located $1\sigma$ above the median
of the distribution, shown by the red line.}
\label{fig_mad}
\end{figure}
We then looked for periodicity in the signal of every source we
kept. In order to do that, first we submitted the light curve of the
remaining sources to a generalized Lomb-Scargle (GLS) analysis
\citep{zec09}. We analyzed the interval of frequencies $[0.01-10]$
days$^{-1}$, which covers the periods of all RR~Lyrae and Cepheids. As in
\citetalias{alo15}, we eliminated sources with significant gaps in a
given section of their folded light curve and aliased sources with
values for frequency close to integers of days$^{-1}$. Using the best
period the GLS algorithm provided, we fitted a Fourier series to the
folded light curve. We masked those epochs that departed more than
$3\sigma$ from the Fourier fit. We averaged values from the same
mosaic sequence (see Sect.~\ref{sec_obs}) for a more accurate sampling
of the light curve. Since the time taken to observe these sequences
for a given VVV field and filter is much shorter than the period of
the variable sources we are measuring, we can consider observations in
one of these sequences to have been taken at the same epoch. We then
recalculated the period of the variable candidate using the GLS
algorithm, and repeated the described sequence iteratively until we
reached a convergence in the period. After this, for every source we
calculated $\rho$, the ratio of the standard deviation of the
distribution of $K_s$ mosaic-averaged magnitudes from the different
epochs available for that source to the standard deviation of the
residuals from its Fourier fit. The ratio $\rho$ between these two standard deviations
provides us with a good approximation to a by-eye identification and
quality assignment for variable sources (see
Fig.~\ref{fig_varselect}). After some visual inspection, we adopted a
value of $\rho=1.50$ as our cutoff point for the variable stars in the
GGCs reported in this study, except for Terzan~10 in which we used
$\rho=1.30$ as our cutoff point owing to the higher number of epochs
available for this cluster. A quick visual inspection of the shape of
the light curves above this cutoff value allowed us to reject a few
false positives, mainly caused by the blending of two sources into the
same light curve.
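The period search and the $\rho$-based quality cut can be sketched as
follows. For brevity, this example replaces the GLS periodogram of
\citet{zec09} with a brute-force least-squares sinusoid fit over a
frequency grid, and omits the gap rejection, alias filtering, and
iterative $3\sigma$ masking; the function names and the grid spacing
are illustrative assumptions.

```python
import numpy as np

def best_period(t, y, freqs):
    """Brute-force periodogram: at each trial frequency, least-squares
    fit y ~ a*sin + b*cos + c and keep the frequency minimizing the
    residual sum of squares (a simplified stand-in for the GLS)."""
    best_f, best_chi2 = freqs[0], np.inf
    for f in freqs:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        chi2 = np.sum((y - A @ coef)**2)
        if chi2 < best_chi2:
            best_f, best_chi2 = f, chi2
    return 1.0 / best_f

def fourier_fit(phase, y, order=3):
    """Fit a Fourier series to a phased light curve; returns the model."""
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.sin(2 * np.pi * k * phase),
                 np.cos(2 * np.pi * k * phase)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

def rho_statistic(y, model):
    """Ratio of the scatter of the data to the scatter of the
    residuals; large rho means the periodic model captures real
    variability."""
    return np.std(y) / np.std(y - model)
```

In this sketch, a star passing the cut would satisfy
`rho_statistic(y, model) > 1.5`, mirroring the cutoff adopted above.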
\begin{figure}
\centering
\includegraphics[scale=0.5]{figm22_varselect.pdf}
\caption{VVV $K_s$ phase-folded light curves for some variable stars
in M~22 in common with the OGLE catalog, as examples of how their
quality changes with the $\rho$ parameter defined in the
text. From top to bottom, light curves are ordered according to
their decreasing $\rho$ value. The upper two panels, with the
higher $\rho$ values, represent clear identifications of the
variable nature of these stars according to their VVV light
curves. The third panel, where $\rho=1.50$, represents a detection
just in our cutoff point. In the lower panel, the poor quality of
the VVV light curve makes it difficult to separate this true
variable source from other false triggers with similar $\rho$
values, rendering their identification as a real variable
problematic.}
\label{fig_varselect}
\end{figure}
For all the remaining variable candidates, we proceeded to obtain
their observational parameters. As in \citetalias{alo15}, the apparent
$K_s$-band equilibrium brightnesses of the stars $\langle K_s \rangle$
were estimated by the intensity-averaged magnitudes of the stars,
computed from the Fourier fits to the light curves, and the total
amplitudes of the light curves $A_{K_s}$ were computed from the
Fourier fits as well. We have observations for filters $Z$, $Y$, $J$,
and $H$ only in approximately two epochs per filter. To calculate the
$\lambda-K_s$ apparent colors, we obtained the $K_s$ magnitude from
the Fourier fit of the $K_s$ light curve at the same phase where the
measurements for the other filters were taken, and after that, we took
the mean of the colors from the few different epochs available for a
given filter. Finally, we proceeded to examine the light curves to
assign a variable type to the different candidates. As mentioned in
\citetalias{alo15}, this eyeball classification is complicated by the
fact that the near-infrared $K_s$ light curves usually contain fewer
outstanding features than in the optical. Although we could reliably
classify Cepheids, RRab stars, and some eclipsing binaries, there were
a significant number of candidates that we left with their variable
type as undecided. For stars in common with the other catalogs and
variable types difficult to characterize in the near-infrared (e.g.,
RRc stars, W~UMa-type [EW] eclipsing binaries), we kept the
classification from their optical light curves. For stars classified
as eclipsing binaries, we doubled the periods reported by our
algorithms because they do not separate primary and secondary eclipses
in these variable stars.
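A minimal sketch of how $\langle K_s \rangle$ and $A_{K_s}$ follow
from the Fourier model of a light curve; the flux zero point cancels
in the intensity average, so an arbitrary one is implicit.

```python
import numpy as np

def intensity_averaged_mag(model_mags):
    """Intensity-averaged mean magnitude: convert the Fourier model of
    the light curve to fluxes, average them, and convert back."""
    flux = 10.0**(-0.4 * np.asarray(model_mags))
    return -2.5 * np.log10(flux.mean())

def total_amplitude(model_mags):
    """Peak-to-peak amplitude of the fitted light curve."""
    model_mags = np.asarray(model_mags)
    return float(model_mags.max() - model_mags.min())
```

Because the flux average weights the bright phases more, the
intensity-averaged magnitude is slightly brighter than the straight
magnitude average for any non-constant light curve.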
\section{NGC~6656 (M~22)}
\label{sec_m22}
NGC~6656 (M~22) is the most metal-poor cluster from the sample of VVV
GGCs we are considering in this paper. Its high brightness and
moderate concentration when compared with other VVV GGCs (see
Table~\ref{tab_gcinput} for its physical characteristics), along with
its relative proximity, makes NGC~6656 the GGC that subtends the
largest sky area in the original VVV footprint, rivaled only by
FSR~1758 \citep{bar19}, observed by the VVV eXtended survey
\citep[VVVX;][]{min18d}.
\subsection{Variables in the cluster area}
\label{sec_m22var}
There have been a significant number of studies looking for variables
in this Oosterhoff~II GGC, dating back to those by \citet{bai02},
\citet{sha27}, and \citet{saw44}, up to the most recent works by
\citet{kun13} and \citet{roz17}. Ours is the first study in the
near-infrared that covers the whole cluster, from its very center out
to its tidal radius. Following the steps described in
Sect.~\ref{sec_var}, we identified 439 variable candidates in the
field of this GGC. We present their physical properties in
Table~\ref{tab_m22var}. There are 142 variable sources reported in the
Clement catalog for M~22, while we found 604 variable stars in the
OGLE catalog inside the tidal radius of M~22 and there are 360
variable sources in the catalog presented by \citet{roz17}. As shown
in the comparison in Table~\ref{tab_m22var}, most variable stars in
our catalog are also present in these other catalogs, but there are
155 variable candidates that were not previously reported.
\begin{table*}
\caption{Properties of the variable candidates in NGC~6656 (M~22).}
\label{tab_m22var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccccccccc}
\hline\hline
ID & $ID_{Clement}$ & $ID_{OGLE}$ & $ID_{Ros17}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (OGLE-BLG-) & & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & KT-55 & -- & KT-55 & 18:36:23.25 & -23:53:58.1 & 0.29 & 0.658743 & 0.255 & 12.047 & 0.699 & 0.538 & 0.386 & 0.085 & 9.322 & -5.336 & Yes & RRab\\
C2 & V24 & T2CEP-0927 & 24 & 18:36:21.80 & -23:54:13.4 & 0.5 & 1.714805 & 0.284 & 11.104 & 0.988 & 0.774 & -- & 0.75 & 10.54 & -4.359 & Yes & Cep\\
C3 & V23 & RRLYR-36672 & 23 & 18:36:23.04 & -23:54:41.7 & 0.54 & 0.551593 & 0.307 & 12.253 & 0.911 & 0.704 & 0.412 & 0.104 & 10.056 & -5.662 & Yes & RRab\\
C4 & V1 & RRLYR-36670 & 1 & 18:36:19.57 & -23:54:33.0 & 1.07 & 0.615536 & 0.309 & 12.12 & 0.883 & 0.669 & 0.495 & 0.115 & 8.909 & -5.574 & Yes & RRab\\
C5 & -- & -- & -- & 18:36:25.20 & -23:53:04.8 & 1.15 & 0.105017 & 0.311 & 15.689 & -- & 0.382 & 0.236 & 0.028 & 16.141 & 1.402 & Yes & ?\\
C6 & V21 & RRLYR-36675 & 21 & 18:36:25.76 & -23:52:58.2 & 1.29 & 0.327135 & 0.08 & 12.441 & 0.622 & 0.448 & 0.368 & 0.096 & 9.948 & -5.237 & Yes & RRc\\
C7 & V4 & RRLYR-36673 & 4 & 18:36:23.37 & -23:55:29.4 & 1.3 & 0.716398 & 0.28 & 11.959 & 0.913 & 0.695 & 0.498 & 0.115 & 10.284 & -6.561 & Yes & RRab\\
C8 & V12 & RRLYR-36674 & 12 & 18:36:23.77 & -23:55:39.1 & 1.45 & 0.322624 & 0.099 & 12.451 & 0.682 & 0.492 & 0.335 & 0.069 & 10.372 & -5.292 & Yes & RRc\\
C9 & KT-14 & -- & KT-14 & 18:36:30.69 & -23:53:54.1 & 1.56 & 0.373642 & 0.081 & 12.291 & 0.735 & 0.534 & 0.355 & 0.091 & 10.267 & -7.793 & Yes & RRc\\
C10 & KT-12 & RRLYR-36679 & KT-12 & 18:36:30.94 & -23:53:49.1 & 1.63 & 0.443619 & 0.222 & 14.572 & 0.731 & 0.567 & 0.496 & 0.17 & -4.454 & -3.644 & No & RRab\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the Centre de Donn\'ees astronomiques de Strasbourg (CDS). A
portion is shown here for guidance regarding its form and
content.\\ \tablefoottext{a}{Projected distance to the cluster
center} \tablefoottext{b}{Cluster membership according to criteria
explained in Sect.~\ref{sec_m22dis}} }
\end{table*}
It is interesting to highlight that we recovered the vast majority of
the previously reported RRab present in the cluster region. From the
39 RRab stars reported in the literature, there are only 2
(OGLE-BLG-RRLYR-36695 and OGLE-BLG-RRLYR-64197) and 1 possible RRab
(U37) that we were not able to detect as variable sources following
the steps described in Sect.~\ref{sec_var}. According to the Clement
catalog, and judging from their dim optical magnitudes, these
undetected RRab stars are not cluster members, but background objects
whose low signal-to-noise ratios in our VVV data may hamper our
ability to classify them as variables. Thus, our method recovers all
the previously known RRab sources that belong to M~22. On
the other hand, we report the discovery of 5 possible uncharted RRab
stars in the cluster region (C112, C150, C196, C385, and C424),
although they are highly unlikely to be cluster members judging by
their dim near-infrared magnitudes and their PMs (see
Sect.~\ref{sec_m22pm}).
As mentioned in Sect.~\ref{sec_var}, detecting a lower proportion of
RRc stars with good-quality light curves is expected as a consequence
of their smaller amplitudes. We note that we missed 10 of the 35
previously reported RRc sources. Among those, only 4 (Ku-2, KT-16,
KT-26, and KT-37) are cluster members, according to the Clement
catalog. However, we report 1 new RRc candidate, C273, although its
dim near-infrared magnitudes and PM (see Sect.~\ref{sec_m22pm}) make
it highly unlikely for it to be a cluster member as well. We also note
that we only recovered C2 (V24), the dimmer of the two Cepheid stars
reported in the literature, which we attribute to the brighter one
(V11) being saturated in our near-infrared data. It is also
interesting to highlight that most of the 155 newly found variable
stars are left as unknown types. We were only able to classify 5 RRab,
1 RRc and 16 eclipsing binaries with very clear eclipses. Finally, we
note 3 stars that are classified differently in the various catalogs
that we checked: C6 (V21), which appears as RRab or RRc; C30 (Ku-4),
which appears as eclipsing binary or RRc; and C56, which appears as
eclipsing binary or semiregular. According to their near-infrared
light curves, we classified C6 and C30 as RRc stars, and C56 as an
eclipsing binary.
\subsection{Proper motion, color-magnitude diagram, and cluster membership}
\label{sec_m22pm}
M~22 is one of the GGCs with the highest PM \citep{bau19}. Using the
multi-epoch VVV observations available at the time, \citet{lib15}
managed to separate the cluster stars from field stars using PMs
selection. One of the main features of the VIRAC2 database is to
provide accurate PMs for the stars in the VVV Galactic bulge
footprint. If we select stars in the field of M~22 with
well-determined VIRAC2 PMs ($\sigma_{\mu_{\alpha^\ast}}<1$ mas/yr,
$\sigma_{{\mu}_{\delta}}<1$ mas/yr), we can observe a clear separation
between two main distributions (see upper left panel of
Fig.~\ref{fig_m22}). The closer we move to the center of the cluster,
the higher the probability for the star to belong to the cluster
\citep{alo11}. Hence, selecting stars close to the cluster center
($r\leq1'$) allows for a clearer identification of the distribution of
stars that belong to M~22, as shown by the red dots in the upper left
panel of Fig.~\ref{fig_m22}. In order to define a membership criterion
for the stars based on the PMs of our sample, we used a k-nearest
neighbors (kNN) classification \citep{has09}. To train the kNN
classifier, we used as cluster examples the stars with distances
$r\leq1'$ and, as field examples, an equal number of stars chosen from
those in our sample farthest from the cluster center. The number of
nearest neighbors used by the kNN classifier is one-tenth of the total
training sample. Selecting the innermost ($r\leq1'$) stars that
comply with our membership criteria allows us not only to properly
identify cluster stars in the CMD (see right panel of
Fig.~\ref{fig_m22}), but also to accurately define the PM of M~22 as
the mean of the PMs of those selected stars (see Table~\ref{tab_gcpm}),
which closely agrees with the PM obtained for this GGC from {\em Gaia}
Data Release 2 \citep[DR2;][]{bau19}.
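A minimal NumPy sketch of this membership classification, assuming
Euclidean distances in PM space and a simple majority vote. The
training-set construction mirrors the text (innermost stars as cluster
examples, the stars farthest from the center as field examples, $k$
equal to one-tenth of the training sample), but the function name and
any example data are ours.

```python
import numpy as np

def knn_member(pm_train, labels, pm_query, k):
    """Classify stars as cluster members (True) or field stars (False)
    by majority vote among the k nearest neighbors in proper-motion
    space. pm_train, pm_query: (N, 2) arrays of (mu_alpha*, mu_delta);
    labels: 1 for cluster training stars, 0 for field training stars."""
    pm_train = np.asarray(pm_train, dtype=float)
    labels = np.asarray(labels)
    out = []
    for q in np.asarray(pm_query, dtype=float):
        dist = np.hypot(*(pm_train - q).T)
        nearest = np.argsort(dist)[:k]
        out.append(labels[nearest].mean() > 0.5)
    return np.array(out)
```

Training on, say, the 100 innermost stars plus the 100 farthest stars
would then use $k=20$.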
\begin{table*}
\caption{Proper motions of the target clusters.}
\label{tab_gcpm}
\centering
\begin{tabular}{ccccc}
\hline\hline
Cluster & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & $\mu_{\alpha^\ast Gaia}$ & $\mu_{\delta Gaia}$ \\
& (mas/yr) & (mas/yr) & (mas/yr) & (mas/yr) \\
\hline
NGC~6441 & $-2.4\pm0.8$ & $-5.6\pm1.0$ &$-2.51\pm0.03$ & $-5.32\pm0.03$ \\
Terzan~10 & $-6.8\pm1.2$ & $-2.5\pm1.3$ & $-6.91\pm0.06$ & $-2.40\pm0.05$ \\
2MASS-GC~02 & $4.0\pm0.9$ & $-4.7\pm0.8$ & $-1.97\pm0.16$ & $-3.72\pm0.15$ \\
NGC~6569 & $-4.1\pm0.8$ & $-7.3\pm0.8$ & $-4.13\pm0.02$ & $-7.26\pm0.02$ \\
NGC~6626 (M~28) & $-0.3\pm2.0$ & $-9.0\pm1.5$ & $-0.33\pm0.02$ & $-8.92\pm0.02$ \\
NGC~6656 (M~22) & $9.9\pm1.1$ & $-5.2\pm1.2$ & $9.82\pm0.01$ & $-5.54\pm0.01$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.5]{figm22_pm.pdf}
\includegraphics[scale=0.5]{figm22_cmd.pdf}
\end{tabular}
\caption{Upper left panel: PMs of the stars in the M~22 region with
$\sigma\leq1.0$ mas/yr. Higher transparencies represent lower
densities of stars. Overplotted in red are the PMs of the stars
located in the innermost regions ($r\leq1.0'$) of this GGC. Lower
left panel: PMs of the detected variable stars in the field covered
by M~22. Those stars selected as cluster members by our kNN
classifier are shown according to their variability type: the
solid magenta triangles indicate RRab stars, empty magenta
triangles are RRc stars, blue squares are Cepheids, green
pentagons are eclipsing binaries, and inverted red triangles are
unclassified variables. Right panel: $J-K_s$ vs. $K_s$
near-infrared CMD of the stars in the M~22 field. Higher
transparencies represent lower densities of stars. Overplotted in
yellow are the stars located in the innermost regions
($r\leq1.0'$) of this GGC that were selected as candidates by our
kNN classifier. The cluster member variables from the lower left
panel are also overplotted. The arrow in the upper left corner
shows the selected reddening vector.}
\label{fig_m22}
\end{figure*}
There are 38 variable sources (including 10 RRab, 12 RRc, and 1
Cepheid) in our catalog with PMs consistent with being cluster members
according to our kNN classification criterion (see lower left and
right panels of Fig.~\ref{fig_m22}), which are highlighted in the
second to last column from Table~\ref{tab_m22var} and whose light
curves are shown in Fig.~\ref{fig_m22var}. Among these, we note 6
previously unreported variable sources: C5, C19, C20, and C216 are
variable stars with a short period and a sinusoidal, low-amplitude
light curve, which do not allow us to properly identify their variable
type; C263 and C359 are short period variables as well, but with
distinct eclipses, which lead us to classify them as eclipsing
binaries. However, the PMs of C5 and C20 are relatively far from the
mean of the cluster, which casts some doubt on their membership in
M~22. As expected, we observe in Table~\ref{tab_m22var} that most of
the innermost variables belong to the cluster according to our
classification criteria, but we also identify some variable members
relatively far from the cluster center. If confirmed as cluster
members by follow-up measurements of their radial velocities, C275,
C307, and C374 would be the variables in M~22 farthest away from its
center.
\begin{figure*}
\centering
\includegraphics[scale=0.85]{figm22_lc.pdf}
\caption{VVV $K_s$ phase-folded light curves for variable candidates
in the M~22 field selected as cluster members, as shown in
Sect.~\ref{sec_m22pm}. For every variable candidate, we provide
its identifier, its (rounded) period in days, and its variable
type (where available). The data for the light curves of all
the variable candidates found in the studied area, including
those used to create this figure, are available in electronic
form at the CDS.}
\label{fig_m22var}
\end{figure*}
In the right panel of Fig.~\ref{fig_m22} we present the near-infrared
CMD for M~22. The evolutionary sequences of this GGC can be clearly
observed to the blue of the red-giant branch (RGB) of the bulge field
stars. The cluster sequences stand out even more if we focus just on
the innermost ($r\leq 1'$) stars further selected according to our kNN
classifier. The upper main sequence (MS), RGB, and horizontal branch
(HB) of the cluster are clearly defined. The position in the CMD of
the RR~Lyrae selected using our membership criterion, clumped along
the HB, agrees with their expected position if they were cluster
members, reinforcing their high probabilities to belong to M~22.
\subsection{Distance and extinction}
\label{sec_m22dis}
Although such parameters as the extinction and the distance to M~22
could seem straightforward to calculate thanks to the tight
period-luminosity-metallicity (PLZ) relations that RR~Lyrae show in
the near-infrared \citep{lon86,cat04}, some assumptions need to be
made before proceeding to their calculation: 1) We report the
magnitudes of the detected variables in the VISTA near-infrared
photometric system \citep{gonfer18}. There are at least three main PLZ
relations for RR~Lyrae in this system: those by \citet{cat04}
calibrated in the VISTA system that we show in \citetalias{alo15};
those calibrated by \citet{mur15}; and those calibrated by
\citet{nav17,nav17b}. We checked the results each of these provide and
explored the required adjustments for each of these relations to
provide consistent results. 2) The PLZ relations mentioned in the
previous point are calibrated for their use with RRab stars. To use
the RRc sources as well, we need to fundamentalize their periods via
the relation $\log~P_{\rm ab}=\log~P_{\rm c}+0.127$
\citep[e.g.,][]{del06,nav17}. 3) We assume the metallicity of all the
stars in the cluster to be the same. Using the iron content of M~22
from the Harris catalog ($[{\rm Fe/H}]=-1.70$, see
Table~\ref{tab_gcinput}) and the ${\alpha}$-enhancement of
$[\alpha/{\rm Fe}]=0.33$ for M~22 by \citet{mar11a}, we obtained from
Eq. (1) in \citet{nav17} a value of $Z=0.0006$, assuming
$Z_{\odot}=0.017$ for consistency with \citet{cat04}. We note however
that most recent studies based on high-resolution spectroscopy or
narrow and medium-band photometry suggest M~22 to be one of the few
GGCs that show a spread in the iron-peak elements ($\sim0.1$ dex) in
addition to variations in the lighter elements
\citep{dac09,mar09,mar11,lee15,lee16}. There is still some
controversy, however, with groups suggesting the spread in iron is not
real \citep{muc15} and others not being able to make a strong
statement on this matter \citep{mes20}. Such a spread, if real, would
not significantly alter our results. 4) We have observations for
filters $Z$, $Y$, $J$, and $H$ only in approximately two epochs per
filter. We assume that the mean $\lambda-K_s$ apparent colors obtained
from these few measurements following the method described in
Sect.~\ref{sec_var} are equal to the apparent colors obtained from the
mean magnitudes of the light curves. Since the observations in filters
$ZYJH$ were done at random phases for the different RR~Lyrae, the main
effect from this assumption in our estimation of the cluster
extinction is a slightly higher dispersion in the mean distribution of
the color excesses for the different filters, as we mentioned in
\citetalias{alo15}. For $J$ and $H$, we compared our results with
those obtained following the method described in \citet{haj18} to
calculate the magnitudes for the RR~Lyrae on those filters using only
a very limited number of epochs; we found no significant differences
in the mean extinction measured for the cluster. 5) Differential
extinction in M~22 \citep{alo12} is significantly smaller than in the
GGCs in \citetalias{alo15}, and therefore we are not able to find the
true distance and the selective-to-total extinction ratios $R$
simultaneously as we did there, and we need to assume some
selective-to-total extinction ratio to proceed with the distance
calculation. Extinction toward the inner Galaxy has been shown to be
non-canonical and highly variable \citep{alo17}, with several
measurements available in the literature
\citep[e.g.,][]{maj16,alo17,nog18,dek19}. However, given that M~22 is
not located at very low Galactic latitudes, we decided to use the
canonical law provided by \citet{car89}, which is good for the outer
halo of the Galaxy. Nevertheless, we also examined the effect of using
the extinction ratios provided by \citet{alo17}, which are good for
the innermost Galaxy, on the estimation of the distance to the
cluster.
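Points 2) and 3) above can be sketched as follows. The
fundamentalization relation is the one quoted in the text; for the
global metallicity we assume a Salaris et al. (1993)-type
$\alpha$-enhancement scaling, which reproduces the quoted $Z=0.0006$
for M~22, although Eq. (1) of \citet{nav17} may differ in its
coefficients.

```python
import math

def fundamentalized_period(p_c):
    """Map a first-overtone (RRc) period onto the fundamental mode:
    log P_ab = log P_c + 0.127."""
    return 10.0**(math.log10(p_c) + 0.127)

def global_metallicity(feh, alpha_fe, z_sun=0.017):
    """Global Z from [Fe/H] and [alpha/Fe], assuming a Salaris-type
    alpha-enhancement scaling (an assumption of this sketch)."""
    f_alpha = 10.0**alpha_fe
    log_z = feh + math.log10(0.638 * f_alpha + 0.362) + math.log10(z_sun)
    return 10.0**log_z
```

For M~22, `global_metallicity(-1.70, 0.33)` returns $Z \approx
6\times10^{-4}$, matching the value adopted above.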
\begin{table*}
\caption{Color excesses and distances to the target clusters.}
\label{tab_gcdis}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccc}
\hline\hline
Cluster & E($Z-K_s$) & E($Y-K_s$) & E($J-K_s$) & E($H-K_s$) & $R_{\odot}$ & $R_{\odot,\rm Baumgardt19}$ & $R_{\odot,\rm Harris96}$\\
&(mag) & (mag) & (mag) & (mag) & (kpc) & (kpc) & (kpc)\\
\hline
NGC~6441 & 0.41$\pm$0.10 & 0.20$\pm$0.08 & 0.18$\pm$0.06 & 0.03$\pm$0.02 & 13.0$\pm$0.3+0.2+0.8 & 11.83$\pm$0.05 & 11.6\\
Terzan~10 & 2.44$\pm$0.36 & 1.54$\pm$0.26 & 0.86$\pm$0.12 & 0.30$\pm$0.04 & 10.3$\pm$0.3-1.1+0.1 & -- & 5.8\\
2MASS-GC~02 & -- & -- & 3.0$\pm$0.4 & 1.12$\pm$0.23 & 6.0$\pm$0.9 & -- & 4.9\\
NGC~6569 & 0.65$\pm$0.05 & 0.37$\pm$0.04 & 0.25$\pm$0.05 & 0.07$\pm$0.03 & 10.6$\pm$0.3+0.3+0.3 & 10.24$\pm$1.16 & 10.9\\
NGC~6626 (M~28) & 0.46$\pm$0.09 & 0.28$\pm$0.07 & 0.23$\pm$0.03 & 0.05$\pm$0.02 & 5.41$\pm$0.08+0.11-0.11 & 5.42$\pm$0.33 & 5.5\\
NGC~6656 (M~22) & 0.35$\pm$0.07 & 0.24$\pm$0.06 & 0.18$\pm$0.04 & 0.05$\pm$0.03 & 3.24$\pm$0.05+0.06-0.04 & 3.24$\pm$0.08 & 3.2\\
\hline
\end{tabular}}
\tablefoot{The reported first $\sigma$ in both distances and color
excesses corresponds to the dispersion from the individual RR~Lyrae
estimations, the second $\sigma$ in the distance estimation
corresponds to the effects of changing the extinction law that we
use in the different GGCs according to the text, and the third to
changing the PLZ relations.}
\end{table*}
Taking the above points into consideration, our first calculations
using the sample of 22 RR~Lyrae that belong to M~22 (see
Sect.~\ref{sec_m22pm}) showed a significant variation between the
distance moduli based on the different PLZ relations we used:
$\mu_{K_s}=12.82\pm0.02$ mag from \citetalias{alo15},
$\mu_{K_s}=12.78\pm0.02$ mag from \citet{nav17b}, and
$\mu_{K_s}=12.64\pm0.02$ mag from \citet{mur15}. Using the PLZ
relations for the other VVV filters and the extinction ratios by
\citet{car89}, we were able to correct the distance moduli from
extinction. For \citet{mur15}, since there are no PLZ relations
provided in the other VVV filters that we could use to correct for
extinction, we took the color excesses provided by the PLZ relations
from \citetalias{alo15}. We obtained the following distances to M~22:
$3.48\pm0.05$ kpc from \citetalias{alo15}, $3.36\pm0.06$ kpc from
\citet{nav17b}, and $3.20\pm0.05$ kpc from \citet{mur15}. Only the
distance estimate provided by \citet{mur15} agrees with the values
provided in the literature (see Table~\ref{tab_gcdis}). We note that
the PLZ relations by \citet{nav17b} were calibrated using a distance
modulus for $\omega$ Centauri, $\mu_0=13.708$ mag, a value somewhat
higher than the latest measurements from {\em Gaia}, $\mu_0=13.60$ mag
\citep{bau19}. If we calibrate the PLZ relations from \citet{nav17b}
using the latter $\mu_0$ value for $\omega$ Centauri, we need to apply
an offset of 0.11 to Eqs. (4) and (5) from \citet{nav17b}, leaving
them as
\begin{equation}
M_J(RRL)=-1.77 \log(P) + 0.15 [{\rm Fe/H}] - 0.45,
\end{equation}
\begin{equation}
M_{K_s}(RRL)=-2.23 \log(P) + 0.14 [{\rm Fe/H}] - 0.78.
\end{equation}
Applying these updated PLZ relations, we obtained for M~22 a distance
modulus $\mu_{K_s}=12.67\pm0.02$ mag, and after correction for
extinction, a distance of $3.20\pm0.06$ kpc, which better agrees with
values from the literature (see Table~\ref{tab_gcdis}). As the PLZ
relations from \citetalias{alo15} seem to provide consistent results
according to the examination done in \citet{nav17}, we assumed that
there was only an offset in their calibration. Assuming the distance to
M~22 to be 3.24 kpc from the {\em Gaia} determination by
\citet{bau19}, we found the offset we need to apply to the PLZ
relations from \citetalias{alo15} to be 0.157, transforming them into
\begin{equation}
M_{K_s} = -0.480 - 2.347 \log(P) + 0.1747 \log(Z),
\end{equation}
\begin{equation}
M_H = -0.397 - 2.302 \log(P) + 0.1781 \log(Z),
\end{equation}
\begin{equation}
M_J = -0.079 - 1.830 \log(P) + 0.1886 \log(Z),
\end{equation}
\begin{equation}
M_Y = +0.166 - 1.467 \log(P) + 0.1966 \log(Z),
\end{equation}
\begin{equation}
M_Z = +0.314 - 1.247 \log(P) + 0.2014 \log(Z).
\end{equation}
Using these updated PLZ relations from \citetalias{alo15}, we show in
Fig.~\ref{fig_m22dis} the distance moduli and color excesses for all
22 RR~Lyrae selected in Sect.~\ref{sec_m22pm} as M~22 members. We
stress the small dispersions they show. We report the mean values of
the distance and color excesses for M~22 in Table~\ref{tab_gcdis}.
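As a usage sketch, the recalibrated $K_s$-band PLZ relation above
translates into a distance estimate as follows; the extinction term
$A_{K_s}$ and any example numbers are placeholders, not pipeline
values.

```python
import math

def abs_mag_ks(period, z):
    """Recalibrated K_s-band PLZ relation quoted above:
    M_Ks = -0.480 - 2.347 log P + 0.1747 log Z."""
    return -0.480 - 2.347 * math.log10(period) + 0.1747 * math.log10(z)

def distance_kpc(mean_ks, period, z, a_ks=0.0):
    """Distance in kpc from the extinction-corrected distance modulus
    mu = <Ks> - A_Ks - M_Ks, with d/kpc = 10**(0.2*mu - 2)."""
    mu = mean_ks - a_ks - abs_mag_ks(period, z)
    return 10.0**(0.2 * mu - 2.0)
```

The cluster distance would then be the mean of the individual
RR~Lyrae estimates.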
\begin{figure}
\centering
\includegraphics[scale=0.65]{figm22_dis.pdf}
\caption{Distance modulus vs. $E(J-K_s)$ for those RR~Lyrae selected
as members of M~22. The solid magenta triangles indicate RRab
stars, while empty magenta triangles represent RRc stars. The
distance moduli are not corrected for extinction. The relatively
small near-infrared differential extinction baseline does not
allow for a proper determination or selection of a particular
extinction law.}
\label{fig_m22dis}
\end{figure}
In principle, we could also use C2 (V24), the Cepheid we detected, to
calculate distance and extinction to M~22. Unfortunately, the PL
relations currently calculated for Type~2 Cepheids in the VISTA
photometric system only cover the $J$, $H$, and $K_s$ filters
\citep{bha17,bra19,dek19}. C2 is saturated in $J$ and $H$ (see
Table~\ref{tab_m22var}), so we were not able to obtain the extinction
from the PL relations. Assuming for C2 the mean extinction for the
cluster reported in Table~\ref{tab_gcdis} from the RR~Lyrae estimation
and the \citet{car89} extinction law, we obtained a distance of
$3.46\pm0.02$ kpc. This distance is a little higher than the value we
obtained from the literature (see Table~\ref{tab_gcdis}), but
interestingly it agrees well with the values we were obtaining from
the original PLZ relations from \citetalias{alo15} before we
recalibrated them. However, we note that our sample of Cepheids in
this GGC consists of only one detected star, and therefore all results
obtained from the analysis of such a small sample should be regarded
with caution.
\section{NGC~6626 (M~28), NGC~6569, and NGC~6441}
\label{sec_gc1}
NGC~6441, NGC~6569, and NGC~6626 (M~28) are also massive GGCs located
toward the inner Galaxy (see Table~\ref{tab_gcinput}), although, as
seen in the reddening maps of these regions \citep{gon12,gon18}, they
are at Galactic latitudes where the values of extinction are still not
as extreme as for the GGCs in Sect.~\ref{sec_gc2}. As shown in
Table~\ref{tab_gcinput}, the metallicities of these GGCs extend over a
wide range, and, as shown in Table~\ref{tab_gcdis}, their kinematic
distances were recently measured using {\em Gaia} DR2 \citep{bau19}.
\subsection{Variables in the cluster area}
\label{sec_gc1var}
A significant number of variable stars, including several classical
pulsators, are already known in the regions covered by these
GGCs. However, as for M~22 in Sect.~\ref{sec_m22}, ours is the first
study to characterize the populations of variable stars in these GGCs
in the near-infrared, covering the whole cluster area.
M~28 was classified as an Oosterhoff intermediate or a hybrid
Oosterhoff~I/II system by \citet{pri12}, the most recent study of the
variable stars in this GGC. We detected and characterized 88 variable
candidates in the field of M~28. We present their main observational
parameters in Table~\ref{tab_m28var}. Of the 13(18) RRab stars
reported in the Clement(OGLE) catalog, we failed to detect only 1(2). As
expected, the proportion of detected RRc sources is a little
lower. From 9(10) RRc stars reported in the Clement(OGLE) catalog, we
detected 3(5) of them. Furthermore, there are 2 reported Cepheids in
the Clement catalog, but they appear neither in the OGLE catalog nor
in ours, probably due to saturation. Finally, we detected 66 new
variable candidates. We classify 2 as RRab, 13 as eclipsing binaries,
and we are not able to assign a variable type to the other 51
candidates.
\begin{table*}
\caption{Properties of the variable candidates in NGC~6626 (M~28).}
\label{tab_m28var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\hline\hline
ID & $ID_{Clement}$ & $ID_{OGLE}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (OGLE-BLG-) & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & V20 & RRLYR-59819 & 18:24:33.11 & -24:51:43.0 & 0.48 & 0.497742 & 0.262 & 13.459 & 0.689 & 0.525 & 0.407 & -0.002 & -4.975 & -7.654 & Yes & RRab\\
C2 & V22 & RRLYR-59799 & 18:24:30.98 & -24:52:01.9 & 0.49 & 0.322637 & 0.107 & 13.637 & 0.738 & 0.448 & 0.366 & 0.088 & 1.108 & -12.76 & Yes & RRc\\
C3 & V23 & RRLYR-59795 & 18:24:30.25 & -24:52:03.1 & 0.64 & 0.292314 & 0.09 & 13.774 & 0.766 & 0.528 & 0.367 & 0.12 & 3.496 & -8.488 & Yes & RRc\\
C4 & V11 & RRLYR-59804 & 18:24:31.48 & -24:51:33.1 & 0.73 & 0.54276 & 0.329 & 13.41 & 0.889 & 0.61 & 0.426 & 0.118 & -1.411 & -9.161 & Yes & RRab\\
C5 & V18 & RRLYR-59844 & 18:24:36.54 & -24:51:50.1 & 0.88 & 0.640177 & 0.293 & 13.252 & 1.068 & 0.787 & 0.504 & 0.119 & 4.428 & -7.115 & Yes & RRab\\
C6 & V25 & RRLYR-59787 & 18:24:28.82 & -24:52:09.1 & 0.95 & 0.74772 & 0.206 & 13.105 & 1.0 & 0.731 & 0.579 & 0.099 & 1.732 & -7.355 & Yes & RRab\\
C7 & V29 & RRLYR-59780 & 18:24:27.66 & -24:52:15.1 & 1.21 & 0.311162 & 0.085 & 13.6 & 0.882 & 0.611 & 0.444 & 0.103 & -0.551 & -8.168 & Yes & RRc\\
C8 & V13 & RRLYR-59773 & 18:24:25.77 & -24:52:33.7 & 1.68 & 0.654924 & 0.294 & 13.247 & 1.067 & 0.782 & 0.504 & 0.131 & -1.095 & -8.659 & Yes & RRab\\
C9 & -- & -- & 18:24:24.61 & -24:53:01.8 & 2.08 & 0.172628 & 0.479 & 14.657 & 1.192 & 0.862 & 0.622 & 0.147 & 1.698 & -5.384 & No & ?\\
C10 & V12 & RRLYR-59878 & 18:24:43.31 & -24:52:56.3 & 2.45 & 0.578212 & 0.333 & 13.367 & 0.943 & 0.676 & 0.51 & 0.121 & 0.262 & -8.341 & Yes & RRab\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the CDS. A portion is shown here for guidance regarding its form and content.\\
\tablefoottext{a}{Projected distance to the cluster center}
\tablefoottext{b}{Cluster membership according to criteria explained in Sect.~\ref{sec_gc1dis}}
}
\end{table*}
NGC~6569 could be considered an Oosterhoff~I GGC according to its
metallicity and the mean periods of its RRab stars, although the high
ratio of RRc to RRab stars, characteristic of an Oosterhoff~II GGC,
casts some doubt on its proper Oosterhoff classification
\citep{kun15}. We
detected 27 variable candidates in the field of NGC 6569. We present
the main observational parameters of these candidates in
Table~\ref{tab_ngc6569var}. Detection of the variable stars seems
highly dependent on their distance to the cluster center. While at
distances $1.1' \leq r \leq r_t$ we recovered all the 4(6) RRab stars
from the Clement(OGLE) catalog, at distances $r<1.1'$ from the cluster
center we only recovered 1(1) of the 9(6) RRab sources shown in the
Clement(OGLE) catalog. As expected, the proportion of detected RRc
stars is lower. From the 12(13) RRc stars reported in the
Clement(OGLE) catalog, we failed to detect all 4(4) stars at distances
$r<1.1'$ from the cluster center, and from the remaining ones at $1.1'
\leq r \leq r_t$, we only detected 2(2) of them. Furthermore, there is
1 reported Cepheid in the Clement catalog, which saturates in our
photometry. There is another one in the OGLE catalog that we were not
able to recover either. Finally, we detected 6 new variable candidates
in NGC~6569. We classified C2 as an RRab, C19 as a Cepheid, and we
were not able to assign a variable type to the other 4 candidates.
\begin{table*}
\caption{Properties of the variable candidates in NGC~6569.}
\label{tab_ngc6569var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\hline\hline
ID & $ID_{Clement}$ & $ID_{OGLE}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (OGLE-BLG-) & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & V26 & RRLYR-34968 & 18:13:36.88 & -31:49:17.2 & 0.54 & 0.653784 & 0.295 & 14.761 & 1.196 & 0.813 & 0.465 & 0.132 & -3.418 & -7.771 & Yes & RRab\\
C2 & -- & -- & 18:13:35.08 & -31:50:15.6 & 1.07 & 0.672413 & 0.212 & 14.898 & 1.223 & 0.855 & 0.51 & 0.128 & -3.371 & -7.906 & Yes & RRab\\
C3 & V17 & RRLYR-34982 & 18:13:39.21 & -31:50:46.5 & 1.19 & 0.53611 & 0.33 & 15.087 & 1.048 & 0.684 & 0.522 & 0.103 & -5.731 & -5.845 & Yes & RRab\\
C4 & V2 & RRLYR-34949 & 18:13:31.04 & -31:49:40.7 & 1.69 & 0.574729 & 0.286 & 15.004 & 1.073 & 0.74 & 0.516 & 0.122 & -2.682 & -5.495 & Yes & RRab\\
C5 & V37 & ECL-369682 & 18:13:33.21 & -31:50:58.5 & 1.86 & 0.631562 & 0.19 & 15.166 & 0.996 & 0.667 & 0.473 & 0.128 & -0.234 & -2.314 & No & Ecl\\
C6 & V38 & ECL-370166 & 18:13:37.57 & -31:47:31.6 & 2.08 & 0.947012 & 0.196 & 15.059 & 1.103 & 0.758 & 0.483 & 0.148 & -6.705 & -3.783 & No & Ecl\\
C7 & V20 & RRLYR-35010 & 18:13:45.97 & -31:51:13.5 & 2.21 & 0.542082 & 0.357 & 15.011 & 1.187 & 0.807 & 0.44 & 0.135 & -4.884 & -8.345 & Yes & RRab\\
C8 & -- & -- & 18:13:30.70 & -31:51:04.0 & 2.3 & 0.138767 & 0.455 & 14.685 & 1.508 & 1.079 & 0.731 & 0.187 & 3.751 & -6.034 & No & ?\\
C9 & -- & -- & 18:13:33.94 & -31:47:30.8 & 2.33 & 0.210626 & 0.081 & 11.783 & 0.637 & 0.429 & 0.339 & 0.101 & -3.608 & 0.242 & No & ?\\
C10 & V12 & RRLYR-34936 & 18:13:27.37 & -31:49:56.1 & 2.49 & 0.261068 & 0.093 & 15.351 & 0.827 & 0.533 & 0.33 & 0.092 & -2.724 & -8.771 & Yes & RRc\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the CDS. A portion is shown here for guidance regarding its form and content.\\
\tablefoottext{a}{Projected distance to the cluster center}
\tablefoottext{b}{Cluster membership according to criteria explained in Sect.~\ref{sec_gc1dis}}
}
\end{table*}
Lastly, NGC~6441 is one of the most intriguing GGCs in the Milky Way
on account of its radial pulsators. Along with NGC~6388 \citep{pri02},
NGC~6441 belongs to the Oosterhoff~III GGCs, which are characterized
by being metal-rich and hosting long-period RR~Lyrae. In this regard,
NGC~6441 has been the subject of several studies to look at its
population of variable stars
\citep[e.g.,][]{pri01,pri03,cor06,kun18}. We identified 59 variable
candidates in our analysis of the region covered by this GGC. We
present their main observational parameters in
Table~\ref{tab_ngc6441var}. As for NGC~6569, our detection of the
variable stars depends highly on their distance to the cluster
center. While from the 20(20) RRab sources from the Clement(OGLE)
catalog at distances $1.0'\leq r \leq r_t$, we missed only 4(3), at
distances $r<1.0'$ from the cluster center we only recovered 2(2) of
the 31(12) RRab stars shown in the Clement(OGLE) catalog. As expected,
the detection of RRc stars with good-quality light curves is worse. We
did not detect any of the 15(5) RRc sources reported in the
Clement(OGLE) catalog at distances $r<1'$, while we only recovered
4(3) of the 13(11) RRc stars reported in the Clement(OGLE) catalog at
distances $1.0'\leq r \leq r_t$. Moreover, we recovered none of the
8(2) Cepheids reported in the Clement(OGLE) catalog at distances
$r<1.0'$, but we recovered the only Cepheid reported in the OGLE
catalog at distances $1.0'\leq r \leq r_t$. Furthermore, we report 9
new variable candidates. Unfortunately, we were not able to assign a
variable type to any of these. Finally, it is worth highlighting 3 of
the variable stars present in NGC~6441: C8 (V150), C17 (V45), and C24
(V69). For C8, we found a period much longer than reported in the
Clement catalog. A period this long ($\sim1.06$ days) for a pulsator
in a canonical GGC would imply that C8 is a Cepheid. However, as shown in
Sect.~\ref{sec_gc1dis}, C8 seems to follow the PLZ relations for an
RRab star in NGC~6441, so we kept its classification as an RRab. C24
is classified as an RRc in the Clement catalog and as an RRab in the
OGLE catalog. Even though its period ($\sim0.56$ days) suggests that this
pulsator is an RRab, its low amplitude points to an RRc. In
Sect.~\ref{sec_gc1dis} we show that in order for it to be at the same
distance as the rest of RR~Lyrae in NGC~6441, it needs to be
considered as an RRc. The case of C17 could be similar to that of
C24. Its position in the CMD coincides with the RR~Lyrae from
NGC~6441, but its period, much shorter than the mean periods of the
RRab stars in NGC~6441, may suggest we are dealing with a long-period
RRc from this GGC. However, given that it has a higher amplitude than
C24, and that its membership in NGC~6441 is just borderline according to our
kNN classifier (see Sect.~\ref{sec_gc1pm}), we prefer to consider it
as a field RRab, as suggested by \citet{pri01}.
\begin{table*}
\caption{Properties of the variable candidates in NGC~6441.}
\label{tab_ngc6441var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\hline\hline
ID & $ID_{Clement}$ & $ID_{OGLE}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (OGLE-BLG-) & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & V57 & RRLYR-03918 & 17:50:10.40 & -37:02:55.9 & 0.54 & 0.694358 & 0.299 & 15.228 & 0.949 & 0.614 & 0.56 & 0.114 & -3.486 & -9.286 & Yes & RRab\\
C2 & V59 & RRLYR-03904 & 17:50:08.95 & -37:03:32.2 & 0.94 & 0.702823 & 0.416 & 15.347 & 0.861 & 0.582 & 0.451 & 0.042 & -7.74 & -0.528 & Yes & RRab\\
C3 & V66 & RRLYR-03934 & 17:50:11.45 & -37:02:07.0 & 1.0 & 0.860932 & 0.174 & 14.989 & 1.046 & 0.703 & 0.461 & 0.12 & -2.915 & -7.391 & Yes & RRab\\
C4 & V61 & RRLYR-03980 & 17:50:14.69 & -37:04:02.6 & 1.03 & 0.750108 & 0.278 & 15.173 & 1.031 & 0.706 & 0.467 & 0.092 & -1.965 & -4.983 & Yes & RRab\\
C5 & V62 & RRLYR-03961 & 17:50:13.19 & -37:04:06.5 & 1.04 & 0.679969 & 0.294 & 15.167 & 0.727 & 0.51 & 0.577 & 0.12 & -2.832 & -3.402 & Yes & RRab\\
C6 & V40 & RRLYR-03920 & 17:50:10.51 & -37:04:00.5 & 1.07 & 0.648005 & 0.343 & 15.3 & 1.024 & 0.705 & 0.389 & 0.087 & -0.95 & -5.497 & Yes & RRab\\
C7 & V43 & RRLYR-04003 & 17:50:16.85 & -37:02:09.7 & 1.19 & 0.773081 & 0.248 & 15.086 & 1.019 & 0.697 & 0.51 & 0.088 & -1.63 & -7.458 & Yes & RRab\\
C8 & V150 & -- & 17:50:07.04 & -37:03:15.9 & 1.21 & 1.068624 & 0.184 & 14.709 & 1.058 & 0.754 & 0.467 & 0.11 & -2.487 & -5.487 & Yes & RRab\\
C9 & -- & -- & 17:50:05.90 & -37:03:20.7 & 1.44 & 2.93268 & 0.201 & 14.648 & 1.396 & 1.021 & 0.704 & 0.169 & -2.486 & -6.663 & Yes & ?\\
C10 & V42 & RRLYR-03956 & 17:50:13.05 & -37:01:31.5 & 1.54 & 0.812634 & 0.239 & 15.078 & 1.143 & 0.804 & 0.618 & 0.233 & -2.412 & -5.718 & Yes & RRab\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the CDS. A portion is shown here for guidance regarding its form and content.\\
\tablefoottext{a}{Projected distance to the cluster center}
\tablefoottext{b}{Cluster membership according to criteria explained in Sect.~\ref{sec_gc1dis}}
}
\end{table*}
\subsection{Proper motions, color-magnitude diagrams, and cluster memberships}
\label{sec_gc1pm}
We used the PMs provided by VIRAC2 for the detected variable stars to
assign membership to the GGCs. As we did for M~22 in
Sect.~\ref{sec_m22pm}, first we identified stars with precise PM
($\sigma_{\mu_{\alpha^\ast}}<1$ mas/yr, $\sigma_{\mu_{\delta}}<1$
mas/yr) and located at distances close to the cluster center
($r\le1'$) in these three GGCs (see left upper panels of
Fig.~\ref{fig_gc1}) to define the cluster membership criterion through
our kNN classifier. To train the classifier, we used the same criteria
that we used in M~22 (see Sect.~\ref{sec_m22pm}). While the separation
between the PMs of cluster and field populations is not as clear as for
M~22, in Table~\ref{tab_gcpm} we can see that the PMs of these GGCs,
defined by the mean of the PMs of the innermost ($r\le1.0'$) stars
selected by our kNN classifier, closely agree with those provided by
{\em Gaia} \citep{bau19}.
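The membership step described above can be sketched as a simple k-nearest-neighbors vote in PM space. The following is an illustrative reconstruction, not the actual implementation: the training samples are synthetic (drawn from two assumed Gaussian PM distributions), and k and all numerical values are chosen arbitrarily.

```python
import numpy as np

def knn_member(train_pm, train_labels, pm, k=10):
    """Majority vote among the k nearest training stars in PM space.
    train_pm: (N, 2) proper motions (mas/yr); train_labels: 1=member, 0=field."""
    d2 = np.sum((train_pm - pm) ** 2, axis=1)   # squared distances in PM space
    nearest = np.argsort(d2)[:k]                # indices of the k closest stars
    return int(train_labels[nearest].sum() * 2 > k)

rng = np.random.default_rng(0)
# Synthetic training set: a compact cluster PM distribution and a broad field one.
cluster = rng.normal([-0.3, -8.8], 0.4, size=(200, 2))
field = rng.normal([-2.0, -5.0], 3.0, size=(200, 2))
train_pm = np.vstack([cluster, field])
train_labels = np.array([1] * 200 + [0] * 200)

# A variable candidate with a PM close to the cluster mean is voted a member:
print(knn_member(train_pm, train_labels, np.array([-0.5, -8.6])))  # → 1
```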
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.37]{figm28_pm.pdf}
\includegraphics[scale=0.37]{figm28_cmd.pdf}\\
\includegraphics[scale=0.37]{figngc6569_pm.pdf}
\includegraphics[scale=0.37]{figngc6569_cmd.pdf}\\
\includegraphics[scale=0.37]{figngc6441_pm.pdf}
\includegraphics[scale=0.37]{figngc6441_cmd.pdf}\\
\end{tabular}
\caption{As in Fig.~\ref{fig_m22}, but for NGC~6626 (M~28) in the
top panels, for NGC~6569 in the central panels, and for NGC~6441
in the lower panels. Now cyan empty circles encapsulate those
variable candidates whose memberships to the cluster according to
our kNN classifier were reversed (see text).}
\label{fig_gc1}
\end{figure*}
In the right panels of Fig.~\ref{fig_gc1} we present the CMDs for
these GGCs out to their tidal radii. Since they are contaminated by
field stars, we overplotted their innermost ($r\le1'$) stars selected
as members by our kNN classifier. We can appreciate that the dimmest
evolutionary sequences we reach in our CMDs of the inner regions of
M~28, namely the lower RGB and the upper MS of the cluster, appear
well populated; however, this is not the case for the inner regions of
NGC~6569 and NGC~6441, suggesting that in the innermost regions of
these 2 GGCs the incompleteness of these evolutionary sequences is
higher than for M~28 and M~22. This would explain the lower number of detected
RR~Lyrae and Cepheids close to the cluster center shown in
Sect.~\ref{sec_gc1var} for both NGC~6569 and NGC~6441.
In the left lower panels of Fig.~\ref{fig_gc1} we present the PMs of
the variable candidates detected in the fields of the 3 GGCs
considered here, highlighting those that our kNN classifier selected
as cluster members. There are 28 member variable stars in M~28, 9 in
NGC~6569, and 28 in NGC~6441, according to our kNN classifier. When we
analyzed their position in the CMD, the bulk of variable candidates
seem to agree with their PM membership classification, but a few of
them needed their assigned memberships to be reversed. In the case of
M~28 (see top panels in Fig.~\ref{fig_gc1}), the positions in the CMD
of C24 and C71 (unclassified variables), and C19 (RRab) seem more
consistent with their classification as field stars. All of these
sources have similar PMs and are near the membership limit according to
the kNN classifier, which made us discard these as member stars, along
with C46 (unclassified), whose PM is also similar. We also discarded 2
other RRab stars, C49 and C66, owing to their positions in the
CMD. They are around half a magnitude below the other RRab sources and they
do not suffer from significant reddening. On the other hand, we
consider another RRab, C1, to be a member of M~28 owing to its
position in the CMD, even though the kNN classifier regards it as a
field star. We speculate that the projected proximity of this source
to the cluster center may influence the accuracy of its reported PM. A
similar situation seems to be happening for C1 and C2 in NGC~6441,
which we also classified as cluster members even though their PMs are
very different from those of the cluster members (see lower panels in
Fig.~\ref{fig_gc1}). The cases for the RR~Lyrae C28 in NGC~6441 and C4
in NGC~6569 are not so extreme. They have PMs closer to the bulk of the
stars in their respective GGCs. We classified these as cluster
members according to their position in the CMD even though our kNN
classifier considers them to be field stars (see central and lower
panels of Fig.~\ref{fig_gc1}). Finally, we consider C39 in NGC~6441 to
be a field RRc star given its bright magnitude, even though the PM of
this source seems to suggest that it belongs to this GGC (see lower
panels in Fig.~\ref{fig_gc1}).
In the end, based on their PMs and positions in the CMD, and limiting
ourselves to the variable stars we detected, we consider as cluster
members 23 variable stars in M~28 (including 10 RRab and 5 RRc), 10 in
NGC~6569 (including 6 RRab and 2 RRc), and 29 in NGC~6441 (including
16 RRab, 2 RRc, and 1 Cepheid). They are identified as such in
Tables~\ref{tab_m28var}, \ref{tab_ngc6569var}, and
\ref{tab_ngc6441var}, respectively. Their corresponding light curves
are shown in Figs.~\ref{fig_m28var}, \ref{fig_ngc6569var}, and
\ref{fig_ngc6441var}.
\begin{figure*}
\centering
\includegraphics[scale=0.85]{figm28_lc.pdf}
\caption{As in Fig.~\ref{fig_m22var}, but now for variable
candidates in the M~28 field selected as cluster members, as shown
in Sect.~\ref{sec_gc1pm}.}
\label{fig_m28var}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.85]{figngc6569_lc.pdf}
\caption{As in Fig.~\ref{fig_m22var}, but now for variable
candidates in the NGC~6569 field selected as cluster members, as
shown in Sect.~\ref{sec_gc1pm}.}
\label{fig_ngc6569var}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.85]{figngc6441_lc.pdf}
\caption{As in Fig.~\ref{fig_m22var}, but now for variable
candidates in the NGC~6441 field selected as cluster members, as
shown in Sect.~\ref{sec_gc1pm}.}
\label{fig_ngc6441var}
\end{figure*}
\subsection{Distance and extinction}
\label{sec_gc1dis}
The GGCs in this section show a wide range of iron contents (see
Table~\ref{tab_gcinput}). Taking [${\alpha}$/Fe]=0.3 for all of the
GGCs \citep{vil17,joh18,ori08} translates into metallicities
$Z_{M28}$=0.0013, $Z_{NGC6569}$=0.0048, and $Z_{NGC6441}$=0.0096,
using Eq. (1) in \citet{nav17}. Applying the newly calibrated
\citetalias{alo15} PLZ relations from Sect.~\ref{sec_m22dis} (Eqs. [3]
to [7]) to the RR~Lyrae selected in Sect.~\ref{sec_gc1pm} as cluster
members (15 in M~28, 8 in NGC~6569, and 18 in NGC~6441), and assuming
a \citet{car89} canonical extinction law toward these as well, we
obtain the color excesses and distances for M~28, NGC~6569, and
NGC~6441 that we report in Table~\ref{tab_gcdis}.
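Schematically, for each member RR~Lyrae the PLZ relations predict absolute magnitudes in two bands; comparing these with the observed mean magnitudes yields the color excess and the extinction-corrected distance modulus at once. A minimal sketch of this step follows; the PLZ coefficients and the ratio $A_{K_s}/E(J-K_s)$ below are placeholders, not the calibrated values used in this work:

```python
import math

def rrlyrae_distance(mean_j, mean_ks, period, feh,
                     plz_j=(-0.10, -2.0, 0.20),    # placeholder (a, b, c) for J
                     plz_ks=(-0.50, -2.3, 0.18),   # placeholder (a, b, c) for Ks
                     aks_over_ejks=0.69):          # assumed extinction-law ratio
    """Color excess E(J-Ks) and distance (kpc) from assumed PLZ relations
    of the form M = a + b*log10(P) + c*[Fe/H] in the J and Ks bands."""
    logp = math.log10(period)
    mj = plz_j[0] + plz_j[1] * logp + plz_j[2] * feh
    mks = plz_ks[0] + plz_ks[1] * logp + plz_ks[2] * feh
    e_jks = (mean_j - mean_ks) - (mj - mks)   # observed minus intrinsic color
    a_ks = aks_over_ejks * e_jks              # Ks-band extinction
    mu = mean_ks - a_ks - mks                 # corrected distance modulus
    return e_jks, 10 ** (0.2 * mu + 1) / 1e3  # distance in kpc

# Illustrative inputs (not real cluster values):
e, d = rrlyrae_distance(mean_j=14.0, mean_ks=13.5, period=0.6, feh=-1.3)
```

The cluster color excess and distance would then follow from averaging these per-star estimates over the member RR~Lyrae.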
The reported distances agree, within the error, with those reported by
\citet{bau19} using {\em Gaia} data, and also with those in the Harris
catalog (see Table~\ref{tab_gcdis}). NGC~6441 is the only case that
seems to be off ($\sim10\%$). This may imply that the uncommon
Oosterhoff~III GGCs follow different PLZ relations, a very plausible
possibility if we consider that the atypical extension of the HB and
the long periods in the RR~Lyrae in Oosterhoff~III GGCs may be
attributed to an overabundance of helium
\citep[e.g.,][]{cal07,cat09a,tai17}, which, as shown in \citet{cat04},
significantly impacts the PLZ relations that are followed by RR~Lyrae
stars.
The dispersion among the distance moduli to individual RR~Lyrae in
every GGC is small (see Fig.~\ref{fig_gc1dis}). It is unlikely to be
due to differential reddening because extinction is relatively low in
the near-infrared for these GGCs (see Table~\ref{tab_gcdis}), and the
dispersion among the stars does not seem to follow the reddening
vector (e.g., C28 for M~28 or C13 for NGC~6569 in
Fig.~\ref{fig_gc1dis}). This dispersion is quoted in
Table~\ref{tab_gcdis} as the first ${\sigma}$ in our distance
determination. Interestingly, this dispersion is as small for NGC~6441
as for the other GGCs.
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.48]{figm28_dis.pdf}
\includegraphics[scale=0.48]{figngc6569_dis.pdf}
\includegraphics[scale=0.48]{figngc6441_dis.pdf}
\end{tabular}
\caption{As in Fig.~\ref{fig_m22dis}, but for NGC~6626 (M~28) in the
left panel, for NGC~6569 in the middle panel, and for NGC~6441 in
the right panel.}
\label{fig_gc1dis}
\end{figure*}
We note that assuming a non-canonical extinction law such as those
reported for the innermost Galaxy
\citep[e.g.,][]{maj16,alo17,nog18,dek19} results in increasing the
distances to these GGCs. Variations in the distance due to applying
such extinction laws are included in the reported second $\sigma$ of
the distance estimation in Table~\ref{tab_gcdis}. We also note that
applying the different PLZ relations shown in Sect.~\ref{sec_m22dis}
only produces small variations among the reported mean ${\mu}_{Ks}$
for every cluster, increasing their values from a few hundredths of a
magnitude, in the case of the newly calibrated \citet{nav17b} PLZ
relations (Eqs. [1] and [2] in Sect.~\ref{sec_m22dis}), up to
$\sim0.1$ mag, for the \citet{mur15} PLZ relation. Variations in the
distance due to applying these different PLZ relations are included in
the reported third $\sigma$ of the distance estimation in
Table~\ref{tab_gcdis}.
We detected no Cepheids as cluster members in M~28 or NGC~6569, but
we did detect one, C34, in NGC~6441. When we examined its
colors, we found it was saturated in $H$ but not in $J$. Using the PL
relations by \citet{dek19}, we found a color excess $E(J-K_s)=0.27$
for this Cepheid, and a distance of $R_{\odot}=13.3$ kpc, in agreement
with those reported by the RR~Lyrae for this GGC (see
Table~\ref{tab_gcdis}), but significantly higher than that reported by
{\em Gaia} and in the Harris catalog. Therefore, this result hints
that a different calibration for the PL relations in Cepheids from
Oosterhoff~III GGCs may also be required.
\section{2MASS-GC~02 and Terzan~10}
\label{sec_gc2}
2MASS-GC~02 and Terzan~10 differ from the previously analyzed
GGCs in that they are poorly populated and located at very low Galactic
latitudes (see Table~\ref{tab_gcinput}). Their Galactic locations make
them two of the most reddened GGCs in the Milky Way. We studied the
variable populations in these GGCs and in their immediate surroundings
in \citetalias{alo15}. Now we revisit these two GGCs as test cases of
our redesigned method to detect variables.
\subsection{Variables in the cluster area}
\label{sec_gc2var}
2MASS-GC~02 is one of the very few Galactic GGCs lying in the
Oosterhoff gap, as we showed in \citetalias{alo15}. We detected 45
variable candidates in its field, following the procedures detailed in
Sect.~\ref{sec_var}. We present their main observational parameters in
Table~\ref{tab_2msgc2var}. Owing to its position, very close to the
Galactic plane, this GGC is outside the footprint of the OGLE
experiment. There are 6 variable candidates listed in the Clement
catalog, but we discarded these as not being real variables in
\citetalias{alo15}. We do not register any signal of variability for
these stars in our current analysis either. In \citetalias{alo15} we
were able to detect 32 variables inside the tidal radius of this
GGC. We independently recovered all the variable stars detected in
\citetalias{alo15}, except those classified as long period variables
(LPVs): NV16, NV19, and NV27; and the eclipsing binary NV32. This is
expected as we explained in Sect.~\ref{sec_var}: Our method is now
restricted to looking for stars with periods shorter than 100 days,
and therefore we do not expect to find any LPVs; as for the eclipsing
binaries, we expect to miss some of these because of the preliminary
cuts we apply to our selection. We underline that we recovered all 16
pulsators (13 RRab stars and 3 Cepheids) previously found in
\citetalias{alo15} inside the tidal radius of this GGC. Additionally,
we note that the coordinates for the cluster center we used in this
work (see Table~\ref{tab_gcinput}) are slightly different from those
assumed in \citetalias{alo15}, which puts the Cepheid candidate C45
(NV33) inside the tidal radius, and among our detected variables. On
the other hand, however, the improved light curve of C40 (NV28) casts
some doubts on our previous classification as a Cepheid, and we prefer
not to assign this source a variable type. Finally, we highlight that
there are 16 newly detected candidates. We classified 2 as RRab, 4 as
eclipsing binaries, and we were not able to assign a variable type to
the other 10 new candidates.
\begin{table*}
\caption{Properties of the variable candidates in 2MASS-GC~02.}
\label{tab_2msgc2var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccccccc}
\hline\hline
ID & $ID_{PaperI}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & NV1 & 18:09:35.98 & -20:47:11.7 & 0.52 & 0.70004 & 0.298 & 14.585 & -- & 4.411 & 2.783 & 0.95 & 4.89 & -4.81 & Yes & RRab\\
C2 & NV2 & 18:09:34.22 & -20:46:56.2 & 0.68 & 0.651668 & 0.218 & 14.779 & -- & -- & 2.942 & 1.008 & 3.785 & -4.757 & Yes & RRab\\
C3 & NV4 & 18:09:38.58 & -20:46:05.0 & 0.75 & 0.623721 & 0.243 & 15.356 & -- & -- & 4.176 & 1.494 & 4.421 & -2.921 & Yes & RRab\\
C4 & NV3 & 18:09:33.77 & -20:46:29.5 & 0.79 & 0.570441 & 0.279 & 15.069 & -- & -- & 3.46 & 1.155 & 5.01 & -3.961 & Yes & RRab\\
C5 & NV5 & 18:09:38.85 & -20:47:26.5 & 0.83 & 0.603305 & 0.33 & 14.973 & -- & -- & 3.213 & 1.099 & 3.846 & -3.893 & Yes & RRab\\
C6 & NV6 & 18:09:33.00 & -20:47:07.8 & 1.02 & 0.551335 & 0.404 & 14.965 & -- & -- & 2.937 & 1.004 & 3.038 & -6.488 & Yes & RRab\\
C7 & -- & 18:09:38.86 & -20:45:46.7 & 1.05 & 0.60917 & 0.23 & 15.557 & -- & -- & -- & 1.758 & 2.888 & -2.794 & Yes & RRab\\
C8 & -- & 18:09:33.46 & -20:47:32.9 & 1.16 & 1.068307 & 0.081 & 14.747 & 3.327 & 2.277 & 1.412 & 0.503 & -2.931 & -0.364 & No & ?\\
C9 & -- & 18:09:40.77 & -20:47:43.5 & 1.33 & 3.2458 & 0.271 & 14.507 & 3.734 & 2.591 & 1.594 & 0.537 & 0.486 & -1.929 & No & Ecl\\
C10 & -- & 18:09:31.65 & -20:47:24.7 & 1.42 & 0.596172 & 0.34 & 14.687 & -- & -- & 2.924 & 1.018 & 3.094 & -5.215 & Yes & RRab\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the CDS. A portion is shown here for guidance regarding its form and content.\\
\tablefoottext{a}{Projected distance to the cluster center}
\tablefoottext{b}{Cluster membership according to criteria explained in Sect.~\ref{sec_gc2dis}}
}
\end{table*}
In \citetalias{alo15}, Terzan~10 shows mean periods for its RRab
stars corresponding to an Oosterhoff~II GGC. However, its reported metallicity at
the time, from photometric measurements, $[{\rm Fe/H}]=-1.0$ according
to the Harris catalog, was too high for this group, putting it closer
to the Oosterhoff~III GGCs. As shown in Table~\ref{tab_gcinput}, in
this paper we use a lower iron content, $[{\rm Fe/H}]=-1.59\pm0.02$,
which is measured from Ca triplet spectra in a sample of 16 member
stars observed with the FORS2 instrument at the Very Large
Telescope (VLT; Geisler et al., in prep.). This metallicity is more
consistent with Terzan~10 being a member of the Oosterhoff~II GGCs. We
detected 65 variable candidates in its field. We present their main
observational parameters in Table~\ref{tab_terzan10var}. The Clement
catalog for this GGC shows the variable stars we detected in
\citetalias{alo15}, plus a few more candidates from OGLE. There are 48
variable stars listed in \citetalias{alo15} inside the tidal radius of
Terzan~10. We recovered all but 7 of these variables: 1 LPV (NV48),
since our method is now restricted to looking for stars with periods
shorter than 100 days (see Sect.~\ref{sec_var}); and the other 6 -- 3
variable sources of unknown type (NV8, NV37, and NV38), 2 Cepheid
candidates (NV19 and NV39), and 1 eclipsing binary candidate (NV45) --
whose light curves were of too low quality to be picked out by our
improved method. We note that, as for 2MASS-GC~02, the coordinates for
the cluster center we used in this work (see Table~\ref{tab_gcinput})
are slightly different from those assumed in \citetalias{alo15} for
this GGC, which puts NV50 and NV51 from \citetalias{alo15} inside its
tidal radius, and among our detected variables. We also reconsidered
our previous classifications for some of these stars based on their
improved light curves: we classified C56 (NV44) as an RRab, while we
were not able to assign a variable type to all the 6 stars (C15, C16,
C18, C19, C47, and C54) previously classified as Cepheid candidates
(NV16, NV18, NV14, NV17, NV31, and NV47). Finally, there are 22
variable candidates not reported in \citetalias{alo15} that we
detected in this work. From those, OGLE previously detected 12
variable stars and the other 10 are newly detected candidates. We
classified 2 of them as eclipsing binaries, and we were not able to
assign a variable type to the other 8 new candidates.
\begin{table*}
\caption{Properties of the variable candidates in Terzan~10.}
\label{tab_terzan10var}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccccccccccccc}
\hline\hline
ID & $ID_{Clement}$ & $ID_{OGLE}$ & $\alpha$ (J2000) & ${\delta}$ (J2000) & Distance\tablefootmark{a} & Period & $A_{K_s}$ & $\langle K_s \rangle$ & $Z-K_s$ & $Y-K_s$ & $J-K_s$ & $H-K_s$ & $\mu_{\alpha^\ast}$ & $\mu_{\delta}$ & Member\tablefootmark{b} & Type \\
& & (OGLE-BLG-) & (h:m:s) & (d:m:s) & (arcmin) & (days) & (mag) & (mag)& (mag)& (mag)& (mag) & (mag) & (mas/yr) & (mas/yr) & & \\
\hline
C1 & V1 & ECL-270713 & 18:02:58.87 & -26:03:35.3 & 0.46 & 3.87962 & 0.39 & 14.14 & 2.713 & 1.833 & 1.069 & 0.31 & -0.118 & -9.19 & No & Ecl\\
C2 & V2 & RRLYR-33526 & 18:02:59.48 & -26:04:22.6 & 0.5 & 0.730538 & 0.291 & 14.655 & 2.734 & 1.831 & 1.105 & 0.331 & -6.908 & -4.292 & Yes & RRab\\
C3 & V3 & RRLYR-33512 & 18:02:54.04 & -26:03:46.9 & 0.92 & 0.70172 & 0.298 & 14.842 & 3.17 & 2.005 & 1.154 & 0.383 & -7.334 & -1.237 & Yes & RRab\\
C4 & V4 & ECL-270954 & 18:03:00.19 & -26:05:05.8 & 1.2 & 1.368958 & 0.395 & 14.651 & 2.082 & 1.421 & 0.798 & 0.249 & -2.839 & -6.769 & No & Ecl\\
C5 & V5 & -- & 18:02:57.14 & -26:02:44.0 & 1.28 & 0.68854 & 0.314 & 14.974 & 3.571 & 2.522 & 1.338 & 0.405 & -6.43 & -1.814 & Yes & RRab\\
C6 & V6 & RRLYR-33519 & 18:02:56.90 & -26:05:19.7 & 1.35 & 0.582332 & 0.334 & 14.957 & 3.199 & 2.01 & 1.14 & 0.361 & -6.184 & -2.43 & Yes & RRab\\
C7 & V7 & RRLYR-33509 & 18:02:53.24 & -26:05:12.7 & 1.62 & 0.715331 & 0.339 & 14.726 & 2.928 & 1.915 & 1.147 & 0.409 & -8.985 & -4.512 & Yes & RRab\\
C8 & V10 & ECL-271769 & 18:03:04.65 & -26:05:15.4 & 1.95 & 0.422508 & 0.45 & 16.306 & 2.033 & 1.251 & 0.768 & 0.221 & -4.449 & -0.437 & Yes & Ecl\\
C9 & V9 & -- & 18:02:51.22 & -26:05:14.9 & 1.97 & 0.193836 & 0.236 & 16.207 & 2.325 & 1.605 & 0.921 & 0.239 & -5.788 & -4.402 & Yes & ?\\
C10 & V12 & RRLYR-33538 & 18:03:04.08 & -26:05:33.7 & 2.07 & 0.568658 & 0.291 & 14.533 & 2.408 & 1.7 & 0.939 & 0.265 & -1.84 & -7.808 & No & RRab\\
\hline
\end{tabular}}
\tablefoot{This table is available in its entirety in electronic form
at the CDS. A portion is shown here for guidance regarding its form and content.\\
\tablefoottext{a}{Projected distance to the cluster center}
\tablefoottext{b}{Cluster membership according to criteria explained in Sect.~\ref{sec_gc2dis}}
}
\end{table*}
Therefore, from the analysis of these 2 difficult GGCs, we note that
our redesigned method to detect variable candidates (see
Sect.~\ref{sec_var}) recovered all variable stars detected in
\citetalias{alo15}, except for those with periods outside of our range
of interest and those with low quality light curves. On the other
hand, it increased the number of detected candidates with good quality
light curves by more than 50 percent. However, we also note that the
number of epochs available for our variability analysis was
significantly increased with respect to \citetalias{alo15}, by a
factor of $\sim2$ in 2MASS-GC~02 and a factor of $\sim3$ in Terzan~10.
\subsection{Proper motions, color-magnitude diagrams, and cluster memberships}
\label{sec_gc2pm}
Following the same method described in Sect.~\ref{sec_m22pm} and
\ref{sec_gc1pm}, we used the PMs provided by VIRAC2 for the detected
variable stars to assign membership to 2MASS-GC~02 and Terzan~10. In
the left upper panels of Fig.~\ref{fig_gc2}, we can observe that the
PM distributions of stars from both GGCs are more clearly separated
from their field counterparts than in the GGCs from the previous
section (see Fig.~\ref{fig_gc1}). Our kNN classifier allowed us to
further select the innermost stars that belong to the GGCs. We are
able to obtain in this way the PMs for 2MASS-GC~02 and Terzan~10, which
we show in Table~\ref{tab_gcpm}. While the mean PM for Terzan~10
agrees with that measured by \citet{bau19}, this is not the case for
2MASS-GC~02. We attribute this to the very high reddening in front of
2MASS-GC~02, which may have produced a wrong identification of its
members using {\em Gaia} data by \citet{bau19}. We argue that our VVV
near-infrared data provide more accurate results in this case: a PM
pattern for the stars in this GGC, distinct from that of the field
stars and not observed in the {\em Gaia} data, appears clearly in the
left panels of Fig.~\ref{fig_gc2} for 2MASS-GC~02.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.37]{fig2ms-gc2_pm.pdf}
\includegraphics[scale=0.37]{fig2ms-gc2_cmd.pdf}\\
\includegraphics[scale=0.37]{figterzan10_pm.pdf}
\includegraphics[scale=0.37]{figterzan10_cmd.pdf}\\
\end{tabular}
\caption{As in Fig.~\ref{fig_m22}, but now for 2MASS-GC~02 in the
top panels and for Terzan~10 in the lower panels. For 2MASS-GC~02,
a cyan empty circle encapsulates the variable candidate whose
membership to the cluster according to our kNN classifier was
reversed (see text).}
\label{fig_gc2}
\end{figure*}
Our kNN classifier also allows us to identify cluster member variable
stars using their PMs (see left lower panels of
Fig.~\ref{fig_gc2}). There are 15 variable stars identified as members
for 2MASS-GC~02, all of which are RRab stars, except for 1
unclassified variable (C21). However, as the position of C21 in the
CMD is too blue for a member of this GGC and is more consistent with a
star in the foreground disk (see top right panel in Fig.~\ref{fig_gc2}), we
decided to consider it a non-member in its classification in
Table~\ref{tab_2msgc2var}. On the other hand, all the RRab classified
as cluster members consistently fall along the reddening vector in the
CMD, indicating the presence of strong differential reddening in the
line of sight of this GGC. All the RRab selected as members in
\citetalias{alo15} were also selected in this work using the kNN
classifier. We present their $K_s$ light curves in
Fig.~\ref{fig_2msgc2var}. When also considering the additional 2 newly
detected RR~Lyrae in this GGC, the Oosterhoff-intermediate
classification we assigned to it in \citetalias{alo15} is reinforced. For
Terzan~10, our kNN classifier identifies 14 variable stars as cluster
members based on their PMs. We present their $K_s$ light curves in
Fig.~\ref{fig_terzan10var}. As for 2MASS-GC~02, all the RRab stars
classified in \citetalias{alo15} as cluster members are also
classified as members now.
\begin{figure*}
\centering
\includegraphics[scale=0.85]{fig2ms-gc2_lc.pdf}
\caption{As in Fig.~\ref{fig_m22var}, but now for variable
candidates in the 2MASS-GC~02 field selected as cluster members,
as shown in Sect.~\ref{sec_gc2pm}.}
\label{fig_2msgc2var}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.85]{figterzan10_lc.pdf}
\caption{As in Fig.~\ref{fig_m22var}, but now for variable
candidates in the Terzan~10 field selected as cluster members, as
shown in Sect.~\ref{sec_gc2pm}.}
\label{fig_terzan10var}
\end{figure*}
\subsection{Distance and extinction}
\label{sec_gc2dis}
For our determination of distances and extinctions to 2MASS-GC~02, we
used 14 RR~Lyrae: the 12 cluster members detected in \citetalias{alo15}
(C1 to C6, C13 to C16, C19, and C34), plus 2 additional new detections
from our current analysis (C7 and C10), which our kNN classifier also
considers members of 2MASS-GC~02. The remaining RRab star, C38, and
the 3 detected Cepheids (C17, C43, and C45) were classified as field
stars both in \citetalias{alo15} and by our kNN classifier in this work. As
mentioned in Sect.~\ref{sec_gc2pm}, the positions of the RRab stars of
2MASS-GC~02 in the CMD (see top right panel in Fig.~\ref{fig_gc2})
provided us with a clear hint of the significant differential
reddening present in the field of this GGC. When we applied the PLZ
relations, assuming a metallicity of Z=0.0025 as in
\citetalias{alo15}, we could also see the effect of differential
extinction in the calculated distance moduli and color excesses (see
left panel in Fig.~\ref{fig_gc2dis}). This allowed us to perform an
analysis similar to that in \citetalias{alo15}, where we calculated
simultaneously extinction ratios and distance to this GGC from a
linear fit to the distance moduli and color excesses of its
RR~Lyrae. The zero-point of the fit is the distance modulus to this
GGC, corrected by extinction, while the slope of the fit is the
selective-to-total extinction ratio. Unfortunately, as was the case in
\citetalias{alo15}, the substantial reddening of 2MASS-GC~02 prevented
our detection of the RR~Lyrae in the bluer filters ($Z$ and $Y$), so
we could only perform our analysis using $J$, $H$, and $K_s$. Hence,
following \citetalias{alo15}, we performed an ordinary least-squares
bisector fit on the $\mu_{K_s}$ versus E($J-K_s$) and on $\mu_{K_s}$
versus E($H-K_s$) values obtained from the PLZ relations for the RRab
stars and we obtained the selective-to-total extinction ratios
$R_{K_s,H-K_s}=1.13\pm0.11$ and $R_{K_s,J-K_s}=0.55\pm0.08$, which
agree within the errors with those from \citetalias{alo15}. However,
we should note that while $R_{K_s,H-K_s}$ is slightly lower than in
our previous work, $R_{K_s,J-K_s}$ is instead slightly higher. The
mean of the distance moduli corrected by extinction obtained from both
fits is $\mu_0=13.9\pm0.3$ mag, which translates into the distance
given in Table~\ref{tab_gcdis}. While this distance differs from that
provided by the Harris catalog, it closely agrees with the distance
provided in \citetalias{alo15}, once the effects of the recalibration
of the PLZ relations are taken into consideration. The color excesses
reported in Table~\ref{tab_gcdis} for 2MASS-GC~02 are the mean of
those obtained to the individual RR~Lyrae, and they are similar to
those reported in \citetalias{alo15}. However, we note that these
average color excesses for 2MASS-GC~02 should be used with caution
because extinction toward this GGC is highly variable and changes
significantly over small regions.
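For clarity, the bisector fits described above can be summarized
schematically as
\begin{equation}
\mu_{K_s} = \mu_0 + R_{K_s,\lambda-K_s}\,E(\lambda-K_s), \qquad \lambda \in \{J,H\},
\end{equation}
where the zero-point $\mu_0$ is the extinction-corrected distance
modulus and the slope is the selective-to-total extinction ratio. The
distance then follows from the standard relation
$d=10^{1+\mu_0/5}$~pc, so that $\mu_0=13.9$~mag corresponds to
$d\simeq6.0$~kpc.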
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.48]{fig2ms-gc2_dis.pdf}
\includegraphics[scale=0.48]{figterzan10_dis.pdf}
\end{tabular}
\caption{As in Fig.~\ref{fig_m22dis}, but now for 2MASS-GC~02 in the
left panel and for Terzan~10 in the right panel. Also shown as a
straight line for 2MASS-GC~02 is the linear fit used to define its
distance modulus corrected by extinction, as the zero-point of the
fit, and its selective-to-total extinction ratio, as the slope of
the fit.}
\label{fig_gc2dis}
\end{figure*}
For Terzan~10, we identified 7 RRab stars as cluster members following
the analysis in Sect.~\ref{sec_gc2pm}, as we can observe in
Fig.~\ref{fig_gc2} and \ref{fig_terzan10var}. They are the same 7 RRab
that we identified in \citetalias{alo15} as members. In the right
panel of Fig.~\ref{fig_gc2dis}, we show the distance moduli and color
excesses that we obtained after applying the PLZ relations, taking now
$Z_{Terzan10}$=0.0007; this is a lower metallicity than inferred in
\citetalias{alo15}, which we adopted for this work given the lower
iron content for this GGC reported in Table~\ref{tab_gcinput}. There
is an indication of differential reddening, but it is less significant
than in 2MASS-GC~02. Importantly, the bulk of the RR~Lyrae in this GGC
do not seem to suffer from it, with only two of these sources (C24 and
C5) showing significant departures from the mean reddening.
Therefore, instead of performing a linear fit that would be heavily
affected by the extreme values of the color excesses of just these two
stars, we preferred to apply the ratios $R_{K_s,\lambda-K_s}$ from
\citet{alo17} to correct for extinction and calculate their
distances. The mean of these measurements is presented in
Table~\ref{tab_gcdis} as the distance to Terzan~10, and although it is
clearly off from the value given in the Harris catalog, this value
agrees with that provided in \citetalias{alo15} and also agrees with
recent measurements by \citet{ort19}.
\section{Conclusions}
\label{sec_con}
Using the VVV survey, we studied for the first time in the
near-infrared the variable stellar population over the entire field of
M~22, M~28, NGC~6569, and NGC~6441, four massive GGCs located toward
the Galactic bulge, whose corresponding metallicities span a wide
range of values. We also revisited the topic in 2MASS-GC~02 and
Terzan~10, which are two poorly populated and highly reddened GGCs we
already studied in the first paper of this series. The updated VIRAC2
database provided us with PMs and light curves for the stars located in
these GGCs. We defined a parameter that allows us to efficiently
discriminate the light curves provided by the VIRAC2 database and to
single out those which show clear signs of variability. We were able
to identify almost all of the RRab pulsators reported in other
catalogs of variable stars for these GGCs, except in the innermost
regions of the farthest clusters. Moreover, we were able to catalog
other known RRc and Cepheid pulsators, as well as known binary stars
that show clear signs of variability in our VVV dataset, and we
identified some tens of new variable candidates. We were also
able to recover the vast majority of the variable candidates found for
2MASS-GC~02 and Terzan~10 in the first paper of this series, plus a
significant number of new candidates. We used the PMs that VIRAC2
provides to identify cluster members through a kNN classifier. In this
way, we obtained the PMs for these GGCs, which agree with
those provided by {\em Gaia} DR2, except for the most reddened
cluster, 2MASS-GC~02, where the VVV near-infrared observations provide
a more accurate result. Using their PMs, along with their positions in
the sky and in the CMD, we were able to select the variable stars that
belong to these GGCs as well. Since all of these clusters have a
significant number of RR~Lyrae, we used their tight, near-infrared PLZ
relations to calculate the distances and extinctions toward these
GGCs. We recalibrated previously used PLZ relations, obtaining good
agreement with the distances provided in the literature
and by {\em Gaia} DR2, except in the case of the Oosterhoff~III GGC
NGC~6441. Building on the methods described in this work, we plan to
extend the study of their variable stellar population to the other
GGCs located in the footprints of the VVV and the VVVX surveys.
\begin{acknowledgements}
J.A.-G., K.P.R., and S.R.A. acknowledge support from Fondecyt Regular
1201490. M.C. and D.M. acknowledge support from Fondecyt Regular
1171273 and 1170121, respectively. J.A.-G., M.C., C.N., J.B., R.C.R.,
and F.G. are also supported by ANID – Millennium Science Initiative
Program – ICN12\_009 awarded to the Millennium Institute of
Astrophysics MAS. M.C., D.M. and D.G. are also supported by the BASAL
Center for Astrophysics and Associated Technologies (CATA) through
grant AFB-170002. C. E. F. L. acknowledges a post-doctoral fellowship
from the CNPq. F.G. also acknowledges support by CONICYT/ANID-PCHA
Doctorado Nacional 2017-21171485S. E.R.G. acknowledges support from
the Universidad Andr\'es Bello (UNAB) PhD scholarship
program. D.G. also acknowledges financial support from the Direcci\'on
de Investigaci\'on y Desarrollo de la Universidad de La Serena through
the Programa de Incentivo a la Investigaci\'on de Acad\'emicos
(PIA-DIDULS). R.A. also acknowledges support from Fondecyt
Iniciaci\'on 11171025.
\end{acknowledgements}
\bibliographystyle{aa}
# Copyright
The advice herein is not intended to replace the services of trained health professionals, or be a substitute for medical advice. You are advised to consult with your health care professional with regard to matters relating to your health, and in particular regarding matters that may require diagnosis or medical attention.
Copyright © 2019 by Gwyneth Paltrow
Photographs copyright © 2019 by Ditte Isager
Cover and book design by Shubhani Sarkar, sarkardesignstudio.com
Cover copyright © 2019 by Hachette Book Group, Inc.
Hachette Book Group supports the right to free expression and the value of copyright. The purpose of copyright is to encourage writers and artists to produce the creative works that enrich our culture.
The scanning, uploading, and distribution of this book without permission is a theft of the author's intellectual property. If you would like permission to use material from the book (other than for review purposes), please contact permissions@hbgusa.com. Thank you for your support of the author's rights.
Grand Central Life & Style
Hachette Book Group
1290 Avenue of the Americas, New York, NY 10104
grandcentrallifeandstyle.com
twitter.com/grandcentralpub
First Edition: January 2019
Grand Central Life & Style is an imprint of Grand Central Publishing. The Grand Central Life & Style name and logo are trademarks of Hachette Book Group, Inc.
The publisher is not responsible for websites (or their content) that are not owned by the publisher.
The Hachette Speakers Bureau provides a wide range of authors for speaking events. To find out more, go to www.hachettespeakersbureau.com or call (866) 376-6591.
Library of Congress Control Number: 2018953666
ISBNs: 978-1-5387-3046-1 (hardcover), 978-1-5387-3047-8 (ebook)
# CONTENTS
COVER
TITLE PAGE
COPYRIGHT
DEDICATION
INTRODUCTION
COOKING WITH THIS BOOK
PART I
THE RECIPES
Breakfast
Soups
Salads, Bowls & Rolls
A Little More Filling
Drinks & Snacks
Basics
PART II
HEALING CLEANSES
Fat Flush
Heavy Metal Detox
Adrenal Support
Candida Reset
Heart Health
Veg-Friendly Ayurveda
ACKNOWLEDGMENTS
ABOUT THE AUTHOR
ALSO BY GWYNETH PALTROW
NEWSLETTERS
For Brad,
my favorite dinner companion,
who taught me that
it is never too late
to make a clean start.
And for Apple, Moses, Isabella,
and Brody.
# INTRODUCTION
Life is messy. It's supposed to be. Everyone I know, myself included, is juggling too many things. But we also don't want to be told to slow down or to give something up. If anything, I hear from friends, already with full plates, about other projects they're looking to take on: the next school event they've signed on to work, boards they've recently joined, causes they want to champion, or new relationships they're investing time in. In the background of all this productivity and duty and excitement, I hear a common refrain that's all too relatable. It's typically pushed to the side, downplayed, or simply drowned out: _I don't feel... great_.
There's so much that's outside our control, but how do we begin to claim some autonomy over our own health and well-being? What levers can we pull that can make a difference in how we feel? And how do we do this without sacrificing, without saying "no" to the things we want to say "yes" to, without pulling back at the office or at home or anywhere else?
Everyone's toolbox for optimal wellness looks different. For me, the most powerful reset button is food. I don't know any magic bullets, but eating clean comes close. (Although I have to say that good sleep is high up on the list for me, too.) There's a marked difference, for the better, in how I feel, and to a lesser degree how I look, when I'm eating at least fairly clean.
When I say "clean," a lot of people picture me living off foods like kale, oat milk, kelp powder, wheatgrass—and who knows what other foods that I would never actually eat. I also typically do a cleanse only once a year, which is not about punishing my body for enjoying things like burgers and whiskey the rest of the year. Eating clean as a baseline, or full-on, for a set period of time isn't a moral choice, and it shouldn't have to feel like an act of deprivation.
Of course, I see why it looks that way. At the core of almost every cleanse I've tried that's worked—in at least giving me more energy—is cutting out a specified set of ingredients from your diet for a set amount of time. The ingredients excluded on most cleanses are processed foods and sugars, gluten, dairy, red meat, soy, peanuts, nightshades, alcohol, and caffeine. In general, these foods are more likely to be associated with sensitivities, inflammatory reactions, and digestive issues. My good friend Dr. Alejandro Junger (more from him here) has called them "toxic triggers," which, he explains, can compromise the integrity of the gut lining and health of the intestinal flora. You may have heard the functional medicine phrase "leaky gut"—a condition in which the gut lining is perforated—which is thought to be connected to a host of health issues. Dr. Junger has explained to me that the gut is the most complex system in the body—it processes food, absorbs nutrients, eliminates toxins, helps to regulate mood, and is home to about 70 percent of our immune system and a nervous system larger than the one inside our skulls.
It makes sense: When the gut is off, the body isn't running optimally. And when people eliminate or at least limit their toxic triggers, the results can be dramatic and all-encompassing—some notice improvements in their skin complexion, others less bloat, and some a more level mood, for starters. But maybe the most rewarding effect of switching to a cleaner diet is the ability to better tune into what your body likes and what it prefers to do without. If you don't know if you're sensitive to cheese or nightshades like eggplants, removing basic inflammatory triggers for an extended period of time (twenty-one days seems to be a threshold) gives you a cleaner slate to find out. After, assuming you have the time and patience to reintroduce food groups one at a time, you can see how each ingredient affects you or not. (I also recommend getting tested for food sensitivities and allergies if it's a concern.)
Admittedly, part of the reason eating clean is associated with deprivation is that, once you remove things like mozzarella and pasta, the food offerings on the table have not traditionally been all that exciting. This was partly the impetus behind my cookbook _It's All Good_ —to find a way to make mealtimes fun and full of taste and flavor, without falling back on the classic comfort foods.
For _The Clean Plate_ , the challenge was ratcheted up: Everything had to follow the basic tenets of super-clean eating as outlined by doctors—no loopholes—so that almost anyone with a food sensitivity or on nearly any cleanse could use the recipes for inspiration. And use them seamlessly—in a way that didn't put the focus on what was missing from their plate, but rather on what was there. Mostly, I was searching for food that tasted and felt as nourishing as it was healthy. Self-care and self-love have become overused words, but I don't think these feelings are present enough in the kitchen or when we're sitting down to eat. (I type this as I'm hurriedly eating a leftover salad at my desk, in front of my laptop, before my next meeting.) I never want to cook or eat something that feels like a compromise, like I'm saying "no" to what my body is craving.
What also makes this book different for me is that I developed recipes to work as part of six different week-long healing cleanses—each one anchored by a trusted health expert and tailored to support the body through a challenge that has been a roadblock in my life or in the lives of friends or family at one time or another. I think of these mini cleanses as gateways to the potency of nutrient-dense, whole foods. After the recipes, the doctors share their respective perspectives on tackling weight-loss resistance, dealing with heavy metals, giving our adrenals a break, resetting from Candida, getting proactive about cardiovascular health, and tapping into the ancient wisdom of Ayurveda. (In the next section, you'll see more on how these different targeted cleanses are broken down.)
Healthy and delicious are not mutually exclusive. The challenge of cleaning up a recipe is inherently enticing to me—whether I'm experimenting at home, trying to approximate something from a fast-food joint that my family is partial to, or in the goop test kitchen putting my own spin on John Legend's fried chicken wings. I hope you see a little bit of all that—healthy, delicious, fun—in every recipe that follows.
_Love,_
**GP**
# COOKING WITH THIS BOOK
When I was writing this cookbook, the first rule was that everything had to taste really good. The second was that it all had to comply with the fundamentals of clean eating. Which means that the recipes pull their weight whether you have a friend with a gluten allergy coming over, you're craving something healthy, or you're halfway through Whole30 with no bright ideas left.
#### IT ALSO MEANS THAT NOWHERE IN THIS BOOK WILL YOU FIND ANY:
Alcohol
Caffeine
Dairy
Gluten
Nightshades
Peanuts
Processed foods or sugars
Red meat
Soy
#### INSTEAD, YOU'LL FIND INGREDIENT SWAPS LIKE THESE:
Coconut aminos, a soy-free swap for tamari in marinades and dipping sauces.
Chickpea miso, for soy-free miso soups, marinades, and dressings.
Chickpea, lentil, or brown rice pasta as a gluten-free noodle fix.
Sugar-free kimchi (I like Wildbrine), which has all the flavor of regular kimchi and none of the typical added sugar.
Vanilla powder instead of vanilla extract, for alcohol-free, vanilla-flavored chia puddings and smoothies.
Open to any recipe section any time you want a satisfying, nutritious meal. To make it easier to spot what you're after, you'll find three labels attributed to recipes throughout. (The threshold for "quick" here is under 30 minutes.)
**PACKABLE**
**QUICK**
**VEGAN**
**A NOTE ON SERVING SIZES:** The majority of the recipes in this book serve two, because when I'm cooking really clean, it's generally just for myself and leftovers. Most people aren't serving up completely clean meals for their families or at dinner parties. That said, my kids do like a bunch of the recipes in this book so I'll sometimes double certain recipes, and there are dishes you could liberally top off with Parmesan and butter to make them more appealing to a crowd. You'll also see that some recipes are designed with bigger serving sizes in mind—particularly soups that wouldn't be worth it to make just one or two servings of, and which store/freeze well for later.
For those looking to go deeper into clean eating, in part II (here), I've put together six different menus, each good for a full week of clean eating, with a tailored twist. As mentioned in the introduction, these menus correspond to the slightly different food philosophies of six doctors, organized around varied health goals. Their respective takes on maximizing well-being are outlined in interviews with them that cover their lifestyle and detox recommendations beyond the kitchen. I also encourage you to look closer at their work, websites, and books, if they're relevant to what's going on in your personal health journey. And before following any protocol or plan, consult your own doctor to find out what's best and safe for you.
First up, beginning here, is stabilizing metabolism and weight with Dr. Tasneem "Taz" Bhatia, a board-certified integrative medicine physician who combines both Western and Eastern medicine in her practice. Second (here) is getting rid of heavy metals with Dr. James Novak, whom I've come to know via Dr. Alejandro Junger. Dr. Junger spearheads his own chapter (here) on supporting your adrenal glands—akin to the outlet strip that your body plugs into for energy. The fourth targeted cleanse (here) is a specialty of Dr. Amy Myers, a functional medicine practitioner based in Austin, who has helped many patients rebalance their microbiomes after a Candida overgrowth. The fifth chapter (here) is with cardiologist turned nutrition expert Dr. Steven Gundry, who walks you through taking good care of your heart. Last is something for vegetarians or anyone in need of a plant-based diet: an accessible Ayurvedic cleanse (here), done in conjunction with leading Ayurveda expert Dr. Aruna Viswanathan.
# PANTRY
Having a well-stocked pantry is one of the keys to cooking well at home. You might notice that the list of pantry items I used for _The Clean Plate_ is shorter than the lists in my other cookbooks. That's because this is the cleanest I've ever had to cook. After cutting out so many processed foods and added sugars, I was left with nearly all raw ingredients. I've added an herbs and produce section because they are the most satisfying and nutritious ways to add flavor when cooking (and eating) clean. For produce, I aim for organic to avoid pesticides and herbicides that are typically sprayed on conventional fruits and veggies.
Last, beans are a favorite non-meat source of protein. If you are trying to avoid lectins (see Dr. Gundry's explanation of this plant protein here), beans should be pressure cooked, which gets rid of the lectins. I either do this myself at home, or buy Eden Foods brand canned beans, which are pressure cooked (most canned beans aren't).
#### OILS, VINEGARS, AND SAUCES
Apple cider vinegar
Coconut aminos
Coconut oil
Extra virgin olive oil
Good-quality fish sauce, like Red Boat
Sunflower seed oil
Tahini
Toasted sesame oil
#### CANNED GOODS
Anchovies
Beans (Eden Foods)
Capers
#### RICES, PASTAS, FLOURS, AND OTHER PANTRY ITEMS
Brown, black, and basmati rice
Buckwheat groats
Chia seeds
Chickpea flour
Chickpea, lentil, or brown rice pasta
Flaxseed
Millet
Nori sheets
Quinoa
Raw nuts and seeds like cashews, almonds, pepitas (pumpkin seeds), and sunflower and sesame seeds
#### FOR SWEETNESS
Almond butter
Dates
Liquid stevia (not stevia-based sweeteners)
Raw cacao powder
Unsweetened shredded coconut
#### SPICES
Aleppo pepper
Bay leaves
Black pepper
Cardamom
Chili flakes
Cinnamon
Coriander
Cumin
Curry powder
Garam masala
Kosher salt
Maldon sea salt
Mexican oregano
Turmeric
Za'atar
#### FRESH HERBS AND PRODUCE
Basil
Chives
Cilantro
Garlic
Ginger
Leeks
Lemons and limes
Mint
Parsley
Red, white, and yellow onions
Rosemary
Scallions
Tarragon
Thyme
#### SOME ESSENTIAL TOOLS
Food processor
Immersion blender
Mandoline
Microplane rasp grater
Spiralizer
Vitamix blender
# PART I
# THE RECIPES
Now, we cook.
# BREAKFAST
With its morning buns, chocolate croissants, and fruit juices, breakfast is a minefield of sugar and empty calories, which can amp you up, then leave you crashed and in recovery mode by ten a.m. Not exactly the best way to start your day on the regular. Plus, if you lean toward the savory side like me, these conventional offerings aren't all that appealing or satisfying. Most weekday mornings, I actually opt for an easy smoothie that's light on the sugar (see the drinks chapter here), but when I'm cooking breakfast—which is generally weekend brunch with family—I like to try for dishes that can keep you full until lunchtime. With a little planning (and some key ingredients), a quick and seriously nutritious breakfast—like black rice pudding with mango, chocolate milk chia pudding, or dal with sautéed chard—is right at your fingertips. Because I know not everyone shares my affinity for savory at all times of the day, I layered in ingredients for the sweet-toothed among us. Whether you're after quick-and-easy weekday inspiration or a brunch-worthy dish, this mix of recipes has you covered.
# VEGAN SOCCATAS, 3 WAYS
I know what I said here—but let's be honest: Sometimes a smoothie for breakfast just isn't gonna cut it. And while I'd normally turn to my favorite fried egg sandwich, when I'm keeping things clean, these savory "soccatas"—inspired by an egg frittata, but made with chickpea flour—are my go-to. They're warm, filling, and very un-"detox"-feeling.
# Cauliflower, Pea & Turmeric Soccata
VEGAN
Serves 2
My kids are crazy for samosas, stuffed with cauliflower, peas, and potatoes and served with a mint or cilantro chutney. I used those flavors as inspiration here, filling the soccata with cauliflower and peas (skipping the potatoes to keep it nightshade-free) and topping it with a fresh herb salad. These make an elegant brunch option, but are also nice for lunch or dinner.
½ cup chickpea flour
½ cup water
2 teaspoons olive oil, plus more as needed
½ teaspoon kosher salt
⅔ cup steamed cauliflower florets (I use frozen organic cauliflower)
⅔ cup frozen peas
1 teaspoon grated lime zest
2 scallions, thinly sliced
½ teaspoon ground turmeric
2 tablespoons fresh cilantro leaves
2 tablespoons torn fresh mint leaves
2 tablespoons fresh parsley leaves
Juice of ½ lime
Flaky sea salt
½ ripe avocado, thinly sliced (optional)
In a medium bowl, whisk together the chickpea flour, water, olive oil, and kosher salt.
Add the cauliflower florets, mashing them a bit with a fork. Stir in the peas, lime zest, scallions, and turmeric.
Heat an 8-inch nonstick skillet over medium-high heat. Add about 1 tablespoon olive oil, and pour in half the batter. Cook for about 4 minutes, or until the bottom is starting to crisp, then flip the pancake and cook for 3 minutes or so on the second side.
Transfer the soccata to a plate and cook the second soccata.
In a small bowl, toss the cilantro, mint, and parsley with the lime juice, a little olive oil, and a pinch of flaky salt.
Top each soccata with some sliced avocado, if desired, and garnish with the herb salad.
# Kale Kuku Soccata
VEGAN
Serves 2
Inspired by the Persian egg dish _kuku_ , this vegan _kuku_ gets a healthy dose of sliced kale in addition to the typical fresh herbs.
¾ cup chickpea flour
¾ cup water
1 tablespoon olive oil, plus more as needed
¾ teaspoon kosher salt
1⅓ cups thinly sliced kale leaves
2 tablespoons minced fresh parsley
2 tablespoons minced fresh dill
2 tablespoons minced fresh cilantro
2 scallions, thinly sliced
1 teaspoon grated lemon zest
Lemony Garlic Aquafaba Sauce (here; optional)
½ ripe avocado, thinly sliced (optional)
In a medium bowl, whisk together the chickpea flour, water, olive oil, and kosher salt.
Stir in the kale, parsley, dill, cilantro, scallions, and lemon zest.
Heat an 8-inch nonstick skillet over medium-high heat. Add about 1 tablespoon olive oil, and pour in half the batter. Cook for about 4 minutes, or until the bottom is starting to crisp, then flip the pancake and cook for 3 minutes or so on the second side.
Transfer the soccata to a plate and cook the second soccata.
Serve with a dollop of lemony garlic aquafaba and garnish with avocado, if desired.
# Zucchini & Lemon Soccata
VEGAN
Serves 2
Zucchini contains a lot of water, so make this batter right before you plan to cook it. Otherwise, the zucchini can make the mixture too watery.
½ cup chickpea flour
½ cup water
2 teaspoons olive oil, plus more as needed
½ teaspoon kosher salt
1⅓ cups grated zucchini (about 1 zucchini)
1 teaspoon grated lemon zest
2 scallions, thinly sliced
A large pinch of Aleppo pepper (optional)
In a medium bowl, whisk together the chickpea flour, water, olive oil, and kosher salt.
Stir in the zucchini, lemon zest, scallions, and Aleppo pepper (if using).
Heat an 8-inch nonstick skillet over medium-high heat. Add about 1 tablespoon olive oil, and pour in half the batter. Cook for about 4 minutes, or until the bottom is starting to crisp, then flip the pancake and cook for 3 minutes or so on the second side.
Transfer the soccata to a plate and cook the second soccata.
# Breakfast Dal
VEGAN
Serves 2
This warming, savory breakfast is both easy and satisfying (and would be delicious at any time of day). I love the chard here, as it's a not-too-hearty, not-too-delicate green, but any greens you have on hand will do in a pinch.
1 tablespoon coconut oil, plus more if needed
½ white or yellow onion, finely diced
Scant ½ teaspoon kosher salt
1 tablespoon curry powder
½ teaspoon grated fresh ginger
½ cup red lentils
1½ cups water
½ cup torn Swiss chard leaves
In a small saucepan, melt the coconut oil over medium heat. Add the onion and salt and cook, stirring, until softened, 3 to 5 minutes. Add the curry powder and ginger, stir, and cook for a couple of minutes more, until very fragrant. Add the lentils and water. Cook, stirring frequently, for 8 to 10 minutes, until the lentils are tender and the dal is thick, adding additional water if needed to get the desired consistency.
In a small skillet, melt about 1 teaspoon of coconut oil over medium-high heat. Add the Swiss chard and cook, stirring, for 3 minutes, until tender. Alternatively, simply stir the chard directly into the dal and let it gently cook.
# Seed Cracker with Smoked Salmon & Avocado
Serves 2
Once your cracker is made, this no-brainer breakfast—full of skin-friendly omega-3s—is so quick and easy to throw together. While the cracker recipe was inspired by chef Magnus Nilsson, I made this variation with Dr. Taz Bhatia in mind—see here where she explains how healthy fats help balance insulin, keep blood sugar levels stable, and make you feel satiated. If you have a Microplane, grate a little fresh lemon zest over the salmon here while you're at it.
2 large pieces Seed Cracker (here)
2 ounces good-quality smoked salmon
1 avocado, thinly sliced
Pickled Cucumbers (here)
Fresh lemon juice
Flaky sea salt and cracked black pepper
Top the crackers with the smoked salmon, avocado, and a few pickled cucumbers.
Squeeze lemon juice over the top and sprinkle with a little flaky salt and cracked pepper.
# Seed Cracker with Egg & Avocado
Serves 2
Instead of forgoing avocado toast altogether—which can be tough for some people to digest—when eliminating gluten and grains, I sub in a seed cracker for the toast. This one gets topped with a boiled egg for a little extra protein, but feel free to skip it if you're not eating eggs (I'm not sensitive to them, but I know others can be) and/or you want to keep things vegan.
2 large eggs
2 large pieces Seed Cracker (here)
1 small avocado, thinly sliced
Flaky sea salt and cracked black pepper
Pickled Red Onions (here)
Bring a small saucepan of water to a boil. Add the eggs and set a timer for 8 minutes. Meanwhile, fill a medium bowl with ice and water. When the timer goes off, use a slotted spoon to plunge the eggs into the ice bath and let cool. When they're cool enough to handle, peel and slice the eggs.
Top the crackers with avocado and egg. Season with flaky salt and cracked pepper, and garnish with a few pickled onions.
# EGGS AND GREENS, 3 WAYS
Sometimes in the kitchen, you've got to be flexible. After years of improvised cooking, I know that if I have eggs, greens, and some sort of allium (onion, shallot, leek, garlic, etc.)—I've got a healthy, foolproof breakfast. Here are three of my favorite eggs-and-greens iterations, but the whole point of these dishes is to use up what you've got, so substitute whatever greens happen to be languishing in your fridge.
# Veggie Scramble
QUICK
Serves 2
So simple yet so delicious, this scramble is easy enough to make for breakfast every day, but would fit in on a Mother's Day brunch spread, too.
2 tablespoons olive oil
¼ cup sliced scallions (about 4)
1½ to 2 cups baby spinach
Kosher salt and freshly ground black pepper
4 large eggs, whisked
Heat a 10-inch nonstick skillet over medium heat. Add the olive oil, scallions, and spinach, season with a pinch of salt, and cook, stirring, until the spinach is just wilted.
Season the eggs with a large pinch of salt and a good amount of pepper, then add them to the pan.
Cook until the eggs are just set, then divide between two plates and serve.
# Easy Frittata
QUICK
Serves 2
Frittatas are perfect for using up veggie scraps. When testing recipes for this book, I used a lot of beets (beets and I were having a moment), so I always had leftover beet greens around the kitchen. I tried adding them to a frittata with some shallots, and it was a hit.
2 tablespoons olive oil
¼ cup diced shallots
1½ to 2 cups chopped beet greens
Kosher salt and freshly ground black pepper
4 large eggs, beaten
Preheat the oven to 375ºF.
In an 8-inch oven-safe nonstick skillet, heat the olive oil over medium-high heat. Add the shallots and cook, stirring, for a couple of minutes, until softened, then stir in the beet greens and cook until wilted. Season with a generous pinch each of salt and pepper. Add the eggs and turn off the heat. Pop the skillet in the oven for about 10 minutes, or until the eggs are set.
Slice and serve.
# Poached Eggs over Sautéed Greens
QUICK
Serves 2
Runny egg yolk and sautéed greens are a match made in heaven. I know people claim you need vinegar to properly poach an egg, but I'm not buying it. Just be sure to keep the water vortex moving by stirring gently but consistently as the eggs cook at a nice, low simmer.
2 tablespoons olive oil
⅓ cup thinly sliced leeks
Heaping 1½ cups chopped chard leaves
Kosher salt
2 large eggs
Freshly ground black pepper
Bring a large saucepan of water to a simmer.
Meanwhile, in a medium nonstick skillet, heat the olive oil over medium-low heat. Add the leeks and cook for 3 minutes, then stir in the chard and a large pinch of salt.
Crack the eggs into small ramekins and add them one at a time to the simmering water, using a large slotted spoon to carefully make sure they are not sticking to the bottom of the saucepan. Cook for 2½ minutes, or until the white is fully cooked but the yolk still feels tender when gently pressed with a fingertip.
Divide the greens between two plates, top each with a poached egg, and sprinkle with salt and pepper.
# Quinoa Cereal with Freeze-Dried Berries
VEGAN
Serves 2
My daughter loves freeze-dried berries, and they are so tasty in cereal that I thought I'd try a homemade cereal for a lighter breakfast option. My local co-op's bulk bin had nutty puffed quinoa crisps—akin to detox Rice Krispies. This recipe makes enough for two servings, but you could scale the recipe up and store it in a mason jar to have all week long.
1 cup quinoa crisps
¼ cup freeze-dried blueberries
¼ cup freeze-dried raspberries
¼ cup toasted sunflower seeds
¼ teaspoon ground cinnamon
A pinch of flaky sea salt
Nondairy milk, for serving
Combine all the ingredients in a medium bowl. Divide between two serving bowls and serve with your (alternative) milk of choice. Store any leftover dry cereal mix in an airtight container at room temperature for up to 2 weeks.
# Sweet Buckwheat Porridge
VEGAN
Serves 2
For a lazy morning at home—of which there are not nearly enough—I turn to a warm, cozy porridge, and buckwheat makes for a great gluten-free alternative. If brown sugar is part of your normal oatmeal routine, I think you'll be surprised how well the dates and apples sub in for sweetness. Personally, I just love the texture of the groats in contrast to the juicy, crisp apple.
½ cup buckwheat groats, soaked overnight in water
1 cup unsweetened almond milk
1 date, pitted and chopped
1 tablespoon unsweetened almond butter
½ Granny Smith apple, cored and diced
¼ teaspoon ground cinnamon or cloves
Flaky sea salt
Drain the soaked groats and put them in a small saucepan. Add the almond milk and date and bring to a boil. Reduce the heat to low, cover, and cook for 10 to 15 minutes. Stir in the almond butter and apple.
Divide between two bowls, sprinkle with the cinnamon and flaky salt, and serve.
# Black Rice Pudding with Coconut Milk & Mango
VEGAN
Serves 2
Full of iron, vitamin E, and antioxidants, black rice is a nutritional powerhouse. It also happens to be delicious. Here I simmer it slowly in water and coconut milk, and top it with ripe mango. It plays to both sides—a little sweet, a little salty. In the right crowd, I'd even get away with serving it for dessert.
½ cup black rice
2 cups water
½ cup plus 2 tablespoons full-fat coconut milk
1 date, pitted and diced
¼ teaspoon kosher salt
½ mango, peeled and sliced
In a small saucepan, combine the rice, water, ½ cup of the coconut milk, the date, and the salt. Bring to a boil, then reduce the heat to maintain a simmer, cover, and cook for 45 minutes, stirring halfway through to make sure the rice is not sticking.
Divide the rice pudding between two bowls and pour 1 tablespoon of the coconut milk over each. Top with the mango and serve.
# Chocolate Chia Pudding
PACKABLE / QUICK / VEGAN
Serves 2
I like to think of this as a healthy, grown-up version of the "snack pack"–style chocolate pudding I grew up with as a kid. This was designed for breakfast, but also works well as an afternoon pick-me-up if you have a sweet tooth. Plus, it's a great way to get antioxidant-rich superfoods like maca (which I've been told by doctors and nutritionists alike is particularly good for sexual health) into your diet.
2 cups full-fat coconut milk
½ cup water
1 tablespoon raw cacao powder
2 teaspoons maca (optional)
4 dates, pitted and finely chopped
½ teaspoon kosher salt
⅛ teaspoon vanilla powder (optional)
½ cup chia seeds
Cacao nibs, to garnish (optional)
In a medium bowl, whisk together the coconut milk, water, cacao powder, maca (if using), dates, salt, and vanilla (if using) until well combined.
Whisk in the chia seeds and let sit at room temperature for 5 minutes, stirring every minute or so to make sure the seeds are evenly mixed.
Eat immediately, garnished with cacao nibs, if desired, or cover and store in the fridge for up to 4 days.
# Beet Açai Blueberry Smoothie Bowl
QUICK / VEGAN
Serves 2
Sometimes I'm burnt out on "drinking" breakfast even if I'm actually still craving the taste of a smoothie. Enter the smoothie bowl, which can add a bit more ceremony to a sit-down breakfast, even if it's happening at my desk during a meeting with the goop merch team. This one is as colorful as it is delicious. Not to mention, it's packed with antioxidant-rich beets, açai, and blueberries.
¼ cup chopped Roasted Beets (here)
2 (5-ounce) packs frozen açai
½ cup full-fat coconut milk
½ cup coconut water
½ cup frozen blueberries
Scant ¼ teaspoon ground cardamom
Zest and juice of 1 lime
Suggested toppings: toasted coconut flakes, Quinoa Cereal (here), kiwi, pomegranate seeds, cacao nibs
Combine all the ingredients except the toppings in a high-speed blender and blend until smooth. Pour into small bowls and garnish with your toppings of choice.
# SOUPS
I can't think of anything more comforting than a bowl of soup. I leaned on soups hard to warm me up when I lived in chillier London, but they're a big part of my cooking repertoire in sunnier LA, too. I still get nostalgic for chicken soup—all the women on the Jewish side of my family had their own divine recipes—particularly when I'm feeling not so great. As far as clean eating goes, soups are traditionally made from vegetables, bone broth, and other inherently good-for-you ingredients, so they're usually in line with whatever detox-esque regimen I come across in my wellness deep dives or get asked to guinea pig by the goop edit girls. They rarely need much (if any) tweaking. And because my entire family will happily eat them, I've almost always got a Dutch oven of something soupy simmering on the stove and a stockpile of favorites in the freezer. Packed with home runs like bright and brothy pho, the stickiest turmeric rice porridge, and a refreshing cucumber-avocado gazpacho, this chapter has a soup for every palate, season, and hunger level.
# Clean Carrot Soup
Serves 4
An oldie but a goodie, I've been cooking this carrot soup for years. Made with just five ingredients (not including the olive oil, salt, and pepper), it's so simple to put together, but still seriously delivers in the flavor department.
6 to 8 medium to large carrots (about 1½ pounds), diced into rustic cubes
Flaky sea salt and freshly ground black pepper
3 tablespoons olive oil, plus more as needed
6 cups Chicken Stock or Vegetable Stock (here and here)
1 (1-inch) piece fresh ginger, peeled
1 small white or yellow onion, chopped
2 garlic cloves
Preheat the oven to 375ºF.
Place half the carrots on a baking sheet. Season with salt and pepper and drizzle lightly with olive oil. Toss to combine. Bake for about 20 minutes, shaking the pan every so often for even cooking, until soft, lightly browned, and caramelized. Remove from the oven.
Meanwhile, in a large saucepan, combine the stock, ginger, onion, and garlic. Bring to a boil, then reduce the heat to maintain a simmer and cook for about 5 minutes, or until the onions are soft. Add the raw carrots and simmer for 5 minutes, until the carrots are just slightly soft but not cooked through.
Carefully transfer the mixture to a blender, add the roasted carrots, and blend until smooth (work in batches, if necessary, and be careful when blending hot liquids). Alternatively, add the roasted carrots to the saucepan and blend the soup directly in the pan with an immersion blender.
To serve, pour or ladle into bowls, season with salt and pepper, and add a drizzle of olive oil over each portion.
# Roasted Kabocha Soup
VEGAN
Serves 4
Thanks to its inherent starchiness, kabocha—an orange-fleshed Japanese squash—makes the most incredible creamy, dairy-free soups. This recipe only uses half of one squash, but I always roast the whole thing and throw the other half into grain bowls and salads throughout the week. (P.S. If you've heard of lectins—a plant-based protein that Dr. Gundry recommends his patients with autoimmunity and cardiovascular concerns limit—and you want to try cutting them out, you can do so by seeding and peeling seeded veggies like squash. For more, see here.)
½ medium kabocha squash, seeded
Kosher salt and freshly ground black pepper
2 tablespoons olive oil
2 tablespoons coconut oil
1 large onion, sliced
2 garlic cloves, sliced
2 tablespoons chopped fresh ginger
1 teaspoon ground cumin
½ teaspoon ground coriander
½ teaspoon garam masala
3 cups Vegetable Stock (here)
Preheat the oven to 400°F. Line a baking sheet with parchment paper.
Season the squash generously with salt and pepper, drizzle with 1 tablespoon of the olive oil, and place flesh-side down on the prepared baking sheet. Roast until browned and tender, about 35 minutes.
Meanwhile, in a heavy-bottomed saucepan, melt the coconut oil over medium heat. Add the onion and a pinch of salt, stir, then reduce the heat to medium-low. Cover the pot and cook, stirring occasionally, for about 20 minutes, or until the onion is very soft and sweet.
Add the garlic, ginger, cumin, coriander, and garam masala, increase the heat to medium-high, and cook, stirring, for 1 minute. When the spices are fragrant but not burnt, add the stock and a big pinch of salt. Partially cover the pot and let the soup simmer gently while the squash roasts.
Let the squash cool slightly, then scrape the flesh into a bowl (discard the skin); you should have about 2 cups cooked squash. Add the squash to the pot with the stock and bring the soup to a boil. Reduce the heat to maintain a simmer, partially cover the pot, and cook for 10 minutes.
Carefully transfer the soup to a blender and blend until smooth (work in batches, if necessary, and be careful when blending hot liquids). Alternatively, blend the soup directly in the pot with an immersion blender. Taste for seasoning, ladle or pour into bowls, and enjoy!
# Beet Gazpacho
QUICK / VEGAN
Serves 2
Cold soup can sometimes feel a little less than satisfying, but this beet gazpacho is somehow rich and even feels a little indulgent. Topped with diced cucumber and avocado, it's a perfect hot-day lunch that even the toughest gazpacho naysayer can get behind.
# FOR THE GAZPACHO:
2 cups chopped Roasted Beets (here)
¼ red onion, chopped (about ⅓ cup)
2 garlic cloves
2 Persian cucumbers, peeled and seeded (about 1 cup)
½ cup water
Juice of 1 lemon
½ teaspoon kosher salt
1 teaspoon apple cider vinegar
# TO GARNISH:
Finely diced cucumber
Avocado
Fresh cilantro leaves
Combine all the gazpacho ingredients in a high-speed blender and blend until smooth. Pour into small bowls and garnish with cucumber, avocado, and cilantro.
# Cucumber & Avocado Gazpacho
QUICK / VEGAN
Serves 2
This easy, bright cucumber gazpacho makes a lovely light lunch or first course for a summer dinner party.
# FOR THE GAZPACHO:
1 large seedless English cucumber, peeled
¼ avocado
1 large scallion, roughly chopped
1 garlic clove, grated
¼ cup olive oil
¼ cup plus 2 tablespoons water
1 teaspoon kosher salt, plus more as needed
2 tablespoons apple cider vinegar
# TO GARNISH:
Finely chopped fresh mint leaves
Apple cider vinegar
Flaky sea salt
1 small shallot, finely diced
Olive oil
Cracked black pepper
To make the gazpacho, cut a 2-inch piece off the end of the cucumber and set it aside for garnish. Roughly chop the remaining cucumber and transfer it to a high-speed blender.
Add the avocado, scallion, garlic, olive oil, water, kosher salt, and vinegar and blend until smooth. Chill in the fridge for at least 1 hour.
For the garnish, finely dice the reserved piece of cucumber and toss in a small bowl with the mint leaves, apple cider vinegar, a pinch of flaky salt, and the diced shallot.
Remove the gazpacho from the fridge, taste, and season with kosher salt. Divide between two bowls and spoon over the chopped cucumber relish. Drizzle each bowl with olive oil and finish with cracked pepper.
# Coconut Chicken Soup
Serves 2
I'm a sucker for Thai food and especially love _tom kha gai_ —a chicken soup made with lemongrass, galangal, coconut milk, and fish sauce, one of my favorite ingredients. This is my cleaned-up version, which, while not exactly authentic, is still pretty damn delicious.
1 tablespoon coconut oil
2 tablespoons finely minced fresh ginger
1 tablespoon finely minced garlic
3 tablespoons chopped fresh cilantro (leaves and stems)
2 cups Chicken Stock (here)
1 (16-ounce) can full-fat coconut milk
1 lemongrass stalk, smashed
1 boneless, skinless chicken breast, cut into thin strips
2 tablespoons coconut aminos
1½ tablespoons fish sauce
2 heads of baby bok choy, roughly chopped
½ cup snap peas
1 small zucchini, spiralized
Juice of 1 lime
In a 4-quart Dutch oven, melt the coconut oil over medium heat. Add the ginger, garlic, and cilantro and cook, stirring, for 30 seconds, or until fragrant but not browned.
Add the stock, coconut milk, and lemongrass and bring to a simmer. Add the chicken, coconut aminos, and fish sauce and cook for 5 to 7 minutes, until the chicken is just cooked through. Stir in the bok choy, snap peas, and zucchini noodles.
Divide the soup between two bowls. Squeeze over fresh lime juice just before serving.
# Chicken Meatball Pho
Serves 2
I crave this warming ginger-infused soup whenever I'm feeling a little under the weather. Make a double batch of both the broth and the meatballs and freeze half, so you can whip this up at a moment's notice.
# FOR THE BROTH:
4 cups Chicken Stock (here)
1 (2-inch) piece fresh ginger, sliced
3 garlic cloves, smashed
2 star anise pods
½ cinnamon stick
12 cilantro sprigs
Kosher salt (optional)
# FOR THE MEATBALLS:
½ pound ground dark meat chicken
2 tablespoons finely chopped cilantro
2 tablespoons finely chopped basil
2 scallions, minced
Grated zest of ½ lime
¼ teaspoon kosher salt
# TO FINISH:
1 small zucchini, spiralized
¼ cup bean sprouts
1 scallion, thinly sliced
Fresh cilantro leaves
Torn fresh basil leaves
Lime wedges
To make the broth, combine all the broth ingredients in a 4-quart Dutch oven and bring to a boil. Reduce the heat to medium-low, cover, and simmer gently for 15 minutes to infuse the flavors.
While the broth simmers, make the meatballs. Combine all the meatball ingredients in a medium bowl and roll into 1-inch meatballs.
Fish the aromatics out of the broth and discard them. Taste the broth and season with salt, if needed. Add the meatballs to the broth, cover the pot with the lid ajar, and gently simmer the meatballs for about 10 minutes.
Divide the zucchini between two bowls and top with the broth and meatballs. Garnish with the bean sprouts, scallions, cilantro, and basil and serve with lime wedges for squeezing.
# Chicken & Leek Soup
Serves 2
This is a staple in the UK (where it's known as cock-a-leekie soup)—it's the kind of recipe everyone's mother or grandmother has. While the original is thickened with barley, my version is thick and rich from all the leeks. When cooked down and caramelized, they almost act as noodles.
2 tablespoons olive oil
2 or 3 leeks, washed well and thinly sliced (about 4 cups)
1 teaspoon kosher salt
2 rosemary sprigs
1 boneless, skinless chicken breast
4 cups Chicken Stock (here)
Flaky sea salt and cracked black pepper
In a medium Dutch oven, heat the olive oil over medium heat. Add the leeks and kosher salt, stir well, and simmer for 5 to 8 minutes, until the leeks have wilted and begun to caramelize.
Add the rosemary, chicken, and stock. Cover and cook for 20 to 25 minutes, until the chicken is opaque and cooked through.
Remove the chicken and set aside to cool. Increase the heat to medium-high and bring the soup to a low boil. Cook for about 10 minutes, until the soup has thickened.
When the chicken is cool enough to handle, shred it.
Remove and discard the rosemary sprigs and return the shredded chicken to the pot. Finish the soup with flaky salt and cracked pepper and serve.
# Peruvian Chicken Cauli Rice Soup
Serves 2
A friend of mine told me about a soup called _aguadito de pollo_ she'd had at a Peruvian restaurant that was a vibrant green color from all the cilantro in it. As a lover of cilantro for its unmistakable flavor, I had to try to make my own version of the soup. After tinkering a bit and swapping out the rice for cauliflower rice, I landed on a soup that was equal parts light and satiating. Cilantro is said to have chelating properties—meaning it may help the body get rid of heavy metals (see here)—and is generally thought of as a cleansing herb in the Ayurvedic tradition (see here). This is the kind of cleanse-friendly food I'd eat whenever.
1 medium white onion, chopped
1 bunch of cilantro, roughly chopped
½ jalapeño (optional)
Juice of 3 limes
¼ cup water, plus more if needed
4 cups Chicken Stock (here)
1 boneless, skinless chicken breast
1 teaspoon kosher salt
½ head of cauliflower, riced (1 to 1½ cups)
½ cup frozen peas
Lime wedges, for serving
Combine the onion, cilantro, jalapeño (if using), lime juice, and water in a high-speed blender and blend until smooth, adding a little extra water if needed to loosen the mixture. Set aside.
In a medium soup pot, bring the stock to a simmer over medium-low heat. Add the chicken and salt and cook until the chicken is opaque and fully cooked through, about 20 minutes. Remove the chicken and let cool.
Meanwhile, add the cauliflower rice and peas to the broth and simmer for 10 to 15 minutes, until the cauliflower rice is tender but not mushy.
When the chicken is cool enough to handle, shred the meat.
To serve, increase the heat to medium, return the shredded chicken to the pot, and add the onion-cilantro puree. Stir to combine and cook for 5 minutes before serving.
Divide into bowls and garnish with lime.
# Miso Soup
QUICK / VEGAN
Serves 2
Probiotic-rich, gut-healing miso—combined with the kind of greens that many functional doctors have told me to add to my diet over the years—is another soup I turn to when something's ailing me. If you haven't tried cooking with wakame yet, it's the seaweed you normally find in miso soup, and it's a source of minerals like iron. You can find it in health food stores, or often in the Asian foods aisle of the grocery store (and, of course, you can always find it online).
4 cups cool water
2 tablespoons dried instant wakame
3 to 4 tablespoons chickpea miso paste
½ cup hot water
½ cup chopped Swiss chard
½ cup thinly sliced scallions
Kosher salt (optional)
In a medium saucepan, bring the water to a boil. Add the wakame, reduce the heat to medium, and simmer for 4 to 6 minutes.
Place 3 tablespoons of the miso in a small bowl, add the hot water, and whisk until the miso is smooth. Stir the miso mixture into the pot with the wakame.
Add the chard and scallions to the pot and cook over medium-high heat for 5 minutes. Taste and add more miso or season with salt, if desired.
Serve warm.
# Brown Rice, Turmeric & Spinach Porridge
Serves 2
This spinach and brown rice porridge—a flavor and texture mash-up of Chinese congee and Indian dal—is endlessly comforting. I like it garnished with a large spoonful of cilantro salsa verde, but roughly chopped cilantro and a spoonful of unsweetened coconut yogurt also work well.
2 tablespoons coconut oil
1 small bunch of scallions, thinly sliced
1 (2-inch) piece fresh ginger, peeled and very finely minced or grated
4 large garlic cloves, very finely minced or grated
1 teaspoon ground turmeric
½ cup short-grain brown rice
4 cups Chicken Stock or Vegetable Stock (here and here)
1 cup water
Kosher salt and freshly ground black pepper
5 ounces baby spinach leaves, roughly chopped
Cilantro Salsa Verde (here; optional)
Lime wedges, for serving
In a small Dutch oven, melt the coconut oil over medium heat.
Add the scallions, ginger, and garlic and cook, stirring, for 2 minutes, or until fragrant but not browned.
Add the turmeric and brown rice and stir to coat the rice in the aromatics.
Pour in the stock and water and season generously with salt and pepper. Bring the mixture to a boil, then cover and reduce the heat to maintain a low simmer. Cook for 1 hour, stirring every 10 minutes or so to make sure the rice isn't sticking.
Stir in the baby spinach, cover, and cook for 2 minutes more. Season with salt and pepper.
Ladle into bowls and dollop each serving with a spoonful of salsa verde, if desired. Serve with lime wedges for squeezing.
# Broccoli-Parsnip Soup
VEGAN
Serves 2
Cleaning up broccoli soup can feel like a losing game (Uh, no cheese? No cream? Why bother?), but the natural sweetness and creaminess of parsnips saves the day. Even my kids love it. This recipe freezes well, so feel free to double up for a rainy day.
3 tablespoons olive oil
½ yellow onion, diced
½ teaspoon kosher salt
1 medium parsnip, peeled and diced
2 cups water
2 cups Vegetable Stock (here)
10 ounces broccoli, cut into florets
Flaky sea salt and cracked black pepper
In a heavy-bottomed pot, heat the olive oil over medium heat. Add the onion and cook for 5 to 7 minutes, until softened and beginning to caramelize. Add the kosher salt, parsnip, water, and stock. Cover, increase the heat to medium-high, and cook for about 20 minutes, or until the parsnips are soft. Add the broccoli, cover, and cook for about 10 minutes, or until the broccoli is fork-tender. Remove from the heat.
Working in batches as needed, carefully transfer the soup to a high-speed blender and blend until smooth (be careful when blending hot liquids). Alternatively, blend the soup directly in the pot with an immersion blender.
Finish with flaky salt and cracked pepper and serve.
# Chickpea & Escarole Soup
Serves 4
A cross between a soup and a stew, this interpretation of a classic Italian recipe gets real depth of flavor from sweet, sticky roasted garlic. If you can't find escarole, any sturdy green can stand in.
1 head of garlic, cloves separated and peeled
5 tablespoons olive oil
1 small or ½ large yellow onion, finely diced
1 carrot, diced
2 celery stalks, diced
1 teaspoon fennel seeds
¼ teaspoon chili flakes
6 cups Chicken Stock or Vegetable Stock (here and here)
2 (14-ounce) cans chickpeas, drained and rinsed
Kosher salt
1 small bunch of escarole, leaves roughly torn and cleaned
Lemon zest, for serving
Parsley Salsa Verde (here), for serving
Flaky sea salt and cracked black pepper
Preheat the oven to 325°F.
Place the garlic cloves in a small ramekin and pour over 2 tablespoons of the olive oil. Cover with parchment paper and bake for 25 minutes.
Meanwhile, in a medium Dutch oven, heat the remaining 3 tablespoons of olive oil over medium heat. Add the onion, carrot, celery, fennel seeds, and chili flakes and cook for 10 minutes, or until the vegetables are translucent and just starting to brown.
Add the stock and half the chickpeas, and season with kosher salt. Bring the soup to a boil, then reduce the heat to maintain a simmer, cover the pot with the lid ajar, and cook for 20 minutes.
Remove the garlic from the oven. Using a fork, lift out all the roasted cloves and add them to the soup, reserving the infused oil.
Remove the soup from the heat. Blend the soup directly in the pot with an immersion blender until smooth. Stir in the remaining chickpeas and the escarole and cook over medium-high heat for 10 minutes more.
Ladle into bowls. Serve topped with a drizzle of the reserved garlic oil, some lemon zest, a spoonful of salsa verde, and flaky salt and cracked black pepper.
# SALADS, BOWLS & ROLLS
Chopped salads, summer rolls, nori wraps, and Buddha bowls: I eat some sort of crunchy, veggie-packed salad, bowl, or roll for lunch almost every single day. It's not so much about eating clean, it's just that few dishes can continually satisfy me as much come one p.m. They're healthy and endlessly versatile—you can usually sub in whatever mix of veggies, proteins, and sauces you have on hand. The real selling point, though, might be their portability. Lunch is the worst meal to leave up to chance, in my experience, because I usually end up disappointed after the what-to-order scramble is over. With a little prep and some good nontoxic storage containers (Onyx makes the best stainless steel ones), I can eat nourishing, fresh, and totally satisfying lunches whether I'm at home, in the office, on set, or en route to a meeting. So for anyone who is always on the go, which seems to be just about everyone these days, these make-ahead, packable, clean-eating dream recipes are for you.
# Grilled Chicken Salad with Miso Dressing
PACKABLE
Serves 2
A crunchy, gingery salad with grilled chicken has been my jam for longer than I can remember. This one feels super hydrating with the snap peas and jicama, and that grilled chicken adds some heft to help you power through the four o'clock snack slump.
# FOR THE CHICKEN:
1 garlic clove, peeled and grated
1 (1-inch) piece fresh ginger
½ teaspoon toasted sesame oil
2 tablespoons coconut aminos
¼ teaspoon kosher salt
1 chicken breast cutlet
Sunflower seed oil
# FOR THE SALAD:
1 cup shredded romaine
½ cup chopped snap peas
¼ cup julienned jicama
¼ cup bean sprouts
4 scallions, thinly sliced
Miso-Ginger Dressing (here)
½ teaspoon sesame seeds, to garnish
To make the chicken, in a medium bowl, whisk together the garlic, ginger, sesame oil, coconut aminos, and salt. Add the chicken and marinate at room temperature for 15 to 20 minutes.
Heat a grill pan over medium-high heat. Brush the pan with a little sunflower seed oil, then add the chicken and cook for 3 to 5 minutes on each side, depending on the thickness. When done, set aside to rest for a few minutes, then slice into ½-inch-thick strips.
To make the salad, in a serving bowl, toss together all the salad ingredients.
Drizzle the salad with the dressing and top with the sliced grilled chicken and sesame seeds.
# Carrot & Beet Slaw
PACKABLE / QUICK / VEGAN
Serves 4
If you get tired of green salads, try rotating this into the mix. Shredded raw beets, carrots, herbs, and a little bit of apple cider vinegar make a bright and satisfying salad that would go with just about anything. In an ideal world, I love it to balance the richness of a burger. Seriously, serve this at a BBQ and everyone will be too busy eating it to even realize how clean it is.
2 cups grated carrots
1½ cups grated beets
4 scallions, thinly sliced
½ teaspoon kosher salt
¼ cup plus 1 tablespoon apple cider vinegar
Combine all the ingredients in a medium bowl. Cover and let sit for 5 to 10 minutes before serving.
# Italian Chicken Salad with Grilled Asparagus
PACKABLE
Serves 2
I love chicories like radicchio, frisée, and escarole, but I get that not everyone is into their bitter flavor. The trick here is balancing those flavors with peppery arugula, sweet fennel, and super-savory grilled chicken and asparagus. This way you can enjoy the digestive benefits of these greens without the intense bitterness.
Zest of 1 lemon
2 garlic cloves, grated
½ teaspoon fresh thyme leaves
¼ cup extra virgin olive oil, plus more as needed
½ teaspoon kosher salt
¼ teaspoon chili flakes (optional)
1 chicken breast cutlet
½ bunch of asparagus
½ cup arugula
½ cup chopped radicchio
½ cup thinly sliced fennel
½ lemon
Flaky sea salt and cracked black pepper
In a small bowl, whisk together the lemon zest, garlic, thyme, olive oil, kosher salt, and chili flakes (if using). Divide the marinade between two medium bowls. Toss the chicken breast with the marinade in one bowl and the asparagus in the other, and let them sit for at least 30 minutes.
Heat a grill pan over high heat. Add the chicken and asparagus and cook for 4 minutes on each side until the chicken is fully cooked and the asparagus is nicely charred.
Meanwhile, in a serving bowl, toss together the arugula, radicchio, and fennel.
Cut the chicken and asparagus into bite-sized pieces, then squeeze the lemon half over them. Add the chicken and asparagus to the bowl with the greens and toss, adding a little more lemon juice and olive oil as needed.
Finish with flaky salt and cracked pepper and serve.
# Garden Salad with Aquafaba Ranch Dressing
PACKABLE / QUICK / VEGAN
Serves 2
Sometimes you just want a crunchy garden salad! It's easy, healthy, tasty, and—who am I kidding—really just a vehicle for that aquafaba ranch.
1 cup shredded romaine lettuce
1 chopped Persian cucumber
¼ cup grated carrot
¼ cup chopped snap peas
½ avocado, diced
½ cup canned chickpeas
Aquafaba Ranch Dressing (here)
Combine all the ingredients except the dressing in a medium bowl. Add a few tablespoons of the dressing and toss. Divide the salad between two smaller bowls and serve.
# Greek Salad
PACKABLE / QUICK / VEGAN
Serves 2
From the protein-packed chickpeas to the crunch of the cucumbers, this bright green salad is a ten.
# FOR THE DRESSING:
Juice of 1 lemon (about ¼ cup)
⅓ cup extra virgin olive oil
½ teaspoon kosher salt
1 tablespoon dried oregano
1 garlic clove
# FOR THE SALAD:
1 head of romaine lettuce, chopped
1 Persian cucumber, diced
½ cup canned chickpeas
¼ cup Pickled Red Onions (here)
Kosher salt and freshly ground black pepper
To make the dressing, combine all the dressing ingredients in a high-speed blender and blend until well combined.
To make the salad, in a large salad bowl, combine all the ingredients for the salad and toss until well combined. Add the dressing and toss once more.
Finish with salt and pepper and serve.
# Kale, Carrot & Avo Salad with Tahini Dressing
PACKABLE / QUICK / VEGAN
Serves 2
Adding grilled chicken to everything is an easy way to protein-pack a meal when you're eating clean, but it can get a little old. This salad is one of those meals I find satisfying without any animal protein, and it tends to please vegans and meat eaters alike. While I've dabbled in vegetarianism before, I don't see myself ever really going back. Still, I know that in essentially every hotbed of longevity in the world, people have followed a diet really limited in meat—so I try to take some cues from centenarians and keep meat from being too much a focus in my kitchen.
2 cups thinly sliced black kale leaves
1 cup grated carrots
Tahini Dressing (here)
1 avocado, thinly sliced
¼ cup sunflower seeds, toasted
In a serving bowl, toss the kale and carrots with a few tablespoons of the dressing, then top with the avocado and sunflower seeds and serve.
# GRAIN BOWLS, 3 WAYS
There's something wonderfully versatile about a grain bowl, and in my opinion, the key element is a really great dressing. Here are three of my tried-and-trues, all using dressings from the Basics chapter (here). But as long as you've got some kind of cooked (gluten-free) grain, some veggies, and a solid dressing, you're in grain-bowl business.
# Brown Rice Grain Bowl with Kale, Broccoli & Sesame
PACKABLE / VEGAN
Serves 2
If you're packing this for lunch (for you or your kiddos), cook the broccoli and make the dipping sauce the night before so all you have to do in the morning is assemble the bowls. A cruciferous vegetable, broccoli is a prized ingredient in a lot of doctors' books. But I didn't know until getting into Dr. Amy Myers's protocols that, because of its sulfur- and nitrogen-containing compounds, broccoli can be used as a tool in healing from Candida (for more info, see the Candida Reset here).
Kosher salt
1 cup broccoli florets
1 cup cooked brown rice
½ cup grated carrot
½ cup finely chopped kale leaves
2 tablespoons toasted sesame seeds
Coconut Aminos Sauce (here)
Bring a saucepan of salted water to a boil. Fill a medium bowl with ice and water. Add the broccoli to the boiling water and cook for about 3 minutes, or until just tender. Drain the broccoli and shock it in the ice water to stop the cooking. Drain again.
To assemble, divide the brown rice between two bowls, then top each with half the broccoli, carrot, and kale. Sprinkle with the sesame seeds and drizzle with dipping sauce right before eating.
# Crunchy Spring Veggie Grain Bowl
PACKABLE / QUICK
Serves 2
Spring in a bowl, this crunchy-salad-meets-grain-bowl is all I want to eat for lunch when it turns warm outside.
1 cup cooked quinoa or brown rice
3 asparagus spears, shaved
½ cup grated carrot
½ small watermelon radish, thinly sliced with a mandoline
⅔ cup shredded Poached Chicken (here; optional)
⅔ cup thinly sliced bok choy
¼ cup chopped fresh cilantro leaves
Miso-Ginger Dressing (here)
About 1 tablespoon coconut aminos
About 1 teaspoon toasted sesame oil
Divide the quinoa between two bowls. Top each with half the asparagus, carrot, radish, chicken, and bok choy. Garnish with the cilantro and pour over the miso dressing. Drizzle with the coconut aminos and sesame oil and serve.
# Quinoa, Sweet Potato & Tahini Grain Bowl
PACKABLE / VEGAN
Serves 2
The flavor combination of sweet potato and tahini is made even better here by the addition of earthy beets, spicy arugula, and creamy avocado. Yum.
1 small sweet potato, peeled and cut into 1-inch pieces
1 tablespoon olive oil
Kosher salt and freshly ground black pepper
1 cup cooked quinoa
⅔ cup cubed Roasted Beets (here)
⅔ cup baby arugula
½ avocado, diced
¼ cup sunflower seeds, toasted
Tahini Dressing (here)
Preheat the oven to 400°F. Line a baking sheet with parchment paper.
In a medium bowl, toss the sweet potato with the olive oil, season with salt and pepper, and place on the prepared baking sheet. Roast for 20 minutes, or until tender and lightly browned. Remove from the oven and let cool.
Divide the cooked quinoa between two bowls. Top each with half the sweet potato, beets, arugula, and avocado. Sprinkle with the sunflower seeds and pour over tahini dressing.
# CAULIFLOWER BOWLS, 3 WAYS
If you're not doing grains, cauliflower rice is a major lifesaver. One of my favorite nutritionists, Shira Lenchewski, MS, RD, often recommends it as a healthy dinner option for the time-strapped. And these three cauli rice bowls are so good, you'll want to make them even if grains are part of your regular diet.
# Teriyaki Bowl
PACKABLE
Serves 2
Whenever I make this, I double the marinated chicken. That way I can grill it up alongside veggies, serve it with a little cauli rice, drizzle it with some coconut aminos sauce, and dinner's served.
# FOR THE TERIYAKI CHICKEN:
2 boneless, skinless chicken thighs (about ¾ pound)
1 small garlic clove, grated
1 teaspoon grated fresh ginger
¼ teaspoon kosher salt
2 tablespoons coconut aminos
Olive oil, for brushing the grill pan
# FOR THE BOWL:
1 recipe Cauliflower Rice (here)
1 teaspoon toasted sesame oil
1 scallion, thinly sliced
¼ cup chopped romaine lettuce
¼ cup chopped kimchi
1 small carrot, grated
Pickled Cucumbers (here)
Fresh cilantro leaves
Toasted sesame seeds
Coconut Aminos Sauce (here)
To make the teriyaki chicken, combine the chicken thighs, garlic, ginger, salt, and coconut aminos in a small bowl and marinate at room temperature for 10 minutes.
Heat a grill pan over medium-high heat. Brush the pan with a little olive oil, add the chicken, and cook for 5 minutes per side, or until firm to the touch and cooked through. Transfer to a cutting board and let rest for at least 5 minutes.
Meanwhile, prepare the cauliflower rice, then stir in the toasted sesame oil and scallion. Divide the cauliflower rice between two bowls and top each with half the romaine, kimchi, carrot, and cucumber pickles.
Chop the grilled chicken and season with salt, if needed. Divide the chicken between the bowls and garnish each with cilantro leaves and sesame seeds. Pour a couple of tablespoons of the dipping sauce over each and serve with more sauce on the side.
# Za'atar Chicken Bowl
PACKABLE
Serves 2
This bowl combines some of my favorite things from Middle Eastern cuisine: grilled meat on a stick, tons of herbs and citrus, and a creamy, savory sauce to tie it all together. What more is there to say?
# FOR THE CHICKEN:
1 boneless, skinless chicken breast, cut into 2-inch cubes
¼ red onion, cut into 2-inch cubes
1 tablespoon za'atar
1 teaspoon kosher salt
1 tablespoon olive oil, plus more as needed
# FOR THE CAULIFLOWER RICE PILAF:
1 recipe Cauliflower Rice (here)
½ cup finely chopped black kale leaves (about 4 large leaves)
¼ cup minced fresh parsley
¼ cup minced fresh cilantro
¼ cup minced fresh chives
Zest of 1 lemon
Kosher salt
Tahini Dressing (here)
To make the chicken, in a medium bowl, combine the chicken, onion, za'atar, salt, and olive oil. Skewer the chicken and onion pieces, alternating them.
Heat a grill pan over high heat. Brush the pan with a little olive oil, add the skewers, and cook for about 5 minutes on each side, or until the chicken is cooked through and the onion is browned.
Meanwhile, make the cauliflower rice pilaf. Prepare the cauliflower rice, then add the kale and cook, stirring, for about 2 minutes, until slightly softened. Add the parsley, cilantro, chives, and lemon zest and cook for 1 minute more. Remove from the heat. Finish with a squeeze of lemon juice and some flaky salt.
Serve the skewers with the cauliflower rice pilaf and the dressing.
# Tex-Mex Bowl
PACKABLE
Serves 2
Tostada salads are a popular lunch choice at goop, and while I can get down with a giant tortilla shell most days of the year, we wanted to create a version that didn't include corn, gluten, dairy, or nightshades. This is what I came up with—and I'm happy to say it's officially goop-approved.
1 recipe Cauliflower Rice (here)
Braised Mexican Nomato Chicken (here)
½ recipe Mexican Black Beans (here)
½ recipe Pickled Radishes (here)
¼ cup sliced romaine lettuce
½ avocado, chopped
Flaky sea salt
Cilantro leaves, to garnish
Lime wedges, for serving
Divide the cauliflower rice between two bowls. Top each with half the braised chicken, beans, pickled radishes, romaine, and avocado. Season with a little flaky salt and garnish with cilantro. Serve with lime wedges for squeezing.
# Kale & Sweet Potato Salad with Miso
PACKABLE / VEGAN
Serves 2
I could never get bored of this salad. It has some of my all-time favorite flavors (sweet potato, miso, ginger), and it's totally filling. Tossing the sweet potatoes with the dressing while they're still warm is key—it helps the potatoes absorb all that delicious citrusy-umami flavor.
1 tablespoon chickpea miso paste
2 tablespoons olive oil
1 sweet potato, cut into 1-inch cubes
⅛ medium red onion, thinly sliced
¼ teaspoon kosher salt
¼ teaspoon grated lime zest
2 cups baby kale
¼ cup picked fresh cilantro
¼ cup hulled pepitas (pumpkin seeds), toasted
Miso-Ginger Dressing (here)
1 teaspoon sesame seeds
Preheat the oven to 425°F. Line a baking sheet with parchment paper.
In a large bowl, whisk together the miso and olive oil. Toss the cubed sweet potatoes in the miso mixture until evenly coated. Spread the potatoes out on the prepared baking sheet and roast for 20 to 25 minutes, until soft and caramelized.
Meanwhile, in a small bowl, toss the onion slices with the salt and lime zest and let sit for about 10 minutes.
While the potatoes are still warm (not hot), transfer them to a serving bowl and toss with the kale, cilantro, pepitas, onion, and dressing. Finish with the sesame seeds.
Tarragon Chicken Lettuce Cups (here)
# CHICKEN SALAD LETTUCE CUPS, 4 WAYS
These lettuce cups are all prepared similarly (and simply) but have very different flavor profiles to keep lunchtime interesting.
# Kimchi Chicken Lettuce Cups
QUICK
Serves 2
I can normally get on board with anything fermented and anything Korean (I'll admit, I'm a little bit obsessed with kimchi). Tossing chopped kimchi in chicken salad adds a little heat and funk that is so flavorful, you won't believe you're eating clean. Just be sure to check the label and buy kimchi that is sugar-free.
⅓ cup Aquafaba Mayo (here)
½ cup chopped kimchi
2 scallions, diced
¼ teaspoon kosher salt
1 poached chicken breast (here), shredded
Butter lettuce leaves, for serving
Fresh cilantro leaves, to garnish
Combine the mayo, kimchi, scallions, and salt in a medium bowl, then fold in the shredded chicken and mix thoroughly.
Arrange lettuce leaves on two plates. Spoon the chicken mixture into the lettuce leaf cups and garnish with cilantro.
# Curry Chicken Lettuce Cups
PACKABLE / QUICK
Serves 2
Curried chicken salad is a classic, and there's a reason for that. It's got such incredible flavor and it hardly takes any effort to throw together.
⅓ cup Aquafaba Mayo (here)
Zest of ½ lime
Juice of 1 lime
1 garlic clove, grated
1 teaspoon curry powder
¼ teaspoon kosher salt
1 poached chicken breast (here), shredded
Butter lettuce leaves, for serving
Pickled Red Onions (here), for garnish
Combine the mayo, lime zest, lime juice, garlic, curry powder, and salt in a small bowl, then fold in the shredded chicken and mix thoroughly.
Arrange lettuce leaves on two plates. Spoon the chicken salad into the lettuce leaf cups and top with pickled onion.
# Tarragon Chicken Lettuce Cups
PACKABLE / QUICK
Serves 2
This is the most traditional of the three chicken salads, and that's precisely why it won me over. These flavors have been and always will be delicious together—plus, it's got the best textures: creamy, crunchy, and cool.
⅓ cup Aquafaba Mayo (here)
1 celery stalk, diced
1 small shallot, diced
1 tablespoon chopped fresh tarragon
¼ teaspoon kosher salt
1 poached chicken breast (here), shredded
Butter lettuce leaves, for serving
Cracked black pepper
Combine the mayo, celery, shallot, tarragon, and salt in a small bowl, then fold in the shredded chicken and mix thoroughly.
Arrange lettuce leaves on two plates. Spoon the mixture into the lettuce leaf cups and garnish with cracked pepper.
# Chicken Larb Lettuce Cups
QUICK
Serves 2
Another favorite dish of mine, this take on Thai chicken _larb_ —flecked with fresh herbs and tossed in umami-rich fish sauce—is a staple in my house. Use ground dark meat chicken, which has way more flavor than white meat.
¾ pound ground dark meat chicken
½ bunch of scallions, thinly sliced
2 large garlic cloves, grated or finely minced
1 tablespoon grated fresh ginger
¾ teaspoon kosher salt
2 tablespoons olive oil
2 tablespoons coconut aminos
2 tablespoons fish sauce
6 large butter lettuce leaves
Thinly sliced red onion
1 tablespoon roughly chopped fresh mint leaves
1 tablespoon roughly chopped fresh cilantro leaves
1 tablespoon roughly chopped fresh Thai basil leaves
Lime wedges, for serving
In a medium bowl, combine the chicken, scallions, garlic, ginger, and salt.
In a large sauté pan, heat the olive oil over medium-high heat. Add the chicken and cook, stirring, until firm and no longer pink, about 8 minutes. Add the coconut aminos and fish sauce, stir to combine, and remove from the heat.
Divide the lettuce leaves between two plates. Fill the leaves with the chicken mixture and top each with some sliced red onion and the fresh herbs. Serve with lime wedges on the side for squeezing.
# Crunchy Summer Rolls
PACKABLE / QUICK / VEGAN
Serves 2
I'm a huge fan of Vietnamese summer rolls, but most clean-eating protocols recommend steering clear of white rice. (Which I happen to really love. A little bowl of white rice with soy sauce is one of life's greatest pleasures. But I digress.) The game changer? Brown rice paper wrappers (typically found in the Asian foods aisle at the grocery store, or online). I stuff these with jicama, fresh herbs, and tons of green veggies, but fill them with any mix of herbs and veggies you like. Just don't forget the (addictive) creamy cashew dipping sauce!
# FOR THE CASHEW SAUCE:
¼ cup unsweetened cashew butter
1 lime
1 teaspoon fish sauce
½ teaspoon grated fresh ginger
1 tablespoon water
2 garlic cloves
2 tablespoons coconut aminos
# FOR THE SUMMER ROLLS:
4 to 6 brown rice paper wrappers
6 cilantro sprigs
2 Persian cucumbers, julienned
½ jicama, julienned
¾ cup thinly sliced snap peas
4 basil sprigs
4 mint sprigs
1½ cups mung bean sprouts
To make the cashew sauce, combine all the sauce ingredients in a high-speed blender and blend until smooth.
To assemble the rolls, fill a bowl large enough to hold the spring roll wrappers with warm water. Soak one wrapper for about 1 minute, or until just pliable, then lay it flat on a cutting board. Layer cilantro, cucumber, jicama, snap peas, basil, mint, and sprouts on the wrapper. Carefully roll up the wrapper, folding in the ends as you roll. Repeat with the remaining wrappers and filling ingredients.
Serve with the cashew sauce on the side. If not serving immediately, pack the rolls in an airtight container, layering them with damp paper towels to keep the rice paper moist.
# Nori Salad Roll
QUICK / VEGAN
Serves 2
I have these salad rolls, stuffed with good-for-your-gut kimchi, as a light lunch or afternoon snack. The nori can get a little soggy as it sits, so I try to eat the rolls soon after I assemble them. (This is not usually an issue for me, as they're totally delicious and I inhale them right away.)
1 avocado
Juice of 1 lime
4 sheets of nori
¾ cup finely chopped lettuce
1 medium carrot, julienned
½ cup chopped kimchi
4 mint sprigs
6 cilantro sprigs
Coconut Aminos Sauce (here), for serving
In a small bowl, mash together the avocado and lime juice.
Place the nori sheets flat on a cutting board. Top each nori sheet with a spoonful of the mashed avocado, spreading it evenly and leaving a ½-inch border. Top with the lettuce, carrots, kimchi, and herbs.
Working with one sheet at a time, lightly wet the top border of nori with water and, starting at the bottom, carefully roll up the nori around the filling as tightly as possible, using more water as necessary to hold the roll together. Repeat to roll up the remaining nori sheets.
Serve the rolls with dipping sauce.
# A LITTLE MORE FILLING
Despite what you may have heard, clean eating isn't all green juice and sad vegan food. While I tend to gravitate toward lighter salads and wraps for lunch, when it comes to dinner, I'm definitely a turkey burger/spaghetti and meatballs/roast chicken and potatoes kind of girl. I want something that will fill me up and warm me from the inside out; something cooked with love and layered with flavor. If I'm not doing a January kickoff cleanse or testing detox recipes, I typically eat whatever I want at dinner. But with no nightshades (bye, potatoes), gluten (adios, burger bun), or dairy (sayonara, Parmesan), so many of those classic comfort foods can seem totally off-limits. So I've made it my mission to develop clean versions of some of my favorites. You'll find a killer turkey burger wrapped in lettuce and piled high with avocado, pickle, and creamy aquafaba; lemony turkey meatballs slow-simmered in "nomato" sauce; meat loaf; chicken tacos; and more. These delicious and totally satisfying dishes never feel like a compromise. They leave me—and my French fry–loving friends and family—feeling full and happy, and I suspect they'll do the same for you and yours.
# Fish Tacos on Jicama "Tortillas"
Serves 2
These grain-free, gluten-free, etc.-free taco shells are legit brilliant. I first heard about using jicama as a tortilla replacement from a friend of a friend who'd recently returned from a trip to Tulum, Mexico. Let's just say my (healthy) taco life has been made infinitely better.
# FOR THE FISH:
½ pound halibut fillet, skin and bones removed
2 tablespoons olive oil
2 tablespoons chopped fresh cilantro
Juice of ½ lime
⅛ teaspoon ground cumin
Flaky sea salt
# FOR THE TACOS:
1 small jicama (about 5 inches in diameter), peeled
½ cup shredded cabbage
Juice of ½ lime
Kosher salt
Sliced red onion
½ avocado, thinly sliced
Drizzle of Aquafaba Crema (here)
Fresh cilantro leaves
Lime wedges, for serving
To make the fish, cut the halibut into 4 equal strips. Place in a bowl and toss with the olive oil, cilantro, lime juice, cumin, and a large pinch of flaky salt. Cover and set aside to marinate for a few minutes.
To make the tacos, use a jumbo mandoline to slice four ⅛-inch-thick "tortillas" from the jicama. (If you don't have a jumbo mandoline, do this carefully with a sharp knife.)
In a small bowl, toss the cabbage with the lime juice and a pinch of kosher salt and set aside.
Heat a small nonstick pan over medium-high heat. Add the halibut and cook for about 2 minutes on each side, or until just cooked through.
Place one piece of fish on each jicama "tortilla" and top with the shredded cabbage, red onion, and avocado. Drizzle with aquafaba crema and finish with cilantro.
Serve with lime wedges on the side for squeezing.
# Faux Meat Beet Tacos
VEGAN
Serves 2
Legumes can be a nice source of protein, especially if you're trying to eat a plant-based diet. But they can be difficult for some of us to digest. Enter this beet taco recipe. Beets are earthy and have a minerality to them that means they can be a great stand-in for meat or beans. (Don't knock it till you try it.) And don't worry about letting the beets brown on the stovetop for a while—the flavor gets much deeper and richer if you let them caramelize nicely.
# FOR THE FILLING:
2 tablespoons olive oil
½ white onion, finely diced
2 garlic cloves, minced
¼ teaspoon ground coriander
½ teaspoon ground cumin
¼ teaspoon chili flakes (optional)
½ teaspoon kosher salt
¾ cup finely diced Roasted Beets (here)
# FOR THE TACOS:
4 small grain-free tortillas (I like coconut and cassava flour tortillas)
½ avocado, sliced
Pickled Radishes (here)
Aquafaba Crema (here)
½ cup shredded cabbage
Lime wedges, for serving
Fresh cilantro leaves, for garnish
To make the filling, in a large sauté pan, heat the olive oil over medium heat. Add the onion, garlic, and spices and cook, stirring, for 5 to 7 minutes, until the onion and garlic soften and begin to caramelize. Add the beets and cook for 7 to 10 minutes more, until they are nicely browned.
To assemble the tacos, pile one-quarter of the beet mixture into each tortilla, then top with a slice of avocado, a few pickles, a drizzle of crema, and a bit of cabbage. Finish with a squeeze of lime juice and a few cilantro leaves and serve.
# Braised Chicken Tacos on Butternut Squash "Tortillas"
Serves 2
After my jicama tortilla shell revelation (see here), I realized the neck of a butternut squash was also a good tortilla shape—they just need to be blanched quickly before serving. Sometimes I'll cook a few of these for myself, make a double batch of the braised chicken, and put out all the usual taco bar fixings for the kids. Depending on the size of your squash, you may need more than six "tortillas" to hold all the filling, so adjust accordingly. Look for a squash with a long, skinny neck that's about 4 inches in diameter.
Neck of 1 butternut squash (see headnote), stemmed and peeled
½ cup shredded cabbage
Juice of ½ lime
Kosher salt
½ recipe Braised Mexican Nomato Chicken (here)
Pickled Red Onions (here)
Fresh cilantro leaves
Sliced serrano chili (optional)
Lime wedges
Bring a wide, shallow saucepan of water to a simmer.
Use a large mandoline or a sharp knife to slice the neck of the squash crosswise into 6 as-thin-as-you-can-make-them (no thicker than ⅛ inch) "tortillas." Blanch the squash slices in the simmering water until just tender, about 1 minute. Transfer to a paper towel–lined plate or baking sheet to drain.
In a small bowl, toss the shredded cabbage with the lime juice and a large pinch of salt.
Divide the "tortillas" between two plates and fill each with a large spoonful of braised chicken. Top with the cabbage, some pickled onions, cilantro, and sliced serrano, if desired. Serve with lime wedges for squeezing.
# Zoodle Chow Mein
QUICK
Serves 2
My whole family loves chow mein, but the classic recipe, made with wheat noodles and tons of oil, is not exactly "clean." Thanks to the trusty zucchini noodle, or "zoodle," this version scratches the Chinese takeout itch, minus the sodium bomb.
1 boneless, skinless chicken breast
Kosher salt
4 tablespoons olive oil
½ large yellow onion, thinly sliced
2 shiitake, oyster, or maitake mushrooms, thinly sliced
1 garlic clove, grated
½ teaspoon grated fresh ginger
1 large or 2 small zucchini, spiralized
1 teaspoon toasted sesame oil
2 tablespoons coconut aminos
Cut the chicken breast into ½-inch-thick slices, then cut the slices into 2-inch pieces. Season generously with salt.
In a large nonstick pan or wok, heat 2 tablespoons of the olive oil over medium-high heat. Add the onion and cook, stirring, for 2 minutes, or until beginning to brown. Add the mushrooms and cook, stirring often, for 1 minute.
Push the onion and mushrooms to the side of the pan and add the remaining 2 tablespoons olive oil and the chicken to the empty side. Cook until the chicken is starting to brown on the first side, then flip and cook for another minute or two, until almost cooked through.
Stir in the mushroom-onion mixture, then add the garlic and ginger and sauté for 30 seconds, adding a little water if the pan looks dry.
Add the zucchini noodles, sesame oil, and coconut aminos and season everything with a big pinch of salt. Toss to combine, taste and adjust the seasoning, and serve.
# Black Rice with Braised Chicken Thighs
Serves 2
One of my favorites for entertaining, this dish is both incredibly delicious and gorgeous. As the chicken simmers in the gingery, lemongrassy, coconutty black rice, it takes on tons of flavor and an intense purple hue. Eat this plain, curled up in front of the TV, or serve it to guests with a simple salad or grilled vegetable on the side.
4 bone-in, skin-on chicken thighs
¼ teaspoon kosher salt, plus more for seasoning the chicken
1 teaspoon coconut oil
1 (2-inch) piece fresh ginger, peeled and sliced
3 large garlic cloves, sliced
⅔ cup black rice
1½ cups water
½ cup full-fat coconut milk
1 lemongrass stalk, outside layer peeled off, smashed with the side of a knife
Season the chicken thighs generously with salt on both sides.
In a 12-inch braiser or sauté pan with a lid, melt the coconut oil over medium-high heat. Add the chicken thighs, skin-side down, and sear until very nicely browned. Flip and cook until browned on the second side, about 5 minutes each side. Transfer to a plate and set aside.
Add the ginger and garlic and cook, stirring, for 30 seconds, or until fragrant but not browned.
Add the rice and give it a stir, coating each grain with the oil.
Add the water, coconut milk, lemongrass, and salt. Bring the mixture to a simmer, then return the chicken thighs to the pan, skin-side up, and cover. Reduce the heat to the lowest setting and simmer gently for 45 minutes.
Divide the rice between two plates and top each with two chicken thighs.
# Kitchari
VEGAN
Serves 2
This humble dish is so warming and nourishing. In Ayurvedic medicine, _kitchari_ is a balancing staple (see here for more on Ayurveda from Dr. Aruna). I enjoy it for breakfast, but you can have it any time of day. It's pretty delicious with a dollop of unsweetened coconut yogurt swirled in, too.
⅓ cup moong dal (split mung beans)
¼ cup basmati rice
1 tablespoon coconut oil
1 tablespoon curry powder
½ teaspoon kosher salt
2 cups water
½ cup roughly chopped fresh cilantro
Put the beans and rice in a bowl and rinse with cool running water. Drain and repeat once or twice more.
In a medium saucepan, melt the coconut oil over medium heat. Add the curry powder, beans, rice, and salt. Cook, stirring, for a couple of minutes, then add the water. Bring to a boil, then cover and reduce the heat to low. Cook for 20 minutes, undisturbed, until thick and creamy.
Divide the kitchari between two bowls. Garnish with the cilantro.
# FISH EN PAPILLOTE, 2 WAYS
Cooking fish in parchment paper, or as the French say, _en papillote_ , is cool for many reasons—wonderful flavor, moist fish, fun presentation—but I'd have to say the biggest draw is the easy cleanup. Everything cooks in one little parchment parcel, making it the most elegant one-pot dinner.
# Mediterranean Salmon en Papillote
QUICK
Serves 2
It's hard to go wrong with fennel, lemon, and capers, especially when you throw salmon in the mix. This is a foolproof method regardless of your dietary philosophy.
2 (6-ounce) pieces salmon, skin removed
Flaky sea salt and cracked black pepper
1 small fennel bulb, thinly sliced
1 small shallot, thinly sliced
4 lemon slices
2 teaspoons capers
½ teaspoon grated lemon zest
Olive oil
A dash of Aleppo pepper (optional)
Preheat the oven to 400°F. Lay out two 9x11-inch sheets of parchment paper on a flat surface.
Season the salmon fillets generously with salt and black pepper and place one in the lower third section of each parchment sheet. Divide the fennel, shallot, lemon slices, capers, and lemon zest evenly between the fillets.
Drizzle a little olive oil (about 1 tablespoon total) over each and sprinkle with Aleppo pepper, if desired.
Fold the top half of the parchment paper over the fish to make a rectangle. Starting on one side, crimp the edges together tightly so no liquid can escape and the contents are completely enclosed. Repeat with the other two sides, then place the parcels on a baking sheet and bake for 12 to 15 minutes, depending on the thickness of the fillets.
Transfer the parcels to individual plates. Carefully cut open the parchment paper (the steam inside is hot) and serve.
# Halibut en Papillote with Lemon, Mushrooms & Toasted Sesame Oil
QUICK
Serves 2
As far as I'm concerned, almost anything with toasted sesame oil is going to be good, and that's certainly true of this ginger-scented halibut-and-mushroom parcel. Like the previous papillote recipe, this serves two, but could be easily scaled up for a bigger crowd. (It seems like all my friends want to come over when I'm making it.)
2 medium shiitake, oyster, or maitake mushrooms, thinly sliced
2 (6-ounce) halibut fillets, skin removed
Flaky sea salt and cracked black pepper
½ teaspoon grated lemon zest
½ teaspoon grated fresh ginger
2 scallions, thinly sliced
1 teaspoon coconut aminos
½ teaspoon toasted sesame oil, plus more for drizzling
4 lemon slices
Preheat the oven to 400°F. Lay out two 9x11-inch sheets of parchment paper on a flat surface.
Place the sliced mushrooms in the lower third of each parchment sheet. Season the halibut fillets generously with salt and pepper and place them on top of the mushrooms.
Divide the lemon zest, ginger, scallions, coconut aminos, sesame oil, and lemon slices evenly between the fillets.
Fold the top half of the parchment paper over the fish to make a rectangle. Starting on one side, crimp the edges together tightly so no liquid can escape and the contents are completely enclosed. Repeat with the other two edges, then place the parcels on a baking sheet and bake for 12 to 15 minutes, depending on the thickness of the fillets.
Transfer the parcels to individual plates. Carefully cut open the parchment paper (the steam inside is hot) and serve.
# Sheet Pan Chicken Curry
Serves 2
This recipe is stupid easy. All you do is toss everything together, put it on a baking sheet, pop it in the oven, and you're done. On its own, it's a super-clean dinner, but you could tweak it to work as a family meal—just double the recipe and warm up some store-bought naan to round this out for kiddos.
3 garlic cloves, grated
1 teaspoon curry powder
½ teaspoon garam masala
½ teaspoon ground turmeric
½ teaspoon grated fresh ginger
2 tablespoons coconut oil
1 pound boneless, skinless chicken breast, cut into 2-inch pieces
½ head of cauliflower, cut into about 2-inch florets
Pickled Red Onions (here)
Fresh cilantro leaves, to garnish
Preheat the oven to 425°F. Line a baking sheet with parchment paper.
In a large bowl, combine the garlic, curry powder, garam masala, turmeric, ginger, and coconut oil and work them into a paste. Toss the chicken and cauliflower in the paste to coat. Spread the chicken and cauliflower in an even layer over the prepared baking sheet. Roast for 15 to 20 minutes, rotating the pan halfway through.
Serve topped with pickled onions and cilantro.
# Sheet Pan Chicken with Broccolini & Radicchio
Serves 2
Italian food is easily the thing I miss most when I'm eating really clean (Parm, prosciutto, pasta—I'm looking at you). This sheet pan dinner is great because you get to enjoy a lot of those flavors, but in a much lighter dish. If you don't care for the bitter flavor of radicchio, you could try using Tuscan kale instead.
1 pound boneless, skinless chicken thighs, cut into 2-inch pieces
1 lemon, thinly sliced
2 tablespoons Parsley Salsa Verde (here), plus more for serving
Kosher salt
1 bunch of Broccolini, ends trimmed, split lengthwise
½ head of radicchio, cut into 4 wedges through the core
6 garlic cloves, smashed
Olive oil
Flaky sea salt
Preheat the oven to 425°F. Line a baking sheet with parchment paper.
In a large bowl, toss together the chicken, lemon slices, salsa verde, and a generous pinch of kosher salt. Spread the mixture over the prepared baking sheet and roast for 10 minutes.
In a separate large bowl, toss together the Broccolini, radicchio, and garlic with a glug of olive oil and a generous pinch of kosher salt. Add the vegetable mixture to the baking sheet with the chicken and lemon, toss to combine, and bake for 5 to 8 minutes more, or until the Broccolini is crispy and the radicchio is wilted.
Finish with flaky salt and drizzle with salsa verde before serving.
# Beet Falafel Sliders
VEGAN
Serves 4
Falafel flavors are decidedly rich and savory, yet still so fresh-tasting with all that parsley. These are a fun take on the classic, and the beets add a subtle sweetness, not to mention a beautiful color. My kids eat them with hummus, but they're also delicious with tahini dressing.
1 cup chopped Roasted Beets (here)
1 cup chickpeas
1 large handful of parsley (just shy of 1 cup)
4 garlic cloves
¼ red onion, chopped
1 teaspoon ground cumin
½ teaspoon kosher salt
¼ cup chickpea flour
2 tablespoons olive oil
1 cup baby arugula
¼ cup fresh cilantro leaves
1 tablespoon fresh lemon juice
Flaky sea salt
Tahini Dressing (here), for serving
Preheat the oven to 425°F. Place an unlined baking sheet in the oven to preheat.
In a food processor, combine the beets, chickpeas, parsley, garlic, onion, cumin, kosher salt, and chickpea flour and pulse for a few minutes until well combined. Divide the mixture into 8 portions and form them into even patties, 2 to 3 inches in diameter.
Carefully remove the hot baking sheet from the oven and brush it with the olive oil. Lay the falafel patties on the baking sheet and bake for about 10 minutes. Flip the falafel and bake for 10 minutes more.
Meanwhile, in a small bowl, toss the arugula and cilantro with the lemon juice and a pinch of flaky salt.
To serve, lay two beet burgers on each plate. Drizzle with dressing and top each with half the arugula salad.
# White Bean & Zucchini Burgers
VEGAN
Serves 4
Typically, I loathe bean burgers—more often than not, they're bland and stodgy. These burgers, however, are different. The texture of grated zucchini and millet becomes so pleasantly crispy, and then there's the schmear of lemony garlic aquafaba and a pickled onion to boot.
2 tablespoons olive oil, plus more as needed
2 medium leeks, washed well and diced
8 crimini mushrooms, diced
8 garlic cloves, minced
Zest of 1 lemon
1 cup canned cannellini beans, drained
1 cup cooked millet, cooled
1 cup grated zucchini
2 teaspoons kosher salt
2 cups baby kale
Pickled Red Onions (here)
Lemony Garlic Aquafaba Sauce (here)
In a sauté pan, heat the olive oil over medium-high heat. Add the leeks, mushrooms, and garlic and cook, stirring, until caramelized, about 7 minutes. Remove from the heat, add the lemon zest, and set aside to cool.
Meanwhile, in a medium bowl, mash together the beans, millet, zucchini, and salt. Once the vegetable mixture has cooled, add it to the bowl and stir well to combine. Form the mixture into 4 patties.
In a nonstick skillet, heat a few tablespoons of olive oil over medium-high heat. Add the patties and cook for 3 to 5 minutes on each side, until crispy and browned, being careful not to break them on the flip.
Meanwhile, in a small bowl, toss the baby kale with pickled onions.
Serve the kale and onions alongside the bean burgers and lemony garlic aquafaba.
# Five-Spice Salmon Burgers
QUICK
Makes 6 burgers
A riff on the salmon burgers from _It's All Good_ , these are soy-free but don't sacrifice anything on taste. The Chinese five-spice powder (usually a mix of star anise, cloves, cinnamon, Sichuan peppercorn, and fennel) is clutch.
1½ pounds salmon, skin removed, cut into 1-inch pieces
4 scallions, thinly sliced
1 garlic clove, minced
1 (2-inch) piece fresh ginger, peeled and minced
½ teaspoon Chinese five-spice powder
2 tablespoons coconut aminos
½ teaspoon kosher salt
½ teaspoon toasted sesame oil
Place the salmon pieces on a plate and freeze for about 10 minutes, until very cold but not frozen. In batches, transfer the salmon to a food processor and pulse until it's well minced but not so much that it breaks down into a paste, about ten 1-second pulses. Transfer the salmon to a large bowl.
Combine the remaining ingredients in the food processor and process until very smooth, about 1 minute. Add to the bowl with the salmon and use a fork, spatula, or clean hands to thoroughly incorporate all the ingredients. Either cook right away or cover and refrigerate for up to 2 days.
Form the salmon mixture into 6 equal patties. Heat a grill pan over medium-high heat. When the pan is hot (but not smoking), add the salmon patties and cook for about 3 minutes on each side.
# Turkey Burgers
QUICK
Serves 4
At the end of the day, I'm a burger kind of woman, and this one hits all the marks: grilled, savory, herbed patty; cool, lemony aquafaba; crisp lettuce; creamy avocado; and a little crunch and tang from a pickle. This does not feel like detox food, that's for sure.
1 pound dark meat ground turkey
4 garlic cloves, grated
½ cup chopped fresh cilantro
2 tablespoons chopped fresh mint
½ teaspoon ground cumin
½ teaspoon kosher salt
Zest of 1 lime
Olive oil
Red-leaf lettuce, for serving
½ avocado, sliced
Pickled Cucumbers (here)
Lemony Garlic Aquafaba Sauce (here)
In a medium bowl, combine the ground turkey, garlic, cilantro, mint, cumin, salt, and lime zest and mix well. Shape the mixture into 4 equal patties.
Heat a grill pan over medium-high heat. Brush the pan with a little olive oil, then add the patties and cook for a few minutes on each side.
To serve, wrap a turkey burger in lettuce and top with sliced avocado, pickles, and lemony garlic aquafaba.
# Italian Braised Chicken
Serves 2 or 3
When I'm flipping through recipes in the realm of clean eating, I'm often thinking, _But those are just raw vegetables..._ And sure, that can be great, and sometimes when I'm eating clean I weirdly do end up craving snap peas and radishes more than my Gouda and baguette. But what I really want is comfort food that is also totally clean, for those days I just really need it. Enter nomato sauce. I took inspiration from a tomato-y braised chicken a friend made me, but I tried it with nomato sauce, and it really worked. We've been eating it over brown rice pasta, but you can try it with lentil or chickpea pasta or over cauliflower rice.
3 tablespoons olive oil
2 bone-in, skin-on chicken breasts
Kosher salt
½ white onion, diced
2 garlic cloves, grated
2 thyme sprigs
2 cups Nomato Sauce (here)
12 ounces brown rice spaghetti
A few torn fresh basil leaves, to garnish
Flaky sea salt and cracked black pepper
Preheat the oven to 350°F.
In a Dutch oven, heat the olive oil over medium-high heat. Season the chicken generously with kosher salt. Add it to the pan, skin-side down, and sear for a few minutes, just until nicely browned, then flip and do the same on the other side.
Remove the chicken from the pan and set it aside on a plate. Reduce the heat to medium-low and add the onion, garlic, and thyme. Cook until lightly caramelized, scraping up all the browned bits from the bottom of the pan, then add the nomato sauce and return the browned chicken breasts to the pan. Cover the pot and transfer to the oven. Braise for about 1½ hours, or until the chicken is tender.
Meanwhile, cook the spaghetti according to the package directions.
Remove and discard the thyme sprigs. Remove the chicken and let cool slightly, then shred the meat and stir it back into the sauce. Add the pasta to the pot with the chicken and sauce and toss to combine.
Garnish with the basil, finish with flaky salt and cracked pepper, and serve.
# Kale Aglio e Olio
QUICK / VEGAN
Serves 2
I enjoy making _aglio e olio_ (pasta with garlic and olive oil), and adding kale to it is a good way to sneak in some extra greens. (If you sprinkle some Parm on top for your kids, they might go nuts for it as well.) You can use brown rice pasta here if you prefer, or go for lentil or chickpea pasta for a bit of extra protein.
8 ounces lentil spaghetti
¼ cup olive oil, plus more if needed
6 garlic cloves, thinly sliced
3 anchovy fillets (optional)
1 bunch of black kale, stemmed and sliced into thin ribbons (about 2 cups)
Zest of ½ lemon
Flaky sea salt
Chili flakes (optional)
Bring a large pot of water to a boil. Add the spaghetti and cook for a little bit less time than the box suggests so it will stay al dente. Drain and rinse with cold water for a couple of minutes. Set aside.
In a large, shallow sauté pan, heat the olive oil over medium-high heat. Add the garlic and anchovy (if using) and cook, stirring frequently and making sure nothing burns, for about 1 minute. Add the kale and cook for another minute, then add the lemon zest and spaghetti and toss everything together. Drizzle in a little extra olive oil, if needed.
Divide the pasta between two bowls. Finish with flaky salt and chili flakes, if desired, and serve.
# Herby Meatballs with Nomato Sauce
Serves 2
These meatballs are inspired by the Elimination Diet meatballs in _It's All Good_. Packed with herbs and lemon zest, you hardly miss the Parmesan and bread crumbs. Simmering the meatballs in the nomato sauce lends such richness and depth to the dish.
# FOR THE MEATBALLS:
½ pound ground dark meat turkey
1 cup finely chopped arugula
2 tablespoons finely chopped fresh basil
2 tablespoons finely chopped fresh parsley
Zest of ½ lemon
4 garlic cloves, grated
¼ teaspoon chili flakes (optional)
# TO SERVE:
4 tablespoons olive oil
⅓ yellow onion, diced
2 cups Nomato Sauce (here)
½ cup water
1 recipe Cauliflower Rice (here)
To make the meatballs, combine all the meatball ingredients in a large bowl. Shape the mixture into 1-inch balls, setting them on a large plate as you go.
In a heavy-bottomed pan, heat 2 tablespoons of the olive oil over medium-high heat. Working in batches, add the meatballs, being careful not to overcrowd the pan. Cook for 2 to 3 minutes, turning to brown the other sides. Once they're nice and brown all over, set aside on a plate; repeat to cook the remaining meatballs.
In the same pan, heat the remaining 2 tablespoons olive oil. Add the onion and cook, stirring and scraping up the browned bits from the bottom of the pan, for about 5 minutes. Add the nomato sauce and water. Reduce the heat to maintain a simmer and add the meatballs to the sauce. Cover the pan with the lid ajar and cook for 25 to 30 minutes, until the meatballs are cooked through and the sauce has reduced a bit.
Serve the meatballs and sauce over cauliflower rice.
# Turkey Meat Loaf with Nomato Glaze
Serves 4
Meat loaf has been a family dinner staple for decades—and for good reason. I wanted to try to clean up this easy, quick classic, and thought using nomato sauce as my glaze would be a great fix for the sugar-laden ketchup that inevitably ends up smothered on top. A little apple cider vinegar, mustard, and caramelized onions make the glaze so tasty, you really won't believe there are no tomatoes.
# FOR THE CARAMELIZED ONION:
2 tablespoons olive oil
1 large white onion, diced (about 1½ cups)
# FOR THE GLAZE:
1 cup Nomato Sauce (here)
1½ teaspoons grainy mustard
2 tablespoons apple cider vinegar
¼ teaspoon kosher salt
# FOR THE MEAT LOAVES:
1 pound dark ground turkey
1 teaspoon grainy mustard
2 garlic cloves, grated
½ teaspoon fresh thyme leaves
½ teaspoon kosher salt
To caramelize the onion, in a medium sauté pan, heat the olive oil over medium heat. Add the onion and cook, stirring every few minutes, until dark brown, about 20 minutes. Set aside to cool.
To make the nomato glaze, combine all the glaze ingredients in a saucepan. Add about one-quarter of the caramelized onions and cook over medium heat for about 10 minutes, then remove from the heat and set aside.
To make the meat loaves, preheat the oven to 375°F. Line a baking sheet with parchment paper.
In a large bowl, combine the turkey, mustard, garlic, thyme, salt, and remaining caramelized onions. Form the mixture into 4 small football-shaped loaves on the prepared baking sheet. Bake for about 30 minutes, then slather each meat loaf with some of the glaze and return to the oven for 15 to 20 minutes more, until the meat loaves are cooked through and the glaze has caramelized.
# Chicken & Zucchini Kefta
Serves 2
Packed with flavor and fresh ingredients, these chicken kebabs feel like the opposite of a sacrifice, and are a summer BBQ staple. If skewering sounds like too much work, form these into little meatballs or burgers instead. And don't skip the tahini sauce—it really ties everything together.
# FOR THE KEBABS:
1 medium zucchini, grated
1½ teaspoons kosher salt
1 pound ground dark meat chicken
2 scallions, thinly sliced (about ⅓ cup)
2 tablespoons chopped fresh cilantro
⅓ cup chopped fresh mint
3 garlic cloves, finely chopped
2 tablespoons very finely chopped fresh ginger
1 teaspoon ground cumin
1 teaspoon ground coriander
½ teaspoon ground cinnamon
2 tablespoons tahini
Olive oil
# TO SERVE:
Whole romaine or butter lettuce leaves
½ cucumber, sliced
16 cilantro sprigs
1 cup whole mint leaves
Tahini Dressing (here)
To make the kebabs, soak 12 small wooden skewers in water for 20 minutes.
In a small bowl, combine the zucchini and ½ teaspoon of the salt and let sit for 5 minutes. Squeeze out as much liquid as possible.
In a large bowl, combine the zucchini, chicken, scallions, cilantro, mint, garlic, ginger, cumin, coriander, cinnamon, and tahini. Divide the mixture into 12 portions, then, with damp hands, shape each portion around a skewer.
Heat a grill or grill pan to medium-high. Brush the kebabs with a bit of olive oil, then grill for 8 to 10 minutes on each side, until cooked through. Place the kebabs on a platter with whole lettuce leaves, the cucumber, cilantro, and mint. Serve with the dressing.
# Braised Mexican Nomato Chicken
Serves 2
I'm addicted to my Braised Mexican Chicken recipe, which you can find on goop.com, but when I'm cutting out nightshades, that tomato-based sauce is, unfortunately, off-limits. Here's the super-easy clean version—welcome at taco night with the fam.
1 boneless, skinless chicken breast
Kosher salt
1 tablespoon olive oil
2 garlic cloves, minced
2 tablespoons finely chopped fresh cilantro (stems and leaves)
¼ teaspoon ground cumin
1 cup Nomato Sauce (here)
Season the chicken breast generously with salt.
In a 2-quart saucepan, heat the olive oil over medium-low heat. Add the garlic, cilantro, and cumin and cook, stirring, for 30 seconds, or until the garlic is fragrant but not browned. Add the nomato sauce, then add the chicken and bring the mixture to a simmer. Reduce the heat to low, cover, and cook for 10 to 15 minutes, or until the chicken is firm and cooked through.
Remove from the heat and let cool for 10 minutes, then use your fingers or two forks to shred the chicken.
# Chickpea & Kale Curry
VEGAN
Serves 4
This is an old goop detox recipe that I return to again and again. It's easy to throw together, full of flavor, and—bonus—kid-friendly.
3 tablespoons olive oil
1 medium yellow onion
Kosher salt
4 large garlic cloves, minced
3 tablespoons minced fresh ginger
1 teaspoon garam masala
2 teaspoons curry powder
½ teaspoon ground coriander
A small pinch of cayenne pepper
1 (14.5-ounce) can chickpeas, drained and rinsed
1 cup Vegetable Stock (here)
1 cup light coconut milk
2 cups packed finely chopped kale leaves (about ½ bunch)
Flaky sea salt
Fresh lemon juice
In a Dutch oven, heat the olive oil over medium heat. Add the onion and a pinch of kosher salt and cook for 7 minutes, or until beginning to soften. Add the garlic, ginger, garam masala, curry powder, coriander, and cayenne and cook for 2 minutes more.
Add the chickpeas, stock, and coconut milk and bring the mixture to a boil. Reduce the heat to maintain a simmer and cook gently for 10 minutes to allow the flavors to meld.
Add the kale and another pinch of salt and simmer gently for 10 minutes more.
Season with flaky salt and finish with a squeeze of lemon juice just before serving.
# Chicken & Cabbage Dim Sum
Serves 4
This recipe got me through a recent cleanse—turns out, eliminating gluten didn't exactly eliminate my love of dumplings. A sort of dim sum/cabbage roll hybrid, these little parcels are flavorful, satisfying, and fun to make. They're also super versatile—I've made versions with fish or just veggies instead of chicken, and they've all gotten thumbs up.
1 pound dark meat ground chicken
1 bunch of scallions, minced
5 garlic cloves, minced
2 tablespoons minced fresh ginger
1½ teaspoons kosher salt
1 head of green or savoy cabbage
Coconut Aminos Sauce (here) for serving
Combine all the ingredients except the cabbage in a large bowl. Mix well, then refrigerate while you prep the cabbage leaves.
Bring a large pot of water to a boil and fill a large bowl with ice and water.
Using a sturdy pair of tongs, place the entire cabbage head into the boiling water and cook for 45 to 60 seconds. Carefully pull the cabbage out. Remove the two or three layers of softened outer leaves and transfer them to the ice bath. Repeat until you have 10 blanched leaves. Dry the leaves well before assembling the rolls.
Place one cabbage leaf on your work surface with the stem end closest to you. Add about 3 tablespoons of the chicken mixture at the base of the cabbage leaf and fold it up and away from you, gently tucking the sides in over your first fold and rolling until you've formed a nice little package. Set aside and repeat to use the remaining cabbage leaves and filling.
Fill a pot with about 1 inch of water and set a wire or bamboo steamer basket inside. Line the bottom of the steamer with any extra blanched cabbage leaves. Bring the water to a simmer.
Set the dumplings in the steamer basket, seam-side down, cover, and steam for about 15 minutes—just until the filling looks cooked and is slightly firm.
Serve with coconut aminos sauce.
# Spinach & Pea Curry
QUICK / VEGAN
Serves 2
I was obsessed with the Indian Creamed Spinach from my last cookbook, _It's All Easy_ , but I wanted to try to make a vegan version using coconut milk. It's just as creamy and the flavors go so well together. This is the result: a riff on _palak matar_ , a spinach and pea curry. It's light and flavorful and works equally well over basmati or cauliflower rice.
1 tablespoon coconut oil
1 garlic clove, grated
1 teaspoon grated fresh ginger
½ teaspoon garam masala
½ teaspoon curry powder
1 cup frozen peas
1 cup full-fat coconut milk
5 ounces baby spinach
Flaky sea salt
In a large sauté pan, melt the coconut oil over medium heat. Add the garlic, ginger, garam masala, and curry powder and cook, stirring occasionally to prevent burning, for a few minutes, until fragrant. Add the peas and coconut milk and cook for about 5 minutes more, gently mashing some of the peas to thicken the mixture. Add the spinach and stir. Simmer for a few more minutes, until the spinach is wilted. Finish with a pinch of flaky salt and serve.
# DRINKS & SNACKS
For most people, the hardest part of eating clean isn't breakfast, lunch, or dinner but all the time in between. That eleven a.m. hunger pang or the four p.m. caffeine/sugar craving isn't easy to ignore, particularly when the smell of your work wife's almond milk latte is somehow wafting right over your desk. So instead of depriving myself and simply "pushing through" until the next meal, I rely on an arsenal of delicious snacks and drinks to keep me going—like my dandelion mocha that, while being totally caffeine-free, does a valiant job at scratching a cappuccino itch; cacao date balls that taste as good as a piece of chocolate but have zero refined sugar and give me tons of energy; and a couple of smoothies that do double duty as a breakfast or post-breakfast snack for me, and a post-school snack for my kids.
# Dandelion Coconut Mocha
QUICK / VEGAN
Serves 1
When I eliminate coffee from my diet—which is very rare, because I live for it—the hardest part (aside from caffeine withdrawal) is losing the ritual of my morning cuppa joe. While instant dandelion coffee comes close—and there's no arguing with the ease of it—adding coconut milk and some raw cacao makes a really delicious mock-mocha.
¼ cup boiling water
¾ cup boiling coconut milk
2 teaspoons instant dandelion coffee
½ teaspoon raw cacao powder
Heaping 1 teaspoon coconut oil
A pinch of flaky sea salt
Combine all the ingredients in a powerful blender and blend until frothy and smooth.
# Chia Watermelon Cooler
QUICK / VEGAN
Serves 2
In LA, it's all about your wellness beverage game. At any given moment at the goop offices, every desk I see has some combination of Americanos, matcha lattes, green juices, kombuchas, and mineral waters. This watermelon chia cooler is a fun, refreshing beverage that's been added to the mix.
½ cup plus ¼ cup coconut water
1 tablespoon chia seeds
1 pound seedless watermelon, cubed
Zest and juice of 1 lime
In a bowl, combine ½ cup of the coconut water and the chia seeds. Stir and let sit for 5 to 10 minutes.
While that sits, combine the watermelon, remaining ¼ cup coconut water, and the lime zest and lime juice in a blender. Blend on high until smooth.
Whisk the watermelon mixture into the chia seed mixture. Serve in 2 glasses over ice.
# Cashew Turmeric Iced Latte
VEGAN
Serves 2
This latte is not only pretty, but also packs in hero health ingredient turmeric—extolled by basically every functional practitioner I see—and is naturally caffeine-free. I make a quick cashew milk here, but if you're in a rush or don't have a powerful enough blender on hand, just sub in the same amount of a good (unsweetened) store-bought variety.
1 cup organic raw cashews
1½ cups boiling water
1½ teaspoons ground turmeric
3 dates, pitted
2 cups filtered water
A large pinch of flaky sea salt
Freshly ground black pepper
Place the cashews in a bowl and cover with the boiling water. Let sit for 20 minutes, then drain.
Place the softened cashews in a high-speed blender, add the remaining ingredients, and blend until smooth.
Strain through a fine-mesh sieve or nut milk bag into 2 glasses over ice and serve.
# Rosemary Sea Salt Nuts
PACKABLE / QUICK / VEGAN
Makes 1 cup
The hardest part of eliminating processed foods is losing your beloved snacks. Luckily there are plenty of clean snack opportunities. This is one of my faves—delicious and couldn't be easier to pull off. We all deserve a zesty snack, detox or not.
1 cup pecans
2 tablespoons neutral oil (such as grapeseed oil)
Zest and juice of 1 lemon
¼ cup fresh rosemary, chopped
1 teaspoon flaky sea salt
Preheat the oven to 375°F. Line a baking sheet with parchment paper.
In a medium bowl, toss the pecans with the oil, lemon zest and juice, and rosemary. Spread the nuts over the prepared baking sheet. Bake for 15 to 18 minutes, tossing and turning the nuts a few times as they bake.

As soon as the nuts are done, toss with the salt. Let cool completely before serving.
# Everything Bagel Cashews
PACKABLE / QUICK / VEGAN
Makes 1 cup
Cashews are one of my favorite midday snacks. They're packable and filling, satisfying that after-lunch craving for something crunchy. But a handful of raw nuts can only get you so far, which is why I wanted to combine them with one of my other favorite but not-so-clean foods: the everything bagel. They're a great way to satisfy that craving without all the schmear.
1 cup raw cashews
1 tablespoon neutral oil (such as sunflower seed oil)
2 teaspoons crushed dried onion
1 teaspoon onion powder
2 teaspoons crushed dried garlic
1 teaspoon garlic powder
1 teaspoon sesame seeds
1 teaspoon poppy seeds
1 teaspoon coarse salt
Preheat the oven to 375°F. Line a baking sheet with parchment paper.
Combine the cashews and oil in a medium bowl. Spread the nuts over the prepared baking sheet. Bake for 15 to 18 minutes, tossing and turning the nuts a few times as they bake.
In another medium bowl, combine the dried onion, onion powder, dried garlic, garlic powder, sesame seeds, poppy seeds, and salt. As soon as the nuts are done, toss with the spice mix. Let them cool before serving.
# Apricot, Cashew & Coconut Truffles
PACKABLE / QUICK / VEGAN
Makes 12 balls
These fruity, tropical bites are made for an afternoon treat; they're also great for packing in the kids' lunches. Treats all around!
½ cup dried apricots
½ cup raw cashews
½ cup unsweetened shredded coconut
Zest of 1 lime
A pinch of flaky sea salt
1 tablespoon water
Combine all the ingredients in a food processor and process until the mixture is smooth and forms a ball around the processor blade (this may take a while).
Using wet hands, roll the mixture into 12 tablespoon-sized balls, setting them on a plate as you work.
Let set in the fridge until ready to eat.
# Mango Lassi
PACKABLE / QUICK / VEGAN
Serves 1
A play on a mango lassi, this refreshing drink uses coconut milk in place of yogurt and gets a color boost from a little ground turmeric.
½ cup coconut water
½ cup fresh mango
¼ teaspoon ground turmeric
¼ cup unsweetened full-fat coconut milk
A pinch of flaky sea salt
Combine all the ingredients in a high-speed blender and blend until smooth.
Pour into a glass over ice and serve.
# Strawberry Cauliflower Smoothie
PACKABLE / QUICK / VEGAN
Serves 1
I know, I know, frozen cauliflower in a smoothie sounds gross, but it adds incredible creaminess without all the sugar of bananas (on the "no" list for set cleanses like Dr. Junger's Clean Program) and, paired with tropical fruit and lime, actually tastes really good. Even my kids happily slurp it down for breakfast or an afternoon snack. If you're sensitive to strawberries, try pineapple here instead.
½ cup frozen cauliflower
½ cup frozen mango
½ cup frozen strawberries (optional)
½ cup coconut water
2 tablespoons unsweetened canned or refrigerated coconut milk
Juice of ½ lime
Combine all the ingredients in a high-speed blender and blend until smooth.
# Blueberry Cauliflower Smoothie
PACKABLE / QUICK / VEGAN
Serves 1
With antioxidant-rich blueberries and protein-packed almond butter, this smoothie is one I often throw together after a workout. Buy organic wild blueberries, if you can find them.
½ cup frozen blueberries
½ cup frozen cauliflower
1 tablespoon unsweetened almond butter
¾ cup unsweetened almond milk
1 date, pitted and roughly chopped
Juice of ½ lime
Combine all the ingredients in a high-speed blender and blend until smooth.
# Chlorella Smoothie
PACKABLE / QUICK / VEGAN
Serves 1
A freshwater algae that happens to be a complete plant protein, chlorella is also thought to be a good chelator (see more on chelation from Dr. Novak here). I pair it with fruit and fresh mint to balance its slightly mossy flavor.
¼ cup frozen mango
¼ cup frozen peach
½ cup spinach
¼ cup full-fat coconut milk
½ teaspoon monk fruit or liquid stevia (not stevia-based sweetener)
¼ teaspoon chlorella
6 fresh mint leaves
Combine all the ingredients in a high-speed blender and blend until creamy and smooth.
# Cacao Date Truffles
PACKABLE / QUICK / VEGAN
Makes 12 balls
I'm not a big dessert person (I know, I know, what's wrong with me?), but every once in a while I crave something sweet and chocolatey, and these cacao date balls really hit the spot. Plus, there's the added bonus of satisfying your chocolate craving with iron- and antioxidant-rich raw cacao (which I've been told is good for the heart—see here). Ashwagandha is an adaptogenic herb that may help with stress, so you can add that also if you'd like (you can find both raw cacao and ashwagandha in health food stores or online). Make a double batch and store them in the fridge to combat candy bar cravings at any time.
½ cup pitted Medjool dates
½ cup raw cashews
½ cup raw almonds, preferably sprouted
⅛ teaspoon vanilla powder
¼ teaspoon ashwagandha (optional)
1 tablespoon raw cacao powder
A large pinch of flaky sea salt
2 tablespoons water
2 tablespoons cacao nibs
Combine the dates, cashews, almonds, vanilla, ashwagandha (if using), cacao, salt, and water in a food processor and process until the mixture is smooth and forms a ball. Mix in the cacao nibs by hand.
Using wet hands, shape the mixture into 12 tablespoon-sized balls, setting them on a plate as you work.
Store in the fridge until ready to eat.
# Ginger & Cilantro Tea
Serves 2
Cilantro is thought to have great detoxifying properties, and adding it to a tea is an easy way to get a dose. The addition of ginger to this tea makes it so soothing that you'll want to make it part of your everyday routine.
2½ cups water
1 (2-inch) piece fresh ginger, cut in half lengthwise
4 cilantro sprigs
In a small saucepan, bring the water to a boil. Add the ginger and cilantro, reduce the heat to low, and simmer for 3 to 4 minutes.
Strain into 2 mugs and discard the ginger and cilantro. Enjoy.
# Chamomile & Mint Tea
Serves 1
Reducing stress and getting enough rest are essential to overall wellness. This tea is an easy (and tasty) way to help you wind down after a hectic day. Ethereal and light chamomile paired with fresh, cleansing mint make for a delightfully warm cuppa.
2½ cups water
1 tablespoon dried chamomile
1 tablespoon dried mint leaves
In a small saucepan, bring the water to a boil. Combine the chamomile and mint in a teabag or infuser and place it in a mug. Pour over the boiling water and steep for 3 to 6 minutes, depending on how strong you prefer your tea.
Remove the teabag or infuser and enjoy.
# BASICS
While the basics section of any cookbook is important, it's arguably _especially_ crucial here, because the majority of store-bought sauces, condiments, and snacks are no-go's on a full-out clean-eating plan (they tend to be full of hidden additives, gluten, and sugar). This chapter is full of super-flavorful clean versions of my fridge and pantry staples, including delicious sugar- and vinegar-free salad dressings; a soy-free dipping sauce that's perfect for hand rolls, salads, and grain bowls; a tomato-free sauce that both looks and tastes remarkably like marinara; and a magical vegan mayonnaise made from the liquid from a can of chickpeas (I'm serious). Yes, cooking from scratch like this takes a little time, but once you're stocked up with a few of these basics, the recipes in the other chapters will come together quickly and easily. Plus, with this repertoire of clean-eating building blocks under your belt, you'll be able to adapt and improvise, whipping up, say, a clean nomato lasagna or drizzling a dressing over a perfect grain bowl. Once you've got the basics down, the possibilities are endless.
# Aquafaba Mayo
VEGAN
Makes about 2 cups
Aquafaba mayo is a clean-eating game changer. For the homemade egg-free mayo of your dreams, you emulsify aquafaba—the liquid from a can of chickpeas—with a neutral oil for several minutes. The best part is that it makes an ideal neutral base for any flavored aioli you can think of. I came up with three tasty riffs, but that's just to get you started. Try swirling in some vegan pesto or Tunisian harissa paste (found in most gourmet markets and online) or even Paleo sriracha.
½ cup aquafaba (liquid from a can of organic chickpeas)
1 teaspoon fresh lemon juice
¾ teaspoon kosher salt
About 1½ cups neutral oil (such as sunflower seed or grapeseed oil)
Combine the aquafaba, lemon juice, and salt in a small bowl. Using an immersion blender on a medium-high speed, pour in the oil in a very slow, steady stream. The aquafaba should begin to stiffen and expand—this will take a few minutes; it will be smooth and thickened.
Transfer the mayo to an airtight container and store in the fridge for up to about a week.
# Lemony Garlic Aquafaba Sauce
QUICK / VEGAN
Makes about ½ cup
This egg-free sauce is great when you're missing that creamy "special sauce" on your burger. But beyond that, use it with abandon—it plays well with wraps or works dolloped on a grain bowl, and I especially love it as a dip for roasted sweet potato wedges.
½ cup Aquafaba Mayo (here)
1 garlic clove, grated
Zest of ½ lemon
½ teaspoon fresh lemon juice
Combine all the ingredients in a small bowl and gently mix. Cover and store in a jar with a lid in the fridge for up to 1 week.
# Aquafaba Crema
QUICK / VEGAN
Makes about ½ cup
There's no real replacement for sour cream on a taco, but this comes pretty close. It's creamy and citrusy, and the earthy, savory flavor of cumin will make you forget it's vegan (no small feat).
½ cup Aquafaba Mayo (here)
¼ teaspoon ground cumin
Juice of ½ lime
Kosher salt
Combine all the ingredients in a small bowl and gently mix. Cover and store in a jar with a lid in the fridge for up to 1 week.
# Aquafaba Ranch Dressing
QUICK / VEGAN
Makes about ½ cup
You'll have a hard time going back to old-school ranch dressing once you try this. It's incredibly flavorful and packed full of fresh herbs, and not full of dairy and gluten—not to mention the additives and preservatives listed on the back of store bottles that I can't even pronounce. Serve it with crudités, or in the crunchy garden salad here.
½ cup Aquafaba Mayo (here)
Zest and juice of ½ lemon
1 garlic clove, grated
1 shallot, minced
1 teaspoon finely chopped fresh dill
1 tablespoon finely chopped fresh parsley
1 tablespoon finely chopped fresh chives
¼ teaspoon kosher salt
¼ teaspoon freshly ground black pepper
Combine all the ingredients in a small bowl and gently mix. Cover and store in a jar with a lid in the fridge for up to 1 week.
# Parsley Salsa Verde
QUICK
Makes about ⅔ cup
This is one of my all-time favorite condiments. The briny flavor of anchovies, with bright parsley and a little bite from raw shallot is just beyond. You can pretty much vary it however you like—try adding garlic, capers, or even thyme to the mix.
1 bunch of parsley, chopped
1 tablespoon finely chopped shallot
½ teaspoon grated lemon zest
¼ teaspoon kosher salt
¼ teaspoon chili flakes (optional)
4 anchovy fillets, chopped (optional)
⅔ cup extra virgin olive oil
Combine all the ingredients in a small bowl and mix well to combine. Cover and store in the fridge for up to 1 week.
# Cilantro Salsa Verde
QUICK / VEGAN
Makes about ⅔ cup
A light, herby salsa verde can brighten up everything from soups to grilled fish to poached eggs. This one is a little different, as it uses cilantro, lime, and cumin, giving it some earthy flavors to balance the citrus notes.
1 bunch of cilantro, chopped
4 scallions, finely chopped
Zest of ½ lime
¼ teaspoon ground cumin
¼ teaspoon kosher salt
⅔ cup extra virgin olive oil
Combine all the ingredients in a small bowl and mix well. Cover and store in the fridge for up to 1 week.
# Coconut Aminos Sauce
QUICK / VEGAN
Makes about ⅓ cup
A soy-free version of my go-to dipping sauce is a reality thanks to coconut aminos, aka fermented coconut nectar, whose natural sweetness lends an almost teriyaki flavor to any dish it's used in. It's the bomb. And this easy condiment is sure to become your go-to for an extra pop of flavor on dumplings, hand rolls, and cauli bowls.
¼ cup coconut aminos
Juice of 1 lime
½ teaspoon grated fresh ginger
1 tablespoon toasted sesame oil
½ teaspoon sesame seeds
Combine all the ingredients in a small bowl and whisk together.
# Miso-Ginger Dressing
QUICK / VEGAN
Makes about ½ cup
Miso is one of my favorite ingredients. Switching to chickpea miso—which is made by fermenting chickpeas instead of soybeans—allows me to enjoy it even when I'm eating clean. You can find chickpea miso in most health food stores and online.
2 teaspoons chickpea miso paste
2 teaspoons grated fresh ginger
Zest and juice of 2 limes
½ cup extra virgin olive oil
Flaky sea salt
In a small bowl, whisk together the miso, ginger, lime zest, and lime juice. While whisking continuously, slowly add the olive oil, then whisk until emulsified. Taste and season with salt. Cover and store in the fridge for up to 1 week.
# Tahini Dressing
QUICK / VEGAN
Makes about ½ cup
Tahini can easily overpower a recipe, but it's one of the best vegan alternatives for creamy dressings. Thinned out with water and balanced with super-savory garlic and shallots, this dressing has the potential to become your new everything sauce.
¼ cup tahini
¼ cup water
2 garlic cloves, grated
Juice of ½ lemon
1 tablespoon apple cider vinegar
2 small shallots, minced
¼ teaspoon kosher salt
In a small bowl, whisk the tahini briefly to loosen it. While whisking continuously, slowly stream in the water; the mixture might seize up and be quite thick, but keep whisking and adding the water until it smooths out (it'll take a minute). Once it has smoothed out and become a bit looser, whisk in the garlic, lemon juice, vinegar, shallots, and salt until combined. Cover and store in the fridge for up to 1 week.
# Nomato Sauce
VEGAN
Makes about 6 cups
Eliminating nightshades—vegetables in the Solanaceae family, like tomatoes, peppers, and eggplant, which may trigger inflammation in some people—can be tricky; those flavors and textures are not easy to replace. I found that butternut squash and beets could mimic the texture and color of tomato sauce, and that by pairing the "nomato" sauce with foods and aromatics you'd normally eat with tomato sauce (herbs, fennel, meatballs, etc.), you don't really miss the actual tomatoes. It's a good solution for a hearty, comforting meal that still plays by super-clean rules.
3 tablespoons olive oil
1 large yellow onion, chopped (about 2½ cups)
3 celery stalks, chopped (about 1½ cups)
6 small carrots, chopped (about 1 cup)
½ small butternut squash, peeled, seeded, and chopped (about 2 cups)
1 small beet, peeled and chopped (about 1 cup)
4 garlic cloves
¼ teaspoon fennel seeds
Bouquet garni: 2 thyme sprigs, 2 rosemary sprigs, and 2 bay leaves, tied together with kitchen twine
1 tablespoon kosher salt
¼ teaspoon freshly ground black pepper
4 cups water
In a large, heavy-bottomed saucepan, heat the olive oil over medium heat. Add the onion, celery, carrot, squash, beet, and garlic and cook, stirring, for about 5 minutes. Add the fennel seeds, bouquet garni, salt, and pepper and cook for 5 minutes or so more. Add the water, reduce the heat to medium-low, partially cover the pot, and cook for about 45 minutes, or until all the vegetables are soft. Discard the bouquet garni and let the sauce cool. Carefully transfer the mixture to a high-speed blender and blend until smooth. (Work in batches, if necessary, and be careful when blending hot or still-warm liquids.)
If the sauce is thinner than you like, return it to the pot and simmer over medium-high heat for about 10 minutes, until thickened and reduced to your liking.
# Vegetable Stock
VEGAN
Makes about 8 cups
I'm not a big fan of boxed or canned vegetable stock, especially when it's so easy to make at home. You essentially throw everything in water and let it do its thing. This is a great recipe to store in the freezer for last-minute dinners. I like to buy everything from the farmers' market—that way, I get the brightest, freshest flavors in the stock.
3 tablespoons olive oil
1 red onion, cut in half
1 yellow onion, cut in half
1 bunch of celery, cut into thirds
1 large parsnip, cut in half
1 large carrot, cut in half
1 head of garlic, cut in half
10 cups water
1 teaspoon kosher salt
1 teaspoon whole black peppercorns
1 bay leaf
8 parsley sprigs
In a large stockpot, heat the olive oil over medium-high heat. Add the onions, celery, parsnip, carrot, and garlic. Cook, stirring often, for 5 to 8 minutes, until the vegetables become aromatic and soften slightly.
Add the water and stir. Add the salt, peppercorns, bay leaf, and parsley. Bring to a boil, then reduce the heat to maintain a simmer. Cover the pot and simmer for 1½ hours.
Carefully strain the stock and discard the solids. Use immediately or let cool completely, then store in airtight containers in the fridge for up to 1 week or in the freezer for up to 3 months.
# Chicken Stock
Makes about 6 cups
Boxed and canned chicken stocks are not my favorite, so when I have the time, I make a batch of my own. I like to use chicken feet, which contain beneficial collagen—great for the gut and joints—but if you can't find them (or if they freak you out), you can absolutely skip them. Ideally, though, you want whatever chicken pieces you're using to be organic and pasture-raised.
2 fresh or frozen chicken carcasses (about 1 pound)
½ pound chicken feet (optional)
1 medium carrot, cut in half
1 large celery stalk, cut in half
1 medium leek, washed well and cut in half
2 teaspoons whole black peppercorns
1 bay leaf
8 cups water
Place the chicken pieces, chicken feet (if using), carrot, celery, leek, peppercorns, and bay leaf in a very large Dutch oven or stockpot. Add the water and bring to a simmer over medium heat. Skim off any scum from the surface with a ladle, then reduce the heat to maintain a very gentle simmer and cook for 1 hour, skimming the surface every 20 minutes or so.
Fill a large bowl with ice and set a second large bowl on top. Strain the stock into the large bowl, discard the solids, and let cool.
Transfer the stock to airtight containers and store in the fridge for up to 1 week or in the freezer for up to 1 month.
# Poached Chicken
Makes 2 chicken breasts
A solid poached chicken breast is a staple in any cook's arsenal, and is particularly handy in a clean-leaning cook's arsenal. It doesn't get much cleaner than water and herbs! You can use poached chicken in just about anything—toss it in a cauliflower rice bowl or a lettuce cup, on a salad, or in a soup. This is the basic recipe, but feel free to add different aromatics to change up the flavor—celery, carrots, parsley, cilantro, garlic, ginger, and lemongrass are just a few ideas.
1 white onion, quartered
½ teaspoon whole black peppercorns
½ teaspoon kosher salt
2 boneless, skinless chicken breasts
In a small saucepan, combine all the ingredients. Add just enough water to cover the chicken. Bring to a boil, then reduce the heat to medium-low and simmer for 20 to 25 minutes; the chicken should look opaque and (obviously) be cooked through.
# Roasted Chicken
Serves 4
I use chicken a fair amount in this book, and while a quick poached chicken breast (here) is handy, I also wanted to include a really tasty, simple, and clean roasted chicken recipe. After years of slathering birds with brines and butter and herbs, I finally realized that all you really need is a good-quality organic or pasture-raised chicken and some salt. Since I love the crispy skin, I avoid basting or rubbing oil on the outside—there's enough naturally occurring fat in the bird that will render beautifully, resulting in a perfect roasted chicken.
1 (3- to 4-pound) organic chicken
Heaping 1 tablespoon kosher salt
Preheat the oven to 425ºF. Line a rimmed baking sheet with parchment paper and set a wire rack on top.
Set the chicken on the rack. Pat it dry all over with paper towels and sprinkle the salt all over the bird. Roast for 1 to 1½ hours, until an instant-read thermometer registers 165ºF. Let rest for 20 minutes before carving.
# Roasted Beets in Parchment
VEGAN
Makes 6 beets
I use cooked beets a lot when I'm on a clean-eating kick—their sweet, earthy flavor works well in so many dishes, and they feel heartier than a lot of other veggies. I like to roast them _en papillote_ , as cooking with aluminum foil is something I try to avoid, and aluminum is at the top of the list of things to eliminate whenever I've talked with functional doctors about their heavy metal detox protocols (see here). Make a batch of these at the beginning of the week and use for everything from tacos (here) to gazpacho (here) to a quinoa sweet potato grain bowl (here).
6 beets, scrubbed
2 tablespoons olive oil
3 tablespoons water
½ teaspoon kosher salt
½ teaspoon cracked black pepper
Preheat the oven to 400ºF.
Rub each beet with a bit of the olive oil and place them on one half of a 15×12-inch sheet of parchment paper. Add the water. Sprinkle the beets with the salt and pepper.
Carefully fold the top half of the parchment paper over the beets. Starting with one side, roll and crimp the edges of the parchment, making sure no liquid can escape and the beets are completely enclosed. Repeat with the remaining two sides.
Place the parcel on a rimmed baking sheet and roast for 90 minutes.
Remove from the oven, carefully open the parchment (the steam inside is hot), and serve.
# Mexican Black Beans
QUICK / VEGAN
Serves 2 or 3
The fifteen minutes required to simmer black beans with a few simple aromatics gives them incredible flavor and is totally worth the effort. My kids love to eat these with rice and guac (don't forget the hot sauce!), and I love them on the Tex-Mex Bowl (here).
1 (14-ounce) can organic black beans, undrained
4 cilantro sprigs
1 garlic clove, crushed
A pinch of kosher salt
Combine all the ingredients in a small saucepan and simmer over low heat for 15 to 20 minutes. Be sure to simmer the beans long enough that they're not watery.
Serve warm or at room temperature, or store in an airtight container in the fridge for later use, up to 5 days.
# Seed Cracker
VEGAN
Makes 1 roughly 8×11-inch cracker
Chef Magnus Nilsson made a version of this 100 percent grain- and gluten-free cracker when he came to visit the goop test kitchen, and it blew my mind. I use it as a base for cleaned-up avocado toasts (here) or dip pieces into hummus for an easy afternoon snack.
½ cup flaxseeds
1 tablespoon arrowroot powder
¼ teaspoon kosher salt
3 tablespoons black sesame seeds
3 tablespoons white sesame seeds
3 tablespoons hulled pepitas
1 cup boiling water
Flaky sea salt
Preheat the oven to 320°F.
In a medium bowl, mix together all the ingredients except the flaky salt. Let sit for 15 minutes to firm up.
Lay a large sheet of parchment paper on your work surface and use a spatula to scrape the seed mixture onto the paper. Top with another piece of parchment and use a rolling pin to roll the mixture into an 8×11-inch, ¼-inch-thick sheet.
Transfer to a baking sheet and carefully peel off the top layer of parchment (go slowly, as a few seeds may stick to it). Sprinkle with flaky salt and bake for 45 minutes.
Let cool, then break the cracker into large pieces and store in an airtight container for up to 1 week.
# QUICK PICKLES, 3 WAYS
Pickles are such a great way to add flavor and texture to a dish. All three of these pickles use a similar brine, but I changed up the aromatics a little in each one. Feel free to play around with what you use to flavor the brine—spices like cinnamon sticks, fennel seeds, or turmeric would all taste good.
# Pickled Red Onions
QUICK / VEGAN
Makes about 2 cups
Even people who say they don't like onions will like these pickled onions.
1 medium red onion, thinly sliced
¾ cup water
1 tablespoon whole black peppercorns
1 teaspoon kosher salt
1 garlic clove, crushed
1 star anise pod
½ teaspoon coriander seeds
1¼ cups apple cider vinegar
4 drops of liquid stevia (not a stevia-based sweetener)
Put the onion in a medium bowl and set aside.
Combine the water, peppercorns, salt, garlic, star anise, and coriander in a small saucepan and bring to a boil. Remove from the heat, add the vinegar and stevia, then pour directly over the onions.
Let cool, then refrigerate. The pickles are ready to eat as soon as they're cool. These should last for a few days stored in a covered jar or bowl in the refrigerator.
# Pickled Radishes
QUICK / VEGAN
Makes about 2 cups
Mexican oregano is the key to making these taste like authentic taqueria-style pickles. (This recipe would work great with carrots, too.)
1 bunch of radishes, thinly sliced
¾ cup water
1 tablespoon whole black peppercorns
1 teaspoon kosher salt
3 garlic cloves, smashed
½ teaspoon dried Mexican oregano
1 bay leaf
¼ teaspoon chili flakes (optional)
1¼ cups apple cider vinegar
4 drops of liquid stevia (not a stevia-based sweetener)
Put the radishes in a medium bowl and set aside.
Combine the water, peppercorns, salt, garlic, oregano, bay leaf, and chili flakes (if using) in a small saucepan and bring to a boil. Remove from the heat, add the vinegar and stevia, then pour directly over the radishes.
Let cool, then refrigerate. The pickles are ready to eat as soon as they're cool. These should last for a few days stored in a covered jar or bowl in the refrigerator.
# Pickled Cucumbers
QUICK / VEGAN
Makes about 2 cups
These are nice on top of a burger, but I love them as little snackers on their own.
4 Persian cucumbers, sliced about ¼ inch thick
¾ cup water
1 tablespoon whole black peppercorns
1 teaspoon kosher salt
3 garlic cloves, smashed
1 teaspoon mustard seeds
1 tablespoon chopped fresh dill
4 drops of liquid stevia (not a stevia-based sweetener)
1¼ cups apple cider vinegar
Put the cucumbers in a medium bowl and set aside.
Combine the water, peppercorns, salt, garlic, and mustard seeds in a small saucepan and bring to a boil. Remove from the heat, add the dill, stevia, and vinegar, then pour directly over the cucumbers.
Let cool, then refrigerate. The pickles are ready to eat as soon as they're cool. These should last for a few days stored in a covered jar or bowl in the refrigerator.
# Cauliflower Rice
QUICK / VEGAN
Makes about 2 cups
You can now buy riced cauliflower in the produce section (or sometimes even in the frozen aisle) at most grocery stores, but I like to make my own when I can.
½ head of cauliflower
1 tablespoon olive oil
Trim the leaves off the cauliflower.
If you're using a food processor, trim off most of the stem and cut the head into large florets. Place the florets in the food processor and pulse until broken down to pieces about the size of couscous.
If you're using a box grater, grate the cauliflower on the side with the largest holes, using the stem as a handle.
In a 9-inch nonstick sauté pan, heat the olive oil over medium-high heat. Add the cauliflower rice and cook, stirring occasionally, until just beginning to brown, about 5 minutes.
# Chia Seed Jam
VEGAN
Makes about 1 cup
There always seems to be a surplus of blueberries in my fridge. This blueberry jam solves the problem of using them before they turn. (I know, good problem to have.) The trick is to let the blueberries reduce a bit so the jam sweetens up without added sugars or sweeteners.
1 cup blueberries
2 tablespoons fresh lemon juice
3 tablespoons chia seeds
In a small saucepan, combine the blueberries and lemon juice. Cook over medium-high heat, stirring frequently, for 5 to 6 minutes, until the blueberries start to break down and have released most of their liquid.
Using an immersion blender, blend the blueberries directly in the pot until smooth. Simmer over medium-high heat for 4 to 5 minutes, until the blueberry puree thickens a little bit.
Transfer the blueberry puree to a small bowl or small jar. Stir in the chia seeds and let sit for 10 minutes before using, or let cool completely, cover, and store in the fridge for up to 5 days.
# PART II
# HEALING CLEANSES
Nearly all my doctors over the years have had a food-first philosophy in common. Here I'm sharing interviews with six practitioners who inspire me, each on their respective area of expertise, and all with an eye toward eating to maximize our health.
# FAT FLUSH
Here's what I know from talking to friends who have felt betrayed by their bodies after having kids or hitting a certain age, and from sitting down with some of the most cutting-edge hormone and nutrition experts: The equations around the health of our metabolism, the balance of our hormones, how we gain or lose weight, how we process fat and where we store it, and ultimately our relationship to food and our bodies is infinitely complex and about much more than digits on a scale or how we look. For help decoding it all, enter Dr. Taz Bhatia, an Atlanta-based, board-certified integrative medicine physician, and author of _Super Woman Rx_ , who takes an East-meets-West, holistic approach to matters of metabolism.
# A Q&A WITH TAZ BHATIA, MD
Q: What's typically behind weight loss resistance?
**A:** Weight loss is tricky, especially for women. Doctors and researchers today are realizing that weight loss and weight gain have to be treated comprehensively and holistically to really gain tangible results. Weight loss resistance occurs when the classic balance of "calories in, calories out" no longer works. Many patients will say they went back to restricting their calories to lose that stubborn 5 or 10 pounds, and it did not work like it once had. There are a number of reasons this may be the case:
First, hormone balance in women plays a role in our metabolic weight or ability to gain or lose weight. Thyroid, insulin, estrogen, progesterone, testosterone, and cortisol are all hormones that can affect someone's ability to lose weight.
Gut function is another factor that impacts weight gain. Declining pancreatic function, resulting in a loss of digestive enzymes, and declining levels of stomach acid and shifts in the microbiome are all potential causes of weight gain. Additionally, food intolerances can contribute to this equation, making weight loss all the more challenging.
Nutritional deficiencies are a factor, as they affect weight, among many other aspects of health. I see a lot of vitamin B deficiencies in women who struggle with weight loss resistance. Low levels of vitamin D, iron, magnesium, and fatty acids can also come into play. Fatty acid deficiencies affect how we metabolize fat—instead of using fat effectively, we may hold on to it or not break it down to where the body can use it. This can cause blood sugar levels to fluctuate, which gives way to food cravings and can contribute to insulin resistance patterns (more on this in a moment).
Q: Which hormones are most likely to get thrown out of whack, and why?
**A:** The hormones most likely to be thrown off include cortisol (the stress hormone produced by the adrenal glands) and insulin (a hormone made in the pancreas in response to food). There are a number of reasons for this: Many of us live highly stressful lives, and that drives our bodies to produce more cortisol, for longer periods of time; after a point, this affects blood sugar and insulin regulation. Insulin irregularity can trigger fat storage, which can lead to stubborn belly weight, back fat, and general weight gain.
Insulin is meant to help the body absorb glucose and use it for energy. Insulin resistance occurs when cells don't respond properly to insulin, which prompts the body to produce more insulin, while blood sugar levels continue to rise. Unfortunately, once that switch is flipped and the body becomes insulin resistant, it's tougher to reverse it. Studies often show that to really affect weight loss, a total reset of insulin is required, which can take 3 to 6 weeks.
Leptin and ghrelin—known as the hunger hormones—also play a role. In short, leptin decreases appetite and ghrelin signals hunger. An imbalance of these two hormones can throw off insulin and cortisol. One issue I see with women who consistently don't get enough sleep is that they have high levels of ghrelin, and they don't produce leptin in the right amount, which can make you hunger resistant—in other words, you don't recognize when you're full anymore. Stress, coupled with lack of sleep, can cause cortisol to spike as well.
Thyroid hormone is critical. When the thyroid is sluggish, the body typically stores more estrogen rather than using it and then getting rid of it, which can cause weight gain. (High estrogen, called estrogen dominance, can trigger or worsen insulin resistance.) An overactive thyroid can actually be associated with weight gain, too, because this dysfunction can throw metabolism off. Nutritional deficiencies can negatively impact thyroid health—particularly low levels of iron, iodine, selenium, and zinc. And if the issue is not super severe, patients can often reset the thyroid through diet.
For the thyroid to function effectively, you need your adrenal glands to be functioning effectively. This goes back to stress and cortisol—so many of us are burning the candle at both ends, which means the adrenals have to work harder, the thyroid has to work harder, and our hormone balance often suffers. It's all interconnected.
Q: Is there testing that you typically recommend for people coming to you for weight loss resistance?
**A:** We conduct a full battery of testing at my practice, CentreSpring MD, including testing all hormone and nutrient levels, for food allergies and food intolerances, and stool tests to assess microbiome function.
When it comes to hormone testing, you might have heard that your hormone levels vary depending on where you are in your menstrual cycle. This is true, but there are boundaries regardless—for instance, I don't like to see estrogen over 200 (this is estrogen dominance) or progesterone below 0.5. Here are some hormone highlights and ideal value ranges to be aware of:
• Thyroid-stimulating hormone (TSH): 1–2 mIU/L
• Free triiodothyronine (free T3): 3–5 pg/ml
• Estradiol (a form of estrogen): 50–150 pg/ml
• Estrone (a form of estrogen): less than 150 pg/ml
• Progesterone: 0.5–2 ng/ml
• C-peptide (a marker of pre–insulin resistance): 1–3 ng/ml
• Leptin: 5–15 ng/ml
You can also get your fasting insulin level checked, which can tell you if your insulin is too high (over 5 ng/L or mg/dL when fasting). You can get your cortisol checked with a spit test—for the most accurate picture, you'll do this throughout the day (a healthy range is 10 to 20 ug/dl in the morning, and drops to 3 to 4 ug/dl in the afternoon).
It's easier to have a stool test ordered than hormone testing—stool tests are typically covered by insurance and can be done by conventional doctors, so you don't need to see a functional MD, necessarily. With stool tests, one thing I look for is fecal fats—when the body is spilling fat globules it's a sign that you aren't digesting fat well, which, as mentioned, can affect blood sugar levels.
Depending on the patient, more complex gut tests might be ordered to check for microbiome dysfunctions—e.g., Candida, small intestinal bacterial overgrowth (SIBO), enzyme deficiency, or other outliers like parasites.
In terms of nutrient levels, some key ranges I look for are:
• Vitamin D: 50–70 ng/ml
• B12: over 500 pg/ml
• Ferritin (iron): 50–70 ng/ml
• Magnesium: greater than 2.2 mg/dl
Q: Can a cleanse actually help kick-start your metabolism and help you shed pounds? Are there any detox add-ons you recommend?
**A:** Yes—the essential purpose of a detox is to give the liver and the gut a resting period, reset blood sugar and insulin irregularity, and give the body a nutrient superboost to help improve metabolism.
If you don't get your diet right, no detox add-ons are going to make a difference. But to complement a good diet, I like steaming, infrared saunas, whirlpools, and Epsom salt soaks.
Q: What does your protocol for weight loss look like?
**A:** My protocol for weight loss focuses on three main steps, each lasting a week.
• The first week is focused on detoxification and hormone balancing. This includes removing grain, dairy, sugar, and red meat.
• The second week is about balancing and improving digestion. During this week, we add gut-balancing foods like probiotic-rich kefir or kombucha, bone broth, and digestive enzymes and probiotics.
• The third week is more focused on movement, exercise, and sleep. Learning to increase heart rate, build muscle, and stretch can all be components of weight loss.
Q: Can you go into more detail on foods to steer clear of?
**A:** High-sugar foods, sodas, and sweet drinks are obvious no's.
Refined sugar, added sugars, processed sugars—monk fruit, stevia, sugar, honey, agave—all of that should be under 25 grams a day, or below 6 teaspoons. Lower is ideal.
Excessive consumption of fruit can be an issue as well. The sugar in fruit, although natural, can be just another addition to an already ricocheting blood sugar level. I suggest sticking to one serving of fruit per day, and picking lower-sugar fruit, like berries (as opposed to bananas and citrus). Apples are also a good choice, as the fiber they contain brings the glycemic index down. If you have Candida overgrowth (which many of my patients with insulin resistance and weight loss resistance do), you should skip fruit, as the sugar feeds the yeast.
If you're counting carbs, you likely want to be under 150 total carbs per day. But if you're an athlete or otherwise super active, you may need more. If you have a mood disorder or severe anxiety, be aware that dropping your carbs too low can worsen these symptoms.
Q: What about good ingredients to add?
**A:** Adding in MCT (medium-chain triglycerides) oils in the form of coconut oil and other healthy fats like avocado, olive oil, and ghee can help to better balance insulin and keep blood sugar levels stable. Getting enough fat helps you to feel satiated and cut sugar cravings down.
Superfoods like greens, seeds, and nut butters help to boost antioxidants. Fiber and protein are key for energy.
Bitter foods like artichokes, radishes, and dandelion root help the digestive process by activating the production of hydrochloric acid and digestive enzymes. You can also take squeeze bitters before eating.
Apple cider vinegar is easy to add to foods (like salad dressings), and some patients report that it helps with weight management, particularly belly fat.
The first few days of a cleanse, if you're going through sugar withdrawal, you might feel the urge to eat even if you're not really hungry. Sipping on green tea or decaf tea can help blunt the craving. You can also try drinking mini tonics made with ingredients like dandelion, celery, or turmeric throughout the day.
It's less "natural," but people report that chewing gum or sucking on a mint prevents them from feeling like they need to snack constantly.
Q: Which supplements do you recommend?
**A:** For weight loss, I recommend a multi-B vitamin, as it is essential for women's health. B vitamins are involved in estrogen metabolism, as well as progesterone, thyroid, and adrenal health. They play a role with almost every hormone, and deficiencies can correlate with estrogen buildup and cortisol spikes.
Probiotics can be beneficial for overall microbiome health and are ideally tailored to your personal health. For many of my patients with weight resistance and correlated Candida issues, I recommend probiotics higher in _Lactobacillus_ and _Bifidobacterium_ strains.
Digestive enzymes with ox bile and lipase can help the body metabolize fat.
Q: What about exercise and other lifestyle changes?
**A:** Once insulin resistance sets in, daily movement is usually needed for approximately 45 minutes per day. For at least 20 of those minutes, you want your heart rate at double your resting rate.
Adding weight training, at least 2 days per week, is also great because the body's metabolic response helps with blood sugar balance and burning energy more efficiently.
But if stress levels are high, then yoga, Pilates, and sometimes swimming are often better options. For patients who are in danger of being further fatigued, injured, or quitting, I like to focus on food, resetting the gut, getting adequate sleep, and bringing cortisol down. Once cortisol is in check, it might be time to consider higher-powered exercise.
Overall, sleep is crucial, especially for women because of our cycling. Not getting sufficient sleep can affect hormone signaling (cortisol in particular), causing weight gain, among other health issues.
Q: Once the body is reset, what's important to keeping everything in balance and maintaining a healthy weight?
**A:** Doing more of the same: Maintaining good sleep, food, and exercise habits while keeping your gut balanced are key to maintaining your weight.
A daily fasting interval of 14 hours, and a weekly fast, can be helpful as well, and continue the benefits of a softer, less involved detox. Keeping close to a 14-hour window (including overnight) in which you're not eating helps to support digestion. Between meals, try to avoid grazing constantly, and give your body at least 4 hours to digest and break everything down.
A weekly fast doesn't have to mean only drinking water for a day. Some people will go meatless, all-veggie, and sugar-free for one day a week as a way of cutting back what might build up in the system.
You also want to be sure you're sweating regularly and having good bowel movements (going to the bathroom at least once a day).
Q: Do you see an emotional component to metabolism and weight?
**A:** In Chinese medicine, there is thought to be an emotional root to all energy meridian imbalances. People in pain, stress, or trauma, or who have lost their spark or passion, will sometimes eat and use food as a way to medicate. They are so disconnected that they usually don't realize they're doing this. A lot of us can relate to this—raiding the pantry when we're just bored or stressed but not really hungry.
We have to learn to recognize our emotional cues and how we're actually feeling. If we don't understand our emotional and mental landscape, we can have the best plan in the world but still sabotage it. All the chemistry and talk of hormones and gut balance is a moot point if our relationship to food and our bodies is dysfunctional.
Changing your emotional landscape is the hardest and probably the most important work. There is no quick fix. But what might help? Try checking in with your body, spending time in nature, developing a spiritual connection, taking time to care for yourself, doing work you love.
**The Food Shortlist**
#### BEST TO AVOID
• Dairy (ghee is okay)
• Gluten
• Red meat (limit)
• Refined sugar (limit fruit sugars)
#### GOOD TO ADD
• Apple cider vinegar
• Avocado
• Coconut oil
• Fermented foods
• Greens
• Nut butters
• Olive oil
• Seeds
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Veggie Scramble (here)
**LUNCH:** Nori Salad Roll (here)
**DINNER:** Sheet Pan Chicken Curry (here)
**TUESDAY**
**BREAKFAST:** Seed Cracker with Egg & Avocado (here)
**LUNCH:** Teriyaki Bowl (here)
**DINNER:** Peruvian Chicken Cauli Rice Soup (here)
**WEDNESDAY**
**BREAKFAST:** Sweet Buckwheat Porridge (here)
**LUNCH:** Miso Soup (here)
**DINNER:** Fish Tacos on Jicama "Tortillas" (here)
**THURSDAY**
**BREAKFAST:** Quinoa Cereal with Freeze-Dried Berries (here)
**LUNCH:** Kimchi Chicken Lettuce Cups (here)
**DINNER:** Black Rice with Braised Chicken Thighs (here)
**FRIDAY**
**BREAKFAST:** Kale Kuku Soccata (here)
**LUNCH:** Crunchy Summer Rolls (here)
**DINNER:** White Bean & Zucchini Burgers (here)
**SATURDAY**
**BREAKFAST:** Breakfast Dal (here)
**LUNCH:** Garden Salad with Aquafaba Ranch Dressing (here)
**DINNER:** Chicken & Cabbage Dim Sum (here)
**SUNDAY**
**BREAKFAST:** Seed Cracker with Smoked Salmon & Avocado (here)
**LUNCH:** Coconut Chicken Soup (here)
**DINNER:** Mediterranean Salmon en Papillote (here)
#### A FEW GO-TO SNACKS:
• Seed Cracker (here) with almond butter
• Seed Cracker with avocado and sauerkraut
• Half an avocado with sea salt and lime juice
# HEAVY METAL DETOX
Southern California–based Dr. James Novak sees a number of patients concerned with heavy metal toxicity (among other hard-to-solve symptoms). There's still a lot that science doesn't know about heavy metals. Novak explains that we're all exposed to these toxins on a daily basis, but very few people require aggressive treatment because of it. With that in mind, he shares tips for decreasing your toxic load in the first place and for supporting the body's natural mechanisms for getting rid of heavy metals.
# A Q&A WITH JAMES NOVAK, MD
Q: What are the major sources of heavy metal exposure?
**A:** It is impossible to live in the modern world without exposure to a multitude of toxins. Toxins are ubiquitous in the air we breathe, the water we drink, and the food we eat. Fortunately for us, the mind/body is a self-regulating, self-healing system that is well-equipped with multiple homeostatic systems to deal with this problem constantly (assuming it is not completely overloaded).
Heavy metal exposure, more specifically, is ubiquitous in our everyday environment. People are exposed regularly to nano-sized aluminum from geoengineering aerosols, as well as from cooking pots and antiperspirants. The primary exposure sources of inorganic mercury are mercury amalgams in teeth, organic mercury in fish high in the food chain (shark, tuna, swordfish), and high-fructose corn syrup (which may be contaminated with mercury and is a staple in processed foods). People can be exposed to arsenic in rice and chicken. The amount of arsenic ingested from rice may be minimized by cooking with lots of water (a ratio of about 6 to 10 parts water to 1 part rice). The amount of arsenic ingested with chicken can be minimized by consuming organic chicken. Shellfish can contain cadmium, as well as other toxic compounds, depending on the watershed from which it is harvested. Lead can get stored in our bones from previous exposure to fumes from leaded gasoline.
Q: Why do you consider heavy metals a health concern?
**A:** Heavy metals can denature enzymes, disrupt normal cellular metabolism, and cause oxidative stress. For example, mercury molecules in the mitochondria can cause free radical reactions, which may put stress on cellular antioxidant defenses and weaken resistance to other cellular stressors such as bacteria, viruses, and other toxins.
Q: When do heavy metals become an issue? What are the related symptoms, and can you test for toxicity?
**A:** Research suggests that newborn babies actually inherit a toxic load from their mothers, so heavy metal toxicity is a health concern that everyone should be aware of. But until people get to a point where their compensation mechanisms—their cellular detoxification enzymes—are saturated, they usually do not experience overt symptoms.
Because many heavy metals are chronic neurologic, endocrine, and immune disruptors that can undermine the mechanisms by which these systems self-regulate and heal, there is a wide range of potential symptoms. Symptoms of advanced heavy metal toxicity can include fatigue, cognitive impairment, palpitations, muscle weakness, and dizziness, depending on which tissue compartment of the body the heavy metals have accumulated in.
Probably the best test for heavy metal toxicity is provided by Quicksilver Scientific. It reveals normal mineral metabolism along with the standard toxic elements, e.g., levels of lead, cadmium, mercury, arsenic, etc. Additionally, the test speciates the mercury into organic and inorganic components, which is helpful because knowing the relative amount of each form gives you a better idea of the degree of toxicity (more organic mercury is worse because it absorbs into body tissues more readily). Since it simultaneously tests hair, urine, and blood samples, it can also help show whether detox systems in the body are working to eliminate toxins properly, or whether accumulation is progressing due to faulty elimination.
Q: What does your typical heavy metal detox protocol look like?
**A:** I think everyone needs to approach supporting the body's heavy metal detoxification as an ongoing lifestyle. A minority of severely affected people may need aggressive treatment with intravenous chelation under the supervision of a doctor. The word "chelation" is derived from the Greek word "Chele," meaning "claw." Intravenous administration of a chelator such as EDTA or DMPS two to three times a week grabs heavy metals electromagnetically between the molecular "claws" of the chelator, and transports them to the liver and kidneys for elimination. But the vast majority of people can benefit greatly from a simpler lifestyle approach.
For more support, a gentle detox protocol over a 3- to 6-month period can be helpful:
**STEP ONE: REMOVE THE SOURCE**
Remove or minimize ongoing sources of exposure. This means avoiding processed foods, and fruits and vegetables with pesticide and herbicide contamination; fresh, organic fruits, vegetables, and herbs are best. High-mercury fish (like shark, tuna, swordfish), as well as shellfish, should be avoided. Meat and fowl should be organic, preferably pasture-fed. If you're eating rice, I recommend basmati, jasmine, and Himalayan.
Unfiltered tap water should be avoided. The best sources of water—free of fluoride, heavy metals, and organic pollutants—are obtained from spring water (I like Mountain Valley Spring Water, which comes in glass bottles) or home reverse osmosis filters, distillers, or atmospheric water generators.
Aluminum cooking vessels should be avoided—stainless steel is okay—as well as antiperspirants containing aluminum. Glass containers should be used for drinking water.
Some people may remove mercury amalgams gradually. Use a biological dentist trained in safe mercury removal.
**STEP TWO: SUPPORT DETOXIFICATION**
In general, we can support the body's detox mechanisms in the skin, kidneys, and gastrointestinal tract: Sweating, either by exercise or far infrared sauna, can help remove toxins through the skin. Adequate hydration can enhance renal clearance of toxins. And proper nutrition can support the removal of toxins from the tissues, the processing of toxins in the liver, and the binding of toxins in the intestinal tract for ultimate excretion in the stool.
More specifically, heavy metals such as lead, mercury, and arsenic are excreted from the cells by special protein structures called metallothionein enzymes. These structures require minerals such as zinc, copper, selenium, and magnesium to function properly. Eating foods rich in these minerals can aid these enzyme systems in releasing tissue toxins into the blood. Good sources include nuts and seeds, legumes, Atlantic seaweeds, leafy green vegetables, fulvic and humic mineral supplements, sea salt, and herbs such as burdock root, horsetail, and red clover. Regular consumption of cilantro can also aid in the mobilization of heavy metals from tissue compartments.
Once in the blood, the toxic substances are carried to the gastrointestinal tract, where they undergo a two-phase process for transformation and eventual elimination from the body. The initial process, phase one, involves oxidation and hydrolysis of the toxic substance via the p450 enzymes in the liver. This process converts the toxic substance into a form that will eventually be easier to excrete. Cruciferous vegetables—such as broccoli, cabbage, cauliflower, and Brussels sprouts—can support this process.
The second phase of detoxification involves a process called conjugation, where another compound is attached to the toxin to make it easier to excrete. A variety of biochemical mechanisms can be involved at this point—sulfation, glucuronidation, acetylation, methylation, glutathione conjugation. Sulfur-rich foods such as eggs, garlic, onions, leeks, shallots, and cruciferous vegetables may all help facilitate this process.
When the conjugated toxins are ready for excretion, they flow with the bile into the intestines. Cholagogues, such as curcumin, can help stimulate the flow of bile into the intestines.
Once the toxins enter the intestinal tract, we don't want them to be reabsorbed into the bloodstream. Eating bulking foods such as citrus pectin, seaweed alginates, and mucilaginous foods (like flax and chia seeds) may help prevent reabsorption of metals that have already been eliminated as they pass down the intestinal tract.
In terms of supplements, some people may take iodine supplements to support heavy metal excretion.
Q: What do you recommend for patients if this doesn't work?
**A:** Depending on the patient, I may incorporate additional elements into the protocol, like a multimineral, cilantro tincture, or fulvic acid (to promote tissue release); n-acetyl cysteine (to promote liver conjugation/excretion); and PectaClear (modified citrus pectin/seaweed alginate) with chlorella (as intestinal binders).
For those individuals who need a more aggressive approach, I may prescribe a cycle of oral chelation using DMSA (dimercaptosuccinic acid) capsules, 3 days on, 11 days off, until retesting demonstrates the desired improvement. I usually will retest for heavy metals after 3 to 4 months of treatment.
A potent oral chelator, Irminix (emeramide) by EmeraMed, is undergoing regulatory approval in Europe and will likely be available as a doctor prescription in the next year or two; it will also likely become the gold standard of care. This chelator is absorbed orally and crosses the blood-brain barrier. A 2-week course of 300mg per day has been shown to eliminate mercury residues from the body. Once mercury residues are reduced, Irminix can then start to eliminate arsenic, lead, and other toxic metals based on the degree of electromagnetic attraction it has for each element.
**The Food Shortlist**
#### BEST TO AVOID
• Added sugars
• Grains with mycotoxin residue
• High-mercury fish/shellfish
• Non-organic fruits and vegetables
• Non-organic meat/chicken
• Processed foods
• Toxic oils
• Unfiltered tap water
#### GOOD TO ADD
• Adequate spring/reverse osmosis/distilled water
• Alliums (garlic, onions, leeks, shallots)
• Bitter greens (dandelion, beet greens, chicory, turmeric)
• Cilantro
• Cruciferous vegetables
• Eggs
• Leafy green vegetables
• Nuts and seeds
• Seaweeds
• Sea salts
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Chlorella Smoothie (here)
**LUNCH:** Curry Chicken Lettuce Cups (here)
**DINNER:** Chickpea & Escarole Soup (here)
**TUESDAY**
**BREAKFAST:** Cauliflower, Pea & Turmeric Soccata (here)
**LUNCH:** Clean Carrot Soup (here)
**DINNER:** Zoodle Chow Mein (here)
**WEDNESDAY**
**BREAKFAST:** Kale Kuku Soccata (here)
**LUNCH:** Miso Soup (here)
**DINNER:** Tex-Mex Bowl (here)
**THURSDAY**
**BREAKFAST:** Poached Eggs over Sautéed Greens (here)
**LUNCH:** Tex-Mex Bowl (here)
**DINNER:** Peruvian Chicken Cauli Rice Soup (here)
**FRIDAY**
**BREAKFAST:** Easy Frittata (here)
**LUNCH:** Broccoli-Parsnip Soup (here)
**DINNER:** Chicken & Zucchini Kefta (here)
**SATURDAY**
**BREAKFAST:** Strawberry Cauliflower Smoothie (here)
**LUNCH:** Kimchi Chicken Lettuce Cups (here)
**DINNER:** Faux Meat Beet Tacos (here)
**SUNDAY**
**BREAKFAST:** Breakfast Dal (here)
**LUNCH:** Beet Gazpacho (here)
**DINNER:** Chicken & Cabbage Dim Sum (here)
#### A FEW GO-TO SNACKS:
• Hard-boiled egg
• Handful of walnuts
• Toasted nori
# ADRENAL SUPPORT
"Why am I so effing tired?" is a mantra heard well beyond goop. The answer may be apparent in some instances (not enough rest, too much doing), but the effects on our bodies can be complex and sometimes masquerade as other health issues, explains LA-based cardiologist and functional medicine physician Alejandro Junger. A longtime friend and mentor of mine, Dr. Junger studies the phenomenon of adrenal fatigue (which is unrecognized by Western medicine). Junger likens the adrenals, the two small glands that sit on top of our kidneys and regulate the fight-or-flight reaction, to the power strip into which our organs are plugged for energy. When they are overworked (because of high stress and nonstop fight-or-flight reactions), we can feel tapped out. His tips for recharging follow.
# A Q&A WITH ALEJANDRO JUNGER, MD
Q: Can you explain adrenal fatigue? What causes it?
**A:** To understand adrenal fatigue, first imagine this: You wake up early in the morning and have a good sweat working out. Then you prepare breakfast for the kids and take them to school. You go to work and have a busy day. Back to school to get the kids. Drive them around for their activities. Back home. Dinner. Bath and put them to sleep. You're exhausted—you can barely think or walk around. If at that very moment you went to the doctor and got a full physical and a set of blood tests, everything would be normal. But you know that everything is not "normal." You are exhausted. And the solution is so simple: a good night's sleep.
Like so many things in modern Western medicine that are not measurable and quantifiable, adrenal fatigue is rarely recognized (or even considered an issue). What Western medicine does recognize is the complete shutdown of the adrenal glands, called Addison's disease, and the other extreme, Cushing's disease, which presents as a hyperactive adrenal system. But I've seen a whole spectrum in the middle with patients. It's on the hypoactive side of the spectrum that we talk about adrenal fatigue.
The adrenal glands are small glands that sit on top of the kidneys, but their function in the body is anything but small. They are most famous for being responsible for creating the necessary inner conditions for a fight-or-flight response, which is a primitive way (physiologically speaking) of animals responding to danger, and therefore essential for survival. In natural conditions, the fight-or-flight response is activated only every now and then, depending on the surroundings. Between dangerous moments, there is time to recover, then to just function at a baseline level. But modern life is very different from what nature designed. There are mini or major fight-or-flight responses triggered all day long, week after week, month after month, year after year. Anything that your mind interprets as threatening can get the adrenals firing adrenaline. In this constant state of stress, the adrenals must continuously attempt to adapt by releasing cortisol.
This chronic increase in adrenal activity can start taking a toll, as it would with any organ that is overused. Cortisol may be manufactured slower at times, faster at others, not following the normal circadian rhythm that allows for optimal functioning of the body. Instead of being released on awakening to help get us ready for a day of activities, cortisol may be low and slow early in the day. Instead of winding down at night to get the body ready for sleep, higher levels of cortisol may be released, making it difficult to get a good night's rest. This is one of the ways to indirectly "measure" a fatigued adrenal system—by testing cortisol levels, best done in saliva, at different times of the day and seeing the pattern.
Q: What are the primary symptoms?
**A:** The adrenal glands are involved in much more than just the fight-or-flight reactions. One could argue that directly or indirectly, they affect every cell in the body: They have an effect on the sex hormone system, on the way we absorb and assimilate minerals, on the way we process and utilize sugar, on the health of our blood vessels, and even on our mood and chain of thoughts. So when the adrenal system gets "fatigued," there are a lot of potential ways that imbalance or dysfunction may manifest in the body.
Some of the most typical symptoms are tiredness, mental fog, inability to get a good night's sleep and wake up rested, difficulty losing weight, hair loss, and irritability. But the picture may be much more complicated.
As adrenal function is connected with the sex hormones, adrenal fatigue may present as hormonal imbalances—irregular periods or no periods at all, infertility, lack of libido, difficulty gaining muscle mass, sugar cravings.
Because of the adrenal relationship with insulin, glucagon, and the intestines, adrenal fatigue can present as insulin resistance, diabetes, high blood pressure, and other symptoms related to the mismanagement of sugar by the cells.
Since adrenal function and mineral absorption are related, the fatigue can present as osteoporosis.
Because of the intimate relationship with the gut, adrenal fatigue can present as digestive issues, and potentially even depression, as the gut manufactures 90 percent of the serotonin in our body.
We could go system by system, organ by organ, examining ways in which fatigued adrenal glands may hinder optimal function elsewhere in the body. I see and hear from people every day who feel and know there is something wrong, but who cannot get an answer from a traditional medical test. If this sounds like you, your best bet is to find a functional medicine doctor who can take a holistic look at your symptoms and figure out if your adrenals may be fatigued.
Q: What are the indirect ways of testing for adrenal fatigue?
**A:** As mentioned, testing cortisol levels throughout the day can provide hints at how the adrenals are functioning. Low levels of DHEA also suggest that the adrenals may be fatigued. A doctor who has experience with adrenal fatigue will do a full assessment to make sure it is not actually a different disease or issue being mimicked by adrenal fatigue. The best way, though, is often to see how you respond to the ways in which the adrenals can be "rested and recovered."
Q: What do you recommend for resetting the adrenals? What's possible in terms of rebalancing, and how long does it typically take?
**A:** Resting is essential. The adrenal system is perfectly designed to act in bursts when needed and then recover pretty fast as well. If we lived as nature designed us to, a good rest would be enough to rebalance. But with our stressful lifestyles, we are constantly activating the adrenal response, so it may take more time to recharge. Depending on the severity of the adrenal fatigue, it may take a few weeks to a few months, or even up to a couple of years in severe cases, to completely rebalance. The good news is that while full recovery may take longer than we'd like, people typically begin to feel some benefits almost immediately.
As with any health problem, when the root of it is uncovered and corrected, everything tends to correct itself, and health is restored. This is what the body is constantly trying to do. The problem with adrenal fatigue is that our identities are often wrapped up with the root cause of it (e.g., being busy, hardworking, always on the go) so it may be difficult to make the necessary lifestyle changes (e.g., resting more). We may think that if we slow down, our work will suffer or we won't be able to be there for our kids and their busy schedules, but there are health consequences when we never stop to recharge. (Also, if we made our kids' lives less busy from a young age, could we potentially prevent them from having to drastically change their lifestyle in the future to recover from adrenal fatigue?)
It's important to not only rest your body by decreasing muscular activity, but also to rest from the conditions in your life that are triggering a fight-or-flight response. For most people, sleeping more and sleeping better is a good start. There are entire books written about this subject; some of the tried-and-true tips are to avoid electronics prior to bedtime, try a warm bath or shower, don't eat immediately before bed, and make your room as dark as possible.
Relaxing more and having more fun is essential. Laughing, spending time with family and friends, and helping others can all be forms of medicine (cheesy, but true) that allow for adrenal recovery, or at least slow the excessive consumption of adrenaline and other adrenal stress hormones. Both yoga and meditation have been scientifically proven to benefit overall health in so many ways.
Sometimes it is necessary to make a drastic change and leave a toxic job, or a toxic relationship. Resting from a stressful life may also be more complicated, and involve learning new ways to respond to the same situations if they are impossible to change, at least for now.
Q: Which foods do you recommend eliminating from and adding to the diet?
**A:** Just as it is recommended to eliminate or reduce toxic exposures and situations, it is the same inside—and nothing affects us inside more intimately and powerfully than food. To allow the adrenals to recover, you should avoid all foods that make more digestive work for the body, that carry toxins (e.g., processed foods), or that trigger an allergy or sensitivity in some way (a food allergy can cause a small fight-or-flight reaction of sorts).
If you don't know which foods negatively affect you, a good place to start is avoiding any foods with chemicals and sticking to a basic elimination diet—no sugar, dairy, alcohol, coffee, or gluten.
Good foods to include are ones rich in nutrients and not hard to digest, like olive oil and avocados, coconut and coconut oil, apple cider vinegar, green veggies, bone broth, free-range meats if you eat animal products, nuts, fermented foods, and fruits like berries and açai.
Replacing some meals with supercharged smoothies is a good practice. This means less digestive work, but plenty of available nutrients to absorb. I like to add spirulina, Celtic salt, raw almond butter, E3 (marine algae), and bee pollen to my smoothies, no matter what else I put in them.
Q: Do you recommend supplements?
**A:** There are many supplements that can potentially help your adrenals recover, depending on your individual nutrient deficiencies and needs. In general, these include vitamins B, C, D; minerals such as selenium and magnesium; and fish oils (e.g., DHA). But the royal family of adrenal helpers are the adaptogenic herbs—such as ashwagandha, rhodiola, holy basil, licorice—that help the body adapt to stress. Moringa (from the moringa tree) can be another helpful adaptogen and contains most micronutrients needed for optimal body physiology; it can be taken as pills or in powder form (added to smoothies).
**The Food Shortlist**
#### BEST TO AVOID
• Alcohol
• Coffee
• Dairy
• Gluten
• Processed foods
• Red meat and shellfish
• Strawberries, bananas, grapefruit, and oranges
• Sugar
#### GOOD TO ADD
• Açai
• Apple cider vinegar
• Avocados
• Berries
• Fermented foods
• Green veggies
• Olive oil
• Seaweed
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Black Rice Pudding with Coconut Milk & Mango (here)
**LUNCH:** Nori Salad Roll (here)
**DINNER:** Faux Meat Beet Tacos (here)
**TUESDAY**
**BREAKFAST:** Blueberry Cauliflower Smoothie (here)
**LUNCH:** Beet Gazpacho (here)
**DINNER:** Kimchi Lettuce Cups (here)
**WEDNESDAY**
**BREAKFAST:** Quinoa Cereal with Freeze-Dried Berries (here)
**LUNCH:** Quinoa, Sweet Potato & Tahini Grain Bowl (here)
**DINNER:** Halibut en Papillote with Lemon, Mushrooms & Toasted Sesame Oil (here)
**THURSDAY**
**BREAKFAST:** Chocolate Chia Pudding (here)
**LUNCH:** Grilled Chicken Salad with Miso Dressing (here)
**DINNER:** Braised Chicken Tacos on Butternut Squash "Tortillas" (here)
**FRIDAY**
**BREAKFAST:** Beet Açai Blueberry Smoothie Bowl (here)
**LUNCH:** Curry Chicken Lettuce Cups (here)
**DINNER:** Teriyaki Bowl (here)
**SATURDAY**
**BREAKFAST:** Sweet Buckwheat Porridge (here)
**LUNCH:** Garden Salad with Aquafaba Ranch Dressing (here)
**DINNER:** Herby Meatballs with Nomato Sauce (here)
**SUNDAY**
**BREAKFAST:** Chlorella Smoothie (here)
**LUNCH:** Crunchy Spring Veggie Grain Bowl (here)
**DINNER:** Turkey Burgers (here)
#### A FEW GO-TO SNACKS:
• Toasted nori
• Seed Cracker (here) with avocado
• Handful of blueberries
# CANDIDA RESET
Functional medicine expert Dr. Amy Myers (author of _The Autoimmune Solution_ and _The Thyroid Connection_ ) has become an indispensable resource on women's health issues that are oft misunderstood, from thyroid dysfunction to gut imbalances like SIBO (small intestinal bacterial overgrowth). One of her specialties is helping people heal from an overgrowth of Candida, which is a form of yeast. Myers's primer on yeast overgrowth and how it might subtly affect the body (e.g., fatigue, bloating, eczema) is regularly referenced on goop.com, and many have leaned on her online Candida Breakthrough Program. A lot of her work and theories around health are based on her clinical experience, and you won't necessarily see them reflected in classic scientific literature, or at least not yet. Her most recent book, _The Autoimmunity Cookbook_ , is full of recipes designed to bring balance back to the gut. (Note: If you already follow Dr. Myers's self-designed protocols, you'll know that they are typically grain- and legume-free. She primarily works with patients who have autoimmunity and has found cutting out these foods to be helpful for them.)
# A Q&A WITH AMY MYERS, MD
Q: What is Candida—where does it live in the body, and how does it differ from other fungal and yeast infections?
**A:** Candida is a fungus. A lot of people use the terms "yeast overgrowth" and "Candida" interchangeably, and there are hundreds of different types of yeast, but the most common form of yeast infection is known as _Candida albicans_.
Candida lives throughout our bodies in small amounts: in the oral cavity, digestive tract, gut microbiome, and vaginal tract. Its job is to aid with digestion and nutrient absorption—which it does when it's in balance with the good bacteria in your microbiome. I think of the microbiome (clusters of mainly bacteria, plus other organisms, found in our skin, nose, mouth, gut, urinary tract) as a rain forest: When everything is in balance, the body is in harmony and runs smoothly.
Problems occur when there is too much Candida in relation to your body's good bacteria, and it overpowers the bacteria, which can lead to leaky gut and a host of other digestive issues, as well as fungal infections, mood swings, and brain fog. (Leaky gut is when the junctions in the intestinal lining break apart, and particles including toxins escape from your intestines and travel throughout your body via your bloodstream.) People generally equate Candida with a systemic overgrowth—e.g., a vaginal yeast infection in a woman, or a nail fungus. But the signs of Candida overgrowth can be subtler. Conventional medicine only recognizes the systemic and often fatal form of Candida overgrowth known as candidemia, which is when Candida invades the blood. About 90 percent of the patients I see (people who are sick, have autoimmune disorders, leaky gut, etc.) have Candida overgrowth that, while not fatal, is extremely disruptive to their health. Like, say, adrenal fatigue, which also has pervasive, seemingly vague symptoms, this level of Candida overgrowth is not really recognized by conventional medicine.
The symptoms of different kinds of yeast infections overlap greatly (although some lead to infections in different parts of the body), and the vast majority of treatment is the same. Lab work can distinguish which type of yeast infection you have.
Q: What causes an overgrowth of Candida?
**A:** There are a number of factors that can contribute to Candida—the major ones are:
**Diet:** A diet high in sugar, refined carbohydrates, and processed foods makes it easy for yeast to multiply and thrive—these are the foods yeast lives off of. Alcohol, which tends to involve a lot of yeast, sugar, and carbs (e.g., beer and wine), is also problematic.
**Antibiotics & Other Medications:** Taking even one round of antibiotics can kill too much of your body's good bacteria and throw off the balance of your microbiome. A mom's microbiome also affects her baby's developing microbiome—so if a mother takes antibiotics while pregnant, or had yeast infections, that can contribute to yeast overgrowth in the child. As can C-sections, which affect a baby's microbiome. Steroids can also cause a yeast overgrowth, as can acid-blocking pills (you need enough acid to kill bacteria and parasites on your food, some yeasts, as well as viruses).
**Oral Contraceptives:** Yeast likes high-estrogen conditions, so we see a correlation between birth control use and yeast overgrowth.
**Stress:** A high-stress lifestyle may also lead Candida to overpower the good bacteria in your microbiome.
Q: What symptoms are typically associated with Candida overgrowth?
**A:** When the body overproduces Candida, it may break down the wall of the intestine, causing leaky gut and releasing toxic by-products into your body. Leaky gut disrupts your body's ability to digest and absorb nutrients (causing nutrition deficiencies), and can lead to health issues beyond digestive concerns, including autoimmunity and thyroid dysfunction.
In addition to leaky gut, the other overarching problem associated with Candida is a suppressed immune system. About 60 to 80 percent of our immune system lives in our gut. Yeast overgrowth is associated with a suppressed production of IgA—the antibody immunoglobulin A, which is vital to immunity. Most of the patients I see with Candida overgrowth also suffer immunity issues.
The common signs of Candida overgrowth are:
• Brain fog, poor memory, ADHD
• Fatigue and/or fibromyalgia
• Autoimmune diseases connected to leaky gut (as mentioned above)
• Digestive issues—gas, bloating, constipation
• Skin issues, including eczema, hives, rosacea, rashes
• Seasonal allergies/chronic sinus infections
• Skin and nail fungal infections (ringworm, athlete's foot, tinea versicolor—when you get white spots in the sun): An external fungus can be an isolated issue, but is often a sign that the rest of the body is imbalanced.
• Vaginal infections, UTIs
• Sugar cravings: Sugar is food for yeast.
• Mercury overload: Some alternative medicine experts think yeast overgrowths can manifest to surround and protect mercury in the body.
Q: How do you test for Candida?
**A:** The tests I use to diagnose Candida are:
**Antibodies:** I check for total IgG, IgM, and IgA antibodies to see if your immune system is mounting a response to an infection—i.e., if your levels are high. A low level of IgA, however, could indicate that you have a suppressed immune system and that your body is not able to mount a response. I also check for IgG, IgA, and IgM Candida antibodies in your blood—high levels of these antibodies may indicate that you have a Candida overgrowth that your immune system is responding to. (Any lab can order these blood tests.)
**Complete Blood Count (CBC):** A low white blood cell count (WBC) has been associated with yeast overgrowth, as well as a high neutrophil and low lymphocyte count. Although not specific to yeast, I see this pattern frequently in patients with Candida overgrowth.
**Stool Test:** You'll need to seek out a functional medicine doctor and ask for a comprehensive (rather than standard) stool test, which will include a check for Candida in your colon/large intestines. (It will also check your level of IgA in stool.) From a stool test, the lab can usually identify the type of yeast (if it is not Candida) and the most effective treatment path.
**Organix Dysbiosis Urine Test:** This test looks at a marker of the Candida waste product (like anything, yeast excretes waste) called D-arabinitol. A high level may indicate that there is yeast overgrowth in the upper gut/small intestines.
**Infection:** A swab of a yeast infection can be sent off to the lab for analysis to determine which type of yeast you have.
There is a self-spit test (find it with a simple Google search)—which doesn't have a lot of scientific data around it—that I know many of my patients have done on their own before coming into the office. Most of the time, I find that the other tests I do confirm that the patient has an overgrowth, but again, the spit test is not as exact as these medical tests.
Q: What's the best treatment plan?
**A:** The best way to treat Candida is with a three-step approach:
**1. STARVE THE YEAST**
The first step is to eliminate foods that have yeast in them and foods that yeast likes to eat.
This means cutting out vinegar, beer, wine, mushrooms (as part of the fungi family, they can cross-react with Candida), sugar, refined carbs, and processed foods.
But you also want to limit healthy carbs like legumes, grains, and starchy veggies to 1 cup a day, and fruit to a single piece a day—because, unfortunately, even good carbs feed yeast.
Along the same lines, I tell people to hold off on good fermented foods (not something all doctors agree on)—e.g., sauerkraut, pickles, kimchi—until they've killed off the yeast. While these foods are beneficial for the good bacteria in your microbiome, they also are good for yeast (which isn't helpful if you have an overgrowth).
**2. OVERPOWER THE YEAST**
Some patients need a prescription antifungal (like Diflucan or Nystatin).
Antifungal supplements can be effective, too: My two go-to supplements are caprylic acid (naturally found in coconut oil) and Candifense (which contains enzymes that break down parasitic and fungal cell walls). Some people take oil of oregano, which is broad spectrum, meaning that it will kill good and bad organisms in the microbiome, but I try to stick with more targeted supplements that really only kill yeast.
**3. REPLENISH GOOD BACTERIA**
During treatment, take high-quality probiotic supplements, which help protect your body against future infections. You don't want to take prebiotics—which feed good bacteria and yeast—while you're trying to get rid of Candida, but you can add them in, along with fermented foods, down the line once your Candida is under control.
Q: Are there ways to get rid of Candida without going on such a restrictive diet? Are there beneficial foods you can add to your diet to combat Candida?
**A:** It's really hard to get rid of Candida without adjusting your diet—even if you're on an antifungal prescription, you need to take away the foods that are contributing to the overgrowth.
Foods you want to add to your diet are:
**Coconut Oil:** Contains caprylic acid (mentioned above), which kills yeast cells.
**Olive Oil:** The antioxidants in olive oil help your body get rid of Candida.
**Garlic:** Contains allicin, a sulfur-containing compound with specific-to-Candida antifungal properties.
**Cinnamon:** Has antifungal and anti-inflammatory benefits.
**Apple Cider Vinegar:** This is the only vinegar I recommend consuming while you're treating a Candida overgrowth—its enzymes may help break down Candida.
**Lemon:** Has some antifungal properties, and helps your liver detox.
**Ginger:** Has anti-inflammatory and antifungal properties—plus, it supports your liver.
**Cloves:** Very effective (internal) antifungal. Clove oil can also be used as a topical aid for infections.
**Cruciferous Veggies:** Broccoli, radishes, Brussels sprouts, cabbage, etc., have sulfur- and nitrogen-containing compounds that attack Candida.
**Wild Salmon:** Omega-3 fatty acids help fight fungal infections.
Q: How long does it typically take to get rid of a Candida overgrowth?
**A:** It largely depends on what caused the Candida overgrowth. Let's say it was a one-off scenario: You had bronchitis, went through two rounds of antibiotics, and got Candida. After a few weeks of a Candida cleanse (i.e., following the diet guidelines here), you can likely get rid of the overgrowth, restore your gut microbiome, and move on. I have a thirty-day program online at amymyersmd.com that can get you started.
If it wasn't a one-off situation, it likely won't be a quick fix. While this doesn't mean that you can't ever have a glass of wine or a slice of cake again, you might find that you feel your best with longer-term adjustments to your diet.
**The Food Shortlist**
#### BEST TO AVOID
• Beer and wine
• Fermented foods
• Healthy carbs like legumes, grains, and starchy veggies (limit to 1 cup a day) and fruit (limit to 1 piece a day)
• Mushrooms
• Sugar, refined carbs, processed foods
• Vinegar (except apple cider vinegar)
#### GOOD TO ADD
• Apple cider vinegar
• Cinnamon
• Cloves
• Coconut oil
• Cruciferous vegetables
• Garlic
• Ginger
• Lemons
• Olive oil
• Wild salmon
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Strawberry Cauliflower Smoothie (here)
**LUNCH:** Tarragon Chicken Lettuce Cups (here)
**DINNER:** Coconut Chicken Soup (here)
**TUESDAY**
**BREAKFAST:** Seed Cracker with Smoked Salmon & Avocado (here)
**LUNCH:** Kale, Carrot & Avo Salad with Tahini Dressing (here)
**DINNER:** Black Rice with Braised Chicken Thighs (here)
**WEDNESDAY**
**BREAKFAST:** Blueberry Cauliflower Smoothie (here)
**LUNCH:** Crunchy Summer Rolls (here)
**DINNER:** Five-Spice Salmon Burgers (here)
**THURSDAY**
**BREAKFAST:** Zucchini & Lemon Soccata (here)
**LUNCH:** Roasted Kabocha Soup (here)
**DINNER:** Sheet Pan Chicken with Broccolini & Radicchio (here)
**FRIDAY**
**BREAKFAST:** Seed Cracker with Egg & Avocado (here)
**LUNCH:** Brown Rice Grain Bowl with Kale, Broccoli & Sesame (here)
**DINNER:** Chickpea & Kale Curry (here)
**SATURDAY**
**BREAKFAST:** Sweet Buckwheat Porridge (here)
**LUNCH:** Cucumber & Avocado Gazpacho (here)
**DINNER:** Beet Falafel Sliders (here)
**SUNDAY**
**BREAKFAST:** Kale Kuku Soccata (here)
**LUNCH:** Grilled Chicken Salad with Miso Dressing (here)
**DINNER:** Italian Braised Chicken (here)
#### A FEW GO-TO SNACKS:
• Raw broccoli and cauliflower with Aquafaba Ranch Dressing (here)
• Hot water with ginger and lemon
• Half an avocado with lemon, sea salt, grated fresh ginger, and toasted sesame oil
# HEART HEALTH
"If I ate bread at every meal, for every day of my life, I'd be a happy but miserable guy," Dr. Steven Gundry begins our conversation on his food philosophy. Gundry, a cardiologist by training, served as chairman and head of cardiothoracic surgery at Loma Linda University School of Medicine for sixteen years. In 2001, he met a so-called hopeless patient with heart disease who ended up turning around his health by making some surprising diet adjustments. This experience changed the trajectory of Dr. Gundry's career; he went on to explore the power of nutrition and to search for cures for notoriously difficult-to-treat diseases and disorders. As the founder and director of the Center for Restorative Medicine, based in Palm Springs, California, he's helped his patients to reverse significant cardiovascular damage—in part by removing lectins, a plant-based protein, from their diet. For much more on his dietary research, see his books _The Plant Paradox_, _The Plant Paradox Cookbook_, and _Dr. Gundry's Diet Evolution_.
# A Q&A WITH STEVEN GUNDRY, MD
Q: What's the greatest cardiovascular health concern?
**A:** Heart disease, in both men and women, is the number one killer in the United States. While there's certainly no need for fearmongering, we should look at it seriously. Women's complaints around their heart and chest are not always taken seriously enough. Women often don't have the same heart disease symptoms that we teach men to watch for, such as the feeling of an elephant sitting on the chest, or pain going down the left arm or into the jaw. With women, signs of heart disease (coronary artery disease, narrowing in the arteries) can be as simple as fatigue, or shortness of breath, or nausea, or feeling sick after meals. Women should be empowered to take their health concerns seriously, and as doctors, we can't just dismiss these symptoms as "female complaints," because too many women are not getting the care they should.
Q: Which risk factors should people be proactively aware of?
**A:** Cholesterol levels in general don't have a lot to do with developing heart disease. Around half the people who present to the ER with chest pain or a heart attack have normal levels of LDL, the so-called bad cholesterol. But there is one genetic risk related to a type of cholesterol that most people are unaware of: It's an inherited gene that tells the body to make the cholesterol lipoprotein(a), or Lp(a). The gene is very common in northern Europeans, the English, Irish, and Scottish; and is typically present in families with strong histories of heart disease. I ask everyone to have their doctor measure their Lp(a) levels, which is an $8 test. While statin drugs are generally not effective if you have this gene, it turns out that taking niacin (a form of vitamin B3) is extremely beneficial.
Triglycerides (a type of fat in the blood) are one of the biggest mischief makers contributing to heart disease. The body converts calories it doesn't need immediately into triglycerides, to be used for energy later. When we overeat sugar (even in the form of fruit), we make excess triglycerides. One of the best predictors for _avoiding_ heart disease is to have HDL ("good cholesterol") levels that are higher than your triglyceride levels. I've read several papers that suggest that cutting back on sugars, including natural sugars found in fruit and seeded vegetables, can help improve the ratio of HDL to triglycerides.
Q: What's possible when it comes to reversing damage already done to the heart?
**A:** At our clinic, we help people reverse severe blockages in their coronary arteries largely through diet and lifestyle changes. We've collected data from ten thousand patients who we have followed for fifteen years. Every three months, we use a number of sophisticated blood tests (that Medicare and insurance generally pay for) to test how well patients are doing. When people follow the protocol, we see their good cholesterol go up and their bad cholesterol go down. But, most important, we see the amount of inflammation on the inside of their blood vessels decrease, or even completely resolve itself.
Q: Can you explain the food philosophy behind your cardiovascular-focused program?
**A:** One of the principles of the program is to lessen the amount of lectin-containing foods that people eat. Lectins are proteins found in certain plants that are meant to protect the plants from predators. Lectins are sometimes called sticky proteins, because they seek out particular sugar molecules on cells in our blood, the lining of our gut, and on our nerves. Lectins can prompt attacks on the lining of our blood vessels, too. While some people seem to react more vigorously to lectins than others, we've found that eliminating them from the diet helps to reverse cardiovascular blockages across our patient group. We've also seen markers of inflammation go up again when lectin-heavy foods have been reintroduced to our patients' diet.
The main lectin-containing foods are grains, seeded vegetables/fruits like nightshades and squashes, and legumes. In plants, the lectins are found in the peels and seeds, so you can get rid of them by, for instance, peeling and seeding tomatoes before making sauce (which is how Italians have traditionally cooked). Pressure cooking beans, tomatoes, potatoes, and grains is also an effective way to get rid of the lectins before you eat these foods. (Note that this doesn't work for wheat, oats, rye, barley, or spelt.)
In general, fruits and seeded vegetables—which are technically fruits—should have a limited place in a heart-healthy diet. Our modern fruits have been bred for sugar content, and while we now have fruit 365 days a year, just 50 years ago, our fruit consumption was seasonal.
I have no issues with colleagues who say that some of the healthiest people in the world traditionally eat grain and beans, but people who follow that diet in these longevity hot spots tend to have vastly different immune systems than the average American. They also haven't been exposed to our lifestyle, the toxins in our environment, the number of antibiotics we take, the amount of antibiotics fed to the animals we eat, anti-inflammatory drugs like Advil and Aleve that can damage the gut lining, pesticides like glyphosate that change our gut bacteria, and so on.
I do believe, though, that we can learn and adopt other habits from the world's healthiest people. For example, I think a secret of extreme longevity is limiting animal protein so that it is more of a garnishment, or flavoring, and not the main focus of every meal.
Q: Which foods should we be eating more of?
**A:** One of my favorite sayings is that the only purpose of food is to get olive oil into your mouth. Several of the world's longest-lived populations use a liter of olive oil per week. The only reasonable way to eat close to that much olive oil is to pour it on your food. Think of broccoli, or a couple of poached eggs, or a piece of fish, as a delivery device for olive oil. Whatever you're eating, bring your olive oil to the table, too.
Q: Can any supplements make a difference in cardiovascular health?
**A:** We have a big vitamin D deficiency in this country. For my average patient, I recommend taking 5,000 IU/day. Vitamin E is essential for heart health. Fish oil, particularly the DHA component, is important for overall health and for keeping the gut wall intact.
Polyphenols are plant compounds that are primarily found in dark fruit and dark leafy greens. They interact with gut bacteria and are turned into compounds that have been shown to make our blood vessels more flexible and make the inside lining of the blood vessels slippery. Which is a good thing. Cholesterol is not interested in your blood vessel lining unless there's inflammation or "stickiness" on the wall. We've found that after introducing polyphenols into people's diet, the stickiness goes away. (If they stop taking polyphenols, that stickiness comes back.) Common polyphenols include grapeseed extract, green tea, black coffee, and cacao (dark chocolate is good for this reason). I make a polyphenol-based product called Vital Reds, but you can get grapeseed extract nearly anywhere, including at Costco.
Q: What about exercise, stress, and other lifestyle factors?
**A:** Exercise definitely has a place in heart health. I try to have my patients spend less time working on aerobic exercises like running and more time on strength training like yoga and Pilates. You can think of it this way: Our muscles are the customers that buy sugars from a hormone called insulin. The more muscle mass you carry, and the hungrier muscles are, the easier it is to get sugar out of the bloodstream. Chronically elevated insulin, or prediabetes, is prevalent in the vast majority of people who have heart disease, so you want your insulin level to be as normal or low as possible.
The old idea that chronic stress—like what you might expect someone with a type A personality to experience—causes heart disease has pretty much gone by the wayside. I don't typically see stress having a significant effect on heart disease.
Q: Why do you recommend fasting? Who is it not for?
**A:** We are realizing that a twenty-four-hour period of fasting may have the potential to reset the stem cells in the lining of our gut, of all things, prompting them to begin to grow and divide, which, in simplistic terms, helps heal the gut. I believe that almost all chronic disease is the result of gut issues, and that longevity begins in the gut. A lot of ancient wisdom recognized the importance of fasting in optimal health, and almost every great religious group practiced a period of fasting.
For a full-on fast, you just drink water for twenty-four hours; most people don't really need anything else, but consult your doctor about any health concerns. People who do poorly with fasting, in general, have high insulin levels. People who are prediabetic or diabetic can't use fat stores to fuel their brain. I have these patients take MCT oil or coconut oil during that fasting period. Pregnant mothers should not fast.
**The Food Shortlist**
#### BEST TO AVOID
• Grains (sorghum and millet are okay because they don't have a hull and thus don't contain lectins)
• Polyunsaturated oils like canola, corn, and safflower oil
• Seeded vegetables/fruits (okay if you seed and peel)
**Note:** Pressure cook your legumes (or buy them already pressure cooked from the brand Eden Foods) to get rid of the lectins first.
#### GOOD TO ADD
• Avocado
• Cruciferous vegetables
• Leafy greens
• Olive oil
• Turmeric
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Easy Frittata (here)
**LUNCH:** Chicken Larb Lettuce Cups (here)
**DINNER:** Fish Tacos on Jicama "Tortillas" (here)
**TUESDAY**
**BREAKFAST:** Blueberry Cauliflower Smoothie (here)
**LUNCH:** Clean Carrot Soup (here)
**DINNER:** Chicken & Cabbage Dim Sum (here)
**WEDNESDAY**
**BREAKFAST:** Beet Açai Blueberry Smoothie Bowl (here)
**LUNCH:** Italian Chicken Salad with Grilled Asparagus (here)
**DINNER:** Halibut en Papillote with Lemon, Mushrooms & Toasted Sesame Oil (here)
**THURSDAY**
**BREAKFAST:** Fast
**LUNCH:** Fast
**DINNER:** Fast
**FRIDAY**
**BREAKFAST:** Poached Eggs over Sautéed Greens (here)
**LUNCH:** Kimchi Chicken Lettuce Cups (here)
**DINNER:** Za'atar Chicken Bowl (here)
**SATURDAY**
**BREAKFAST:** Veggie Scramble (here)
**LUNCH:** Kale & Sweet Potato Salad with Miso (here)
**DINNER:** Za'atar Chicken Bowl (here)
**SUNDAY**
**BREAKFAST:** Blueberry Cauliflower Smoothie (here)
**LUNCH:** Cucumber & Avocado Gazpacho (here)
**DINNER:** Sheet Pan Chicken Curry (here)
#### A FEW GO-TO SNACKS:
• Hot water with lemon
• Cucumber (peeled and seeded) and avocado with lime and salt
• Hard-boiled egg
# VEG-FRIENDLY AYURVEDA
There are seemingly few nutritional tenets that everyone agrees on, but most health experts I talk to lean into plant-heavy diets. Of course, this isn't just a modern trend—some of the most well-known vegetable-centric cleanses stem from ancient Ayurveda.
An expert in Ayurvedic and holistic health, Aruna Viswanathan is the head of the ear, nose, and throat department and integrative medicine director at Vikram Hospital and Research Institute in Coimbatore, India. She lives in Goa, where she currently practices out of a center called the Renewal Point. As the medical advisor of the brand ORGANIC INDIA, Viswanathan is adept at making Ayurvedic principles of cleansing approachable and accessible to the novice.
# A Q&A WITH ARUNA VISWANATHAN, MBBS, MS (ENT), FAGE
Q: Who can benefit from an Ayurvedic cleanse? What kind of results do people typically feel/see, and at what point?
**A:** Anyone who wants to be well or healthier; almost everyone is a candidate, with the exception of children, pregnant/nursing mothers, and those with conflicting health conditions. Benefits often include improved energy, digestion, and skin and hair health. Most people report "glowing" or "shining" after an Ayurvedic cleanse. Sugar and food cravings may go away as early as the first or second week.
The typical cleanse is twenty-one days because we generally need at least three weeks to form lifelong healthy habits that can truly be a gateway to a new life of radiant health. But people usually see results in even just one week, feeling lighter as energy levels rise. (A lot of people are amazed that they don't need more food to feel so energized.) Mental clarity and increased energy are tangible for the first time.
After a twenty-one-day cleanse, each week, you can reintroduce a new food group to your diet, and really experience how it affects you, as your body will be like a "clean slate." This knowledge, and being able to understand your own body, is another benefit of the cleanse.
Q: What do people typically eat (and not eat) during an Ayurvedic cleanse?
**A:** An Ayurvedic protocol consists of ancient cleansing herbs that support detoxification and nutritionally complete meals.
The diet is rich in beans, greens, and other vegetables (all organic). Celery, parsley, cilantro, amaranth, and spinach are all known as cleansing vegetables. All lentils and legumes may be used, unless you have trouble digesting them (for instance, chickpeas can be difficult for some to digest).
It's also important to drink plenty of water.
Local, organic, seasonal fruits are preferred. Lemons, grapefruits, and limes can all aid in cleansing. Astringent fruits like pomegranate, cranberries, and jamun (a type of plum) are also beneficial. Certain sweet fruits (like persimmons and grapes) and packaged fruit juices should be avoided.
The main ingredients to avoid are animal products (including seafood), white sugar, refined white flour, and dairy. These ingredients are known as "clogging foods" in Ayurveda, and do not support cleansing.
Preferably also avoid bread made with yeast and nightshades (known as energetically blocking in Ayurveda), including potatoes (which can be difficult to digest). Mushrooms should not be eaten often.
You can follow the individual foods that are recommended/discouraged for your particular _dosha_—there are three mind-body types or energies. But cleansing is vital irrespective of _dosha_, and set Ayurvedic cleanses like ORGANIC INDIA's REVIVE kit can be used by people of all _dosha_s. (ORGANIC INDIA's kit is a collaborative creation by many experts across the world, and the most friendly way I know of to do an Ayurvedic cleanse at home. It includes complete protein, fiber, superfoods, greens, and probiotics that are instrumental in nourishment and detoxification.)
Q: What are some of the best foods and herbs?
**A:** Kitchari (also spelled khichri or khichdi), a combination of steamed beans and rice sautéed with ghee and cumin seeds, is a staple of Ayurvedic detoxes. _Moong dal khichri_ is made with mung beans—considered the lightest of all legumes and a _sattvic_ food. In Ayurveda, _sattvic_ foods are fresh, whole, nutrient dense, and high in prana (meaning vital energy), as they are cooked with love. They are meant to balance all the elements and nourish consciousness, mind, and body.
_Kanji_, or rice porridge, can be made with gluten-free grains like millet, buckwheat, or brown, white, black, or wild rice. Cassava flour is very light and great for cooking during a detox. While dairy should be avoided, ghee is great, but should be organic A2 cow ghee.
_Amla_ is the best fruit, along with _haritaki_ and _bibhitaki_. These fruits are not easy to source fresh in most parts of the world but are available together as capsules or in powder form called triphala from ORGANIC INDIA and others. I describe triphala as a gentle, effective scrub for the insides, designed to support the health of the GI tract, digestion, and elimination. It contains vitamin C and other antioxidants, promotes good health, and has long been used to balance all three _dosha_s. I use triphala extensively in my practice and recommend it with warm water when you wake up and before bed (not with food).
Other important herbs include tulsi (holy basil), _bhumyamalaki_, neem, and _punarnava_, which are part of the ORGANIC INDIA REVIVE kit. The precise combination of herbs has been specially formulated to enhance the cleansing process in the comfort of your own home, but it's always best to consult a physician, particularly if you have any specific health issues.
Q: Are there any other important guidelines?

**A:**

• As mentioned, stay well hydrated. Warm water and water with lemon can help flush toxins out. But avoid drinking water during meals and right after eating, as this can impair digestion and "kill the digestive fire."
• All food that reduces _ama_, or toxic buildup, is great during a cleanse. Old food from the refrigerator increases _ama_, and you also want to steer clear of fried, greasy, and heavy food during cleansing.
• The prana (energy) of food is highest when eaten in the first three to four hours after preparing it.
• Eat your biggest meal at noon, when the sun is up, as that is when the _agni_, or digestive fire, is at its peak.
• Avoid fruits and salads in the evening, as they aren't digested quickly.
• Spend an hour in the sun and stand barefoot on the earth for a few minutes every day, if you can.
Q: What do you suggest beyond diet and supplementation?
**A:** Exercise, meditation—and fun—are integral parts of cleansing. Exercise until you break into a sweat and then shower right after to help your body remove toxins. Oil pulling and dry brushing are also bonus techniques used to enhance cleansing and detoxification.
**The Food Shortlist**
#### BEST TO AVOID
• Dairy (ghee is okay)
• Eggs
• Gluten
• Meat, poultry, and seafood
• Nightshades
• Refined sugar (a little stevia, monk fruit, or honey is okay)
#### GOOD TO ADD
• Celery
• Cilantro
• Ginger
• Lemons, grapefruits, and limes
• Lentils
• Parsley
• Pomegranate
• Rice
• Spinach
• Turmeric
**The goopified Menu**
**MONDAY**
**BREAKFAST:** Beet Açai Blueberry Smoothie Bowl (here)
**LUNCH:** Crunchy Summer Rolls (here)
**DINNER:** Brown Rice, Turmeric & Spinach Porridge (here)
**TUESDAY**
**BREAKFAST:** Quinoa Cereal with Freeze-Dried Berries (here)
**LUNCH:** Miso Soup (here)
**DINNER:** White Bean & Zucchini Burgers (here)
**WEDNESDAY**
**BREAKFAST:** Sweet Buckwheat Porridge (here)
**LUNCH:** Kitchari (here)
**DINNER:** Kale Aglio e Olio (here)
**THURSDAY**
**BREAKFAST:** Breakfast Dal (here)
**LUNCH:** Crunchy Spring Veggie Grain Bowl (here)
**DINNER:** Faux Meat Beet Tacos (here)
**FRIDAY**
**BREAKFAST:** Black Rice Pudding with Coconut Milk & Mango (here)
**LUNCH:** Clean Carrot Soup (here)
**DINNER:** Chickpea & Kale Curry (here)
**SATURDAY**
**BREAKFAST:** Chlorella Smoothie (here)
**LUNCH:** Garden Salad with Aquafaba Ranch Dressing (here)
**DINNER:** Broccoli-Parsnip Soup (here)
**SUNDAY**
**BREAKFAST:** Chocolate Chia Pudding (here)
**LUNCH:** Kale & Sweet Potato Salad with Miso (here)
**DINNER:** Beet Falafel Sliders (here)
#### A FEW GO-TO SNACKS:
• Hot water with ginger, lemon, and turmeric
• Handful of pomegranate seeds
• Brown rice with furikake
• Half a grapefruit
# ACKNOWLEDGMENTS
**TEAM GOOP**
Thea Baumann
Ana Hito
Kevin Keating
Kiki Koroshetz
Elise Loehnen
Kelly Martin
Caitlin O'Malley
**PHOTOGRAPHER**
Ditte Isager
ASSISTED BY:
Graham Dalton
Conner Hughes
Rasmus Jensen
**PROP STYLIST**
Kate Martindale
ASSISTED BY:
Ian Hartman
Ali Summers
**FOOD STYLIST**
Susie Theodorou
ASSISTED BY:
Thea Baumann
Ana Hito
**CLOTHING STYLIST**
Ali Pew
ASSISTED BY:
Amy Wilbraham
**PRODUCER**
Kendall Stewart
ASSISTED BY:
Jacob Arzola
Ross Gardner
Vinnie Maggio
Alex Rubenstein
**ART DIRECTION AND DESIGN**
Michelle Park
Shubhani Sarkar
**GRAND CENTRAL LIFE & STYLE**
Jimmy Franco
Morgan Hedden
Brittany McInerney
Karen Murgolo
Amanda Pritzker
Kallie Shimek
**THANKS, TOO**
Terry Abbott
Jeffery Mahlick
Victoria Ortiz
Veronica Rivera
Maria Rosales
# ABOUT THE AUTHOR
**Gwyneth Paltrow** is an Oscar-winning actress and author of the _New York Times_ bestselling cookbooks _My Father's Daughter_, _It's All Good_, and _It's All Easy_. She is also the founder and CEO of GOOP, a modern lifestyle brand with roots in food, wellness, travel, beauty, style, and work.
# ALSO BY GWYNETH PALTROW
_It's All Good_
_My Father's Daughter_
_It's All Easy_
# Contents
1. COVER
2. TITLE PAGE
3. COPYRIGHT
4. TABLE OF CONTENTS
5. DEDICATION
6. INTRODUCTION
7. COOKING WITH THIS BOOK
8. PART I: THE RECIPES
1. Breakfast
2. Soups
3. Salads, Bowls & Rolls
4. A Little More Filling
5. Drinks & Snacks
6. Basics
9. PART II: HEALING CLEANSES
1. Fat Flush
2. Heavy Metal Detox
3. Adrenal Support
4. Candida Reset
5. Heart Health
6. Veg-Friendly Ayurveda
10. ACKNOWLEDGMENTS
11. ABOUT THE AUTHOR
12. ALSO BY GWYNETH PALTROW
13. NEWSLETTERS
Active Music Distribution 7 Goose Green Trading Estate 47 East Dulwich Road London SE22 9BN
Tel: +44 (0)20 8693 5678 Email:
info@activemusic.co.uk
Copyright © 2019 Active Music Distribution. All Rights Reserved
All Information Correct at Time of Publishing / E&OE
Endorsees
BIOGRAPHY : Ollie Harding is a Welsh session drummer currently based in Manchester. Ollie has worked with The Shires, Prose, Milly Pye, and AJ Brown, to name a few. He has also performed alongside the likes of Ward Thomas, Sam Palladio, Rhod Gilbert, Scott Henderson, Rob Brydon & Russell Watson. Ollie has toured the UK & Europe, playing venues such as the O2 Arena in London, the Mercedes-Benz Arena in Berlin, Glastonbury and the Isle of Wight Festival. Ollie's TV/radio appearances include The One Show, The Chris Evans Show, The Graham Norton Show, The Andrew Marr Show, Vintage TV & ZDF TV in Germany.
OLLIE HARDING
BAND/ARTIST : The Shires
KIT SPECIFICATION :
Series : Classic Maple
Finish : Black Oyster Pearl/Black Galaxy
Sizes : 20x14, 12x8, 14x14 /22x14, 12x8, 14x14, 16x16
WEBSITE : www.ollieharding.co.uk
SEGH Articles

# The nano way to cleaner water

04 April 2012

Nanomaterials provide potential for waste water remediation and metal removal and recycling. We envisage that this composite can cheaply and effectively be incorporated into a variety of configurations to improve water treatment.

Decontaminating polluted waste water costs millions, but a new discovery by scientists at the University of Brighton could result in huge savings as well as delivering safer, cleaner water. The research, recently published in the journal Angewandte Chemie International Edition, represents a significant shift in our understanding of (nano)chemistry. Mercury is a serious contaminant, so this breakthrough could save millions of pounds.

It is generally accepted that when silver is reduced to nano-sized particles, it can only extract a certain amount of mercury. However, Dr Kseniia Katok, working in the Nanoscience and Nanotechnology Group at Brighton, was able to reduce nanoparticles of silver to below 35 nanometres in diameter (the equivalent of splitting a single human hair into 3,000 separate strands) and found that this allowed almost twice as much mercury to be removed from water.

The team's breakthrough opens the way for more effective, cheaper ways of cleaning mercury-contaminated water. Existing clean-up methods for mercury-contaminated water have either low mercury removal capabilities, leave a large chemical waste footprint or are not energy efficient.

Mercury is found naturally in the environment, but levels of inorganic mercury have increased significantly in recent decades as a result of industrial processes and mining activities. If mercury contamination occurs, a hugely expensive decontamination process is required, as occurred in Squamish in Canada, where the whole of the waterfront was subject to a huge clean-up starting in the 1990s. The seafront town had been subjected to years of industrial pollution because of its forestry industry, which began in the early 20th century. Just the chemicals used to clean the water cost around $50,000,000. The Brighton scientists say their research shows that using silver nanoparticles would cost a few thousand rather than tens of millions of pounds for the materials, although a device containing the silver nanoparticles capable of processing large quantities of water would need to be developed.

Dr Raymond Whitby, head of the Nanoscience and Nanotechnology Group, said: "The amount of mercury taken into silver nanoparticles defies our current understanding and promises a number of exciting developments. For example, it should lead to improved water treatment, removing greater quantities of selected heavy metals more quickly and perhaps more cheaply than before."

One key element in Dr Katok's discovery is her use of chemically-modified quartz sand, which reduces silver particles to a nanoscale with a high degree of purity. Sergey Mikhalovsky, the university's Professor of Materials Chemistry and Dr Katok's co-supervisor, said: "This is the biggest difference between our silver and that prepared by other commonly-used methods such as citrate reduction, which typically leaves residual chemical groups on the surface of the silver nanoparticles. These can cause unwanted side reactions that may have limited its effectiveness." He anticipates that modified quartz could be used in other chemical groupings and might, in the future, aid the extraction and recycling of precious metals such as platinum, palladium and gold.

Andy Cundy, the university's Professor of Applied Geochemistry and Dr Katok's lead supervisor, said: "These findings enable a major shift towards the use of nanomaterials for waste water remediation and metal removal and recycling. We envisage that this composite can cheaply and effectively be incorporated into a variety of configurations to improve water treatment, initially targeting mercury, which remains one of the key environmental contaminants globally."

Professor Andy Cundy, University of Brighton. a.cundy@brighton.ac.uk
The NCAA Has Never Seen A Sprinter Like Hannah Cunliffe
Lincoln Shryack
Story by Lincoln Shryack for FloTrack
The idea of "range" in track and field tends to be biased in favor of distance events. And that makes sense theoretically -- the 1500/mile/3K/5K/10K are all substantially different events rarely mastered by one person.
But when someone does come around and is near his/her very best in all of those distances, we quickly recognize their greatness. Because elite mile speed and elite 10K strength are usually mutually exclusive.
But it's not unprecedented in collegiate circles. Versatile stars like that -- think American Jenny Simpson and Kenyan Lawi Lalang -- do seem to pop up ever so often. In the NCAA right now, the University of Oregon's Edward Cheserek is the undisputed king of range and probably the greatest college distance runner ever. His bag of tricks has made him a 15-time NCAA champion.
Still, what impresses most about King Ches --say, going scorched earth on the DMR after winning the 5K just minutes prior -- has been done before.
But what about the sprints? Does the fact that the 60m, 100m and 200m are all very short and seemingly similar races mean that less credit should be given to a master of all three?
The NCAA record books say no.
No man or woman is currently in the all-time top 10 in three separate individual flat sprint distances -- American hurdler Keni Harrison has all-time marks in the 60m, 100m, and 400m hurdles.
Meanwhile, distance stars Simpson (five events) and Lalang (four) have easily surpassed this criterion in their disciplines. This is not counting duplicate indoor/outdoor events, as Lalang is in both the indoor and outdoor 5K top 10.
But in the NCAA history up to this point, sprinters haven't come close -- no one has more than two top 10 all-time marks between the 60m, 100m, 200m and 400m. Due to a much more limited selection of events to choose from, a propensity for top sprinters to turn pro early and an always crazy deep pool of talent to compete against, there is simply no historical NCAA comparison in the mold of a Simpson, Lalang or Cheserek in track's speediest events.
But that could soon change. The current collegiate king of range may just have to pass the versatility crown to a new queen -- Ches' Oregon teammate Hannah Cunliffe.
The latest Ducks sprint superstar has a skill set that we've never seen in collegiate history.
Last weekend in Albuquerque, New Mexico, the junior broke the all-conditions 60m NCAA record with her blistering 7.07. That's remarkable on its own, but Cunliffe isn't your typical elite 60m runner. She is now the only woman in NCAA history to run sub-7.10 in the 60m and sub-11.00 in the 100m.
Oh, and she can run the 200m, too. Her 22.60 on February 11 is currently the fastest time in the world in 2017.
So, she now has the collegiate record in the 60m, the ninth-fastest 100m in history (10.99), and is nearing the top 10 in the 200m. If Cunliffe runs 22.51 or faster indoors (22.60 PR) or 22.25 or faster outdoors (22.49 PR), she'll have reason to call herself the rangiest sprinter in college history.
Just for fun, let's look at some other U.S. women who have run 7.10, 11.00, and 22.50 or faster (60m / 100m / 200m)...

Marion Jones -- 6.95 / 10.65 / 21.62
Me'Lisa Barber -- 7.01 / 10.95 / 22.37
Lauryn Williams -- 7.01 / 10.88 / 22.27
Gwen Torrence -- 7.02 / 10.82 / 21.72
Carmelita Jeter -- 7.02 / 10.64 / 22.11
Tianna Bartoletta -- 7.02 / 10.78 / 22.37
Carlette Guidry -- 7.04 / 10.94 / 22.14
Hannah Cunliffe -- 7.07 / 10.99 / 22.49
Allyson Felix
It's not a long list! Everyone on it has won either an Olympic or a world title. This isn't to say that Cunliffe is the next Allyson Felix -- Cunliffe has yet to win an NCAA title -- but the tools are absolutely there to make her a superstar at the next level.
She was close to an NCAA title last year -- second in the 60m and third in the 200m indoors -- before a hamstring injury in the 100m prelims forced her to end her 2016 campaign. That's probably why she's been overshadowed thus far by teammates Ariana Washington -- 2016 100m and 200m NCAA champ -- and Deajah Stevens -- 2016 Olympic finalist 200m.
The Ducks are so good that someone of Cunliffe's ability can be underrated.
But she likely won't go home empty-handed again this year. And if she does in fact break through with a title, perhaps the uniqueness of her ability will be justifiably highlighted.
---
layout: post
title: TLSH, Hamming codes, hashes, and S3 buckets
summary: How to efficiently find things that are close to one another.
categories: math
tags: distance metric hash
---
We have a dataset that requires a bit of exploratory analysis at scale in order
to determine meaningful questions to ask of it. Essentially, we realize that
there is a wealth of information contained in the dataset, but we are unsure
what sorts of analysis to perform on it and what sorts of questions might be
interesting to ask.
Consequently, what would be really nice would be to "slice" the data along
several axes and look at data that is "grouped" together. The typical
approach for something like this is $$k$$-means, which unfortunately is a bit
lacking in interpretability and requires choosing the parameter $$k$$.
Furthermore, I'd like to support the following use case: a data "explorer"
receives a new record containing fields such as geolocation, phone number,
date, and most recent Twitter post.
The explorer (dropping the quotes, now) wants to find other records that occur
in the same geographic region; that share the same phone number; that occur
within 5 days of the record; or that share similar text. It is also possible
that the explorer wants to find records that share similar text occuring
within 5 days of the record as well (so conjunctions across fields are also a
possibility). The point of this exercise is to locate similar records (along
one or more axes) in order for a *human* to make informed decisions about what
to do next.
While $$k$$-means would let the explorer look at record groupings, it does
little to empower the explorer to formulate informed or meaningful questions;
it is much more ideal for the human to slice the data along different axes in
order to determine more meaningful questions to ask.
The key to solving this problem is to embed each field into a metric space. A
*metric space* is a set $$S$$ along with a function $$d(x,y)$$ that computes a
*distance* between any two points $$x, y \in S$$. The function $$d : S \times S
\to \mathbf{R}$$ has several properties:
* it is nonnegative for all $$x, y \in S$$,
* it is symmetric $$d(x,y) = d(y,x)$$,
* it yields zero if and only if $$x = y$$, and, finally,
* it satisfies the triangle inequality $$d(x,y) + d(y,z) \geq d(x,z)$$.
By embedding each field into the metric space, we can arbitrarily pick an
"origin" and create a total ordering on our fields. This total ordering allows
us to sort our data, and it's much easier to search over sorted data than over
unsorted data.
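As a small sketch (assuming plain numeric fields and the Euclidean metric), the induced ordering is just a sort keyed on distance to the chosen origin:

```python
import math

def distance(x, y):
    # Euclidean metric on equal-length numeric tuples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def order_by_origin(points, origin):
    # The total ordering induced by distance from a fixed, arbitrary origin
    return sorted(points, key=lambda p: distance(p, origin))

points = [(3.0, 4.0), (1.0, 1.0), (0.0, 2.0)]
print(order_by_origin(points, (0.0, 0.0)))
# [(1.0, 1.0), (0.0, 2.0), (3.0, 4.0)]
```

Note that the ordering depends on the origin chosen, but any fixed origin works for the sorting trick.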
Searching for "similar" items for a specific field, then, is just a matter of
finding the item's location on a number line and (linearly) retrieving items
"nearby." To do this efficiently, we might bin each axis according to some
efficiently computed key, for instance, the leading digits of $$d(x,0)$$. Then,
finding similar items amounts to looking up the bin and performing a (linear)
search over the items in that bin.
When searching over multiple fields, we would look at two different buckets and
take the intersection of the buckets and perform a (linear) search over the
remaining items. This allows for efficient retrieval of similar rectangles.
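A rough sketch of that scheme, with illustrative field names and bin widths (none of this is tied to a particular library):

```python
import math
from collections import defaultdict

def euclidean(point, origin=(0.0, 0.0)):
    # d(x, 0): distance from an arbitrary fixed origin
    return math.hypot(point[0] - origin[0], point[1] - origin[1])

def build_bins(field, key_fn, width=10.0):
    # Bin record ids by the leading digits of d(x, 0), i.e. floor(d / width)
    bins = defaultdict(set)
    for rec_id, value in field.items():
        bins[int(key_fn(value) // width)].add(rec_id)
    return bins

def similar(bins_a, key_a, bins_b, key_b):
    # Candidates close along both axes: intersect the two buckets; a linear
    # scan over the survivors would finish the search.
    return bins_a[key_a] & bins_b[key_b]

geo = {"r1": (3.0, 4.0), "r2": (5.0, 5.0), "r3": (30.0, 40.0)}
age = {"r1": 2.0, "r2": 4.0, "r3": 90.0}

geo_bins = build_bins(geo, euclidean)         # r1, r2 -> bin 0; r3 -> bin 5
age_bins = build_bins(age, lambda d: d, 5.0)  # r1, r2 -> bin 0; r3 -> bin 18
print(sorted(similar(geo_bins, 0, age_bins, 0)))  # ['r1', 'r2']
```

The bin width trades off bucket size against the number of buckets a search must touch; items near a bin boundary may require probing the neighboring bucket as well.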
### For numerical data
The distance metric is pretty easy, $$d(x,y) = \| x - y \|_2$$.
### For categorical data
Distance metrics probably don't make sense in this space. It might just be
$$
d(x,y) = \begin{cases}
0 & x = y \\
1 & \mbox{otherwise}
\end{cases}
$$
(You'll have to convince yourself that this is actually a metric.)
### For text data
Text data is the most interesting one since there isn't a natural, meaningful
metric for it. We're going to go with the Hamming distance of TLSH. In other
words, we perform an extra step where text is mapped to a set of hashes and the
Hamming distance between the hashes is used in the metric space.
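As an illustration, the Hamming distance between two equal-length hex digests can be computed by XOR-ing them as integers and counting set bits. (To stay self-contained, the sketch below hashes with SHA-256 rather than TLSH; note that only a locality-sensitive hash like TLSH makes a small Hamming distance meaningful, since a cryptographic hash flips roughly half its bits on any change to the input.)

```python
import hashlib

def hamming(hex_a, hex_b):
    # Number of differing bits between two equal-length hex digests
    if len(hex_a) != len(hex_b):
        raise ValueError("digests must have equal length")
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

a = hashlib.sha256(b"the quick brown fox").hexdigest()
b = hashlib.sha256(b"the quick brown cat").hexdigest()
print(hamming(a, a))  # 0
print(0 < hamming(a, b) <= 256)  # True
```

With TLSH in place of SHA-256, similar texts would yield digests a small Hamming distance apart, which is exactly what the binning scheme above needs.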
## So have you tried this?
No. But I want to.
Further reading: concentration of Gaussian measures, http://www.math.lsa.umich.edu/~barvinok/total710.pdf, e.g. bounds on prob(|d(Pi x, Pi y) - d(x,y)| < epsilon).
We're moving. Come visit. Book first.
LALAContemporary's Dundas West location is temporarily closed and in the process of relocation. Stay tuned for new address details coming soon.
For questions or info please email: contact@lalacontemporary.com or click here to book your private viewing appointment.
Please visit our online gallery store to shop art collections from previous exhibitions.
Maria Hupfield (Anishinaabe-kwe of Wasauksing First Nation, Canada) is a transdisciplinary artist working in performance and media arts; she uses performance to indicate how objects contain and acquire knowledge through contact with the body and lived experience.
Hupfield is Anishinaabe and an off-rez citizen of Wasauksing First Nation, Ontario, and now lives in Toronto via Brooklyn, New York.
More about María
Sofia Medici (Buenos Aires, 1974) has a degree in Communication Sciences and works as a director, performer, playwright and producer.
Her work explores the possibilities of contemporary art and life as a trigger for critical thinking and as tool to de-naturalize aesthetic and ideological conventions and social behaviours.
More about Sofía
Vivian Galban is a Photographer, contemporary artist and architect. Professor at the Austral University. She has been invited to the XVII Biennial of Visual Arts in Santa Cruz de la Sierra, Bolivia. She participated in numerous solo and group exhibitions in galleries, as well as in private and public exhibition spaces and institutions. She has participated in international art fairs in Argentina, Peru, Colombia, Mexico and Bolivia. She lives and works in Buenos Aires, Argentina.
More about Vivian
Ana Casal is a Photographer and video artist. Born in Punta Alta Argentina. She lives and works in Buenos Aires. Studied Art History in Museo del Prado and Museo Lázaro Galdeano in Madrid, Spain.
Her works are part of private collections.
More about Ana
Patricia Linenberg (Buenos Aires, 1954). Lives and works in Buenos Aires.
Artist invited to the Bienal of Mercosur 2017 & 2018 in Porto Alegre, Brazil.
Graduated as Clinical Psychologist at Belgrano University in 1976. Regular Member of the Argentine Psychoanalytic Association. Member of the International Psychoanalytic Association. Her works are part of private collections in: USA, Brazil, Germany, England, France, Italy, Mexico, Uruguay and Argentina.
More about Patricia
Liliana Golubinsky (Buenos Aires, 1954) attended the Academy of Fine Arts Augusto Bolognini and the Fine Arts School of Prilidiano Pueyrredon until 1978. She subsequently studied at the studio of Miguel Davila.
Golubinsky has been the recipient of more than forty prizes at the national and international level. She has had numerous one-person and group shows in prestigious galleries, museums and institutions, both in Argentina and abroad.
More about Liliana
Marcolina Dipierro was born in Buenos Aires, Argentina, and obtained her Bachelor of Fine Arts from the National School of Fine Arts "Prilidiano Pueyrredón". Her works draw compositions that reference or allude to formal aspects of modern architecture and design, in combination and contrast with diverse forms and materialities. At present, her aesthetic search, in the form of sculptural works or installations, is based largely on the investigation of functionalism and the fusion of the artistic and the decorative into a total art, an idea the Bauhaus so radically carried into our contemporaneity.
More about Marcolina
Camila Salcedo was born in Caracas, Venezuela. Has a Bachelor of Fine Arts Interdisciplinary from NSCAD University in Halifax, Nova Scotia. She is an artist, curator and community organizer who explores art-making via performance, video, and textiles. She makes art because she is interested in questioning systems and institutions created to define us such as nations, identities, politics, and migration. She has exhibited nationally in several galleries, artist-run centres and festivals.
More about Camila
Paulo Nazareth's performance and installation-based work often draw upon his joint African and indigenous heritage. His ongoing work Cadernos de Africa [Africa Notebooks] is presented as part of Journal: a five-year walk he began in 2013 from his home in a favela near Belo Horizonte, throughout Brazil and eventually northwards across the entirety of the African continent from Cape Town.
More about Paulo
Holds a degree in Visual Arts from National University of Tucumán. In 2009 he was awarded with a scholarship by Fundación YPF which allowed him to be a part of the first edition of the Artists Program launched by University Torcuato Di Tella. In 2010 he was selected to participate of Lipac, an art program by Centro Cultural Ricardo Rojas.
More about Gabriel
Mariel is dedicated to contributing to the growth of interdisciplinary arts as a method to engage Indigenous community, language, and culture, and to act as a bridge to society, telling stories of our time. As an artist scholar, her research is about identity through the lens of Indigenous Ways of Knowing and Being, Customary Law, Indigenous Feminism, and Performance Theory, exploring how cultural identity is rebuilt through oral history and performance practice. www.sqilxw.com
More about Mariel
Aner's work is marked by elements of nature, leaves, roots, plants, tree bark, they dialogue with characters with native features, generating dream stories, where perspectives, elements and meanings are transposed.
Along the path of mural painting, he has been able to leave his traces in different latitudes such as Brazil, Argentina, Paraguay, Bolivia, Colombia, Canada, Mexico and France.
More about Aner
Julia C. Parodi is an Industrial Design Engineer and editor. She investigates and experiments with different and more accessible means of disseminating knowledge. She is co-founder of Oblicuas, a Design and Gender Collective; together they have conducted workshops in many universities, galleries and cultural spaces. Currently, her work focuses on the production of video-essays using reappropriated images from national and digital archives.
More about Julia
Jarus is a Toronto-based artist and muralist inspired by the visual human experience. Internationally celebrated as both a contemporary muralist and figurative painter, his work reimagines how art can exist in public spaces. For the past decade, Jarus has been working with communities across Canada and around the globe to produce large-scale portraits and figures, among other images, on wall surfaces. His works can be found within major cities as well as across rural settings.
More about Emmanuel
Mark Bland was born in Kingston, Ontario, Canada in 1982. He moved to Toronto in 2001 to begin his studies in Integrated Media at the Ontario College of Art and Design University (OCAD U). Graduating in 2005, his thesis, revolving around wearable art and technology as an extension of our bodies and titled 'The Self Viewing Backpack', was selected by a jury of established artists to be shown on display at Gallery 61 for Toronto's Go-West exhibition.
More about Mark
Paulo Nazareth & Gabriel Chaile's Dual Show
Underscore Projects | 1468 Dundas St W, Toronto ON
A collaborative exhibition with Underscore Projects
CURATED by Claudia Lala, Gabriela Jurevicius & Lucrecia Urbano
Date: Opening Oct 6th 2021
Fractal Totems & Arrow of Mind
Mark Bland's Solo Show
LalaContemporary
CURATED by Claudia Lala & Mark Bland
Date: Opening July 2nd 2021
About the Exhibit
Virtual Room View
CURATED by Claudia Lala and Mariel Belanger. Produced by Rodrigo Ardiles.
Date: Opening December 2020
When we Return Online Platform
Multi-faceted online Art & Cultural Project
Hosts: Canada & Argentina
Date: Starting April 2020
My Country is the Entire World
Exhibition - Thursday March 12, 2020, 6PM
1756 Dundas St West, Toronto
Group Exhibition featuring Ana Casal, Marcolina Dipierro, Vivian Galban, Liliana Golubinsky, Maria Hupfield, Patricia Linenberg, Sofia Medi...
LALAContemporary is an Independent Art Project space based in Toronto, Canada. A platform for emerging artists from North and South America.
Be the first to hear about upcoming exhibitions, articles about our artists, and special events.
Appointments & Info at:
contact@lalacontemporary.com
We sincerely apologize for any inconveniences this may cause but we are taking security measures to ensure the health and safety of the community.
<p>Scenario simulations using Gov-Test-Scenario headers is only available in sandbox environment</p>
<table>
<thead>
<tr>
<th>Header Value (Gov-Test-Scenario)</th>
<th>Scenario</th>
</tr>
</thead>
<tbody>
<tr>
<td><p>Default (No header value)</p></td>
<td><p>Returns a Crystallisation obligation for the 2017-18 tax year as not met</p></td>
</tr>
<tr>
<td><p>MET</p></td>
<td><p>Returns a Crystallisation obligation for the 2017-18 tax year as met</p></td>
</tr>
<tr>
<td><p>AGENT_NOT_SUBSCRIBED</p></td>
<td><p>Simulates the scenario where the agent is not subscribed to Agent Services</p></td>
</tr>
<tr>
<td><p>AGENT_NOT_AUTHORIZED</p></td>
<td><p>Simulates the scenario where the agent is not authorized by the client to act on their behalf</p></td>
</tr>
<tr>
<td><p>CLIENT_NOT_SUBSCRIBED</p></td>
<td><p>Simulates the scenario where the client is not subscribed to Making Tax Digital</p></td>
</tr>
</tbody>
</table>
package com.intellij.psi.impl.file;
import com.intellij.openapi.components.ServiceManager;
import com.intellij.openapi.project.Project;
import com.intellij.openapi.util.NlsSafe;
import com.intellij.openapi.vfs.VirtualFile;
import com.intellij.psi.PsiDirectory;
import com.intellij.psi.PsiDirectoryContainer;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;
/**
* @author yole
*/
public abstract class PsiDirectoryFactory {
public static PsiDirectoryFactory getInstance(Project project) {
return ServiceManager.getService(project, PsiDirectoryFactory.class);
}
@NotNull
public abstract PsiDirectory createDirectory(@NotNull VirtualFile file);
@NotNull
public abstract @NlsSafe String getQualifiedName(@NotNull PsiDirectory directory, final boolean presentable);
@Nullable
public abstract PsiDirectoryContainer getDirectoryContainer(@NotNull PsiDirectory directory);
public abstract boolean isPackage(@NotNull PsiDirectory directory);
public abstract boolean isValidPackageName(@Nullable String name);
}
For eight days in July, the streets of Pamplona, Spain, are wild and crazy. Wild because that is when the annual Running of the Bulls takes place, with long-horned bulls let loose to run through the streets of the city. Crazy because many people—locals and tourists—join the bulls, running alongside them or away from them through the narrow roads. Featured in Ernest Hemingway's The Sun Also Rises, the Running of the Bulls in Pamplona is part of the San Fermin Festival, honoring the town's patron saint. Just how did this daring event originate? Let's look at the odd history of the Running of the Bulls.
Saint Fermin was born in the 3rd century in Pamplona. The son of a Roman politician, he was destined for a life in the clergy. The first bishop of Toulouse, Saint Saturninus, baptized Saint Fermin in the church well of the St. Sernin church. Saint Fermin later became the patron saint of Navarre and Pamplona. Here is where the history of Saint Fermin gets murky.
Several years after baptizing Saint Fermin, Saint Saturninus was murdered. The form of murder was particularly gruesome, but it connects to the story of Pamplona's Running of the Bulls. Saint Saturninus' feet were tied to a bull and the bull was whipped so it would run through the streets, dragging Saint Saturninus to his death. Saint Fermin was named his replacement as the bishop of Toulouse, but he also met a gruesome death. He was in nearby Amiens, France, trying to convert the locals to Christianity when he was arrested and beheaded. Somewhere along the line, the tales of both men's deaths got mixed up until people started to believe that Saint Fermin was martyred by a bull.
The festival honoring Saint Fermin began in Pamplona sometime in the 13th century. It was traditionally held in September but it was moved to July in 1592. Bullfighting was always associated with the Fiesta de San Fermin…perhaps because people mistakenly thought Saint Fermin was killed by a bull or simply because people in Spain really like bullfighting.
The bullfighting at the Fiesta de San Fermin grew in popularity. But from the start, the festival organizers had to overcome one logistical issue…how to get the bulls from their pen to the bullfighting arena. The most cost-effective solution was to simply release the bulls and run them, cattle drive style, the short distance through the streets of Pamplona to the arena. Butchers and herders ran ahead of the bulls to guide them to the bullfighting arena, so the charging bulls wouldn't become distracted by the goings-on in the town and run the wrong way. They wore all white clothing with bright red scarves so they could use the scarves to get the bulls' attention.
For years, only men trained to handle aggressive bulls, like cattle herders and butchers, ran with the bulls through the Pamplona streets. Then some brash young men from the town decided to join in, perhaps as a way to show off their bravado to the young ladies of the region. When the bulls were released, these young men jumped into action, sprinting ahead of the charging bulls or running alongside them, careful to stay clear of the horns. It was thrilling and exhilarating, no doubt, and the young men were treated like heroes. Soon, more and more people were joining in the fun.
Naturally, a herd of stampeding bulls can cause some property damage. In the late 1700s, the people of Pamplona set up a running course for the bulls that still took the animals down the town's cobblestone streets but kept them more contained so they couldn't escape into unsuspecting parts of the town.
Yes, running with the bulls is really, really dangerous…but that is part of the attraction. Since 1910, when official record-keeping began, there have been 15 confirmed deaths during the event. Most of these people were gored to death by the bulls, though one was suffocated by a bull. Many more people are injured during their time with the bulls, requiring various degrees of medical attention. In recent decades, medical first aid stations and waiting ambulances stand ready to help in case of an accident. Still, participants are warned of the risk. There may even be a waiver they need to sign.
The Running of the Bulls at the Fiesta de San Fermin was locally known for centuries, but it was the writing of Ernest Hemingway that brought it worldwide fame. His 1926 novel, The Sun Also Rises, features bullfighters and the Pamplona Running of the Bulls. Hemingway wrote about the strange and dangerous event in such a way that it seemed romantic, exotic, and attractive to outsiders. His inclusion of the Running of the Bulls in his novel is credited with introducing the world to the Fiesta de San Fermin and the tradition of bullfighting in Spain.
The Running of the Bulls is still a popular attraction and bucket list item. Every year, more than 20,000 runners join the bulls in the sprint to the arena over the course of the eight-day event. Many more come to watch. The event has caused controversy and outcry from animal rights groups. The town of Pamplona cashes in on its fame by hosting concerts, activities, open-air markets, dancing, drinking, and dining events during the Fiesta de San Fermin.
\section{Introduction}
\label{sec:intro}
Semantic segmentation requires a large amount of pixel-level annotations, and obtaining these annotations is expensive and time consuming.
To overcome this issue, one solution is to obtain annotations from synthetic data such as video games (e.g., GTA5) and train models for semantic segmentation on these synthetic data.
However, the problem is that even though modern synthetic data is near photo-realistic, there is still a distribution mismatch between the synthetic data and real images.
One solution is to develop models that can overcome this distribution mismatch between synthetic training data and real data, which is the topic of unsupervised domain adaptation (UDA)~\cite{Fernando2017,fernando2013unsupervised,herath2019min,ganin2015unsupervised,pan2010domain,ben2006analysis,tzeng2017adversarial,DBLP:journals/corr/FernandoHST14}.
UDA for semantic segmentation has made significant progress in recent years.
One of the most recent methods, DAFormer~\cite{DBLP:journals/corr/abs-2111-14887}, obtained a large improvement over prior methods by using a Transformer architecture and self-training. However, one of the challenges in self-training is that the generated pseudo labels can be wrong, which may result in poor transfer of information from the source domain to the target domain. Therefore, the self-training process needs to be further regularized.
In this work, we present a new consistency regularization method based on correlation between pixel-wise class predictions.
We enforce two models (teacher and student) to have similar inter-pixel similarity structure and by doing so we regularize the self-training process.
This helps to improve the generalization of the student network as well as the teacher network, allowing better transfer of information from the source domain to the target domain. We demonstrate its effectiveness by applying it to DAFormer and improving mIoU19 performance on the GTA5 to Cityscapes benchmark by 0.8 and mIoU16 performance on the SYNTHIA to Cityscapes benchmark by 1.2.
Implementation of our proposed method is available at our GitHub repository\footnote{\url{https://github.com/kw01sg/CRDA}}.
\section{Related Work}
\subsection{Unsupervised Domain Adaptation}
Domain adaptation is a field of techniques that aims to solve the domain shift problem, which arises when data distributions change between datasets. UDA is a subset of the domain adaptation field that aims to utilize a labeled source domain to learn a model that performs well on an unlabeled target domain.
Recent UDA methods can be grouped into either adversarial training or self-supervised learning (SSL) approaches. Adversarial training methods aim to reduce source and target distribution mismatch by aligning distributions at either the pixel \cite{Hoffman_cycada2017,8953759,DBLP:journals/corr/BousmalisSDEK16} or intermediate feature level \cite{DBLP:journals/corr/TzengHSD17,hoffman2016fcns} using a generative adversarial network (GAN).
SSL methods allow models to be trained directly on the target domain by generating pseudo labels from the target domain. Recent advances focuses on improving the quality of pseudo labels using various approaches, such as using representative prototypes \cite{zhang2021prototypical} or using more complex, Transformer-based network architecture \cite{DBLP:journals/corr/abs-2111-14887}.
It is also possible for methods to adopt a hybrid approach and use both adversarial training and SSL. Li et al. do so in their bidirectional learning framework \cite{8954260}. Adversarial training is first used to obtain an image-to-image translation model and a segmentation model. Target domain pseudo labels are then generated from high confidence predictions and used to fine-tune the segmentation model. The improved segmentation model can then be used in the first adversarial stage to form a closed loop.
\subsection{Semantic Segmentation}
Early methods on semantic segmentation problems were largely based on Fully Convolutional Network (FCN) \cite{7478072}, which typically follows an encoder-decoder architecture \cite{Badrinarayanan2017SegNetAD,10.1007/978-3-319-24574-4_28}. Further improvements were made by using dilated convolutions to overcome the loss of spatial resolution \cite{YuKoltun2016}, and pyramid pooling \cite{8100143,7913730} to enhance capturing of contextual information.
Recent success of attention-based Transformers \cite{NIPS2017_3f5ee243} in natural language processing has seen adaptations of Transformers for image segmentation \cite{liu2021Swin,xie2021segformer} that were able to obtain state-of-the-art results.
\subsection{Consistency Regularization}
Consistency regularization is a regularization technique used to encourage networks to make consistent predictions that are invariant to perturbations. Tarvainen and Valpola improved model performance on the image classification problem by using a student and teacher network pair in their Mean Teacher model \cite{DBLP:journals/corr/TarvainenV17}, where the weights of the teacher network are an exponential moving average (EMA) of the student network. Consistent predictions between the two networks are then promoted by optimizing a consistency loss between their predictions.
Interpolation consistency training by Verma et al. \cite{ijcai2019-504} combines mixup \cite{zhang2018mixup} and the Mean Teacher model \cite{DBLP:journals/corr/TarvainenV17} to implement consistency regularization. During training, unlabelled samples are interpolated to create an augmented sample. Predictions by the student network on the augmented sample are then optimized to be consistent with interpolated predictions by the teacher network on the original non-interpolated samples.
Kim et al. \cite{DBLP:journals/corr/abs-2001-04647} uses cosine similarity in their consistency regularization method for semantic segmentation. They propose a structured consistency loss that optimizes predictions to be consistent in not only pixel-wise classification, but also inter-pixel relationship.
\section{Our Method}
Given source domain images $x_S \in X_S$ with their annotations (labels) $y_S \in Y_S$ and target domain images $x_T \in X_T$ without annotations (labels), we want to learn a network $h$ that can correctly predict the annotations for target images $X_T$ denoted by $\hat{Y}_T$. Typically, there is a mismatch in the joint probability distributions of source domain data $P(X_S,Y_S)$ and the target domain data $P(X_T,Y_T)$.
Due to this mismatch or the gap between source and target domains, an image segmentation model $h$ that is trained on the source data usually results in a low performance on target images.
One common solution to address this issue is to use self-training as also done in the prior works such as DAFormer \cite{DBLP:journals/corr/abs-2111-14887}.
However, semi-supervised self-training methods can easily over-fit to the source distribution and generate inconsistent or wrong pseudo labels for the target domain images.
To overcome this limitation, we propose the addition of consistency regularization to the DAFormer \cite{DBLP:journals/corr/abs-2111-14887} framework during model training to further improve model performance.
Next, we explain the overall training framework.
\subsection{Overall Training}
Overall training of the network is composed of three components: supervised training using source images, self-training using target images, and consistency regularization. Total loss $\mathcal{L}_{total}$ is given as
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_S + \mathcal{L}_T + \lambda_c \mathcal{L}_C
\end{equation}
where $\mathcal{L}_S$ is supervised cross entropy loss using source images, $\mathcal{L}_T$ is self-trained cross entropy loss using pseudo labels, $\mathcal{L}_C$ is our consistency regularization term, and $\lambda_c$ is a parameter we use to weigh $\mathcal{L}_C$. The following sections will present each of the losses in detail.
\subsubsection{Supervised Training}
Supervised training on the source domain is conducted using cross entropy loss for semantic segmentation. For a source image $x_S$ and its annotation $y_S$, $\mathcal{L}_S$ can be defined as
\begin{equation}
\mathcal{L}_S(x_S, y_S) = - \frac{1}{H W} \sum_{j=1}^{H \times W} \sum_{c=1}^C y_S^{(j,c)} \log h(x_S)^{(j,c)}
\end{equation}
where $C$ is the number of classes and $H$ and $W$ are the height and width of the segmentation output. The notation $y_S^{(j,c)}$ denotes the presence of class $c$ at pixel location $j$ (1 if present and 0 if not). Similarly, $h(x_S)^{(j,c)}$ denotes the predicted score for class $c$ at pixel location $j$ using model $h$ for image $x_S$.
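For a given pixel, the inner sum over $c$ in $\mathcal{L}_S$ has a single non-zero term (the ground-truth class), so the loss reduces to a gather followed by a mean. A minimal NumPy sketch of this computation (illustrative only; the released implementation is PyTorch-based and the function name is ours):

```python
import numpy as np

def pixelwise_cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy loss over an H x W prediction map.

    probs:  (H, W, C) softmax probabilities h(x_S) for C classes.
    labels: (H, W) integer ground-truth class index y_S per pixel.
    """
    h, w, _ = probs.shape
    # Gather the predicted probability of the true class at every pixel,
    # i.e. the only non-zero term of the inner sum over c.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(p_true + 1e-12).mean())
```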
\subsubsection{Self-Training}
Self-training uses a teacher network $f(;\phi)$ to produce pseudo labels on which the student network $h(;\theta)$ will be trained on. For a target image $x_T$, its pseudo label $p_T$ is formally defined as
\begin{equation}
p^{(j,c)}_T = \llbracket c = \argmax_{c'} f(x_T;\phi)^{(j,c')} \rrbracket
\label{eq.label}
\end{equation}
where $\llbracket \cdot \rrbracket$ denotes the Iverson bracket.
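Equation~\ref{eq.label} is simply a per-pixel argmax over the teacher's class scores, one-hot encoded. A NumPy sketch (illustrative; the function name is ours):

```python
import numpy as np

def pseudo_labels(teacher_scores):
    """One-hot pseudo labels p_T from teacher predictions f(x_T).

    teacher_scores: (H, W, C) per-pixel class scores.
    Returns an (H, W, C) array with a 1 at the argmax class of each
    pixel (the Iverson bracket above) and 0 elsewhere.
    """
    num_classes = teacher_scores.shape[-1]
    hard = teacher_scores.argmax(axis=-1)          # (H, W) winning class index
    return np.eye(num_classes, dtype=np.int64)[hard]
```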
We follow the Mean Teacher model \cite{DBLP:journals/corr/TarvainenV17} where the weights of the teacher network $f(;\phi)$ are the EMA of the weights of the student network $h(;\theta)$ after each training step $t$. The EMA weights used by the teacher model at training step $t$ is formally defined as
\begin{equation}
\phi_{t+1} = \alpha \phi_{t} + (1 - \alpha) \theta_t
\end{equation}
where $\phi_{t+1}$ is the EMA of successive weights and $\alpha$ is a smoothing coefficient hyperparameter. It should also be noted that no gradients will be backpropagated into the teacher network from the student network.
A confidence estimate for the pseudo labels, defined as the ratio of pixels with maximum softmax probability exceeding a pre-defined threshold $\tau$, is also used in the self-training loss. For a target image $x_T$, its confidence estimate $q_T$ is formally defined as
\begin{equation}
q_T = \frac{\sum^{H \times W}_{j=1}[\max_{c'}f(x_T; \phi)^{j,c'} > \tau]}{HW}
\end{equation}
Self-training loss of the student network $\mathcal{L}_T$ for a target image $x_T$ can thus be defined as
\begin{equation} \label{eq:self-training}
\mathcal{L}_T(x_T) = - \frac{1}{H W} \sum_{j=1}^{H \times W} \sum_{c=1}^C q_T \times p^{(j,c)}_T \times \log h(x_T; \theta)^{(j,c)}
\end{equation}
We follow DAFormer's \cite{DBLP:journals/corr/abs-2111-14887} method of using non-augmented target images for the teacher network $f$ to generate pseudo labels and augmented targeted images to train the student network $h$ using Equation~\ref{eq:self-training}. We also follow their usage of color jitter, Gaussian blur, and ClassMix \cite{9423297} as data augmentations in our training process.
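Combining the pseudo labels of Equation~\ref{eq.label}, the confidence estimate $q_T$, and the weighted cross entropy of Equation~\ref{eq:self-training}, one self-training step can be sketched in NumPy as follows (the actual implementation is PyTorch-based; function and variable names are ours):

```python
import numpy as np

def self_training_loss(student_probs, teacher_probs, tau=0.968):
    """Confidence-weighted self-training loss L_T.

    student_probs: (H, W, C) softmax output of the student on the
                   augmented target image.
    teacher_probs: (H, W, C) softmax output of the teacher on the
                   non-augmented target image.
    """
    h, w, _ = teacher_probs.shape
    p_t = teacher_probs.argmax(axis=-1)                  # pseudo labels
    # q_T: ratio of pixels whose max teacher probability exceeds tau.
    q_t = float((teacher_probs.max(axis=-1) > tau).mean())
    p_true = student_probs[np.arange(h)[:, None], np.arange(w)[None, :], p_t]
    return float(-(q_t * np.log(p_true + 1e-12)).mean())
```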
\subsubsection{Consistency Regularization} \label{consistency-regularization}
As mentioned in Mean Teacher \cite{DBLP:journals/corr/TarvainenV17}, cross entropy loss in Equation \ref{eq:self-training} between predictions of the student model and pseudo labels (which are predictions from the teacher model) can be considered as a form of consistency regularization.
However, different from classification problems, semantic segmentation problems have a property where pixel-wise class predictions are correlated with each other. Thus, we propose to further enhance consistency regularization by focusing on this inter-pixel relationship.
Inspired by the method of Kim et al. \cite{DBLP:journals/corr/abs-2001-04647}, we use the inter-pixel cosine similarity of networks' predictions on target images to regularize the model. Formally, we define the similarity between pixel $i$ and $j$ class predictions on a target image $x_T$ as
\begin{equation} \label{eq:similarity}
a_{i,j} = \frac{\textbf{p}^T_i \textbf{p}_j}{\norm{\textbf{p}_i} \cdot \norm{\textbf{p}_j}}
\end{equation}
where $a_{i,j}$ represents the cosine similarity between the prediction vector of the \textit{i}th pixel and the prediction vector of the \textit{j}th pixel.
Note that the similarity between the probability vector $\textbf{p}_i$ and $\textbf{p}_j$ can also be computed using Kullback-Leibler (KL) divergence and cross entropy.
We investigate these options in Section \ref{inter-pixel-similarity}.
The consistency regularization term, $\mathcal{L}_C$ can then be defined as the mean squared error (MSE) between the student network's similarity matrix and the teacher network's similarity matrix
\begin{equation} \label{eq:consistency-regularization}
\mathcal{L}_C = \frac{1}{(HW)^2} \sum^{H \times W}_{i=1} \sum^{H \times W}_{j=1} \norm{a^s_{i,j} - a^t_{i,j}}^2
\end{equation}
where $a^s_{i,j}$ is the similarity obtained from the student network and $a^t_{i,j}$ is the similarity obtained from the teacher network.
We also follow the method of Kim et al. \cite{DBLP:journals/corr/abs-2001-04647} to restrict the number of pixels used in the calculation of similarity matrices by performing a random sample of $N_{pair}$ pixels for comparison. Thus, the consistency regularization in Equation \ref{eq:consistency-regularization} is updated to the following equation
\begin{equation} \label{eq:consistency-regularization-with-n-pairs}
\mathcal{L}_C = \frac{1}{(N_{pair})^2} \sum^{N_{pair}}_{i=1} \sum^{N_{pair}}_{j=1} \norm{a^s_{i,j} - a^t_{i,j}}^2
\end{equation}
This term $\mathcal{L}_C$ is particularly useful for domain adaptation as it helps to minimize the divergence between the source representation and the target representation by enforcing a structural consistency in the image segmentation task.
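In practice, Equation~\ref{eq:consistency-regularization-with-n-pairs} only needs the prediction vectors at the $N_{pair}$ sampled locations: row-normalize them, take an outer product to obtain each network's similarity matrix, and penalize the squared difference. A NumPy sketch under these assumptions (the training code itself is PyTorch-based):

```python
import numpy as np

def consistency_loss(student_probs, teacher_probs, n_pair=512, seed=None):
    """MSE between student and teacher inter-pixel cosine-similarity
    matrices over n_pair randomly sampled pixels.

    student_probs, teacher_probs: (H*W, C) per-pixel prediction vectors.
    """
    rng = np.random.default_rng(seed)
    n = student_probs.shape[0]
    idx = rng.choice(n, size=min(n_pair, n), replace=False)

    def cosine_matrix(p):
        # Row-normalize, then p p^T gives a_{i,j} for every sampled pair.
        p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-12)
        return p @ p.T

    a_s = cosine_matrix(student_probs[idx])
    a_t = cosine_matrix(teacher_probs[idx])
    return float(((a_s - a_t) ** 2).mean())
```

The same index sample is used for both networks so that corresponding entries of the two similarity matrices refer to the same pixel pairs.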
\section{Experiments}
\subsection{Implementation Details}
\subsubsection{Datasets}
We use the Cityscapes street scenes dataset \cite{Cordts2016Cityscapes} as our target domain. Cityscapes contains 2975 training and 500 validation images with resolution of 2048$\times$1024, and is labelled with 19 classes.
For our source domain, we use the synthetic datasets GTA5 \cite{Richter_2016_ECCV} and SYNTHIA \cite{7780721}. GTA5 contains 24,966 images with resolution of 1914$\times$1052, and is labelled with the same 19 classes as Cityscapes. For compatibility, we use a variant of SYNTHIA that is labelled with 16 of the 19 Cityscapes classes. It contains 9,400 images with resolution of 1280$\times$760.
Following DAFormer \cite{DBLP:journals/corr/abs-2111-14887}, we resize images from Cityscapes to 1024$\times$512 pixels and images from GTA5 to 1280$\times$720 pixels before training.
\subsubsection{Network Architecture}
Our implementation is based on DAFormer \cite{DBLP:journals/corr/abs-2111-14887}.
Previous UDA methods mostly used DeepLabV2 \cite{deeplab} or FCN8s \cite{7478072} network architecture with ResNet \cite{7780459} or VGG \cite{simonyan2014} backbone as their segmentation model. DAFormer proposes an updated UDA network architecture based on Transformers that was able to achieve state-of-the-art performance. They hypothesized that self-attention is more effective than convolutions in fostering the learning of domain-invariant features.
\subsubsection{Training}
We follow DAFormer \cite{DBLP:journals/corr/abs-2111-14887} and train the network with AdamW \cite{loshchilov2018decoupled}, a learning rate of $\eta_{base} = 6 \times 10^{-5}$ for the encoder and $6 \times 10^{-4}$ for the decoder, a weight decay of 0.01, linear learning rate warmup with $t_{warm} = 1500$, and linear decay. Images are randomly cropped to $512 \times 512$ and trained for 40,000 iterations on a batch size of 2 on a NVIDIA GeForce RTX 3090.
We also adopt DAFormer's training strategy of rare class sampling and thing-class ImageNet feature distance to further improve results.
For hyperparameters used in self-training, we follow DAFormer and set $\alpha=0.99$ and $\tau=0.968$.
For hyperparameters used in consistency regularization, we set $N_{pair} = 512$, $\lambda_c = 1.0$ when calculating similarity using cosine similarity and $\lambda_c = 0.8 \times 10^{-3}$ when calculating similarity using KL divergence.
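As a rough sketch of linear warmup followed by linear decay over the 40,000 training iterations (our assumed form of the schedule, not taken from the released configuration):

```python
def lr_at_step(step, base_lr=6e-5, t_warm=1500, t_max=40000):
    """Linear warmup for t_warm steps, then linear decay to zero at t_max.

    An illustrative reading of the schedule; the released code's exact
    decay rule may differ.
    """
    if step < t_warm:
        return base_lr * step / t_warm
    return base_lr * max(0.0, (t_max - step) / (t_max - t_warm))
```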
\subsection{Results}
\begin{table}
\centering
\caption{Comparison with other UDA methods on GTA5 to Cityscapes. Results for DAFormer and our method using cosine similarity are averaged over 6 random runs, while results for our method using KL Divergence are averaged over 3 random runs}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|ccccccccccccccccccc|cl}
Method & Road & Sidewalk & Build. & Wall & Fence & Pole & Tr.Light & Sign & Veget. & Terrain & Sky & Person & Rider & Car & Truck & Bus & Train & M.bike & Bike & mIoU19 & \\
\hline
BDL & 91.0 & 44.7 & 84.2 & 34.6 & 27.6 & 30.2 & 36.0 & 36.0 & 85.0 & 43.6 & 83.0 & 58.6 & 31.6 & 83.3 & 35.3 & 49.7 & 3.3 & 28.8 & 35.6 & 48.5 & \\
ProDA & 87.8 & 56.0 & 79.7 & 46.3 & 44.8 & 45.6 & 53.5 & 53.5 & 88.6 & 45.2 & 82.1 & 70.7 & 39.2 & 88.8 & 45.5 & 59.4 & 1.0 & 48.9 & 56.4 & 57.5 & \\
DAFormer & 95.5 & 68.9 & 89.3 & \textbf{53.2} & 49.3 & 47.8 & \textbf{55.5} & 61.2 & 89.5 & 47.7 & 91.6 & 71.1 & 43.3 & 91.3 & 67.5 & 77.6 & 65.5 & 53.6 & 61.2 & 67.4 \\
Ours (Cosine) & 95.5 & 69.2 & \textbf{89.5} & 52.1 & \textbf{49.6} & 48.9 & 55.2 & \textbf{62.1} & 89.8 & 49.0 & 91.1 & \textbf{71.7} & \textbf{45.1} & \textbf{91.7} & \textbf{70.0} & 77.6 & 65.2 & \textbf{56.6} & 62.8 & 68.0 \\
Ours (KL) & \textbf{96.1} & \textbf{71.6} & \textbf{89.5} & \textbf{53.2} & 48.6 & \textbf{49.5} & 54.7 & 61.1 & \textbf{90.0} & \textbf{49.4} & \textbf{91.7} & 70.7 & 44.0 & 91.6 & \textbf{70.0} & \textbf{78.1} & \textbf{68.9} & 55.1 & \textbf{62.9} & \textbf{68.2}
\end{tabular}%
}
\label{tab:gta_results}
\end{table}
\begin{table}
\centering
\caption{Comparison with other UDA methods on SYNTHIA to Cityscapes. Results for DAFormer and our method using cosine similarity are averaged over 6 random runs, while results for our method using KL Divergence are averaged over 3 random runs}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|cccccccccccccccc|cc}
Method & Road & Sidewalk & Build. & Wall & Fence & Pole & Tr.Light & Sign & Veget. & Sky & Person & Rider & Car & Bus & M.bike & Bike & mIoU16 & mIoU13 \\
\hline
BDL & 86.0 & 46.7 & 80.3 & - & - & - & 14.1 & 11.6 & 79.2 & 81.3 & 54.1 & 27.9 & 73.7 & 42.2 & 25.7 & 45.3 & - & 51.4 \\
ProDA & 87.8 & 45.7 & 84.6 & 37.1 & 0.6 & 44.0 & 54.6 & 37.0 & 88.1 & 84.4 & 74.2 & 24.3 & 88.2 & 51.1 & 40.5 & 40.5 & 55.5 & 62.0 \\
DAFormer & 80.5 & 37.6 & 87.9 & \textbf{40.3} & \textbf{9.1} & \textbf{49.9} & \textbf{55.0} & 51.8 & 85.9 & 88.4 & 73.7 & \textbf{47.3} & 87.1 & 58.1 & 53.0 & 61.0 & 60.4 & 66.7 \\
Ours (Cosine) & 86.3 & 44.2 & \textbf{88.3} & 39.2 & 7.5 & 49.2 & 54.7 & \textbf{54.7} & \textbf{87.2} & \textbf{90.7} & \textbf{73.8} & \textbf{47.3} & \textbf{87.4} & 55.9 & 53.7 & 60.7 & 61.3 & 68.1 \\
Ours (KL) & \textbf{89.0} & \textbf{49.6} & 88.1 & \textbf{40.3} & 7.3 & 49.2 & 53.5 & 52.1 & 87.0 & 88.0 & \textbf{73.8} & 46.4 & 87.1 & \textbf{58.7} & \textbf{53.9} & \textbf{61.7} & \textbf{61.6} & \textbf{68.4}
\end{tabular}%
}
\label{tab:synthia_results}
\end{table}
\begin{figure}
\begin{center}
\begin{tabular}{cccc}
Image & DAFormer & Ours & Ground Truth \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/munster_000000_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_munster_000000_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_munster_000000_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/munster_000000_000019_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/munster_000010_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_munster_000010_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_munster_000010_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/munster_000010_000019_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/frankfurt_000000_002963_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_frankfurt_000000_002963_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_frankfurt_000000_002963_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/frankfurt_000000_002963_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/frankfurt_000000_010763_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_frankfurt_000000_010763_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_frankfurt_000000_010763_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/frankfurt_000000_010763_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/lindau_000013_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_lindau_000013_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_lindau_000013_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/lindau_000013_000019_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/lindau_000023_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_lindau_000023_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_lindau_000023_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/lindau_000023_000019_gtFine_color.png} \\
\includegraphics[width=2.8cm]{images/preds_qualitative_analysis/raw_image/munster_000130_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/daformer/daformer_munster_000130_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ours/ours_munster_000130_000019_leftImg8bit.png} & \includegraphics[width=2.8cm]{images/preds_qualitative_analysis/ground_truth/munster_000130_000019_gtFine_color.png}
\end{tabular}
\end{center}
\caption{Qualitative results comparing predictions on validation data of Cityscapes. From left: input image, predictions by DAFormer, predictions by our method, and the ground truth. The last row provides an example where DAFormer performed better compared to our method as it was able to correctly predict the sidewalks}
\label{fig:predictions}
\end{figure}
We compare the results of our method against other state-of-the-art UDA segmentation methods such as BDL~\cite{8954260}, ProDA~\cite{zhang2021prototypical} and DAFormer~\cite{DBLP:journals/corr/abs-2111-14887}.
In Table \ref{tab:gta_results}, we present our experimental results on the GTA5 to Cityscapes problem. It can be observed that our method improved UDA performance from an mIoU19 of 67.4 to 68.0 when cosine similarity is used in Equation~\ref{eq:similarity} and 68.2 when KL divergence is used.
Table \ref{tab:synthia_results} shows our experimental results on the SYNTHIA to Cityscapes problem. Similarly, our method improved performance from an mIoU16 of 60.4 to 61.3 with cosine similarity and 61.6 with KL divergence.
We also observed that our method was able to make notable improvements on the "Road" and "Sidewalk" categories. This is especially so on the SYNTHIA to Cityscapes problem, where we improved UDA performance on "Road" from 80.5 to 89.0 and "Sidewalk" from 37.6 to 49.6. We further verify this improvement in our qualitative analysis presented in Figure \ref{fig:predictions}, where we observed that our method had better recognition on the "Sidewalk" and "Road" categories.
We attribute this improvement to our method's effectiveness in generating more accurate pseudo labels. We present pseudo labels generated during the training process in Figure \ref{fig:pseudo_labels}, where we observed more accurate pseudo labels for the "Road" and "Sidewalk" categories.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
Target Image & DAFormer & Ours \\
\includegraphics[width=3.8cm]{images/pseudo_labels_analysis/original_033001_1.png} & \includegraphics[width=3.8cm]{images/pseudo_labels_analysis/daformer_033001_1.png} & \includegraphics[width=3.8cm]{images/pseudo_labels_analysis/ours_033001_1.png} \\
\includegraphics[width=3.8cm]{images/pseudo_labels_analysis/original_035001_1.png} & \includegraphics[width=3.8cm]{images/pseudo_labels_analysis/daformer_035001_1.png} & \includegraphics[width=3.8cm]{images/pseudo_labels_analysis/ours_035001_1.png} \\
\end{tabular}
\end{center}
\caption{Qualitative results on pseudo labels generated from the training data of Cityscapes. From left: target image, pseudo labels generated by DAFormer, and pseudo labels generated by our method using cosine similarity}
\label{fig:pseudo_labels}
\end{figure}
It should be noted that experimental results obtained using the DAFormer method in Tables \ref{tab:gta_results} and \ref{tab:synthia_results} were obtained by averaging 6 random runs using the official DAFormer implementation\footnote{\url{https://github.com/lhoyer/DAFormer}}. Even though we were unable to reproduce the exact numbers published in the DAFormer paper, we believe our experimental results for DAFormer are comparable.
\subsection{Ablation Study}
\subsubsection{Number of Pixels Sampled}
\begin{table}
\caption{Influence of $N_{pair}$ on UDA performance. Results for all experiments were averaged over 3 random runs except for $N_{pair} = 512$, which was an average over 6 runs}
\label{tab:n_pair_results}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$N_{pair}$ & 4 & 16 & 64 & 256 & 512 & 1024 \\ \hline
mIoU16 & \textbf{61.4} & 60.8 & \textbf{61.4} & 61.1 & 61.3 & 60.6 \\
\hline
\end{tabular}
\end{center}
\end{table}
We conducted additional experiments on SYNTHIA to Cityscapes to observe the effect of $N_{pair}$ (from Equation \ref{eq:consistency-regularization-with-n-pairs}) on model performance. Theoretically, sampling more pixels for similarity calculation (i.e.\ a larger $N_{pair}$) allows us to build a more complete model of the inter-pixel relationship between predictions. However, the empirical results in Table \ref{tab:n_pair_results} suggest that $N_{pair}$ does not have a significant influence on UDA performance. We observe that very small samples, such as $N_{pair} = 4$, were able to obtain results comparable with larger sample sizes.
Additional experiments using $N_{pair} = 4$ were conducted to observe the locations of sampled pixels.
Visualization of our ablation study is presented in Figure \ref{fig:n_pairs_ablation}.
We found that after 40,000 training iterations, despite the small sampling size, sampled pixels covered approximately 45.73\% of the 512$\times$512 images the network was trained on. This suggests that if a reasonable image coverage can be obtained during the training process, a small $N_{pair}$ is sufficient to model the inter-pixel relationship between predictions, allowing us to minimize the computational cost of our consistency regularization method.
The influence of sampling coverage and sampling distribution on the effectiveness of consistency regularization is an interesting study that can be explored in the future.
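As a concrete illustration of the sampling step, the following pure-Python sketch (our own, not the paper's implementation; all function names are hypothetical) samples $N_{pair}$ pixel-wise class-probability vectors from an $H \times W$ prediction map and builds the pairwise cosine-similarity matrix used for consistency regularization:

```python
import math
import random

def sample_pixels(pred, n_pair, rng):
    """Sample n_pair pixel-wise class-probability vectors from an H x W x C map.

    `pred` is a nested list of shape H x W x C (e.g. softmax outputs).
    Returns the sampled vectors and their (row, col) coordinates.
    """
    h, w = len(pred), len(pred[0])
    coords = [(rng.randrange(h), rng.randrange(w)) for _ in range(n_pair)]
    return [pred[i][j] for i, j in coords], coords

def cosine_similarity(u, v):
    """Cosine similarity between two class-probability vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pairwise_similarity(pixels):
    """Build the N_pair x N_pair matrix of inter-pixel cosine similarities."""
    n = len(pixels)
    return [[cosine_similarity(pixels[i], pixels[j]) for j in range(n)]
            for i in range(n)]
```

In practice the student and teacher predictions would each yield such a matrix at the same sampled coordinates, and the regularization loss compares the two matrices.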
\begin{figure}
\begin{center}
\begin{tabular}{cccc}
\includegraphics[width=2.8cm]{images/sample_image.png} & \includegraphics[width=2.8cm]{images/selected_index_0.png} & \includegraphics[width=2.8cm]{images/selected_index_1.png} & \includegraphics[width=2.8cm]{images/selected_index_2.png} \\
(a) & (b) & (c) & (d)
\end{tabular}
\end{center}
\caption{Visualization of pixels sampled in experiments using $N_{pair} = 4$. (a) Target image cropped to $512 \times 512$; (b), (c) and (d) visualizes sampled pixels in three separate runs}
\label{fig:n_pairs_ablation}
\end{figure}
\subsubsection{Proximity of Sampled Pixels}
Kim et al.\ adopted CutMix augmentation \cite{Yun_2019_ICCV} in their consistency regularization method \cite{DBLP:journals/corr/abs-2001-04647} to limit sampled pixel pairs to within a local region. They theorized that pixel pairs in close proximity to each other have high correlation, and hence have more effect on UDA performance. We tested this theory on SYNTHIA to Cityscapes by performing $N_{box}$ crops and sampling $N_{pair}$ pixels from each crop. This restricts sampled pixels to a local region, enforcing closer proximity. Sampled pixels are then used to compute inter-pixel similarity to obtain an $N_{box} \times N_{pair} \times N_{pair}$ similarity matrix, which is used for loss calculation in Equation \ref{eq:consistency-regularization}.
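The localized sampling scheme can be sketched as follows (a pure-Python illustration under our own naming; crops are assumed square and fully contained in the image):

```python
import random

def sample_local_pixels(h, w, crop_size, n_box, n_pair, rng):
    """Return pixel coordinates grouped by crop: n_box lists of n_pair (row, col) pairs.

    Each crop is a crop_size x crop_size window placed uniformly at random
    inside the h x w image; pixels are then sampled uniformly within it, so
    each group of sampled pixels is restricted to close proximity.
    """
    groups = []
    for _ in range(n_box):
        top = rng.randrange(h - crop_size + 1)
        left = rng.randrange(w - crop_size + 1)
        groups.append([
            (top + rng.randrange(crop_size), left + rng.randrange(crop_size))
            for _ in range(n_pair)
        ])
    return groups
```

Each group of $N_{pair}$ coordinates then yields its own $N_{pair} \times N_{pair}$ similarity matrix, giving the $N_{box} \times N_{pair} \times N_{pair}$ tensor described above.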
\begin{table}
\centering
\caption{Influence of $N_{box}$ and $N_{pair}$ on UDA performance. Total number of sampled pixels i.e. $N_{box} \times N_{pair}$ is kept at 512 for a fair comparison}
\label{tab:n_box}
\begin{tabular}{ccc|c}
Crop Size & $N_{box}$ & $N_{pair}$ & mIoU16 \\
\hline
256 & 32 & 16 & \textbf{61.2} \\
128 & 32 & 16 & 61.1 \\
64 & 32 & 16 & 60.2
\end{tabular}
\end{table}
We present the experimental results in Table \ref{tab:n_box}, where three different crop sizes were used to restrict the proximity of sampled pixels. We did not observe an improvement in UDA performance compared to the results presented in Table \ref{tab:synthia_results}, suggesting that the proximity of sampled pixels may not be particularly influential for consistency regularization.
\subsubsection{Measuring Inter-Pixel Similarity} \label{inter-pixel-similarity}
In Section \ref{consistency-regularization}, we adopted the method of Kim et al.\ to use cosine similarity as the measure of inter-pixel similarity \cite{DBLP:journals/corr/abs-2001-04647}. In this section, we conduct additional experiments on SYNTHIA to Cityscapes to observe the influence that different methods of measuring inter-pixel similarity have on UDA performance.
\begin{table}
\centering
\caption{Comparison of UDA performance using different methods to calculate inter-pixel similarity. We also provide the optimal $\lambda_c$ obtained using hyperparameter tuning}
\label{tab:similarity}
\begin{tabular}{lc|c}
Method & $\lambda_c$ & mIoU16 \\
\hline
Cosine Similarity & 1.0 & 61.3 \\
Cross Entropy & $1.0 \times 10^{-3}$ & 61.2 \\
KL Divergence & $0.8 \times 10^{-3}$ & \textbf{61.6}
\end{tabular}
\end{table}
We tested the usage of cross entropy and KL divergence to measure inter-pixel similarity instead of cosine similarity in Equation \ref{eq:similarity}. Results from our empirical experiments are presented in Table \ref{tab:similarity}. We observed that all three methods provided comparable results with each other, with KL divergence providing slightly better results.
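For reference, the three candidate measures can be written down in a few lines of pure Python (a sketch over two class-probability vectors $p$ and $q$, assuming strictly positive entries so the logarithms are defined):

```python
import math

def cosine(p, q):
    """Cosine similarity between two probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q)))

def cross_entropy(p, q):
    """Cross entropy H(p, q) = -sum_i p_i log q_i."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """KL divergence D_KL(p || q) = sum_i p_i log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Note that the KL divergence equals the cross entropy minus the entropy of $p$, which may explain why the two behave comparably in Table \ref{tab:similarity}.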
\section{Conclusion}
In this work we presented a new consistency regularization method for UDA based on relationships between pixel-wise class predictions from semantic segmentation models. Using this technique we were able to improve the performance of the state-of-the-art DAFormer method. We also observed that even with a small number of sampled pixel pairs $N_{pair}$, this regularization method remains effective. Therefore, with minimal computational cost, we are able to improve the results of self-training methods for unsupervised domain adaptation.
\noindent
\textbf{Acknowledgment}
This research is supported by the Centre for Frontier AI Research (CFAR) and Robotics-HTPO seed fund C211518008.
\bibliographystyle{splncs04}
Rails.application.routes.draw do
  mount Openportal::Engine => "/openportal"
end
'use strict';

const RelayObservable = require('RelayObservable');

const invariant = require('invariant');

const {convertFetch, convertSubscribe} = require('ConvertToObserveFunction');

import type {CacheConfig} from 'RelayCombinedEnvironmentTypes';
import type {ConcreteBatch} from 'RelayConcreteNode';
import type {
  FetchFunction,
  Network,
  QueryPayload,
  SubscribeFunction,
  UploadableMap,
} from 'RelayNetworkTypes';
import type {Variables} from 'RelayTypes';

/**
 * Creates an implementation of the `Network` interface defined in
 * `RelayNetworkTypes` given `fetch` and `subscribe` functions.
 */
function create(
  fetchFn: FetchFunction,
  subscribeFn?: SubscribeFunction,
): Network {
  // Convert to functions that return RelayObservable.
  const observeFetch = convertFetch(fetchFn);
  const observeSubscribe = subscribeFn
    ? convertSubscribe(subscribeFn)
    : undefined;

  function observe(
    operation: ConcreteBatch,
    variables: Variables,
    cacheConfig: CacheConfig,
    uploadables?: ?UploadableMap,
  ): RelayObservable<QueryPayload> {
    if (operation.query.operation === 'subscription') {
      invariant(
        observeSubscribe,
        'RelayNetwork: This network layer does not support Subscriptions. ' +
          'To use Subscriptions, provide a custom network layer.',
      );
      invariant(
        !uploadables,
        'RelayNetwork: Cannot provide uploadables while subscribing.',
      );
      return observeSubscribe(operation, variables, cacheConfig);
    }

    const pollInterval = cacheConfig.poll;
    if (pollInterval != null) {
      invariant(
        !uploadables,
        'RelayNetwork: Cannot provide uploadables while polling.',
      );
      return observeFetch(operation, variables, {force: true}).poll(
        pollInterval,
      );
    }

    return observeFetch(operation, variables, cacheConfig, uploadables);
  }

  return {observe};
}

module.exports = {create};
Bernard has recently appeared in Big Sky and Young Drunk Punks. He has recently come home to TsuuT'ina Nation after taking a year-long course at the Circle in the Square Theatre School in New York. His other television credits include Heartland and Blackstone, for which he was nominated for the second time for an AMPIA Award for Best Actor. Bernard has also been nominated for an AMPIA Award (Best Actor) and a Genie Award (Best Performance by an Actor in a Supporting Role) for his work in Hank Williams First Nation.
This profile was last registered on Aug 11, 2016 and contains information from public web pages.
Company Name FINDEISEN HOLDINGS INC.
FINDEISEN HOLDINGS INC. is a Stock Company in Connecticut and its company number is 1214107. FINDEISEN HOLDINGS INC. was registered on Aug 11, 2016. The company's status is listed as Active.
This is a Google map of the FINDEISEN HOLDINGS INC. address: C/O S & E AZRILIANT, P.C., 501 5TH AVENUE, 15TH FLR, NEW YORK, NY, 10017. If you find an error in the address, please submit a corrected address using the form in the map, then search again.
"""This module contains an object that represents a Telegram InlineKeyboardMarkup."""
from telegram import ReplyMarkup
class InlineKeyboardMarkup(ReplyMarkup):
"""
This object represents an inline keyboard that appears right next to the message it belongs to.
Attributes:
inline_keyboard (List[List[:class:`telegram.InlineKeyboardButton`]]): Array of button rows,
each represented by an Array of InlineKeyboardButton objects.
Args:
inline_keyboard (List[List[:class:`telegram.InlineKeyboardButton`]]): Array of button rows,
each represented by an Array of InlineKeyboardButton objects.
**kwargs (:obj:`dict`): Arbitrary keyword arguments.
"""
def __init__(self, inline_keyboard, **kwargs):
# Required
self.inline_keyboard = inline_keyboard
def to_dict(self):
data = super(InlineKeyboardMarkup, self).to_dict()
data['inline_keyboard'] = []
for inline_keyboard in self.inline_keyboard:
data['inline_keyboard'].append([x.to_dict() for x in inline_keyboard])
return data
package promotionSystem;

import javax.swing.*;

public class ServidorTest {
    public static void main(String[] args) {
        try {
            Servidor server = new Servidor();
        } catch (Exception e) {
            JOptionPane.showMessageDialog(null, "Error starting the server", "Error", JOptionPane.ERROR_MESSAGE);
        }
    }
}
\section{Introduction}
Text-to-Image Synthesis, also called Conditional Image Generation, is the process of generating a photo-realistic image from a textual description. It is a challenging task and it is revolutionizing many real-world applications. For example, starting from a Digital Library of adventure books it could be possible to enrich the reading experience with computer-generated images of the locations explored in the story, while a Digital Library of recipe books may be enriched with images representing the steps involved in a given recipe. In addition, such images may be used to support Information Retrieval systems based on visual similarity. Due to its great potential and usefulness, it has raised a lot of interest in the research fields of Computer Vision, Natural Language Processing, and Digital Libraries.
One of the main approaches used for the text-to-image task involves the use of Generative Adversarial Networks (GAN) \cite{GANs}: starting from a given textual description, GANs can be conditioned on text \cite{Reed2016}, \cite{Reed2016LearningDraw}, \cite{StackGAN++} in order to generate high-quality images that are closely related to the text meaning.
To condition a GAN on text, captioned image datasets are needed, meaning that one (or more) captions must be associated with each image. Despite the large number of uncaptioned image datasets, the number of captioned datasets is limited. For example, the LSUN-bedroom dataset contains $\sim3,000,000$ images \cite{LSUN}, but it does not contain the associated captions. This may lead to a difficulty in training a conditional GAN to generate bedroom images related to a given textual description, such as ``a bedroom with blue walls, white furniture and a large bed''.
In this paper we propose an innovative, though quite simple, approach to address this issue. First of all, a captioning system (that we call Image Captioning Module) is trained on a generic captioned dataset and used to generate a caption for the uncaptioned images. Then, the conditional GAN (that we call GAN Module) is trained on both the input image and the ``machine-generated'' caption. A high-level representation of the architecture is shown in Figure \ref{pipeline_illustrator_3}. To evaluate the results, the performance of the GAN using ``machine-generated'' captions is compared with the results obtained by the unconditional GAN. To test and evaluate our pipeline, we use the LSUN-bedroom \cite{LSUN} dataset.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/pipeline_illustrator_3.png}
\caption{Pipeline: images are fed to a captioning system that outputs its captions. The generated captions and the images are then given as input for training the conditional GAN.}
\label{pipeline_illustrator_3}
\end{figure}
The results obtained in the experiments are very preliminary yet promising. According to our study, the GAN Module does not learn how to produce meaningful images with respect to the caption meaning, and we hypothesize that this is due to the ``machine-generated'' captions we use to condition the GAN Module. The Image Captioning Module is trained on the COCO dataset \cite{MSCOCO}, which contains captioned images for many different classes of objects, and intuitively this should lead the Image Captioning Module to learn how to produce captions for bedroom images as well. Despite being able to produce the desired captions, we notice that the ``machine-generated'' captions are often too similar to one another and insufficiently detailed for different bedroom images. The last section of the paper proposes some approaches that can deal with these problems.
\section{Related Work}
In 2014, Goodfellow et al.\ introduced Generative Adversarial Networks (GAN) \cite{GANs}, a generative model framework that consists in training simultaneously two models: a generator network and a discriminator one. The generator network has the task of generating images as real as possible, while the discriminator network has to distinguish the generated images from the real ones. Generative models are trained to implicitly capture the statistical distribution of training data; once trained, they can synthesize novel data samples, which can be used in the tasks of semantic image editing \cite{generative-visual}, data augmentation \cite{data-augmentation} and style transfer \cite{CycleGAN}.
GANs can be trained to sample from a given data distribution, in such case a random vector is provided as input to the generator. Otherwise, as in the case of text-to-image synthesis, they can be trained conditionally, meaning that an additional variable is provided as input to control the generator output. In certain formulations the discriminator observes the conditioning variable too, during training. In the literature, several possibilities were tested for the variables used to condition a GAN: attributes or class labels (e.g.\ \cite{Chen2016}, \cite{Odena2016}), images (e.g.\ for the tasks of photo editing \cite{generative-visual} and domain transfer \cite{Isola2016}).
Several methods have been developed to generate images conditioned on text. Mansimov et al.\ \cite{Mansimov2015} built an AlignDRAW model trained to learn the correspondence between text and generated images. Reed et al.\ in \cite{Reed2017} used PixelCNN to generate images using both text descriptions and object location constraints. Nguyen et al.\ \cite{Nguyen2016} used an approximate Langevin sampling approach to generate images conditioned on text, but it required an inefficient iterative optimization process. In \cite{Reed2016}, Reed et al.\ successfully generated $64 \times 64$ images of birds and flowers conditioned on text descriptions. In their follow-up work \cite{Reed2016LearningDraw}, they were able to generate $128 \times 128$ images by using additional annotations on object part locations. Denton et al.\ in \cite{Denton2015} proposed the Laplacian pyramid framework (LAPGAN), which is composed of a series of GANs: a residual image is conditioned at each level of the pyramid on the image of the previous stage to produce an image for the next stage. Karras et al.\ \cite{Karras2017} use a similar progressive approach by incrementally adding more layers to the generator and the discriminator. \cite{StackGAN} and \cite{StackGAN++} suggest the use of a so-called sketch-refinement process, where images are first generated at low resolution using a GAN conditioned on the textual description, and then refined with another GAN conditioned on both the image generated at the previous step and the input textual description. \cite{hong2018} and \cite{li2019} infer a semantic label map by predicting bounding boxes and object shapes from the text, and then synthesize an image conditioned on the layout and the text description.
A recent work by Qiao et al.\ \cite{qiao2019} uses a three-step approach where it first computes word- and sentence-level embedding from the given textual description, then it uses the embeddings to generate images in a cascaded architecture, and finally starting from the image generated at the previous step it tries to regenerate the original textual description, in order to semantically align with it. Although several different state-of-the-art architectures may be chosen for the task, such as HDGAN \cite{HDGAN}, AttGAN \cite{AttGAN} and BigGAN \cite{BigGAN}, in our pipeline we decided to use StackGAN-v2 \cite{StackGAN++} as the conditional GAN component, given the availability of its code on GitHub.
Recently, several impressive results \cite{ren2017}, \cite{Zhang2017RL}, \cite{self-critical} were obtained for the Image Captioning (or image-to-text) task, which deals with the generation of a caption describing a given image and the objects taking part in it. It is an important task that raises a lot of interest in the Computer Vision and Natural Language Processing research fields. A recent and comprehensive survey of the task is provided by Hossain et al.\ in \cite{captioning_survey}. Some of the approaches used for this task involve the use of Encoder/Decoder networks and Reinforcement Learning techniques.
The encoder/decoder paradigm for machine translation using recurrent neural networks (RNNs) \cite{encoder-decoder-RNN} inspired \cite{Karpathy2017}, \cite{show-and-tell} to use a deep convolutional neural network to encode the input image, and a Long Short-Term Memory (LSTM) \cite{long-short-memory} RNN decoder to generate the output caption.
Given the unavailability of labeled data, recent approaches to the image captioning task involve the use of reinforcement learning and unsupervised learning-based techniques. \cite{ren2017} and \cite{Zhang2017RL} use actor-critic reinforcement learning methods, where a ``policy network'' (the actor) is trained to predict the next word based on the current state, whereas a ``value network'' (the critic) is trained to estimate the reward of each generated word. These techniques overcome the need to sample from the policy's (actor's) action space, which can be enormous, at the expense of estimating future rewards.
Another approach, used by Ranzato et al.\ in \cite{teacher-forcing}, consists in applying the REINFORCE algorithm \cite{statistical-gradient}. A limitation of this algorithm consists in the requirement of a context-dependent normalization to tackle the high variance encountered when using mini-batches.
The approach we are following uses Self-Critical Sequence Training (SCST) \cite{self-critical} which is a REINFORCE algorithm that utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences: doing so, it does not need neither to estimate the reward signal nor the normalization.
\section{Our Approach}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/arch4.png}
\caption{Our pipeline: captioned images are used to train the Image Captioning Module; uncaptioned images are then captioned through the Trained Image Captioning Module and both the image and the generated captions are used to train the GAN Module; finally, the Trained GAN Module is used to generate an image based on an input caption.}
\label{pipeline}
\end{figure}
We propose a pipeline whose goal is to generate images by conditioning on ``machine-generated'' captions. This is fundamental when image captions are not available for a specific domain of interest. Thus, the proposed solution involves the use of a generic captioned dataset, such as the COCO dataset, to make the Image Captioning Module capable of generating captions for a specific domain.
To do so, we want to explore the possibility of using an automatic system to generate textual captions for the images and use them for the training of a Generative Adversarial Network.
To achieve our goal, we built a pipeline composed of an Image Captioning Module and a GAN Module, as shown in Figure \ref{pipeline}. First of all, the Image Captioning Module is trained on a generic captioned dataset to generate multiple captions for each input image. Then, real images are given as input to the Trained Image Captioning Module, which outputs multiple captions for each image. The generated captions, together with the images, are then fed to the GAN Module, which learns to generate images conditioned on the ``machine-generated'' captions. By feeding the GAN with multiple captions per image, the GAN can better learn the correspondence between images and captions.
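In outline, the captioning stage of this pipeline reduces to pairing each uncaptioned image with several machine-generated captions before GAN training (a minimal pure-Python sketch with a stub captioning model; the function names are ours, not the paper's):

```python
def build_training_pairs(uncaptioned_images, caption_model, n_captions=5):
    """Caption each image and pair it with its generated captions.

    `caption_model` is any callable image -> list of candidate captions
    (in the pipeline this role is played by the trained Image Captioning
    Module); the resulting (image, caption) pairs are what the GAN Module
    is trained on.
    """
    training_pairs = []
    for image in uncaptioned_images:
        captions = caption_model(image)[:n_captions]
        for caption in captions:
            training_pairs.append((image, caption))
    return training_pairs
```

Each image contributes up to `n_captions` pairs, so the GAN sees several text conditions for the same visual content.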
In the following sections,
we detail the two modules used in our pipeline: the Image Captioning Module and the GAN Module.
\subsection{Image Captioning Module}
The goal of the Image Captioning Module is to generate a natural language description of an image. Good performance in this task is obtained by learning a model which is able to first understand the scene described in the image, the objects taking part in it and the relationships between them, and then to compose a natural language sentence describing the whole picture. Given the complexity of such a task, it is still an open challenge in the fields of Natural Language Processing and Computer Vision.
In our pipeline, we are implementing our Image Captioning Module in a similar way as the one proposed in \cite{self-critical}, meaning that we also use a captioning system based on FC models, which is then optimized through Self-Critical Sequence Training (SCST).
Typical deep learning models used for the Image Captioning task are trained with the ``teacher-forcing'' technique, which consists in maximizing the likelihood of the next ground-truth word given the previous ground-truth word. This has been shown to generate some mismatches between the training and the inference phase, known as ``exposure bias''. Moreover, the metrics used during the testing phase are non-differentiable (such as BLEU and CIDEr), meaning that the captioning model cannot be trained to directly optimize them. To overcome these problems, Reinforcement Learning techniques such as the REINFORCE algorithm have been used. SCST is a variation and an improvement of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. This means that it is forced to improve the performance of the model under the inference algorithm used at test time. Practically, SCST has much lower variance than REINFORCE and can be more effectively trained on mini-batches of samples using SGD. Moreover, the SCST system achieved state-of-the-art performance by directly optimizing the test metrics of the MSCOCO task.
\subsection{GAN Module}
The GAN Module has the major role of learning to generate images by conditioning on the ``machine-generated'' captions. In particular, we are using StackGAN-v2 \cite{StackGAN++} as our GAN Module.
StackGAN-v2 consists of a multiple stage generation process, where high-resolution images are obtained by initially generating low-resolution images which are then refined in multiple steps. It consists in a single end-to-end network composed by multiple generators and discriminators in a tree-like structure. Different branches of the tree generate images of different resolutions: at branch $i$, the generator $G_i$ learns the image distribution $p_{G_i}$ at that scale, while the discriminator $D_i$ estimates the probability of a sample being real.\\
Since we are more interested in the conditional case, we are not reporting the loss function used by the generator and the discriminator in the unconditional setting, for which more details can be found in \cite{StackGAN++}.\\
The discriminator $D_i$ takes a real image $x_i$ or a fake sample $s_i$ as input and is trained to classify them as real or fake by minimizing the cross entropy loss:
\begin{equation}
\label{loss_d}
\begin{aligned}
\mathcal{L}_{D_i} &= \underbrace{-\mathbb{E}_{x_i \sim p_{data_i}}[\log D_i(x_i)] - \mathbb{E}_{s_i \sim p_{G_i}}[\log(1-D_i(s_i))]}_{\textrm{unconditional loss}} + {} \\
&\underbrace{-\mathbb{E}_{x_i \sim p_{data_i}}[\log D_i(x_i,c)] - \mathbb{E}_{s_i \sim p_{G_i}}[\log(1-D_i(s_i,c))]}_{\textrm{conditional loss}}
\end{aligned}
\end{equation}
where $x_i$ is an image from the true image distribution $p_{data_i}$ at the $i^{th}$ scale, $s_i$ is from the model distribution $p_{G_i}$ at the same scale.
While StackGAN-v2 \cite{StackGAN++} follows the approach of Reed et al.\ \cite{learning-deep-representations} to pre-train a text encoder that extracts visually-discriminative text embeddings of the given description, in our case we use Skip-Thought \cite{skip-thought}, which works at the sentence level, to generate the text embeddings ($c$ in Equations \ref{loss_d} and \ref{joint_gen_loss}). Sentences that share semantic and syntactic properties are mapped to similar vector representations \cite{skip-thought}.
The multiple discriminators are trained in parallel each one for a different scale, while the generator is instead optimized to jointly approximate multi-scale image distributions $p_{data_0}, p_{data_1}, ..., p_{data_{m-1}}$ by minimizing the following loss function:
\begin{equation}
\label{joint_gen_loss}
\mathcal{L}_G = \sum_{i=1}^m \mathcal{L}_{G_i}, \quad \mathcal{L}_{G_i} = \underbrace{- \mathbb{E}_{s_i \sim p_{G_i}}[\log D_i(s_i)]}_{\textrm{unconditional loss}} + \underbrace{- \mathbb{E}_{s_i \sim p_{G_i}}[\log D_i(s_i,c)]}_{\textrm{conditional loss}}
\end{equation}
where $\mathcal{L}_{G_i}$ is the loss function for approximating the image distribution at the $i^{th}$ scale.
The unconditional loss is used to determine whether the image is real or fake, while the conditional loss is used to determine if the image and the condition match.
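To make Equations \ref{loss_d} and \ref{joint_gen_loss} concrete, here is a single-sample, pure-Python sketch of the per-branch losses (the scalar discriminator outputs in $(0,1)$ are hypothetical stand-ins for the expectations over real and generated samples):

```python
import math

def discriminator_loss(d_real_u, d_fake_u, d_real_c, d_fake_c):
    """Single-sample estimate of the discriminator loss in Eq. (1).

    d_real_u / d_fake_u: D_i's unconditional scores on a real and a fake image;
    d_real_c / d_fake_c: the corresponding conditional scores given embedding c.
    """
    uncond = -math.log(d_real_u) - math.log(1.0 - d_fake_u)
    cond = -math.log(d_real_c) - math.log(1.0 - d_fake_c)
    return uncond + cond

def generator_loss(d_fake_u, d_fake_c):
    """Single-sample estimate of the per-branch generator loss L_{G_i} in Eq. (2)."""
    return -math.log(d_fake_u) - math.log(d_fake_c)
```

As expected, the discriminator loss shrinks as it scores real images near 1 and fakes near 0, while the generator loss shrinks as the discriminator is fooled into scoring fakes near 1 on both branches.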
\section{Experimental Results}
In this section, we present the preliminary results of the experiments involving the proposed pipeline. The Image Captioning Module was trained on the COCO dataset \cite{MSCOCO}, which contains $120,000$ generic images tagged with categories and captioned by five different sentences each.
The uncaptioned dataset that we considered is the LSUN \cite{LSUN} dataset, which consists in around one million labeled images for each of the 10 scene categories and 20 object categories \cite{LSUN}. From the LSUN dataset, we first select the $\sim3,000,000$ images tagged with the ``bedroom'' scene category and from that set a subset of $120,000$ images is selected: $80,000$ are then used to train the GAN and $40,000$ as test set. Later on in this paper, the selection of the $\sim3,000,000$ images tagged with the ``bedroom'' scene category is called ``LSUN-bedroom''.
A typical metric used to evaluate both the quality and the diversity of generated images is the Inception Score \cite{Salimans2016}. Unfortunately, the images in the LSUN dataset are very different from those used by ImageNet, and it has been shown that the Inception Score is not a good indicator of the quality of generated images in this setting \cite{StackGAN++}. For this reason we decided not to report the obtained scores.
We performed three experiments over the considered dataset.
The first experiment consists in training the GAN Module on the whole LSUN-bedroom dataset ($\sim3,000,000$ images). This is done for two reasons: first, it serves as a baseline for the next experiment; second, we compare the results obtained with our computing facilities with the results obtained in \cite{StackGAN++}, since with our graphics card we are limited to a lower batch size of 16. Figure \ref{bedroom_all} shows some examples of generated images, and it is possible to see that the quality of the generated images is similar to those reported in \cite{StackGAN++}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{bed_all.png}
\caption{Examples of images generated by the GAN Module trained on the whole LSUN-bedroom dataset.}
\label{bedroom_all}
\end{figure}
Because of computational issues, we decided to explore and understand how the GAN Module performs with fewer training images. In the second experiment, the training of the GAN Module without conditioning is done on a subset of LSUN-bedroom consisting of $120,000$ images. Some of the results obtained in this experiment are shown in Figure \ref{bedroom_split}. Although the quality of the generated images is slightly reduced, it is possible to see that the semantic content is still clear and defined.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{LSUN_all.png}
\caption{Examples of images generated by the GAN Module trained on a part of the LSUN-bedroom dataset.}
\label{bedroom_split}
\end{figure}
Finally, to test our pipeline, we used the Image Captioning Module to generate captions for the images contained in the considered subset of the LSUN-bedroom dataset. Then, the GAN Module was trained on these same images and conditioned by the ``machine-generated'' captions. Some examples of the preliminary results that we obtained are shown in Figure \ref{bedroom_split_captions}. After a few epochs, the generator produces images that look more and more like Figure \ref{bedroom_split_captions}(b). We suspect the problem is due to the similarity of the ``machine-generated'' captions: the LSUN-bedroom dataset does not come with captions and thus the Image Captioning Module is trained on a generic dataset (COCO) and not for that specific dataset. Because of this, the Image Captioning Module is unable to produce detailed and varied captions for different bedroom images. Then, the captions are used to yield the embeddings, which are also used as noise by the generator. The fact that the noise is almost always the same could be the cause of the observed problem.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figures/condexp1.png}
\caption{Examples of images generated by the GAN Module trained on a part of the LSUN-bedroom dataset and conditioned on ``machine-generated'' captions.}
\label{bedroom_split_captions}
\end{figure}
\section{Conclusion}
We explored the problem of conditional image generation using Generative Adversarial Networks with machine-generated captions. For this task, we built a pipeline to first generate captions for uncaptioned datasets and then to use the ``machine-generated'' captions to condition a GAN. To test our pipeline, we run experiments on the LSUN-bedroom dataset, which is a subset of the LSUN dataset containing uncaptioned images of bedrooms, and then compare the generated images in the unconditional setting and in the conditional setting where ``machine-generated'' captions are used.
The results observed in the experiments do not achieve success in the task of conditioning with ``machine-generated'' captions. We therefore identify and analyze the obstacles that need to be overcome, and propose possible solutions.
The Image Captioning Module that we trained on the COCO dataset seems to generate captions that are too similar to each other. This is probably related to the fact that more diverse and detailed captions are needed during training in order to achieve significant improvements. During a subsequent review of works on captioning, we found a work from Shetty et al.\ \cite{Shetty2017} that promises to generate genuinely different captions instead of variations of the same caption. This result is achieved by using GANs for image captioning instead of other traditional methods. An open question is whether, with a bigger dataset, the GAN could learn the image--caption correspondence even when the captions for each image are very similar.
A hybrid approach could make our proposed method work: humans would write captions for a subset of the dataset, and the obtained captions would then be used to train a captioning system. For generating human captions, crowdsourcing platforms like Amazon Mechanical Turk could be used.
\bibliographystyle{splncs03}
\section{Introduction}
\label{sec:intro}
Keyphrases (herein KPs) are phrases that ``capture the main topic discussed on a given document'' \cite{Turney2000}.
More specifically, KPs are phrases typically one to five words long that appear verbatim in a document, and can be used to briefly summarize its content.
The task of finding such KPs is called Automatic Keyphrase Extraction (herein AKE). It has received a lot of attention in the last two decades \cite{hasan2014} and recently it has been successfully used in many Natural Language Processing (hence NLP) tasks, such as text summarization \cite{Zhang2004} and document clustering \cite{hammouda2005corephrase}, as well as non-NLP tasks such as social network analysis \cite{nart2015content} and user modeling \cite{nart2015modelling}. Automatic Keyphrase Extraction approaches have also been applied in Information Retrieval of relevant documents in digital document archives, which can contain heterogeneous types of items such as books, articles, papers, etc.\ \cite{Staveley1999}.
The first approaches to solve Automatic Keyphrase Extraction were based on supervised machine learning (herein ML) algorithms, like Naive Bayes \cite{Witten:1999:KPA:313238.313437} or C4.5 decision trees \cite{Turney2000}. Since then, several researchers have explored different ML techniques such as Multilayer Perceptrons \cite{lopez2010humb,basaldella2016}, Support Vector Machines \cite{lopez2010humb}, Logistic Regression \cite{basaldella2016,haddoud2014}, and Bagging \cite{Hulth:2003}. Since no algorithm stands out as the ``best'' ML algorithm, authors often test many techniques in a single experiment and then choose the best performing one \cite{basaldella2016,haddoud2014} and/or the least computationally expensive one \cite{lopez2010humb}.
However, AKE algorithms based on unsupervised approaches have been developed over the years as well. For example, Tomokiyo \emph{et al.}~ \cite{tomokiyo2003language} proposed to use a language model approach to extract KPs, and Mihalcea \emph{et al.}~ \cite{mihalcea-tarau:2004:EMNLP} presented a graph-based ranking algorithm to find keyphrases. Nevertheless, supervised approaches have been the best performing ones in challenges: for example, \cite{lopez2010humb}, a supervised approach, was the best performing algorithm in the SEMVAL 2010 Keyphrase Extraction Task \cite{kim2010semeval}.
In recent years, most attention has been devoted to the \emph{features} used in these supervised algorithms. The number of features used can range from just two \cite{Witten:1999:KPA:313238.313437} to more than 20 \cite{haddoud2014}. These features can be divided into categories according to the kind of knowledge they encode into the model:
\begin{itemize}
\item \emph{statistical knowledge}: number of appearances of the KP in the document, TF-IDF, number of sentences containing the KP, etc.;
\item \emph{positional knowledge}: position of the first occurrence of the KP in the document, position of the last occurrence, appearance in the title, appearance in specific sections (abstract, conclusions), etc.;
\item \emph{linguistic knowledge}: part-of-speech tags of the KP \cite{Hulth:2003}, anaphoras pointing to the KP \cite{basaldella2016}, etc.;
\item \emph{external knowledge}: presence of the KP as a page on Wikipedia \cite{degl2014new} or in specialized domain ontologies \cite{lopez2010humb}, etc.
\end{itemize}
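As a concrete illustration of the first two categories, a minimal sketch of two classic hand-crafted features — term frequency (statistical) and relative first-occurrence position (positional) — might look as follows; the function and its names are ours for illustration, not taken from any of the cited systems:

```python
def candidate_features(candidate, tokens):
    """Two classic hand-crafted AKE features for one candidate phrase:
    term frequency and the relative position of its first occurrence."""
    cand = candidate.lower().split()
    n, hits, first = len(cand), 0, None
    for i in range(len(tokens) - n + 1):
        if [t.lower() for t in tokens[i:i + n]] == cand:
            hits += 1
            if first is None:
                first = i
    # Position is normalized to [0, 1); unseen candidates get 1.0.
    position = first / len(tokens) if first is not None else 1.0
    return {"tf": hits, "first_position": position}

doc = "neural networks learn features ; neural networks need data".split()
print(candidate_features("neural networks", doc))
# {'tf': 2, 'first_position': 0.0}
```

A full supervised system would compute many such features per candidate and feed them to a classifier.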
However, given the wide variety of lexical, linguistic and semantic aspects that can contribute to defining a keyphrase, it is difficult to design hand-crafted features, and even the best performing algorithms hardly reach F1-scores of 50\% on the most common evaluation sets \cite{Hulth:2003,kim2010semeval}. For this reason, AKE is still far from being a solved problem in the NLP community.
In recent years, Deep Learning techniques have shown impressive results in many Natural Language Processing tasks, e.g., Named Entity Recognition, Automatic Summarization, Question Answering, and so on \cite{Lample2016,rush2015neural,palangi2016deep,tan2015lstm}.
In Named Entity Recognition, for example, researchers have proposed several neural network architectures \cite{Lample2016}.
To the best of our knowledge, only recently have the first attempts to address the AKE task with Deep Learning techniques been presented \cite{ZhangWGH16,meng2017deep}. In \cite{ZhangWGH16}, the authors present an approach based on Recurrent Neural Networks, specifically designed for a particular domain, i.e., Twitter data.
On the other hand, in \cite{meng2017deep} the authors use more datasets to evaluate their RNN for keyphrase extraction, and they propose a study of the keyphrases \emph{generated} by their network as well.
In this paper, we present a Deep Learning architecture for AKE. In particular, we investigate an approach based on a Bidirectional Long Short-Term Memory RNN (hence Bi-LSTM), which is able to exploit both the previous and the future context of a given word. Since it does not require specific features carefully optimized for a particular domain, our system can be applied to a wide range of scenarios.
To evaluate the proposed method, we conduct experiments on the well-known INSPEC dataset \cite{Hulth:2003}. The experimental results show that the proposed solution performs significantly better than competitive methods.
\section{Proposed Approach}
To extract KPs we implemented the following steps, as presented in Figure \ref{fig:model}.
First, we split the document into sentences, and then we tokenize the sentences in words using NLTK \cite{bird2009natural}.
Then, we associate a word embedding representation that maps each input word into a continuous vector representation.
Finally, we feed our word embeddings into Bi-LSTM units, which can effectively deal with the variable lengths of sentences and are able to analyze word features and their context (for example, distant relations between words).
The Bi-LSTM is connected to a fully connected hidden layer, which in turn is connected to a softmax output layer with three neurons for each word.
Between the Bi-LSTM layer and the hidden layer, and between the hidden layer and the output layer, we use dropout \cite{srivastava2014dropout} to prevent overfitting.
As in the techniques used for Named Entity Recognition, the three neurons are mapped to three possible output classes: \texttt{NO\_KP}, \texttt{BEGIN\_KP}, \texttt{INSIDE\_KP}, which respectively mark tokens that are \emph{not} keyphrases, the \emph{first} token of a keyphrase, and the other tokens of a keyphrase.
For example, if our input sentence is ``\textit{We train a neural network using Keras}'', and the keyphrases in that sentence are ``\textit{neural network}'' and ``\textit{Keras}'', the tokens' classes will be \texttt{We/NO\_KP train/NO\_KP a/NO\_KP neural/BEGIN\_KP network/INSIDE\_KP using/NO\_KP Keras/BEGIN\_KP}.
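The labelling scheme above can be sketched as a small function; this is an illustrative reconstruction of the tagging step, not the authors' released code:

```python
def bio_tag(tokens, keyphrases):
    """Assign NO_KP / BEGIN_KP / INSIDE_KP labels to each token.

    Every exact token-level match of a keyphrase is labelled; overlapping
    keyphrases would simply overwrite each other in this simple sketch.
    """
    labels = ["NO_KP"] * len(tokens)
    for kp in keyphrases:
        kp_tokens = kp.split()
        n = len(kp_tokens)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == kp_tokens:
                labels[i] = "BEGIN_KP"
                for j in range(i + 1, i + n):
                    labels[j] = "INSIDE_KP"
    return labels

tokens = "We train a neural network using Keras".split()
print(bio_tag(tokens, ["neural network", "Keras"]))
# ['NO_KP', 'NO_KP', 'NO_KP', 'BEGIN_KP', 'INSIDE_KP', 'NO_KP', 'BEGIN_KP']
```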
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./images/Framework.pdf}
\caption{Overview of the proposed system.}
\label{fig:model}
\end{figure*}
\subsection{Word Embeddings}
The input layer of our model is a vector representation of the individual words contained in the input document.
Several recent studies \cite{collobert2011natural,mikolov2013distributed} showed that such representations, called word embeddings, are able to represent the semantics of words better than a ``one hot'' encoding when trained on large corpora. However, the datasets for AKE are relatively small, so it is difficult to train word embeddings that capture word semantics. Hence, we adopt Stanford's GloVe embeddings, which are trained on 6 billion words extracted from Wikipedia and Web texts \cite{pennington2014glove}.
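A minimal sketch of the lookup step is shown below. The two-entry `glove` table is a hypothetical stand-in for the published pre-trained files (the vector values are invented), and the zero-vector fallback for out-of-vocabulary words is one common choice, not necessarily the one used in our system:

```python
def embed_tokens(tokens, glove, dim=100):
    """Map tokens to pre-trained vectors; out-of-vocabulary words get zeros."""
    zero = [0.0] * dim
    return [glove.get(token.lower(), zero) for token in tokens]

# Tiny stand-in for the real 100-dimensional GloVe table (values are made up):
glove = {"neural": [0.1] * 100, "network": [0.2] * 100}
vecs = embed_tokens(["Neural", "network", "Keras"], glove)
print(len(vecs), vecs[2][:3])  # 3 [0.0, 0.0, 0.0]
```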
\subsection{Model Architecture}
Let $\{x_1, \dots, x_n\}$ be the word embeddings representing the input tokens; a Recurrent Neural Network (hence RNN) computes the output vector $y_t$ of each token $x_t$ by iterating the following equations from $t=1$ to $n$:
\begin{align}
\label{eq:recurrent}
h_t &= H(W_{xh} x_t + W_{hh} h_{t-1} + b_h) \\
y_t &= W_{hy} h_t + b_y
\end{align}
where $h_t$ is the hidden vector sequence, $W$ denotes weight matrices (for example, $W_{xh}$ is the matrix of the weights connecting the input layer and the hidden layer), $b$ denotes bias vectors, and $H$ is the activation function of the hidden layer. Equation \ref{eq:recurrent} represents the connection between the previous and the current hidden states, through which RNNs can make use of previous context.
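For intuition, the recurrence above reduces, in the scalar case with $H=\tanh$, to the following sketch (the weight values are arbitrary illustrative choices):

```python
import math

def run_rnn(xs, w_xh=0.5, w_hh=0.3, b_h=0.0, w_hy=1.0, b_y=0.0):
    """Scalar RNN: h_t = tanh(w_xh*x_t + w_hh*h_{t-1} + b_h), y_t = w_hy*h_t + b_y."""
    h, ys = 0.0, []
    for x in xs:
        h = math.tanh(w_xh * x + w_hh * h + b_h)  # hidden-state update
        ys.append(w_hy * h + b_y)                 # output at step t
    return ys

# The hidden state carries earlier inputs forward: the output at t=1
# differs depending on what was seen at t=0.
print(run_rnn([1.0, 0.0])[1] != run_rnn([0.0, 0.0])[1])  # True
```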
In practice, however, the RNN is not able to effectively exploit the whole input history due to the \emph{vanishing gradient} problem \cite{Hochreiter01gradientflow}. Hence, a better solution to exploit long-range context is the Long Short-Term Memory (LSTM) architecture \cite{hochreiter1997long}.
The LSTM is conceptually defined like an RNN, but the hidden layer updates are replaced by specific units called memory cells.
Specifically, an LSTM is implemented by the following functions \cite{gers2002learning}:
\begin{align}
i_t &= \sigma (W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) \\
f_t &= \sigma (W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f) \\
c_t &= f_tc_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o) \\
h_t &= o_t \tanh(c_t)
\end{align}
where $\sigma$ is the logistic sigmoid function; $i$, $f$, $o$, and $c$ are the input gate, forget gate, output gate and cell activation vectors; and all $b$ are learned biases.
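A scalar sketch of one memory-cell update following the five equations above; all weights default to zero here purely to keep the example checkable by hand, which is not a realistic initialization:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w=None, b=None):
    """One scalar LSTM step; w and b map names like 'xi', 'hi', ... to weights.

    Any weight or bias not supplied defaults to 0.0, so the gates reduce to
    sigmoid(0) = 0.5 and the update is easy to verify by hand.
    """
    w = w or {}
    b = b or {}
    W = lambda k: w.get(k, 0.0)
    i = sigmoid(W("xi") * x + W("hi") * h_prev + W("ci") * c_prev + b.get("i", 0.0))  # input gate
    f = sigmoid(W("xf") * x + W("hf") * h_prev + W("cf") * c_prev + b.get("f", 0.0))  # forget gate
    c = f * c_prev + i * math.tanh(W("xc") * x + W("hc") * h_prev + b.get("c", 0.0))  # cell state
    o = sigmoid(W("xo") * x + W("ho") * h_prev + W("co") * c + b.get("o", 0.0))       # output gate
    h = o * math.tanh(c)
    return h, c

# With zero weights every gate is sigmoid(0) = 0.5, so the cell simply
# halves its previous content: c_t = 0.5 * c_{t-1}.
h, c = lstm_step(x=0.0, h_prev=0.0, c_prev=1.0)
print(c)  # 0.5
```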
Another shortcoming of RNNs is that they consider only previous context, but in AKE we want to exploit future context as well. For example, consider the phrase ``John Doe is a lawyer; he likes fast cars''. When we first encounter ``\textit{John Doe}'' in the phrase, we still don't know whether he's going to be an important entity; then, we find the word ``\textit{lawyer}'' and the pronoun ``\textit{he}'', which clearly refer to him, stressing his importance in the context. ``\textit{Lawyer}'' and ``\textit{he}'' are called \emph{anaphoras} and the technique to find this contextual information is called \emph{anaphora resolution}, which has been exploited to perform keyphrase extraction in \cite{basaldella2016}.
In order to use future context, our approach adopts a Bidirectional LSTM network \cite{graves2005framewise}. With this architecture we are able to make use of both the past and the future context of a specific word. It consists of two separate hidden layers: it first computes the forward hidden sequence $\overrightarrow{h_t}$; then, it computes the backward hidden sequence $\overleftarrow{h_t}$; finally, it combines $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ to generate the output $y_t$. Letting the hidden states $h$ be LSTM blocks, a Bi-LSTM is implemented by the following functions:
\begin{align}
\overrightarrow{h_t} &= H(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h} \overrightarrow{h}} \overrightarrow{h}_{t-1}+ b_{\overrightarrow{h}}) \\
\overleftarrow{h_t} &= H(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h} \overleftarrow{h}} \overleftarrow{h}_{t-1}+ b_{\overleftarrow{h}}) \\
y_t &= W_{\overrightarrow{h} y} \overrightarrow{h}_t + W_{\overleftarrow{h} y} \overleftarrow{h}_t + b_y
\end{align}
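The forward/backward combination can be sketched as follows; for brevity, the directional recurrences here are plain $\tanh$ units rather than full LSTM blocks, and all weight values are illustrative:

```python
import math

def rnn_states(xs, w_x=0.5, w_h=0.3):
    """Hidden states of a simple left-to-right tanh recurrence."""
    h, hs = 0.0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h)
        hs.append(h)
    return hs

def bi_rnn(xs, w_f=0.5, w_b=0.5, b_y=0.0):
    """y_t combines the forward state at t with the backward state at t."""
    forward = rnn_states(xs)
    backward = rnn_states(xs[::-1])[::-1]  # run right-to-left, then re-align
    return [w_f * f + w_b * b + b_y for f, b in zip(forward, backward)]

# On a palindromic input with equal direction weights the outputs are
# symmetric: position t sees the same left and right context as n-1-t.
ys = bi_rnn([1.0, 2.0, 1.0])
print(abs(ys[0] - ys[2]) < 1e-12)  # True
```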
\begin{table}[h]
\centering
\caption{Performance with different vector sizes of the GloVe Word Embeddings: 50, 100, 200 and 300 (we called them GloVe-(SIZE), respectively).}
\label{table:embeddings}
\begin{tabular}{lccccc}
\hline
\textbf{Embedding} & \textbf{Size} &\textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{Epochs} \\ \hline
GloVe-50 & 50 & 0.331 & 0.518 & 0.404 & 20\\
GloVe-100 & 100 & 0.340 & \textbf{0.578} & \textbf{0.428} & 14\\
GloVe-200 & 200 & 0.352 & 0.539 & 0.426 & 18 \\
GloVe-300 & 300 & \textbf{0.364} & 0.500 & 0.421 & 8 \\ \hline
\end{tabular}
\end{table}
\section{Experimental Results}
We present experiments on a well-known keyphrase extraction dataset: the INSPEC dataset \cite{Hulth:2003}. It is composed of 2000 abstracts in English extracted from journal papers in the disciplines of Computers and Control and Information Technology. It consists of 1000 documents for training, 500 for validation and the remaining 500 for testing. We chose this dataset since it is well known in the AKE community, so there are many other available results to compare with; moreover, it is much bigger than the dataset used in the SEMEVAL 2010 competition \cite{kim2010semeval}, which contains only 144 documents for training, 40 for validation, and 100 for testing.
In order to implement our approach, we used Keras
with Theano \cite{al-rfou2016theano} as
back end, which in turn allowed us to use CUDA
to train our networks using a GPU.
Experiments are run on a GeForce GTX Titan X Pascal GPU. The network is trained to minimize the cross-entropy loss. We train our network using
the Root Mean Square Propagation optimization algorithm \cite{kingma2014adam}
and batch size 32.
After trying different configurations for the network, we obtained the best results with a size of 150 neurons for the Bi-LSTM layer, 150 neurons for the hidden dense layer, and a value of 0.25 for the dropout layers in between.
To test the impact of word embeddings, we perform experiments with the pre-trained Stanford's GloVe embeddings using all the available word embedding sizes, i.e., 50, 100, 200 and 300. The training of the network takes about 30 seconds per full epoch with all the GloVe embeddings. To stop the training, we used Keras' own embedded early stopping rule, which halts training when the training loss does not decrease for two consecutive epochs. The number of epochs required to converge in each of the four settings is displayed in Table~\ref{table:embeddings}, along with the precision, recall and F1-score obtained by our system when trained with the different word embedding sizes. The best results are obtained with an embedding size of 100; however, sizes 200 and 300 obtain very close results in terms of F1-score. The scores show an interesting pattern: precision increases with embedding size, while recall decreases
from size 100 onwards.
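The stopping rule as we have described it can be sketched as follows. This is our paraphrase of the behaviour, not Keras' actual callback implementation, whose options (monitored quantity, minimum delta, patience) differ in detail:

```python
def stop_epoch(losses, patience=2):
    """Return the 1-based epoch after which training halts, or None.

    Training stops once the loss has failed to improve on its best value
    for `patience` consecutive epochs.
    """
    best = float("inf")
    bad = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:
                return epoch
    return None

print(stop_epoch([1.00, 0.90, 0.95, 0.93]))  # 4: no improvement at epochs 3 and 4
```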
Table~\ref{table:comparison} compares the performance, in terms of precision, recall, and F1-score, of our approach with other competitive systems based on both supervised and unsupervised machine learning techniques. The first three systems are the ones presented in \cite{Hulth:2003}, with three different candidate keyphrase generation techniques: n-grams, Noun Phrase (NP) chunking, and patterns.
The fourth system is TopicRank \cite{bougouin2013}, a graph-based keyphrase extraction method that relies on a topical representation of the document. Our proposed solution achieves the best performance in terms of F1-score and recall. Although TopicRank obtains the best precision, its recall is significantly worse than ours; moreover, we are able to obtain better precision when using an embedding size of 200 or 300, albeit with a slightly lower overall F1-score.
Finally, it is worth noting that we perform better than the results presented in \cite{meng2017deep}, which is to the best of our knowledge the only DL-based AKE algorithm evaluated on the INSPEC dataset. In fact, we obtain an F1@10 score of 0.422, while the best F1@10 score obtained by \cite{meng2017deep} is 0.342.
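As an arithmetic sanity check, the F1-score reported for our approach is the harmonic mean of its reported precision and recall:

```python
def f1(precision, recall):
    """F1-score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Headline row for the proposed approach: P = 0.340, R = 0.578
print(round(f1(0.340, 0.578), 3))  # 0.428
```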
\begin{table}[t]
\centering
\caption{Comparison results on INSPEC dataset}
\label{table:comparison}
\begin{tabular}{lccc}
\hline
\textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} \\ \hline
\textbf{Proposed approach} & 0.340 & \textbf{0.578} & \textbf{0.428} \\
\textit{n}-grams with tag \cite{Hulth:2003} & 0.252 & 0.517 & 0.339 \\
NP Chunking with tag \cite{Hulth:2003} & 0.297 & 0.372 & 0.330 \\
Pattern with tag \cite{Hulth:2003} & 0.217 & 0.399 & 0.281 \\
TopicRank \cite{bougouin2013} & \textbf{0.348} & 0.404 & 0.352 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
In this work, we proposed a deep Long Short-Term Memory neural network model to perform automatic keyphrase extraction, evaluating the proposed method on the INSPEC dataset. Since word representation is a crucial step for success, we performed experiments with different pre-trained word representations.
We show that without requiring hand-crafted features, the proposed approach is highly effective and achieves better results with respect to other competitive methods. For the future, we plan to test additional network architectures and to evaluate our algorithms on more datasets, in order to demonstrate its robustness.
\bibliographystyle{splncs03}
Hi. On older Bolt versions there was app.paths. I used it to determine the domain name. I need to use one template for the default domain and another template for the mobile domain. The default domain is domen.com, the mobile one is m.domen.com. Both of them are linked to one folder.
How can I implement this functionality (one template on the default domain and another template on the mobile domain) in the latest version? Or how can I get the domain name (like $_SERVER['HTTP_HOST'] does) in a Twig template?
import { Component, createElement } from 'react';
export const isStateLess = (ReactComponent => ReactComponent.prototype && !ReactComponent.prototype.render);
export function createComponent(MaterialUIComponent, mapProps) {
class InputComponent extends Component {
getRenderedComponent() {
return this.component;
}
render() {
return createElement(MaterialUIComponent, {
...mapProps(this.props),
ref: !isStateLess(MaterialUIComponent)
? el => (this.component = el)
: null
});
}
}
InputComponent.displayName = `ReduxFormMaterialUI${MaterialUIComponent.name}`;
return InputComponent;
}
export const mapError = ({
meta: { touched, error, warning } = {},
input,
...props
}) =>
touched && (error || warning)
? {
...props,
...input,
error: Boolean(error || warning),
helperText: error || warning
}
: { ...input, ...props };
What's stopping Wales competing in Eurovision – and would we do better than the UK? Experts give their views
22 May 2021
Netta Barzilai at the 2019 Eurovision song contest. Picture by Martin Fjellanger, Eurovision Norway, EuroVisionary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The BBC would need to give up being the UK's Eurovision broadcaster before Wales was allowed to compete in its own right, according to experts on the subject.
Wales has already competed in Junior Eurovision in 2018 and 2019 but has always been represented by the United Kingdom at the main event.
Phil Jackson, Associate Head of Media at Edge Hill University and expert on all things Eurovision, said that it would be difficult to see a Welsh entry while the BBC remained the main public service broadcaster in Wales.
"Wales is currently not eligible to be a competitive country as we compete as the United Kingdom," he said.
"Devolved government does not mean separate participation, as the Eurovision Song Contest is populated by the main member public service broadcaster of each country.
"For the UK it's the BBC, although S4C are members of the European Broadcasting Union."
Lynn Kenway of Eurovision Times concurred that while the BBC was in charge Wales would not be represented.
"Although we've seen Wales compete in the Junior Eurovision in 2018 and 2019, I don't see how they can possibly compete in the adult version whilst the BBC is still officially the UK's broadcaster for Eurovision," she said.
"If the BBC ever gave up being the broadcaster, then we may see the possibility of different broadcasters picking it up – as United Kingdom Independent Broadcasting did for Junior Eurovision.
"Then we could see nations entering rather than a United Kingdom entry, but whilst the publicly-funded BBC runs the show, they will want to represent the whole of the United Kingdom."
Dr. Paul Jordan, expert on the Eurovision Song Contest, said that one possible solution was for the nations of the UK to take turns in putting up an act.
"It may technically be possible to devolve the entry each year but I think it would probably still end up being a UK entry," he said.
'Substandard'
The experts also had mixed views on whether a Welsh entry would be more successful on its own than UK entries have been since they last won, in 1997.
A Welsh language entry might help the nation stand out from the crowd, Dr Paul Jordan said.
"I think they may be more interest and intrigue in Wales, particularly if there is a Welsh selection," he said.
"Ukraine won the contest in 2016 with a song in English and Crimean Tatar. It depends on the song and how it comes across on the night.
"I definitely think English language songs have an advantage, but Portugal showed in 2017 that you can still win by singing in your native language – and standing out by doing so."
Phil Jackson however wasn't sure that a Welsh entry would do any better than the UK had done.
"Well, Bonnie Tyler represented the UK in 2013, and as a Welsh performer didn't exactly catch the imagination of the voters!" he said.
"It depends. The UK have done badly as the quality of songs and performers for many years has been substandard, and at the end of the day what the UK has sent hasn't been particularly good.
"It does depend on song and performer, so if Wales could provide something of more interest to the international audience then they might do well."
'Lazy'
Lynn Kenway of Eurovision Times also said that what matters at the end of the day is the song and performer, not politics.
"The most frustrating cliché for me is that no one votes for us because they don't like us," she said. " They don't vote for us because we don't have a good enough song or performer!
"If we send something half decent people vote for it as we saw with Blue and Jade Ewan. We need a new way of doing things at the BBC on everything; the type of song, performer, staging and attitude towards Eurovision.
"Other countries 'get' Eurovision, have modernised and score highly every year. With the amount of musical talent in the UK we should too.
"If we do badly again this year it will allow the same old voices to lazily claim this time it was down to Brexit. That's just not how people vote when 220 million of them sit in front of the TV that Saturday night in May."
Phil Jackson agreed that Brexit was unlikely to be on the voters' minds and they hand out the points.
"It's not about that. It is about being entertained, and I wouldn't have thought people don't vote for us because of Brexit. When you vote you vote for the songs that you like," he said.
"In terms of politics, I think that it operates on the periphery, but the event is not political, and that's why it's light entertainment. The EBU have strict rules on song content, and they do not allow anything political.
"Armenia this year had to withdraw its entry after it emerged it had political undertones.
"The simmering tensions of the relationships and antagonisms between countries don't go away just because they are competing in the same contest, but for a few hours of light entertainment every year it's about being entertained and – cheesy though it sounds – being united through song and forgetting what's going on outside the euro-bubble."
lol just break up with uk and you will be able to take part LOL
Reply to PHIL
If we have our own national rugby and soccer teams we can have our own Eurovision entry – sorry but I no longer count UK as 'us'.
Hannercylch
Reply to Dafydd Evans
Separate representation in rugby & football is a consequence of those sports having been codified when the British Empire was at its zenith — so not necessarily a good precedent!
simon david perkins
Not political? Hmm, how else can we explain the historical voting of Cyprus and Greece, to choose just one example?
Reply to simon david perkins
The Cyprus and Greece example is such a lazy one. Cyprus and Greece almost always know each other's artists every year as their charts are almost identical. They play each other's songs on their radio stations prior to the contest and there is a lot of interest in each other's entries every year. Not to mention quite often the Cypriot entry is from Greece. Unfair? To a union dead set on making enemies with its neighbours perhaps. Political? No.
Last edited 7 months ago by Marios
What the hell is this crap? Armenia was forced to withdraw after being invaded by the illegal state of azerbaijan with the aid of its illegal state best buddies turkey and russia. Get you facts straight before spreading such lies disguised as journalism (if you can call this journalism).
Q: Is it possible to use RAID-Z/Z2 and iSCSI on FreeNAS 8 I am trying to configure FreeNas 8 to expose an RAID-Z or RAID-Z2 volume via iSCSI. I have had limited success. If I delete the RAID-Z volume and simply expose a "physical" disk in the iSCSI setup then everything works no problem. I have 4 * 2TB disks that I wanted to add to the RAID-Z volume.
With the amount of success I'm starting to think that maybe it's not possible to do this (or least not possible from the web GUI). Has anyone else done this before? Is it possible? Is there a trick I haven't discovered?
A: I guess you can't do an Auto size when you create the extent. When I specified a size everything worked as expected. Turning on the log on the web was extremely helpful!
import sys
import os
import shutil
from io import StringIO
from distutils.core import Distribution
from distutils.command.build_ext import build_ext
from distutils import sysconfig
from distutils.tests.support import TempdirManager
from distutils.tests.support import LoggingSilencer
from distutils.extension import Extension
from distutils.errors import (
CompileError, DistutilsSetupError, UnknownFileError)
import unittest
from test import support
from test.support import run_unittest
# http://bugs.python.org/issue4373
# Don't load the xx module more than once.
ALREADY_TESTED = False
def _get_source_filename():
srcdir = sysconfig.get_config_var('srcdir')
return os.path.join(srcdir, 'Modules', 'xxmodule.c')
class BuildExtTestCase(TempdirManager,
LoggingSilencer,
unittest.TestCase):
def setUp(self):
# Create a simple test environment
# Note that we're making changes to sys.path
super(BuildExtTestCase, self).setUp()
self.tmp_dir = self.mkdtemp()
self.sys_path = sys.path, sys.path[:]
sys.path.append(self.tmp_dir)
shutil.copy(_get_source_filename(), self.tmp_dir)
if sys.version > "2.6":
import site
self.old_user_base = site.USER_BASE
site.USER_BASE = self.mkdtemp()
from distutils.command import build_ext
build_ext.USER_BASE = site.USER_BASE
def _fixup_command(self, cmd):
# When Python was build with --enable-shared, -L. is not good enough
# to find the libpython<blah>.so. This is because regrtest runs it
# under a tempdir, not in the top level where the .so lives. By the
# time we've gotten here, Python's already been chdir'd to the
# tempdir.
#
# To further add to the fun, we can't just add library_dirs to the
# Extension() instance because that doesn't get plumbed through to the
# final compiler command.
if (sysconfig.get_config_var('Py_ENABLE_SHARED') and
not sys.platform.startswith('win')):
runshared = sysconfig.get_config_var('RUNSHARED')
if runshared is None:
cmd.library_dirs = ['.']
else:
name, equals, value = runshared.partition('=')
cmd.library_dirs = value.split(os.pathsep)
def test_build_ext(self):
global ALREADY_TESTED
xx_c = os.path.join(self.tmp_dir, 'xxmodule.c')
xx_ext = Extension('xx', [xx_c])
dist = Distribution({'name': 'xx', 'ext_modules': [xx_ext]})
dist.package_dir = self.tmp_dir
cmd = build_ext(dist)
self._fixup_command(cmd)
if os.name == "nt":
# On Windows, we must build a debug version iff running
# a debug build of Python
cmd.debug = sys.executable.endswith("_d.exe")
cmd.build_lib = self.tmp_dir
cmd.build_temp = self.tmp_dir
old_stdout = sys.stdout
if not support.verbose:
# silence compiler output
sys.stdout = StringIO()
try:
cmd.ensure_finalized()
cmd.run()
finally:
sys.stdout = old_stdout
if ALREADY_TESTED:
return
else:
ALREADY_TESTED = True
import xx
for attr in ('error', 'foo', 'new', 'roj'):
self.assertTrue(hasattr(xx, attr))
self.assertEqual(xx.foo(2, 5), 7)
self.assertEqual(xx.foo(13,15), 28)
self.assertEqual(xx.new().demo(), None)
doc = 'This is a template module just for instruction.'
self.assertEqual(xx.__doc__, doc)
self.assertTrue(isinstance(xx.Null(), xx.Null))
self.assertTrue(isinstance(xx.Str(), xx.Str))
def tearDown(self):
# Get everything back to normal
support.unload('xx')
sys.path = self.sys_path[0]
sys.path[:] = self.sys_path[1]
if sys.version > "2.6":
import site
site.USER_BASE = self.old_user_base
from distutils.command import build_ext
build_ext.USER_BASE = self.old_user_base
super(BuildExtTestCase, self).tearDown()
def test_solaris_enable_shared(self):
dist = Distribution({'name': 'xx'})
cmd = build_ext(dist)
old = sys.platform
sys.platform = 'sunos' # fooling finalize_options
from distutils.sysconfig import _config_vars
old_var = _config_vars.get('Py_ENABLE_SHARED')
_config_vars['Py_ENABLE_SHARED'] = 1
try:
cmd.ensure_finalized()
finally:
sys.platform = old
if old_var is None:
del _config_vars['Py_ENABLE_SHARED']
else:
_config_vars['Py_ENABLE_SHARED'] = old_var
# make sure we get some library dirs under solaris
self.assertTrue(len(cmd.library_dirs) > 0)
    def test_user_site(self):
        # site.USER_SITE was introduced in 2.6
        if sys.version < '2.6':
            return

        import site
        dist = Distribution({'name': 'xx'})
        cmd = build_ext(dist)

        # making sure the user option is there
        options = [name for name, short, label in
                   cmd.user_options]
        self.assertTrue('user' in options)

        # setting a value
        cmd.user = 1

        # setting user based lib and include
        lib = os.path.join(site.USER_BASE, 'lib')
        incl = os.path.join(site.USER_BASE, 'include')
        os.mkdir(lib)
        os.mkdir(incl)

        # let's run finalize
        cmd.ensure_finalized()

        # see if include_dirs and library_dirs
        # were set
        self.assertTrue(lib in cmd.library_dirs)
        self.assertTrue(lib in cmd.rpath)
        self.assertTrue(incl in cmd.include_dirs)
    def test_optional_extension(self):
        # this extension will fail, but let's ignore this failure
        # with the optional argument.
        modules = [Extension('foo', ['xxx'], optional=False)]
        dist = Distribution({'name': 'xx', 'ext_modules': modules})
        cmd = build_ext(dist)
        cmd.ensure_finalized()
        self.assertRaises((UnknownFileError, CompileError),
                          cmd.run)  # should raise an error

        modules = [Extension('foo', ['xxx'], optional=True)]
        dist = Distribution({'name': 'xx', 'ext_modules': modules})
        cmd = build_ext(dist)
        cmd.ensure_finalized()
        cmd.run()  # should pass
    def test_finalize_options(self):
        # Make sure Python's include directories (for Python.h, pyconfig.h,
        # etc.) are in the include search path.
        modules = [Extension('foo', ['xxx'], optional=False)]
        dist = Distribution({'name': 'xx', 'ext_modules': modules})
        cmd = build_ext(dist)
        cmd.finalize_options()

        from distutils import sysconfig
        py_include = sysconfig.get_python_inc()
        self.assertTrue(py_include in cmd.include_dirs)

        plat_py_include = sysconfig.get_python_inc(plat_specific=1)
        self.assertTrue(plat_py_include in cmd.include_dirs)

        # make sure cmd.libraries is turned into a list
        # if it's a string
        cmd = build_ext(dist)
        cmd.libraries = 'my_lib'
        cmd.finalize_options()
        self.assertEqual(cmd.libraries, ['my_lib'])

        # make sure cmd.library_dirs is turned into a list
        # if it's a string
        cmd = build_ext(dist)
        cmd.library_dirs = 'my_lib_dir'
        cmd.finalize_options()
        self.assertTrue('my_lib_dir' in cmd.library_dirs)

        # make sure rpath is turned into a list
        # if it's a list of os.pathsep's paths
        cmd = build_ext(dist)
        cmd.rpath = os.pathsep.join(['one', 'two'])
        cmd.finalize_options()
        self.assertEqual(cmd.rpath, ['one', 'two'])

        # XXX more tests to perform for win32

        # make sure define is turned into 2-tuples
        # strings if they are ','-separated strings
        cmd = build_ext(dist)
        cmd.define = 'one,two'
        cmd.finalize_options()
        self.assertEqual(cmd.define, [('one', '1'), ('two', '1')])

        # make sure undef is turned into a list of
        # strings if they are ','-separated strings
        cmd = build_ext(dist)
        cmd.undef = 'one,two'
        cmd.finalize_options()
        self.assertEqual(cmd.undef, ['one', 'two'])

        # make sure swig_opts is turned into a list
        cmd = build_ext(dist)
        cmd.swig_opts = None
        cmd.finalize_options()
        self.assertEqual(cmd.swig_opts, [])

        cmd = build_ext(dist)
        cmd.swig_opts = '1 2'
        cmd.finalize_options()
        self.assertEqual(cmd.swig_opts, ['1', '2'])
    def test_check_extensions_list(self):
        dist = Distribution()
        cmd = build_ext(dist)
        cmd.finalize_options()

        # 'extensions' option must be a list of Extension instances
        self.assertRaises(DistutilsSetupError,
                          cmd.check_extensions_list, 'foo')

        # each element of 'ext_modules' option must be an
        # Extension instance or 2-tuple
        exts = [('bar', 'foo', 'bar'), 'foo']
        self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)

        # first element of each tuple in 'ext_modules'
        # must be the extension name (a string) and match
        # a python dotted-separated name
        exts = [('foo-bar', '')]
        self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)

        # second element of each tuple in 'ext_modules'
        # must be a dictionary (build info)
        exts = [('foo.bar', '')]
        self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)

        # ok this one should pass
        exts = [('foo.bar', {'sources': [''], 'libraries': 'foo',
                             'some': 'bar'})]
        cmd.check_extensions_list(exts)
        ext = exts[0]
        self.assertTrue(isinstance(ext, Extension))

        # check_extensions_list adds in ext the values passed
        # when they are in ('include_dirs', 'library_dirs', 'libraries'
        # 'extra_objects', 'extra_compile_args', 'extra_link_args')
        self.assertEqual(ext.libraries, 'foo')
        self.assertTrue(not hasattr(ext, 'some'))

        # 'macros' element of build info dict must be 1- or 2-tuple
        exts = [('foo.bar', {'sources': [''], 'libraries': 'foo',
                             'some': 'bar', 'macros': [('1', '2', '3'), 'foo']})]
        self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)

        exts[0][1]['macros'] = [('1', '2'), ('3',)]
        cmd.check_extensions_list(exts)
        self.assertEqual(exts[0].undef_macros, ['3'])
        self.assertEqual(exts[0].define_macros, [('1', '2')])
    def test_get_source_files(self):
        modules = [Extension('foo', ['xxx'], optional=False)]
        dist = Distribution({'name': 'xx', 'ext_modules': modules})
        cmd = build_ext(dist)
        cmd.ensure_finalized()
        self.assertEqual(cmd.get_source_files(), ['xxx'])
    def test_compiler_option(self):
        # cmd.compiler is an option and
        # should not be overridden by a compiler instance
        # when the command is run
        dist = Distribution()
        cmd = build_ext(dist)
        cmd.compiler = 'unix'
        cmd.ensure_finalized()
        cmd.run()
        self.assertEqual(cmd.compiler, 'unix')
    def test_get_outputs(self):
        tmp_dir = self.mkdtemp()
        c_file = os.path.join(tmp_dir, 'foo.c')
        self.write_file(c_file, 'void PyInit_foo(void) {}\n')
        ext = Extension('foo', [c_file], optional=False)
        dist = Distribution({'name': 'xx',
                             'ext_modules': [ext]})
        cmd = build_ext(dist)
        self._fixup_command(cmd)
        cmd.ensure_finalized()
        self.assertEqual(len(cmd.get_outputs()), 1)

        if os.name == "nt":
            cmd.debug = sys.executable.endswith("_d.exe")

        cmd.build_lib = os.path.join(self.tmp_dir, 'build')
        cmd.build_temp = os.path.join(self.tmp_dir, 'tempt')

        # issue #5977 : distutils build_ext.get_outputs
        # returns wrong result with --inplace
        other_tmp_dir = os.path.realpath(self.mkdtemp())
        old_wd = os.getcwd()
        os.chdir(other_tmp_dir)
        try:
            cmd.inplace = 1
            cmd.run()
            so_file = cmd.get_outputs()[0]
        finally:
            os.chdir(old_wd)
        self.assertTrue(os.path.exists(so_file))
        so_ext = sysconfig.get_config_var('SO')
        self.assertTrue(so_file.endswith(so_ext))
        so_dir = os.path.dirname(so_file)
        self.assertEqual(so_dir, other_tmp_dir)

        cmd.inplace = 0
        cmd.compiler = None
        cmd.run()
        so_file = cmd.get_outputs()[0]
        self.assertTrue(os.path.exists(so_file))
        self.assertTrue(so_file.endswith(so_ext))
        so_dir = os.path.dirname(so_file)
        self.assertEqual(so_dir, cmd.build_lib)

        # inplace = 0, cmd.package = 'bar'
        build_py = cmd.get_finalized_command('build_py')
        build_py.package_dir = {'': 'bar'}
        path = cmd.get_ext_fullpath('foo')
        # checking that the last directory is the build_dir
        path = os.path.split(path)[0]
        self.assertEqual(path, cmd.build_lib)

        # inplace = 1, cmd.package = 'bar'
        cmd.inplace = 1
        other_tmp_dir = os.path.realpath(self.mkdtemp())
        old_wd = os.getcwd()
        os.chdir(other_tmp_dir)
        try:
            path = cmd.get_ext_fullpath('foo')
        finally:
            os.chdir(old_wd)
        # checking that the last directory is bar
        path = os.path.split(path)[0]
        lastdir = os.path.split(path)[-1]
        self.assertEqual(lastdir, 'bar')
    def test_ext_fullpath(self):
        ext = sysconfig.get_config_vars()['SO']
        # building lxml.etree inplace
        #etree_c = os.path.join(self.tmp_dir, 'lxml.etree.c')
        #etree_ext = Extension('lxml.etree', [etree_c])
        #dist = Distribution({'name': 'lxml', 'ext_modules': [etree_ext]})
        dist = Distribution()
        cmd = build_ext(dist)
        cmd.inplace = 1
        cmd.distribution.package_dir = {'': 'src'}
        cmd.distribution.packages = ['lxml', 'lxml.html']
        curdir = os.getcwd()
        wanted = os.path.join(curdir, 'src', 'lxml', 'etree' + ext)
        path = cmd.get_ext_fullpath('lxml.etree')
        self.assertEqual(wanted, path)

        # building lxml.etree not inplace
        cmd.inplace = 0
        cmd.build_lib = os.path.join(curdir, 'tmpdir')
        wanted = os.path.join(curdir, 'tmpdir', 'lxml', 'etree' + ext)
        path = cmd.get_ext_fullpath('lxml.etree')
        self.assertEqual(wanted, path)

        # building twisted.runner.portmap not inplace
        build_py = cmd.get_finalized_command('build_py')
        build_py.package_dir = {}
        cmd.distribution.packages = ['twisted', 'twisted.runner.portmap']
        path = cmd.get_ext_fullpath('twisted.runner.portmap')
        wanted = os.path.join(curdir, 'tmpdir', 'twisted', 'runner',
                              'portmap' + ext)
        self.assertEqual(wanted, path)

        # building twisted.runner.portmap inplace
        cmd.inplace = 1
        path = cmd.get_ext_fullpath('twisted.runner.portmap')
        wanted = os.path.join(curdir, 'twisted', 'runner', 'portmap' + ext)
        self.assertEqual(wanted, path)
def test_suite():
    src = _get_source_filename()
    if not os.path.exists(src):
        if support.verbose:
            print('test_build_ext: Cannot find source code (test'
                  ' must run in python build dir)')
        return unittest.TestSuite()
    else:
        return unittest.makeSuite(BuildExtTestCase)


if __name__ == '__main__':
    support.run_unittest(test_suite())
This firework is Category 2. Please ensure you read all instructions on the label before firing.
Fireworks cannot be sold to anyone under the age of 18. Please do not be offended if we ask for ID when purchasing or collecting.
Always follow the firework code.
\section*{1. Introduction.} Cyclic dominance of species has been observed, e.g., in lizard populations [Sin]
as well as in bacterial communities [Kerr]. There, three different morphs of lizards, respectively three
strains of bacteria, compete in a non-transitive way: each morph/strain outperforms another but is
in turn dominated by the third. The resulting cyclic coevolutionary dynamics may feature
biodiversity [Kerr], which constitutes a central topic in modern ecology [May, Neal] and environmental
policy [CNRS]. Much interest and effort is therefore devoted to the theoretical understanding of
such systems.
In the context of evolutionary game theory [Hof], the paradigmatic model for cyclic coevolution is
the well-known rock-paper-scissors game, where each strategy beats another but is itself defeated by
a third. The replicator dynamics deterministically describes the system's time-evolution, yielding
oscillatory behavior. Similar results were already obtained in the pioneering work by Lotka [Lot]
and Volterra [Vol], who described fish catches in the Adriatic through nonlinear differential equations,
exhibiting oscillations as well. Despite their success and popularity, as a common shortcoming, these
approaches ignore any form of internal noise arising from
size fluctuations, which are unavoidable for finite populations,
as well as from the stochastic spatial distribution of the reactants. In this article, we will
specifically focus on the effects of finite-size fluctuations on the coevolution of a three-species
model. Actually, the deterministic approaches (tacitly) assume that any finite-size effects should disappear
in the limit of large populations, where they are expected to become valid.
Only recently, the influence of the finite character of populations, as well as the relation between
stochastic models and their deterministic counterparts have been investigated (see e.g. [Tra1, Tra2]). \\
In this article, we consider a paradigmatic microscopic model for cyclic coevolutionary dynamics and study the effects of internal stochasticity as compared to the deterministic predictions. We show that finite-size fluctuations dramatically alter the system's behavior: while the rate equations predict the existence of (neutrally) stable cycles, stochasticity leads the system, for any population size, to the extinction of two species. Technical details of our analysis can be found in [Rei]; here we focus on a more general and intuitive discussion.
\section*{2. The model.} We consider three populations, labeled as $A$, $B$, and $C$, which mutually inhibit each other: $A$ invades $B$, $B$ outperforms $C$, and $C$ dominates over $A$, closing the cycle. For the stochastic dynamics, we consider a finite number $N$ of overall individuals in an ``urn'' [Fel], i.e. in a well-mixed environment. Two individuals may react with each other at certain rates $k_A,~k_B$ and $k_C$ according to the following reaction scheme:
\begin{equation}
A+B\stackrel{k_C}{\longrightarrow} A+A~,\quad
B+C\stackrel{k_A}{\longrightarrow} B+B ~,\quad
C+A\stackrel{k_B}{\longrightarrow} C+C~ .
\label{react}
\end{equation}
Such ``urn models'' are closely related to the Moran process [Mor], which describes stochastic evolution of finite populations with constant fitness.\\
Note that since the reactions (\ref{react}) conserve the total number $N$ of individuals, one is left with only two degrees of freedom.
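Though not part of the original text, the urn dynamics defined by the reactions (\ref{react}) is straightforward to simulate with Gillespie's stochastic simulation algorithm. The sketch below is a minimal illustration; the pair-reaction propensities are scaled by $1/N$ (an assumption we make so that one unit of time corresponds to roughly one update per individual), and all function and variable names are our own:

```python
import random

def gillespie_rps(n_a, n_b, n_c, k_a=1.0, k_b=1.5, k_c=2.0,
                  t_max=50.0, seed=1):
    """Gillespie simulation of the cyclic reactions:
    A+B -> 2A (rate k_C), B+C -> 2B (rate k_A), C+A -> 2C (rate k_B)."""
    rng = random.Random(seed)
    n = n_a + n_b + n_c              # total population, conserved
    t = 0.0
    while t < t_max:
        # propensities of the three reactions (scaled by 1/N)
        w = (k_c * n_a * n_b / n,    # A + B -> A + A
             k_a * n_b * n_c / n,    # B + C -> B + B
             k_b * n_c * n_a / n)    # C + A -> C + C
        w_tot = w[0] + w[1] + w[2]
        if w_tot == 0.0:             # absorbing state: at most one species left
            break
        t += rng.expovariate(w_tot)  # exponentially distributed waiting time
        r = rng.uniform(0.0, w_tot)  # choose which reaction fires
        if r < w[0]:
            n_a, n_b = n_a + 1, n_b - 1
        elif r < w[0] + w[1]:
            n_b, n_c = n_b + 1, n_c - 1
        else:
            n_c, n_a = n_c + 1, n_a - 1
    return n_a, n_b, n_c

final = gillespie_rps(70, 70, 60)
print(final)
```

Every update conserves the total population $N$, in line with the two remaining degrees of freedom noted above.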
\section*{3. Deterministic analysis.} Let us start with the deterministic approach, in which
the mean-values of the densities of species $A,~B$, $C$, denoted respectively $a,~b$, and $c$, evolve according to the so-called rate equations, which are easily obtained and read:
\begin{equation}
\dot{a}=a(k_Cb-k_Bc)~, \quad
\dot{b}=b(k_Ac-k_Ca)~,\quad
\dot{c}=c(k_Ba-k_Ab)~.
\label{RE}
\end{equation}
Due to the conservation of the total density (for convenience, we set $a+b+c=1$) only two of the three equations are independent. In addition, the quantity $K=a(t)^{k_A}b(t)^{k_B}c(t)^{k_C}$ is conserved by Eqs.~(\ref{RE}) [Rei] and, as for mechanical systems with conserved energy, is responsible for cyclic orbits in the phase space.
In Fig.~\ref{simplex_ext} (a), the solutions of Eqs.~(\ref{RE}) are shown in the two-dimensional phase portrait (simplex) for different initial conditions. The flows are cyclic trajectories around the neutrally stable
reactive fixed point $(a^*,b^*,c^*)=(k_A,k_B,k_C)/(k_A + k_B + k_C)$, which corresponds to a coexistent state
with nonvanishing densities of all species. On the contrary, the corners of the simplex
are associated with absorbing states, where only one species survives. In addition, the boundary of the simplex is absorbing: once it has been reached, one species goes extinct and, among the two remaining populations, one dominates the other, driving the system to one of the corners. As a consequence of
the conservation of $K$,
\begin{equation}
\mathcal{R}=\mathcal{N}\sqrt{a^{*k_A}b^{*k_B}c^{*k_C}-K}\quad,
\label{const2}
\end{equation}
is also left invariant by Eqs.~(\ref{RE}). Here, $\mathcal{N}$ is a normalization factor.
As any perturbation to one of the closed orbits of Fig.~\ref{simplex_ext} (a)
gives rise to a different regular cycle, the latter are said to be \emph{neutrally stable}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.85]{simplex_ext.eps}
\caption{(color online) Reproduced from [Rei]. The three-species simplex for reaction rates $k_A=1,~k_B=1.5,~k_C=2$ is drawn in (a). The rate equations predict cycles, which are shown in black. Their linearization around the reactive fixed point is solved in proper coordinates $y_A,y_B$ (blue or dark gray). The red (or light gray) erratic flow is a single trajectory in a finite system ($N=200$), obtained from stochastic simulations. It spirals out from the reactive fixed point, eventually reaching an absorbing state.
The extinction probability $P_{\rm ext}$ is reported in (b)
as function of the rescaled time $u=t/N$. Numerical data are shown for different system sizes $N=100$ (triangles), $200$ (boxes), $500$ (circles). The red (light gray), blue (dark gray), and black curves correspond to analytical results for different values $R=1/\sqrt{3}$ (top), $~1/3$ (bottom), and their average value (middle).
\label{simplex_ext}}
\end{center}
\end{figure}
The quantity $\mathcal{R}$ monotonically increases with the distance from the steady state, where it vanishes. Hence, it provides an efficient measure of the separation between the reactive fixed point and any deterministic cycle. Furthermore, in the vicinity of the stationary state, the linearization of Eqs.~(\ref{RE})
gives an accurate description of those orbits, which become circles in well suited
coordinates ($y_A,~y_B$ in Fig. \ref{simplex_ext} (a)). In the linear regime, $\mathcal{R}$
is the radius of these circles whose characteristic frequency is $\omega_0=\sqrt{\frac{k_Ak_Bk_C}{k_A+k_B+k_C}}$ [Rei].
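Though not in the original article, these deterministic properties are easy to verify numerically. The following Python sketch integrates the rate equations (\ref{RE}) with a classical fourth-order Runge-Kutta step and checks that the total density and the constant of motion $K$ are conserved along the orbit, and that the reactive fixed point is stationary (step size, integration time and initial condition are our own illustrative choices):

```python
K_A, K_B, K_C = 1.0, 1.5, 2.0   # reaction rates, as in Fig. 1

def deriv(s):
    """Right-hand side of the rate equations for the densities (a, b, c)."""
    a, b, c = s
    return (a * (K_C * b - K_B * c),
            b * (K_A * c - K_C * a),
            c * (K_B * a - K_A * b))

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * dt * d for x, d in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * dt * d for x, d in zip(s, k2)))
    k4 = deriv(tuple(x + dt * d for x, d in zip(s, k3)))
    return tuple(x + dt * (d1 + 2 * d2 + 2 * d3 + d4) / 6.0
                 for x, d1, d2, d3, d4 in zip(s, k1, k2, k3, k4))

def invariant(s):
    """The conserved quantity K = a^kA * b^kB * c^kC."""
    a, b, c = s
    return a ** K_A * b ** K_B * c ** K_C

s = (0.3, 0.3, 0.4)          # initial densities, a + b + c = 1
k0 = invariant(s)
for _ in range(20000):       # integrate up to t = 20, a few oscillation periods
    s = rk4_step(s, 1e-3)

fp = (K_A / 4.5, K_B / 4.5, K_C / 4.5)   # reactive fixed point
print(abs(sum(s) - 1.0), abs(invariant(s) - k0))
```

Within numerical accuracy the total density and $K$ are unchanged after several cycles, and the derivative vanishes at $(a^*,b^*,c^*)$.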
\section*{4. Stochastic behavior.}
The stochastic model does not conserve the quantity (\ref{const2}) and, as a consequence,
the cyclic structure of the flows is lost. In fact, when one takes internal noise into account,
the trajectory in the phase space can jump between the deterministic cycles. As an illustration of this fact,
in Fig.~\ref{simplex_ext} we have reported one such stochastic flow. Here, starting from the reactive fixed point, the trajectory soon departs from the latter,
spirals around erratically and deviates from it with time.
At some point, such a flow hits the absorbing boundary and ends
up in one of the absorbing states. This is the generic
scenario followed by the system. Indeed, due to the neutrally stable character of
the deterministic cycles, the stochastic trajectory can perform a `random walk' in the phase portrait
between them. Eventually, the
absorbing boundary is reached and coexistence of all three species is lost. Hence, stochasticity arising from
finite-size effects causes extinction of two species.
In the following, we outline an analytical approach allowing for a quantitative description
of the above behavior, with a proper treatment of the finite-size fluctuations
(see [Rei] for a detailed discussion).
For the sake of simplicity, we assume equal reaction rates ($k_A=k_B=k_C=1$) in the basic reactions (\ref{react}) that define the stochastic process. As usual, the dynamics of this process is
encoded in the related master equation. From the latter, it is fruitful to proceed with
a Kramers-Moyal expansion (see e.g. [Gar]) which leads, up to the second term, to a
Fokker-Planck equation (FPE). To make further progress, we consider a small noise approximation
and linearize the FPE around the reactive fixed point [Rei]. We also exploit the above-mentioned symmetry
of the deterministic cycles in the $y$-variables, shown
in Fig. \ref{simplex_ext}, and introduce the polar coordinates $r,~\phi$. In the latter, the FPE
is recast in the neat form
\begin{equation}
\partial_t P(r,\phi,t)=-\omega_0\partial_\phi P(r,\phi,t)
+\frac{1}{12N}\Big[\frac{1}{r^2}\partial_\phi^2+\frac{1}{r}\partial_r+\partial_r^2\Big]P(r,\phi,t)~.
\label{pol-f-p}
\end{equation}
The interpretation of this equation is simple and enlightening.
The first term on the right-hand-side of (\ref{pol-f-p})
accounts for the deterministic oscillatory dynamics with frequency
$\omega_0$. The second term, which is a Laplacian operator in polar coordinates and is proportional to $1/N$,
encodes the finite-size fluctuations and ensures the (isotropic) diffusive
character of the dynamics. It tells us that fluctuations decrease with increasing system size $N$.\\
Ignoring the absorbing character of the boundary for a moment, the FPE (\ref{pol-f-p})
allows us to compute the mean-square displacement $\langle r^2(t) \rangle$ from the reactive fixed point,
when initially starting from there. In this situation, the initial probability distribution is a delta peak at
the reactive steady state, which broadens in time, remaining a spherically symmetric Gaussian
distribution. Eventually, we obtain $\langle r^2(t) \rangle=t/(3N)$. Thus, larger systems necessitate a longer waiting time to reach a given deviation from the reactive fixed point.
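Though not part of the original text, the scaling $\langle r^2(t)\rangle=t/(3N)$ is easy to probe by sampling. For isotropic diffusion with constant $D=1/(12N)$, each Cartesian displacement after time $t$ is Gaussian with variance $2Dt$, so endpoints can be drawn directly (a shortcut we use instead of simulating individual steps) and should reproduce $\langle r^2\rangle=4Dt=t/(3N)$:

```python
import math
import random

def msd_estimate(n_pop, t, n_samples=20000, seed=2):
    """Sample <r^2(t)> for isotropic 2D diffusion with D = 1/(12*N).

    After time t, the displacement along each axis is Gaussian with
    variance 2*D*t, so r^2 = x^2 + y^2 averages to 4*D*t = t/(3*N)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * t / (12.0 * n_pop))   # sqrt(2 D t) per axis
    acc = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, sigma)
        y = rng.gauss(0.0, sigma)
        acc += x * x + y * y
    return acc / n_samples

n_pop, t = 200, 10.0
est = msd_estimate(n_pop, t)
print(est, t / (3.0 * n_pop))   # the two numbers should nearly agree
```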
\section*{5. Extinction probability.} Taking the absorbing nature of the boundary into account, we now investigate the extinction probability $P_\text{ext}(t)$, which is the probability at time $t$ that the system, which was initially in its reactive steady state, has reached the absorbing boundary and two species
become extinct. Note that an initially spherically symmetric probability distribution, as a delta peak in our case, is left spherically symmetric by the FPE (\ref{pol-f-p}). The dependence on $\phi$ in the latter equation therefore drops out, and only the second term remains, describing isotropic diffusion. Finding $P_\text{ext}(t)$ thus translates into a first-passage problem to a sphere. Note that, due to our linearization at the reactive fixed point, the triangular-shaped boundary is mapped onto a sphere, which is of course inaccurate. However, we are able to account for nonlinearities in a pragmatic manner, as shown below. The solution to the first-passage problem to a sphere is well-known and may e.g. be found in [Red]. We therefore directly obtain the Laplace transform of $P_\text{ext}$, which reads
\begin{equation}
\text{LT}\{P_\text{ext}(u)\}=\frac{1}{s\,I_0\big(R\sqrt{12\,s}\big)}~,
\label{ext_prob_u}
\end{equation}
where $u=t/N$ is the rescaled time. As already found before, increasing the system size $N$ only has the
effect of protracting the time at which extinction occurs. In Fig. \ref{simplex_ext} (b) we compare results from
stochastic simulations (based on the Gillespie algorithm) to our analytical approximations. For the
simulations, different total populations sizes $N$ were considered and,
when rescaling time according to $u=t/N$, the data are seen to collapse on a universal curve, validating the scaling result. As for the analytical curves, nonlinear effects were dealt with
by considering different
distances $R$ to the absorbing boundary. Small/large values of $R$ over/under-estimate the numerical results. However, for an intermediate value of $R$, we obtain an excellent
agreement.
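Though not part of the original text, the first-passage picture can also be probed by brute force. In the rescaled time $u=t/N$ the diffusion constant is $D=1/12$, and a two-dimensional random walk started at the centre of an absorbing disk of radius $R$ has mean exit time $R^2/(4D)=3R^2$. The Monte Carlo sketch below (step size, sample number and the value $R=1/3$ are our own illustrative choices) reproduces this value up to discretization and sampling error:

```python
import math
import random

def mean_exit_time(radius, d=1.0 / 12.0, dt=2e-4, n_walks=1200, seed=3):
    """Average first-passage time of 2D Brownian motion (Euler steps)
    from the centre of an absorbing disk of the given radius."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d * dt)   # per-axis step standard deviation
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        u = 0.0
        while x * x + y * y < radius * radius:
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            u += dt
        total += u
    return total / n_walks

r = 1.0 / 3.0
est = mean_exit_time(r)
print(est, 3.0 * r * r)   # estimate vs. R^2/(4D) = 3 R^2
```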
\section*{6. Conclusions.} We have presented a simple stochastic model for cyclic coevolutionary dynamics. Biodiversity in the form of coexistence of all three species emerges within the deterministic approach, which leads to neutrally stable oscillations. As fluctuations drive the system to the extinction of two species, stochasticity invalidates the rate-equation predictions. We have quantified this behavior by deriving an
appropriate Fokker-Planck equation, using linearization around the reactive steady state and by exploiting the polar symmetry of the system.
\references{CNRS}
{
\item{[CNRS]} Le journal du CNRS, {\bf N. 196} (2006)
\item{[Fel]} W. Feller, {\it An Introduction to Probability Theory and its Application}, 3rd ed., Vol. 1, Wiley, New York, 1968.
\item{[Gar]} C. W. Gardiner, {\it Handbook of Stochastic Methods}, 1st ed., Springer, Berlin, 1983.
\item{[Hof]} J. Hofbauer and K. Sigmund, {\it Evolutionary Games and Population Dynamics}, Cambridge University Press, Cambridge, 1998.
\item{[Kerr]} B. Kerr, M.~A.Riley, M.~W. Feldman, and B.~J.~M. Bohannan, {\it Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors}, Nature
{\bf 418} (2002), 171.
\item{[Lot]} A. J. Lotka, J. Amer. Chem. Soc. {\bf 42} (1920), 1595.
\item{[May]} R. M. May, {\it Stability and Complexity in Model Ecosystems}, Cambridge University Press, Cambridge, 1974.
\item{[Mor]} P. A. P. Moran, {\it Random processes in genetics}, Proc. Camb. Phil. Soc. {\bf 54} (1958), 60.
\item{[Neal]} D. Neal, {\it Introduction to Population Biology}, Cambridge University Press, Cambridge, 2004.
\item{[Red]} S. Redner, {\it A guide to first-passage processes}, Cambridge University Press, Cambridge, 2001.
\item{[Rei]} T. Reichenbach, M. Mobilia, and E. Frey, {\it Coexistence versus extinction in the stochastic cyclic Lotka-Volterra model}, accepted in Physical Review E, {\tt q-bio.PE/0605042}.
\item{[Sin]} B. Sinervo and C. M. Lively, {\it The rock-scissors-paper game and the evolution of alternative male strategies}, Nature {\bf 380} (1996), 240.
\item{[Tra1]} A. Traulsen, J. C. Claussen, and C. Hauert, {\it Coevolutionary Dynamics: From Finite to Infinite Populations}, Phys. Rev. Lett. {\bf 95} (2005), 238701.
\item{[Tra2]} A. Traulsen, J. C. Claussen, and C. Hauert, {\it Coevolutionary dynamics in large, but finite populations}, Phys. Rev. E {\bf 74} (2006), 011901.
\item{[Vol]} V. Volterra, Mem. Accad. Lincei {\bf 2} (1926), 31.
}
\end{document}
'Octomom' Files for Bankruptcy, John Legend Replaces Lionel Richie on ABC's 'Duets,' Woody Harrelson & Matthew McConaughey Teaming Up Again
By Valerie Schirmer at 3:33 pm on Tuesday May 1, 2012
"Octomom" Nadya Suleman Declares Bankruptcy
Just about anyone who has ever heard of Nadya Suleman knows what a hot mess she is.
Better known as the Octomom, Suleman first gained recognition in 2009 when she gave birth to octuplets.
The single mom, who already had six young children, quickly became a media favorite.
Since then she's been in and out of the spotlight with focus on her parenting skills, weight loss, talks of plastic surgery, and talks of being involved in porn.
She's once again in the news, this time for money problems.
Suleman is so far into debt she has decided to file for bankruptcy.
The 36-year-old is claiming she cannot pay her bills, has assets that equal less than $50,000, and has a debt in the range of $500,000 to $1 million.
Among those she supposedly owes are the DMV, Verizon Wireless, and DirecTV—just to name a few.
Not really surprising, huh?
John Legend Replaces Lionel Richie on ABC's "Duets", Which Premieres in May
Yet ANOTHER singing competition show is set to premiere in May. That's right, another reality show that looks for unknown singing talent.
While the premise of Duets might be a little different from the other already popular competition shows, it's the fact that it's starting off with a little controversy that is already setting it apart.
It's been reported that the show is going through a little bit of a cast shakeup.
Apparently, Lionel Richie was scheduled to be one of the judges. But just a few weeks before the show's premiere, the singer has pulled out because of scheduling conflicts.
The good news is that Richie's departure hasn't ruined the show because John Legend has stepped up to take his place.
Other confirmed judges are Jennifer Nettles, Robin Thicke, and Kelly Clarkson.
The four will work to weed out the talent until they find the person who will make the perfect singing partner.
Look for it on ABC starting May 24th.
Woody Harrelson and Matthew McConaughey are Teaming Up in "True Detective"
After starring together in Surfer, Dude and Edtv, Woody Harrelson and Matthew McConaughey may once again be working with each other. That is if the deal that HBO is currently working on goes through.
Should everything go smoothly, the actors will star as detectives in what has been dubbed 'True Detective.'
The show, which was written by The Killing's Nic Pizzolatto, will center on a pair of detectives who cross paths while probing into a 17-year-old case that involves a Louisiana serial killer.
The storyline will begin in 1995, when the murder takes place, and jumps to 2012, when the case is reopened.
Any seasons that will follow the first will have the same premise, but will feature a new storyline and new characters.
Sounds like a good one!
Valerie Schirmer is a freelance writer/editor who has written about a wide variety of topics for newspapers, magazines, and websites. She currently resides in South Jersey with her husband, daughter, and a crazy dachshund.
Clausens Pakhus is a listed half-timbered building in Nysted in the southern part of Lolland, Denmark. The building was erected in 1807 as a grain warehouse by a local merchant. Today it houses the town library.
History
Clausens Pakhus was built in 1807 by the merchant Peter Gommensen Clausen (1762-1811), who used the building as a grain store. Upon Clausen's death in 1811, Martin Sophus Mackeprang took over the merchant business after marrying Clausen's widow.
When first erected, the whole building was in half-timbering, but shortly before 1847 the long sides of the ground floor were rebuilt in solid masonry with cast-iron shutters. In 1851 a further warehouse, known as Bønnelyches Pakhus after the merchant Christian Bønnelyche, was erected close by. Over the years the building passed through several owners.
In 1921 A. Nielsen & Co acquired the whole building, and in 1954 the warehouse was listed. Nysted Municipality bought the building, together with Bønnelyches Pakhus and the area around the buildings, in 1979. For several years it stood unused, but in 1989 it was fitted out as the town's library. In that connection the shutters were replaced with windows to let light into the building.
Description
Clausens Pakhus is built in half-timbering and consists of 16 bays in two storeys. The long sides of the ground floor are in solid masonry, and the plinth is of fieldstone. Both gables are built entirely in half-timbering.
The roof is a hipped, red tile roof. All woodwork, as well as the plinth, is tarred black, while the infill panels are limewashed yellow.
See also
Listed buildings in Guldborgsund Municipality
References
External links
Huse i Nysted. Nysted Bevarelsesforening, revised edition (2008)
Niegaard, Hellen; Fra korn til kultur ("Nysted's new library is housed in a building from 1807, Clausens Pakhus in Adelgade"). Nykøbing Falster Central Library
Listed buildings, structures and installations in Guldborgsund Municipality
Half-timbered buildings in Denmark
Nysted
We love wood because it's durable, renewable, recyclable and beautiful. As one of the largest users of wood in the retail sector, we always look for ways to use it wisely and to source it according to high standards. Our long-term goal is that all wood will come from more sustainable sources, defined as recycled or FSC® certified wood, by 2020.
In 2017, we reached the goal of having 100% wood from more sustainable sources in countries with a history of challenges related to forest management.
Approaching 2020 and the goal of only using wood from more sustainable sources, more and more of our products only contain sustainably sourced wood. We also take great care to use the raw material in a responsible way – the wood is used so that unnecessary waste is minimised in production.
HAVSTA, a storage family made from solid pine that has its roots in Scandinavian design and craftsmanship, is an example of products that only contain sustainably sourced wood.
All wood used in HAVSTA comes from sustainably managed forests that live up to strict requirements covering e.g. environmental, social and economic aspects. The wood is responsibly harvested in pine forests mainly in Russia and Poland.
In the early 2000s, the positive properties of acacia – a very strong and long-lasting wood – were discovered in Malaysia. At the time, it was not used for furniture at all but was mainly grown for the paper industry. Today you'll find acacia wood in ÄPPLARÖ, one of our most durable outdoor furniture series.
All Acacia we use from Vietnam comes from FSC certified plantations. Together with our suppliers, smallholder farmers and partners like WWF, IKEA ensures that acacia is grown in a way that is better for the environment – and the local communities.
Acacia is a rot-resistant and durable wood which works great for outdoor furniture such as ÄPPLARÖ series.
Our acacia comes from certified plantations which are environmentally, socially, and economically sustainable.
IKEA sources wood from 50 different countries. Mostly from Sweden, Poland, Russia, Lithuania and Germany.
Making more from less is part of the IKEA culture. Every piece of wood is cut and shaped so that unnecessary waste is minimised in production. The new PLATSA storage and SKOGSÅ worktop are beautiful evidence of how we use resources wisely while maintaining quality and durability. We constantly look for new and smarter ways to design and build our furniture to make sure you get the best possible product which impacts our planet the least.
PLATSA storage is durable and the quality is enhanced by using the right material at the right place for the right purpose. It uses far less raw material compared to solid constructions thanks to smart design and production solutions. A combination of different natural materials makes it lightweight which also saves on transport and simplifies handling.
SKOGSÅ worktop is made with a thin layer technique where a three-millimetre solid wood surface is placed on particle board. This gives a durable surface that can handle humidity well. Apart from the benefits in the kitchen, the thin layer technique requires less raw material than a solid worktop. Plus it weighs 20% less than solid wood, which reduces emissions in transportation, and it is easier to carry and handle.
Already produced to strict standards, the wood in INDUSTRIELL chair, table and bench is treated with even more care and respect. We made it a point to use as much of each pine tree as we could, keeping imperfections like knots and changes in the grain and colour.
The EKORRE rocking-moose is made from rubberwood from responsibly managed forests in Asia. Instead of letting the wood become firewood for instance, we use it for furniture – and take better care of the planet's resources by doing so.
FSC works to take care of the world's forests through responsible forest management, making sure we have forests for all forever. Forests house over two-thirds of known terrestrial species, and are home to 80% of terrestrial biodiversity. Our work together with WWF and FSC contributes to protecting ecosystems - and people's livelihoods. We are one of the world's largest buyers of FSC certified wood in the retail sector and are also part of founding FSC, together with WWF.
In addition to suppliers meeting our strict IWAY Forestry Standard, the volume of wood from more sustainable sources – recycled wood and wood from forests certified by the FSC – increased to 77% in 2017. We are aiming for 100% by 2020.
So far, we have helped to improve forest management in Europe and Asia and contributed to increasing FSC® certified forest areas by around 35 million hectares (about the size of Germany) in the countries where we work with WWF.
We work with WWF and others to combat illegal logging and promote responsible timber trade. Beginning with five forest projects in seven countries in 2002, today, we collaborate in 14 countries on a variety of projects to support credible forest certification which benefit both people and the environment. The work includes mapping and protecting High Conservation Value Forests to secure biological and social forest values.
In 1983, more than 18,500 hectares of rainforest in Sabah in the north-eastern part of Borneo was destroyed by fire. In 1998 we funded the Sow a Seed project to restore the rainforest. Today 12,500 hectares (an area equal to 26,000 football fields) of rainforest has been brought back to life. Orangutans and baby elephants have rediscovered their natural habitat. And researchers around the world have been offered a unique insight into forest regeneration in what could be the biggest rainforest restoration project in the world. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,081 |
Q: Amazon SES Rails mailer on Heroku: InvalidParameterValue - Missing required header 'From' RoR mailer environment, SES running production on Heroku and development on my notebook.
I get a Missing required header 'From' SES error response from the production deploy on Heroku.
My header definitely has the 'from' field. I know that because I actually get the email in my inbox, so SES sent it but then raised the error anyway.
When I run from the development environment I can (and do) send thousands of emails from my development computer and receive them in my test inboxes; everything works just fine. Deploying the same exact code to Heroku gives that error, but sends the email anyway.
Any help will be really appreciated.
A: This seems to be a bug in the SES beta. I worked around it by turning off delivery-error raising. Not ideal, but I needed to move on, especially since the email does get sent.
in config/production.rb
config.action_mailer.raise_delivery_errors = false
config.action_mailer.delivery_method = :ses
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,649 |
Q: VBA Error on Stream.SaveToFile I'm attempting to modify a VBA script from another post (26486871).
The script will download a Zip file, extract a text file and import the data to Excel.
I don't know VBA so I'll tackle each of the functions one at-a-time.
*
*Create a temp directory with a randomized name..........................Complete
*Download a Zip file from a public server................................Error
*Extract the text file (20MB, tab-delimited).............................Not Yet
*Import the data into the open worksheet (overwrite the existing data)...Not Yet
I'm receiving a Run-time Error with the Stream.SaveToFile:
"Arguments are of the wrong type, are out of acceptable range or are in conflict."
I've tried options 1, 2 and adSaveCreateNotExists. However, the Zip file won't save to the temp folder.
I appreciate any help.
'Main Procedure
Sub DownloadExtractAndImport()
Dim url As String
Dim targetFolder As String, targetFileZip As String, targetFileTXT As String
Dim wkbAll As Workbook
Dim wkbTemp As Workbook
Dim sDelimiter As String
Dim newSheet As Worksheet
url = "http://www.example.com/data.zip"
targetFolder = Environ("TEMP") & "\" & RandomString(6) & "\"
MkDir targetFolder
targetFileZip = targetFolder & "data.zip"
targetFileTXT = targetFolder & "data.txt"
'1 download file
DownloadFile url, targetFileZip
End Sub
Private Sub DownloadFile(myURL As String, target As String)
Dim WinHttpReq As Object
Set WinHttpReq = CreateObject("Msxml2.ServerXMLHTTP")
WinHttpReq.Open "GET", myURL, False
WinHttpReq.send
myURL = WinHttpReq.responseBody
If WinHttpReq.Status = 200 Then
Set oStream = CreateObject("ADODB.Stream")
oStream.Open
oStream.Type = 1
oStream.Write WinHttpReq.responseBody
oStream.SaveToFile targetFile, 1 ' 1 = no overwrite, 2 = overwrite
oStream.Close
End If
End Sub
Private Function RandomString(cb As Integer) As String
Randomize
Dim rgch As String
rgch = "abcdefghijklmnopqrstuvwxyz"
rgch = rgch & UCase(rgch) & "0123456789"
Dim i As Long
For i = 1 To cb
RandomString = RandomString & Mid$(rgch, Int(Rnd() * Len(rgch) + 1), 1)
Next
End Function
A: It is just a typo: it must be target, not targetFile, since targetFile is never declared or assigned, so SaveToFile receives an empty Variant. Please use Option Explicit to prevent errors like this.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,599 |
Clavia may re-manufacture expansion boards for the G2. However, Clavia needs enough people to commit for them to go ahead with it. A discussion has been started at the Electro-Music forums for people to say if they want one.
The Nord Modular G2 OS and the G2 Editor have been updated to version 1.6. This update is also accompanied by a new universal XP- and Vista-compatible USB driver – v2.14. This driver can also be used with the Nord Stage EX, Nord Electro 3, Nord Wave and the Nord C1. The driver is digitally signed which allows users with a Vista 64-bit system to install it.
Stage seventy six as well as a preview of the coming 2.0 OS for the Nord Stage, the anticipated new OS v1.40 for the Nord Modular G2 and OS v2.3 plus two new grand pianos for the Nord Electro.
Now you will be able to experience the Nord Modular G2 demo software. This software will demonstrate a bit of the power of the Nord Modular G2 hardware system.
All of you that are patiently waiting for the G2 1.20-software will unfortunately have to wait a little bit longer. It will be released the 5th of July. | {
"redpajama_set_name": "RedPajamaC4"
} | 36 |
package com.amazonaws.services.ecs.model;
/**
 * Status of an Amazon ECS container agent update on a container instance.
 */
public enum AgentUpdateStatus {
PENDING("PENDING"),
STAGING("STAGING"),
STAGED("STAGED"),
UPDATING("UPDATING"),
UPDATED("UPDATED"),
FAILED("FAILED");
private String value;
private AgentUpdateStatus(String value) {
this.value = value;
}
@Override
public String toString() {
return this.value;
}
/**
* Use this in place of valueOf.
*
* @param value
* real value
* @return AgentUpdateStatus corresponding to the value
*/
public static AgentUpdateStatus fromValue(String value) {
if (value == null || "".equals(value)) {
throw new IllegalArgumentException("Value cannot be null or empty!");
}
for (AgentUpdateStatus enumEntry : AgentUpdateStatus.values()) {
if (enumEntry.toString().equals(value)) {
return enumEntry;
}
}
throw new IllegalArgumentException("Cannot create enum from " + value
+ " value!");
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 799 |
Peter Ferdinand of Austria Peter Ferdinand Salvator Karl Ludwig Maria Joseph Leopold Anton Rupert Pius Pancraz, born 12 May 1874, died 8 November 1948, was an Austrian military officer. He served in the First World War.
He was the son of Ferdinand IV of Tuscany and Alice of Bourbon-Parma.
Married in 1900 to Maria Christina of Bourbon-Two Sicilies (1877-1947).
Children
Gottfried of Austria-Tuscany (1902-1984); married in 1938 to Dorothea of Bavaria (1920- )
Helena (1903-1924); married in 1923 to Philipp Albrecht, Duke of Württemberg (1893-1975)
Georg of Austria-Tuscany (1905-1952); married in 1936 to Marie Valerie von Waldburg-Zeil-Hohenems (1913- )
Rosa (1906-1983); married in 1928 to her brother-in-law Philipp Albrecht, Duke of Württemberg (1893-1975)
Sources
McIntosh, David, Die Unbekannten Habsburger Toscana, 2000
House of Habsburg-Lorraine
Born 1874
Men
Died 1948
Austrian military personnel of the 20th century
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,516 |
\section{Introduction} \label{sec:intro}
\subsection{Background and Motivation} \label{ssec:background}
\paragraph{Non-Gaussian Component Analysis.}
Non-gaussian component analysis (NGCA) is a distribution learning problem
modeling the natural task of finding ``interesting'' directions in high-dimensional data.
As the name suggests, the objective is to find a ``non-gaussian'' direction (or, more generally, low-dimensional
subspace) in a high-dimensional dataset, under a natural generative model.
NGCA was defined in~\cite{JMLR:blanchard06a} and subsequently studied
from an algorithmic standpoint in a number of works,
see, e.g.,~\cite{VX11, TanV18, GoyalS19} and references therein.
For concreteness, we start by defining the relevant family of high-dimensional distributions.
\begin{definition} [High-Dimensional Hidden Direction Distribution] \label{def:pv-hidden}
{For a distribution $A$ on the real line with probability density function $A(x)$ and}
a unit vector $v \in \mathbb{R}^d$, consider the distribution over $\mathbb{R}^d$ with probability density function
$\mathbf{P}^{A}_v(x) = A(v \cdot x) \exp\left(-\|x - (v \cdot x) v\|_2^2/2\right)/(2\pi)^{(d-1)/2}.$
That is, $\mathbf{P}^{A}_v$ is the product distribution whose orthogonal projection
onto the direction of $v$ is $A$,
and onto the subspace perpendicular to $v$
is the standard $(d-1)$-dimensional normal distribution.
\end{definition}
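In code, drawing from $\mathbf{P}^{A}_v$ amounts to replacing the component of a standard Gaussian along $v$ with a draw from $A$. The sketch below is illustrative only: the direction $v$ and the choice of $A$ (uniform on $\{-1,+1\}$) are arbitrary examples, not taken from the text.

```python
import random

def sample_hidden_direction(v, sample_A, rng):
    """One draw from P_v^A: the projection onto the unit vector v is
    distributed as A; the orthogonal part is standard Gaussian."""
    a = sample_A(rng)                                # non-gaussian coordinate
    g = [rng.gauss(0.0, 1.0) for _ in v]             # ambient Gaussian draw
    gv = sum(gi * vi for gi, vi in zip(g, v))        # component of g along v
    # x = a*v + (g - (g.v) v): swap g's v-component for the draw from A
    return [a * vi + gi - gv * vi for vi, gi in zip(v, g)]

rng = random.Random(0)
d = 5
v = [1.0 / d ** 0.5] * d                             # an arbitrary unit vector
rademacher = lambda r: r.choice([-1.0, 1.0])         # example choice of A
x = sample_hidden_direction(v, rademacher, rng)
proj = sum(xi * vi for xi, vi in zip(x, v))          # recovers the draw from A
```

Here $v \cdot x$ equals the draw from $A$ up to floating-point error, while every direction orthogonal to $v$ is standard normal.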
The NGCA learning problem is the following: Given i.i.d.\ samples from a distribution
$\mathbf{P}^{A}_v$ on $\mathbb{R}^d$, where the direction $v$ is unknown, find (or approximate) $v$. The standard
formulation assumes that the univariate distribution $A$ is known to the algorithm,
it matches its first $k$ moments with $N(0, 1)$, for some $k \in \mathbb{Z}_+$,
and there is a non-trivial difference in the moment of order $(k+1)$.
\paragraph{Information-Computation Tradeoffs for NGCA.}
\new{Since $A$ has its $(k+1)^{th}$ moment differing from that of a standard Gaussian,
a moment computation on $\mathbf{P}^{A}_v$ allows us to approximate $v$ in roughly $O(d^{k+1})$ samples and time.}
Interestingly, ignoring computational considerations, the NGCA problem can usually be solved with $O(d)$ samples.
Perhaps surprisingly, the aforementioned simple method \new{(requiring $\Omega(d^{k+1})$ samples)}
is qualitatively the best known \new{sample-polynomial time} algorithm for the problem.
\new{Given this state of affairs, it is natural to ask}
whether this information-computation gap is inherent for the problem itself.
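One standard way to see this upper bound is via Hermite moments; the following is a sketch of that calculation (consistent with, but not quoted from, the works cited above). For a unit vector $u$, write $u \cdot x = c\, a + s\, g$ with $c = u \cdot v$, $s = \sqrt{1 - c^2}$, $a \sim A$, and $g \sim N(0,1)$ independent of $a$. Using the Hermite identity $\mathbb{E}_{g \sim N(0,1)}[He_{k+1}(c\, a + s\, g)] = c^{k+1}\, He_{k+1}(a)$, valid whenever $c^2 + s^2 = 1$, we get
\[
\mathbb{E}_{x \sim \mathbf{P}^{A}_v}\left[ He_{k+1}(u \cdot x) \right] \;=\; (u \cdot v)^{k+1} \, \mathbb{E}_{a \sim A}\left[ He_{k+1}(a) \right] \;.
\]
Since $A$ matches its first $k$ moments with $N(0,1)$ and $\mathbb{E}_{N(0,1)}[He_{k+1}] = 0$, the scalar $\mathbb{E}_{a \sim A}[He_{k+1}(a)]$ equals the difference between the $(k+1)^{th}$ moments of $A$ and $N(0,1)$, which is non-zero by assumption. Estimating each of the roughly $d^{k+1}$ entries of the order-$(k+1)$ Hermite moment tensor thus reveals $v$, which explains the $O(d^{k+1})$ sample and time complexity of the moment-based approach.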
In prior work,~\cite{DKS17-sq} provided formal evidence for the existence of an {\em information-computation
tradeoff} for NGCA under appropriate assumptions on the \new{univariate non-gaussian} distribution $A$.
The~\cite{DKS17-sq} result holds for a restricted model of computation,
known as the Statistical Query (SQ) model.
Statistical Query (SQ) algorithms are the class of algorithms
that are only allowed to query expectations of bounded functions
of the underlying distribution rather than directly access samples.
The SQ model was introduced by Kearns~\cite{Kearns:98}
and has been extensively studied in learning theory.
A recent line of work, see, e.g.,~\cite{FeldmanGRVX17, FeldmanPV15, FeldmanGV17},
generalized the SQ framework for search problems over distributions.
The reader is referred to~\cite{Feldman16b} for a survey.
In more detail, the SQ lower bound of~\cite{DKS17-sq}
applies even for the (easier) hypothesis testing version of NGCA, where the goal is to distinguish
between the standard Gaussian $N(0, I)$ on $\mathbb{R}^d$ and a planted distribution $\mathbf{P}_v^A$, for a hidden direction $v$.
(Hardness for hypothesis testing can easily be used to derive hardness for the corresponding search problem.)
Roughly speaking, they established the following generic SQ-hardness result:
\begin{quote}
{\bf Informal Theorem~\cite{DKS17-sq}:}
Let $A$ be a one-dimensional distribution that matches its first
$k$ moments with the standard Gaussian $G = N(0,1)$
and its chi-squared norm with $G$, $\chi^2(A,G)$, is \new{finite}.
Suppose we want to distinguish between $N(0, I)$ on $\mathbb{R}^d$ and the
distribution $\mathbf{P}^A_v$ for a random direction $v$.
Then any SQ algorithm for this testing task requires either
at least $d^{\Omega(k)} /\chi^2(A,G)$ samples or at least $2^{d^{\Omega(1)}}$ time.
\end{quote}
\new{A concrete application of the above result, given in~\cite{DKS17-sq}, is
an SQ lower bound for the classical problem of learning mixtures of high-dimensional Gaussians.
To obtain the hard family of instances, we take the one-dimensional distribution $A$
to be a mixture of univariate Gaussians $\sum_{i=1}^k w_i N(\mu_i, \sigma^2)$
with pairwise separated and bounded means $\mu_i$
and common variance $\sigma^2 = 1/\mathrm{poly}(k)$
such that $A$ matches $\Omega(k)$ moments with $N(0, 1)$.
Moreover, $A$ will have total variation distance at least $1/2$ from $N(0, 1)$.
Then, each distribution $\mathbf{P}_v^A$ will look like a collection of $k$ ``parallel pancakes'',
in which the means lie on a line (corresponding to the smallest eigenvalue
of the identical covariance matrices of the components). The orthogonal directions
will have an eigenvalue of one, which is much larger than the smallest eigenvalue.
}
More broadly, the aforementioned generic SQ lower bound~\cite{DKS17-sq}
has been the basis for a host of new and near-optimal
information-computation tradeoffs (in the SQ model)
for high-dimensional estimation tasks, including
robust mean and covariance estimation~\cite{DKS17-sq},
robust sparse mean estimation~\cite{DKS17-sq},
adversarially robust learning~\cite{BPR18},
robust linear regression~\cite{DKS19}, list-decodable estimation~\cite{DKS18-list, DKPPS21},
learning simple neural networks~\cite{DiakonikolasKKZ20}, and robust supervised learning
in a variety of noise models~\cite{DKZ20, DiakonikolasKPZ21, DK20-Massart-hard, DKKTZ21-benign}.
Interestingly, subsequent work has obtained additional evidence of hardness for some of these problems
via reductions from lattice problems~\cite{BrunaRST21} and variants of the planted clique problem~\cite{BrennanB20}.
\paragraph{Motivation for This Work.}
\new{
Interestingly, the generic SQ lower bound of~\cite{DKS17-sq}
is vacuous for the natural setting
where the distribution $A$ is discrete (in which case, we have $\chi^2(A,N(0,1)) = \infty$) or,
more generally, when $A$ has very large chi-squared norm with the standard Gaussian.
\newblue{More specifically, for the parallel pancakes distribution described above, one needs the ``thickness
parameter'' (corresponding to the eigenvalue of the covariance in the hidden direction)
to be at least inverse exponential in the dimension.}
A natural question,
which served as one of the motivations for this work, is whether information-computation tradeoffs
persist for the discrete case.
Consider for example the case where $A$ is supported on a discrete domain of size $k$
and matches its first $\Omega(k)$ moments with $N(0, 1)$. This corresponds to the special
case of the parallel pancakes distribution, where the component covariances are degenerate ---
having zero eigenvalue in the hidden-direction.
Does any efficient algorithm for these instances require $d^{\omega_k(1)}$ samples?
We answer these questions in the negative
by designing a sample and computationally efficient algorithm
for NGCA when $A$ is discrete or nearly discrete
in a well-defined sense (Assumption~\ref{cond:A-disc}).
The key tool leveraged in \new{our result} is the LLL \new{algorithm}~\cite{LLL82}
for lattice basis reduction. We note that prior work~\cite{BrunaRST21, SZB21}
had used the LLL algorithm to obtain efficient learners for related problems
that could be viewed as special cases of NGCA.
\paragraph{Connection with Sum-of-Squares (SoS) and Low-Degree Tests.}
Before we proceed with a detailed description of our results, a final remark is in order.
As already mentioned, the SQ lower bounds of~\cite{DKS17-sq} are vacuous when $A$ is a discrete distribution.
On the other hand, recent work has established information-computation tradeoffs for NGCA when
$A$ is supported on $\{-1, 0, 1\}$, both for low-degree polynomial tests~\cite{MW21} and for
SoS algorithms~\cite{GhoshJJPR20}. At first sight, these hardness results combined with
our algorithm might appear to cast doubt
on the validity of the low-degree conjecture~\cite{Hopkins-thesis}.
We note, however, that the latter conjecture only posits that a {\em noisy} version of the corresponding problem
is computationally hard (as opposed to the problem itself) --- a statement that appears to hold true in our setting.
Conceptually, we view our algorithmic contribution as a novel example of an efficient algorithm
(beyond Gaussian elimination) not captured by the aforementioned restricted models of computation.
}
\subsection{Our Contributions} \label{ssec:results}
We consider the NGCA learning problem under the following assumption:
\begin{assm} \label{cond:A-disc}
The distribution $A$ on $\mathbb{R}$ is such that:
\begin{enumerate}
\item\label{LatticeCondition} There exist $r_j \in \mathbb{R}$ for $j \in [k]$ with $|r_j| = O(1)$,
$B \in \mathbb{Z}_+$, and $\epsilon>0$ such that
a sample $y \sim A$ is deterministically within additive $\epsilon$ of some number
of the form $\sum_{j=1}^k n_j r_j$, for $n_j \in \mathbb{Z}$ with $|n_j|\leq B$ for all $j \in [k]$.
\item \label{anticoncentrationCondition}
The distribution $A$ is anti-concentrated around $0$,
specifically $\Pr_{X \sim A}[|X| > 1/d] > 1/d$.
\item \label{concentrationCondition}
The distribution $A$ is concentrated around $0$,
specifically $\Pr_{X \sim A}[|X| > \mathrm{poly}(d)] < 1/d$.
\end{enumerate}
\end{assm}
\new{Some comments are in order to interpret Assumption~\ref{cond:A-disc}.}
Condition~\ref{LatticeCondition} above is the critical condition requiring
that $A$ is approximately supported on points
that are (small) integer linear combinations of the $r_j$'s.
\new{This is the key condition that underlies our main technique.
Notice that this condition can be satisfied by any distribution $A$ that has support size at most $k$,
or even a distribution $A$ that is supported on $k$ intervals, each of length at most $\epsilon$.
In fact, it is sufficient for $A$ to be $O(1/d)$-close in total variation distance to such a distribution,
as with constant probability a set of $O(d)$ samples drawn from it is supported on the appropriate intervals.
This means that our algorithmic results applies, for example,
to parallel pancake distributions, as long as the thickness of the pancakes
is no more than $O(\epsilon/\sqrt{\log(d)})$.}
Conditions~\ref{anticoncentrationCondition} and \ref{concentrationCondition}
are technical conditions that are needed for our particular algorithm to work.
However, note that if Condition~\ref{anticoncentrationCondition} is not satisfied,
then it is reasonably likely that $O(d)$ random samples from $\mathbf{P}_v^A$ will have
much smaller variance in the $v$-direction than in any of the orthogonal directions.
This provides a much easier method for approximating $v$.
Condition \ref{concentrationCondition} is essentially required to guarantee that we do not
need to deal with unlimited precision to approximate points.
However, it is easy to see that if this condition is violated, one can approximate $v$ simply
by normalizing any samples from $\mathbf{P}_v^A$ with $\ell_2$-norm more than $d$.
We prove the following theorem:
\begin{theorem}[Main Result] \label{thm:main-ngca-disc}
Under Assumption~\ref{cond:A-disc}, if $\epsilon < 2^{-\Omega(d k^2)} B^{-\Omega(k)}$
with sufficiently large implied \new{universal} constants \new{in the big-$\Omega$},
there exists an algorithm that draws $m = 2d$ i.i.d.\ samples from $\mathbf{P}_v^{A}$
for an unknown unit vector $v \in \mathbb{R}^d$, runs in time $\mathrm{poly}(d, \new{k, \log B})$,
and outputs a vector $v^{\ast}$ such that with constant probability
either $\|v^\ast - v\|_2$ is small or $\|v^\ast + v\|_2$ is small.
\end{theorem}
We note that Theorem~\ref{thm:main-ngca-disc} only guarantees an approximation of either $v$ or $-v$.
Such a guarantee may be inherent, as if $A$ is a symmetric distribution we have that $\mathbf{P}_v^A = \mathbf{P}_{-v}^A$.
\subsection{Overview of Techniques} \label{ssec:techniques}
We begin by considering the simple case where the univariate distribution $A$ is supported {\em exactly} on integers.
This special case provides a somewhat simpler version of our algorithm while capturing some of the key ideas.
In this case, we draw $m=d+1$ i.i.d.\ samples \new{$x_i \in \mathbb{R}^d$} from $\mathbf{P}^A_v$ and note that (with high probability)
they will satisfy a unique (up to scaling) linear relation $\sum_{i=1}^m c_i x_i = 0$,
\new{for some $c_i \in \mathbb{R}$ with at least one $c_i \neq 0$.}
In particular, we have that $\sum_{i=1}^m c_i (v \cdot x_i) = 0$. Since the quantities $v \cdot x_i$ for $i \in [m]$ are all integers,
we hope to solve for them by finding the (with high probability unique, up to scaling)
integer linear relation among the $c_i$'s. \new{It turns out that this can be achieved by leveraging
the Lenstra-Lenstra-Lovasz (LLL) lattice basis reduction algorithm.}
Having found an integer solution $\sum_{i=1}^m c_i n_i = 0$ for $n_i \in \mathbb{Z}$,
we can solve the system of linear equations
$v\cdot x_i = n_i$, $i \in [m]$, for \new{the hidden vector} $v$.
We now proceed to deal with the case where $A$ is no longer supported on integers,
but is instead supported on elements that are {\em close} (within some additive error $\epsilon$) to integers.
In this case, we will similarly have $\sum_{i=1}^m c_i (v\cdot x_i) = 0$, which means that if $n_i$
is the integer \new{closest} to $v \cdot x_i$, we will have that $\sum_{i=1}^m c_i n_i$ is close to $0$.
In order to solve for this \new{near-integer linear-relation}, we make essential use of basic lattice techniques.
In particular, \new{for $n = (n_i)_{i=1}^m \in \mathbb{Z}^m$,} we define the quadratic form
$Q(\new{n}) := \sum_{i=1}^m n_i^2 + N \left( \sum_{i=1}^m c_i n_i \right)^2$,
for some appropriately large $N$. Note that integer vectors with small norm under $Q$
must have $|n_i|$ small for all $i \in [m]$ and have $\left| \sum_{i=1}^m c_i n_i \right|$ be very small.
It is not hard to show that if $\epsilon$ is sufficiently small and $N$ is chosen appropriately,
with high probability over the samples $x_i$, taking $n_i$ to be the integer closest to $v\cdot x_i$
for each $i \in [m]$ will give substantially the smallest non-zero norm under $Q$.
Therefore, using the LLL algorithm to find an approximate smallest vector will return (some multiple of) this vector.
Given the $n_i$'s, we note that $v\cdot x_i \approx n_i$ for all $i \in [m]$, and we can then
use least-squares regression to solve for an approximation to $v$.
Unfortunately, if the above approach is applied naively, it will work only if $\epsilon$ is assumed to be exponentially small in $d^2$,
\new{i.e., $2^{-\Omega(d^2)}$,} rather than in $d$.
This is because the LLL algorithm only guarantees a $2^{O(d)}$-approximation to the smallest vector.
The $\sum_{i=1}^m n_i^2$-term in $Q$ ensures that any such vector will have $n_i$ at most exponential \new{in $d$}.
But given that there are $2^{\Omega(d^2)}$ integer vectors with coefficients of this size,
we can expect one to randomly have $\left| \sum_{i=1}^m c_i n_i \right|$ be only $2^{-\Omega(d^2)}$.
This will be distinguishable from the vector we are looking for only if $\epsilon$ is smaller than this quantity.
In order to fix this issue, instead of taking only $m = d+1$ samples from $\mathbf{P}_v^A$,
we instead draw $m = 2d$ samples. These now have $d$ linear relations and we note that the vector of $n_i$'s
should approximately satisfy all of them. In particular, letting $V$ be the vector space of linear relations satisfied by the $x_i$'s,
we consider the quadratic form defined by $Q(n): = \|n\|_2^2 + N \left\| \mathrm{Proj}_V(n) \right\|_2^2$.
This improves things because it is now much less likely that one of our
$2^{O(d^2)}$ ``small'' $n$'s will randomly have a small projection onto $V$.
This allows us to operate even when $\epsilon$ is only \new{$2^{-\Omega(d)}$}.
Note that we cannot hope to do much better than this because the LLL algorithm
will still have an exponential gap between the shortest vector and the one that it finds.
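To quantify this gain, here is a back-of-the-envelope calculation (a heuristic sketch in the spirit of the analysis below, not a verbatim excerpt). Conditioned on the values $y_i = v \cdot x_i$, the relation space $V$ is a uniformly random $(m-d)$-dimensional subspace orthogonal to $y$, spanned by independent orthogonal-to-$y$ Gaussians $g_1, \ldots, g_{m-d}$. For a fixed integer vector $n$ with component $n'$ orthogonal to $y$, the projection $\mathrm{Proj}_V(n)$ can be small only if every ratio $|g_i \cdot n'| / \|g_i\|_2$ is small, so
\[
\Pr\left[ \|\mathrm{Proj}_V(n)\|_2 \leq \delta \, \|n'\|_2 \right] \;=\; O(\delta)^{m-d} \;.
\]
A union bound over the $2^{O(d^2)}$ integer vectors of exponentially bounded norm gives a failure probability of $2^{O(d^2)} \cdot O(\delta)^{m-d}$. With $m = d+1$ the exponent $m - d = 1$ forces $\delta$ (and hence $\epsilon$) to be $2^{-\Omega(d^2)}$, whereas with $m = 2d$, so that $m - d = d$, it suffices to take $\delta = 2^{-\Omega(d)}$ with a sufficiently large constant in the exponent.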
Finally, we are also able to extend our algorithm to the setting where the distribution $A$
is not supported on integers, but instead on numbers of the form $\sum_{j=1}^k a_j r_j$,
where the $a_j$'s are (not too large) integers and the $r_j$'s are some $k$ specific (known) real numbers.
In this more general setting, instead of $v\cdot x_i \approx n_i$, we will have that
$v\cdot x_i \approx \sum_{j=1}^k n_{i, j} r_j$, for some integers $n_{i,j}$, \new{$i \in [m], j \in [k]$}.
We then set-up a quadratic form similar to the one before, namely
$Q(n) = \|n\|_2^2 + N \left \| \mathrm{Proj}_V (t) \right \|_2^2$,
where $t = (t_i)_{\new{i \in [m]}}$ is the vector with coordinates $t_i = \sum_{j=1}^k n_{ij} r_j$
for some integers $n_{ij}$. Once again, the correct integer vector $n$
will be an unusually small vector with respect to this quadratic form; and if we can find it,
we will be able to use it to approximate the hidden direction $v$.
\new{A subtle issue in this case is that}
the correct vector (and multiples) need not be the {\em only} small vectors in this lattice.
In particular, if the $r_j$'s satisfy an approximate linear relation $\sum_{j=1}^k k_j r_j \approx 0$,
then letting $n_{ij} = k_j \cdot \delta_{i, i_0}$, for some $i_0$, will also have $\mathrm{Proj}_V(t)$ small,
because $t$ will be small. To deal with this issue, we will need to apply the LLL algorithm
and take not just the single smallest vector, but the smallest few vectors (in a carefully selected way).
We can then show that the true vector $n$ that we are looking for is in the subspace spanned by these vectors.
By finding a lattice vector in this space such that $t$ is large but $\mathrm{Proj}_V(t)$ is small,
we can find a $t$ where each $t_i$ is approximately some multiple of $v\cdot x_i$ for all $i$.
Using this $t$, we can solve for $v$ \new{as before}.
\paragraph{Independent Work.}
Concurrent and independent work by~\cite{Zadik21LLL} obtained a similar
algorithm for NGCA under similar assumptions,
by also leveraging the LLL algorithm. \new{More concretely, the algorithm of~\cite{Zadik21LLL}
efficiently solves the NGCA problem when $A$ is a discrete distribution on an integer lattice,
roughly corresponding to the $k=1$ and $\epsilon=0$ case of our result.}
\section{Proof of Theorem~\ref{thm:main-ngca-disc}}
\medskip
The pseudo-code for our algorithm is given below.
\bigskip
\fbox{\parbox{6.1in}{
{\bf Algorithm} {\tt LLL-based-NGCA}\\
Input: $m = 2d$ i.i.d.\ samples \
from $\mathbf{P}_v^A$, where $A$ satisfies Assumption~\ref{cond:A-disc} \new{for given real numbers $r_1,r_2,\ldots,r_k$ and some} parameters $k, B, \epsilon$ with $\epsilon < 2^{-C' d k^2} B^{-C' k}$,
where $C'>0$ is a sufficiently large constant. \\
\vspace{-0.5cm}
\begin{enumerate}
\item Let $N = 2^{Cmk^2} B^{Ck}$ be a positive integer, for $C$ a sufficiently large constant,
such that $N < 1/\epsilon^2$.
\item Let $x_1,x_2,\ldots,x_m$ be $m$ i.i.d. samples from $\mathbf{P}_v^A$ each rounded to the nearest multiple of $\delta = \epsilon/N^2$.
\item \label{kernelStep} Let $S$ be the $d\times m$ matrix with columns $x_1,x_2,\ldots,x_m$
and $V$ be the right kernel of $S$.
\item \label{latticeStep} Define the quadratic form $Q$ on $\mathbb{Z}^{m\times k}$
such that for an input vector $n = \{n_{i,j}\}_{i \in [m], j \in [k]}$ we have that
$
Q(n) := \sum_{i=1}^m \sum_{j=1}^k n_{i,j}^2 +
N \left\| \mathrm{Proj}_V\left(\left\{\sum_{j=1}^k n_{i,j} r_j \right\}_{i \in [m]} \right) \right \|_2^2 \;.
$
\item \label{LLLStep} Compute an LLL-reduced basis $\{ b_1,b_2,\ldots,b_{mk}\}$ for $Q$, with reduction parameter $3/4$ (distinct from the rounding grid $\delta$ of Step 2),
\new{where $b_i \in \mathbb{R}^{m \times k}$.}
\item \label{GSStep} Apply the Gram-Schmidt \new{orthogonalization} process to the $b_i$'s,
using $Q$ as our norm, \new{to obtain an orthogonal basis $\{b^{\ast}_1,b^{\ast}_2, \ldots, b^{\ast}_{mk}\}$}.
\item Let $\ell \new{\in[m k]}$ be the largest integer such that $Q\new{(b_{\ell}^{\ast})} \leq mkB^2+Nmk\epsilon^2.$
Let $W$ be the real span of $\{ b_1,b_2,\ldots,b_{\ell} \}$.
\item \label{eigenvalueStep} Consider the quadratic form $R$ on $\mathbb{R}^{m\times k}$ defined by
$R(n) = \sum_{i=1}^m \left( \sum_{j=1}^k n_{i,j}r_j \right)^2$.
For a sufficiently large universal constant $C>0$,
find a vector $w = \new{\{w_{i,j}\}_{i \in [m], j \in [k]}} \in W$ with
$Q(w) = 2^{Cmk} \, B^2$ such that $R(w)/Q(w)$ is approximately maximized.
Note that this can be done with an eigenvalue computation.
\item Write the vector $w$ in the form $w = \sum_{i=1}^\ell c_i b_i$, for some $c_i \in \mathbb{R}$.
Let $w' \new{=} \sum_{i=1}^\ell c'_i b_i$, where $c_i'$ is the nearest integer to $c_i$.
\item \label{LeastSquaresStep} Let $v^{\ast}$ be the minimizer of $\sum_{i=1}^m \left( v^\ast \cdot x_i - \sum_{j=1}^k w'_{i,j} r_j \right)^2.$
Note that this can be found using least squares regression.
Return the normalization of $v^\ast$.
\end{enumerate}
}}
\bigskip
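Before the analysis, it may help to see Steps~\ref{kernelStep} and~\ref{latticeStep} in code. The sketch below is illustrative only: it fixes $k = 1$, $r_1 = 1$, $\epsilon = 0$ (so $A$ is integer-supported), a small dimension, and an arbitrary penalty $N$. It computes the projection onto $V$ via the row space of $S$ and evaluates $Q$: the coefficient vector formed by the true integer values $v \cdot x_i$ scores essentially $\|n\|_2^2$, while perturbing even one coordinate incurs the $N$ penalty.

```python
import random

def sample(v, rng):
    # One draw from P_v^A with A uniform on the integers {-2,...,2}.
    a = float(rng.randint(-2, 2))
    g = [rng.gauss(0.0, 1.0) for _ in v]
    gv = sum(gi * vi for gi, vi in zip(g, v))
    return [a * vi + gi - gv * vi for vi, gi in zip(v, g)], a

def orthonormal_rows(rows, tol=1e-10):
    # Gram-Schmidt on the rows of S; their span is the orthogonal
    # complement of V (the right kernel of S).
    basis = []
    for r in rows:
        w = r[:]
        for q in basis:
            c = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = sum(wi * wi for wi in w) ** 0.5
        if nrm > tol:
            basis.append([wi / nrm for wi in w])
    return basis

def proj_V(t, row_basis):
    # Proj_V(t) = t minus the projection of t onto the row space of S.
    res = t[:]
    for q in row_basis:
        c = sum(ti * qi for ti, qi in zip(t, q))
        res = [ri - c * qi for ri, qi in zip(res, q)]
    return res

def Q(n, row_basis, N):
    p = proj_V([float(ni) for ni in n], row_basis)
    return sum(ni * ni for ni in n) + N * sum(pi * pi for pi in p)

rng = random.Random(1)
d, m, N = 4, 8, 10 ** 9                      # m = 2d samples, demo penalty
v = [0.5, 0.5, 0.5, 0.5]                     # hidden unit direction
cols, true_n = [], []
for _ in range(m):
    x, a = sample(v, rng)
    cols.append(x)
    true_n.append(int(round(a)))
rows = [[cols[i][j] for i in range(m)] for j in range(d)]  # S is d x m
row_basis = orthonormal_rows(rows)
q_true = Q(true_n, row_basis, N)             # ~ ||n||^2: y lies in rowspace(S)
bad = true_n[:]
bad[0] += 1                                  # break the relation in one coordinate
q_bad = Q(bad, row_basis, N)                 # pays the N penalty
```

Since $y = S^{\top} v$ lies in the row space of $S$, the true vector has $\mathrm{Proj}_V$ equal to zero up to floating-point error, so `q_true` is essentially $\sum_i n_i^2$, while `q_bad` is larger by many orders of magnitude.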
To begin the analysis, we first analyze the infinite precision version of this problem,
ignoring the rounding and instead simply letting the $x_i$'s be i.i.d.\ samples from $\mathbf{P}_v^A$.
We begin by analyzing some of the basic properties of the above procedure.
We start by showing that with high probability over our set of samples \new{$x_1, \ldots, x_m$},
\new{the quadratic form} $Q(n)$ is small if and only if
the vector $y^{\new{(n)}} \new{\in \mathbb{R}^m}$ with coordinates $\sum_{j=1}^k n_{i,j} r_j$, \new{$i \in [m]$,}
is approximately a multiple of the vector $y \new{\in \mathbb{R}^m}$ with coordinates $v\cdot x_i$, \new{$i \in [m]$}.
Specifically, we prove the following lemma:
\begin{lemma}\label{QBoundsLem}
Let $y\in \mathbb{R}^m$ be the vector with coordinates $y_i := v\cdot x_i$, $i \in [m]$.
Consider a \new{vector} $\new{n =} (n_{i, j}) \in \mathbb{R}^{m \times k}$.
Let $y^{\new{(n)}} \new{\in \mathbb{R}^m}$ be the vector with \new{coordinates $(y^{\new{(n)}})_i: = \sum_{j=1}^k n_{i,j} r_j$,
$i \in [m]$, and let $(y^{\new{(n)}})' \new{\in \mathbb{R}^m}$ be the component of $y^{\new{(n)}}$ orthogonal to $y$.}
Then we have that
\begin{equation}\label{QUpperBoundEqn}
Q(n) \leq \|n\|_2^2 + N \, \|(y^{\new{(n)}})' \|_2^2 \;.
\end{equation}
Furthermore, for any positive integer $M$ and with high probability over the choice of $S = \new{[x_i]_{i=1}^m}$,
for all such $n$ with $\|n\|_2 \leq M \, m \, k$, we have that:
\begin{equation}\label{QLowerBoundEqn}
Q(n) \geq N \, O(M)^{-mk/(m-d)} \, \|(y^{\new{(n)}})'\|_2^2 \;.
\end{equation}
\end{lemma}
\begin{proof}
We start by writing $S$ using an orthonormal basis for $\mathbb{R}^d$
in which $v$ is the first vector.
(Note that changing the basis we use for $\mathbb{R}^d$ does not change $V$.)
In this basis, observe that $y$ is the first row of $S$.
Moreover, all other entries of $S$ in this basis are independent standard Gaussians.
Thus, we can write $S$ as $\left[\begin{matrix} y \\ G\end{matrix} \right]$,
where $G$ is an independent Gaussian matrix. Note that the kernel of $G$ is a random subspace of $\mathbb{R}^m$
of dimension $m-d+1$. Thus, after conditioning on $y$,
$V$ is a random $(m-d)$-dimensional subspace orthogonal to $y$.
Also note that we can generate a subspace with the same distribution
by taking the span of $m-d$ independent standard orthogonal-to-$y$ Gaussians.
We next consider a vector $\new{n} = (n_{i,j}) \in \mathbb{R}^{m\times k}$.
Let $y^{\new{(n)}} \in \mathbb{R}^m$ be the vector with coordinates $\sum_{j=1}^k n_{i,j} r_j$,
and let $(y^{\new{(n)}})' \new{\in \mathbb{R}^m}$ be the component of $y^{\new{(n)}} $ orthogonal to $y$.
To begin with, we note that $\|\mathrm{Proj}_V(y^{\new{(n)}})\|_2 \leq \|(y^{\new{(n)}})' \|_2$,
and this implies Equation \eqref{QUpperBoundEqn}.
We prove Equation \eqref{QLowerBoundEqn} by a union bound over the $O(M)^{mk}$
many vectors of appropriate norm. In particular, fix such an $n$.
Recall that, conditioning on $y$, the subspace $V$ is the span of $g_1,g_2,\ldots,g_{m-d}$,
where the $g_i$ are independent orthogonal-to-$y$ Gaussians.
Note that $\|\mathrm{Proj}_V(y^{\new{(n)}} )\|_2 \geq \max_i |g_i \cdot y^{\new{(n)}} |/\|g_i \|_2.$
Note that $g_i \cdot y^{\new{(n)}}$ is distributed like a Gaussian with standard deviation $\|(y^{\new{(n)}})' \|_2$.
For $\delta > 0$, it is not hard to see that for each $i$ we have that
$|g_i \cdot y^{\new{(n)}}|/\|g_i \|_2 < \delta/\sqrt{mk}$ with probability $O(\delta)$
(for example because the probability that $|g_i \cdot y^{\new{(n)}}| < t\delta$
and $\|g_i\|_2 > (t-1)\sqrt{mk}$ is $O(\delta/t^2)$ for any positive integer $t$).
Thus, the probability that $\|\mathrm{Proj}_V(y^{\new{(n)}})\|_2 < \delta/\sqrt{mk}$
is at most $O(\delta)^{m-d}$. Letting $\delta$ be equal to $(CM)^{-mk/(m-d)}$,
for a sufficiently large constant $C$, yields the result.
\end{proof}
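A small numerical check of the geometry behind Lemma \ref{QBoundsLem}: the projection of a vector onto the subspace $V \subseteq y^{\perp}$ can never exceed the component of that vector orthogonal to $y$, which is the mechanism behind Equation \eqref{QUpperBoundEqn}. Illustrative Python, with toy dimensions and $V$ taken to be the kernel of the sample matrix, as in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 9
S = rng.standard_normal((d, m))   # in a basis where v is the first coordinate,
y = S[0]                          # y is the first row of S

_, _, Vt = np.linalg.svd(S)
V = Vt[d:]                        # orthonormal rows spanning ker(S), dim m - d

z = rng.standard_normal(m)        # plays the role of y^(n)
z_perp = z - (z @ y) / (y @ y) * y    # component of z orthogonal to y

proj = V.T @ (V @ z)              # projection of z onto ker(S)
assert np.linalg.norm(proj) <= np.linalg.norm(z_perp) + 1e-9
assert np.allclose(V @ y, 0.0)    # ker(S) is orthogonal to y
```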
We will henceforth assume that \new{the high probability conclusion of}
Lemma \ref{QBoundsLem} holds for the samples our algorithm has selected
with $M := 2^{2 C m k} B$, for $C>0$ a sufficiently large universal constant.
Given this \new{assumption}, we next need to analyze which vectors give us small values of $Q$
and what this means about the output of our call to the LLL algorithm.
In particular, there is a particular vector $n^{\ast}$ that would cause $t$ to approximate $y$.
We claim that $Q(n^{\ast})$ is small and that this in turn implies that $n^{\ast}$
is an integer linear combination of $b_1,b_2,\ldots,b_\ell$.
\new{By assumption}, each $y_i = v\cdot x_i$ is within additive $\epsilon$ of $\sum_{j=1}^k n^{\ast}_{i,j} r_j$,
\new{for some $n^{\ast}_{i, j} \in \mathbb{Z}$}.
Combining these $n^{\ast}_{i,j}$'s, we get a single vector $n^{\ast} \new{= (n^{\ast}_{ij})_{i \in [m], j \in [k]} \in \mathbb{Z}^{m \times k}}$
which has all entries with absolute value at most $B$,
and by Lemma \ref{QBoundsLem} satisfies $Q(n^{\ast}) \leq m k B^2 + N m k \epsilon^2$.
Note that $n^{\ast}$ is a linear combination of the $b_i$'s, namely
$n^{\ast} = \sum_{i=1}^{m k} c_i b_i$.
Let $t$ be the largest $i$ such that $c_i \neq 0$.
Note that we can also write $n^{\ast}$ as $\sum_{i=1}^{m k} c_i' b_i^{\ast}$, for some real $c_i'$,
and that $c_t' = c_t$. Since the $b_i^{\ast}$ are orthogonal with respect to the quadratic form $Q$,
this implies that
$$Q(n^{\ast}) \geq Q(c_t' b_t^{\ast}) \geq Q(b_t^{\ast}) \;.$$
In particular, this means that $Q(b_t^{\ast}) \leq m k B^2+ N m k \epsilon^2$. By our choice of $\ell$,
this implies that $t \leq \ell$, and in particular that $n^{\ast}$ is a linear combination of
$b_1,b_2,\ldots,b_\ell$.
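The chain of inequalities above is the standard Gram--Schmidt argument, carried out with respect to the quadratic form $Q$ rather than the Euclidean inner product. A numerical illustration (Python sketch; the positive-definite form and the basis are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 5
A = rng.standard_normal((k, k))
Qm = A @ A.T + np.eye(k)          # positive-definite form Q
B = rng.standard_normal((k, k))   # rows b_1, ..., b_k form a basis

# Gram-Schmidt with respect to the inner product <u, w>_Q = u^T Qm w.
Bstar = B.copy()
for i in range(k):
    for j in range(i):
        mu = (B[i] @ Qm @ Bstar[j]) / (Bstar[j] @ Qm @ Bstar[j])
        Bstar[i] = Bstar[i] - mu * Bstar[j]

c = np.array([2.0, -1.0, 0.0, 3.0, 1.0])   # integer coefficients
t = max(i for i in range(k) if c[i] != 0)  # largest index with c_t != 0
n = c @ B
Q_n = n @ Qm @ n
Q_bt = Bstar[t] @ Qm @ Bstar[t]
assert Q_n >= c[t] ** 2 * Q_bt - 1e-9      # Q(n) >= c_t^2 Q(b_t*) >= Q(b_t*)
```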
Unfortunately, we cannot necessarily find $n^{\ast}$ within this subspace.
However, it will suffice for our purposes to find a vector $z$ for which $\|z\|_2$ is large,
but the part of $z$ orthogonal to $y$ is small. To do this, it will suffice to find an integer vector $n$
for which $R(n)$ is large (implying that $\|y^{\new{(n)}}\|_2$ is large), but for which $Q(n)$ is small
(implying that $y^{\new{(n)}}$ is nearly orthogonal to $y$).
We know that $n^{\ast}$ is such a vector and that it is somewhere in $W$.
It now remains to find it.
Note that $n^{\ast} \in W$.
Note that $R(n^{\ast}) = \|y^{\new{(n^{\ast})}}\|_2^2 \geq \|y\|_2^2/2 - O(m\epsilon^2).$
By the anti-concentration Condition \ref{anticoncentrationCondition},
with constant probability over the choice of $y$, this is $\Omega(1/d^2)$.
On the other hand, we have that $Q(n^{\ast}) \leq m k (B^2+N \epsilon^2)$.
Therefore, we have that
$$R(n^{\ast})/Q(n^{\ast}) \geq \Omega(1/(d^2 mk B^2)) \;.$$
Given our algorithm's choice of $w$, we have that
$R(w)/Q(w) \geq \Omega(1/(d^2 mk B^2)).$
On the other hand, we note that for $i\leq \ell$ we have that
$$Q(b_i) \leq 2^{mk} Q(b_{\ell}^{\ast}) \leq 2^{m k}(mkB^2) \;.$$
This in particular follows from the fact that $Q(b_{i+1}^{\ast}) \geq Q(b_i^\ast)/2$ for positive integers $i$.
The latter statement can be derived, for example, from p.\ 86 of~\cite{Cohen-nt-book}.
This means that $\|b_i \|_2^2 \leq 2^{m k}(m k B^2)$,
and thus $R(b_i) \leq 2^{O(m k)} B^2$. This implies that
$R(w-w') \leq 2^{O(m k)}(B^2+N\epsilon^2).$
However, since $Q(b_i) \leq 2^{mk}(mkB^2)$,
by similar reasoning, we obtain that $Q(w-w') \leq 2^{O(mk)} B^2$.
Together, this implies that $\|w'\|_2^2 \leq Q(w') = \Theta(2^{C m k} B^2)$,
and that
$$R(w')/Q(w') = \Omega(1/(d^2 mk B^2)) \;.$$
Assuming the high probability statement of Lemma \ref{QBoundsLem}
with $M := 2^{2Cmk}B$, we have that
$$Q(w') \geq N 2^{-O(m^2k^2/(m-d))}B^{-mk/(m-d)}\|\new{(y^{(w')})'}\|_2^2 \;.$$
This implies that $\|\new{(y^{(w')})'} \|_2^2 \leq N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))}.$
Note that this means that the vector with coordinates $\sum_{j=1}^k w'_{i,j} r_j$
is within $N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))}$ of some multiple of $y$.
Thus, taking $v^{\ast}$ to be an appropriate multiple of $v$ yields an error of at most
$N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))}$ in the defining equation of $v^{\ast}$.
We next need to determine how close the above implies that $v^{\ast}$ will be to a multiple of $v$.
To analyze this, we consider the eigenvalues of the matrix $\sum_{i=1}^m x_ix_i^T$.
By Condition \ref{anticoncentrationCondition}, with large constant probability,
the eigenvalue in the $v$-direction will be at least $\Omega(1/d^2)$.
As the $x_i$'s in orthogonal directions are independent standard Gaussians,
it is not hard to see (for example via a cover argument) that with this large constant probability
all eigenvalues of $\sum_{i=1}^m x_ix_i^T$ are at least $\Omega(1/d^2)$.
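As a quick sanity check of this spectral claim (illustrative Python with $m = 2d$ Gaussian samples; the $1/d^2$ threshold is the $\Omega(1/d^2)$ level from the text, with constant $1$ chosen for the toy check):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 10
m = 2 * d
X = rng.standard_normal((m, d))       # rows are i.i.d. N(0, I_d) samples
M = X.T @ X                           # equals sum_i x_i x_i^T
lam_min = np.linalg.eigvalsh(M)[0]
assert lam_min > 1.0 / d**2           # comfortably above the Omega(1/d^2) level
```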
The error in \new{the least-squares regression problem} equals
$$
\sum_{i=1}^m \left( v^\ast \cdot x_i - \sum_{j=1}^k w_{i,j}' r_j \right)^2
= \sum_{i=1}^m \left( v^\ast \cdot x_i - \alpha y_i - (\new{(y^{(w')})'})_i \right)^2 \;,
$$
where $\alpha$ is some real multiple.
Notice that $v^\ast \cdot x_i - \alpha y_i = (v^\ast - \alpha v)\cdot x_i$.
Therefore the above is
$$
(v^\ast - \alpha v)^T \sum_{i=1}^m x_i x_i^T (v^\ast - \alpha v) +
O\left( (m/d) \|\new{(y^{(w')})'} \|_2^2 + (m/d) \, \|\new{(y^{(w')})'}\|_2 \, \|v^\ast - \alpha v\|_2 \right) \;.
$$
In particular, noting that setting $v^\ast = \alpha v$ obtains a value of
$O(m/d) \left( N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))} \right)^2$,
the true $v^{\ast}$ must satisfy
$$\|v^\ast - \alpha v\|_2 \leq N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))} \;.$$
On the other hand, since $R(w') > 1$, we have that $\|\new{(y^{(w')})'} \|_2^2 > 1$,
which (assuming that all $x_i$'s have norm $O(\sqrt{d})$ \new{which holds with high probability})
implies that $\|v^{\ast} \|_2 \gg 1/\sqrt{d}$. This means by the above that the normalization of $v^{\ast}$
is within \new{$\ell_2$-error}
$$N^{-1} 2^{O(m^2k^2/(m-d))}B^{O(mk/(m-d))}$$
of $\pm v$.
Since we have selected $m=2d$, $N = 2^{Cmk^2} B^{Ck}$, for $C$ a sufficiently large constant,
and $\epsilon < 2^{-C' d k^2} B^{-C' k}$, for some sufficiently large constant $C'$,
it follows that the normalization of $v^{\ast}$
is exponentially close to $\pm \new{v}$.
Next we need to show that rounding the $x_i$'s does not affect the correctness of our procedure.
For this, we note that the above analysis only needed the following facts about the $x_i$:
\begin{enumerate}
\item Lemma \ref{QBoundsLem} holds for $M=2^{2Cmk}B$.
\item \label{nonsingularCondition} $\sum_{i=1}^m x_i x_i^T \succeq \Omega(I/d^2)$.
\item \label{closeToLatticeCondition} $v\cdot x_i$ is within $2^{-\Omega( d k^2)} B^{-\Omega( k)}$
(with sufficiently large constants in the big-$\Omega$) of some integer linear combination of the $r_i$'s
with coefficients of absolute value at most $B$ for all $i$.
\end{enumerate}
We note that these hold with reasonable probability by the above.
We claim that if they hold for the unrounded $x_i$'s and if the $x_i$'s have absolute value at most $\mathrm{poly}(d)$
(which happens with constant probability by Condition \ref{concentrationCondition}),
then they hold for the rounded $x_i$'s,
perhaps with slightly worse implied constants in the big-$O$ and big-$\Omega$ terms.
To show this, we begin with Condition \ref{closeToLatticeCondition}.
This still holds since rounding an $x_i$ changes the value of $v\cdot x_i$ by at most $d\delta < \epsilon$.
For Condition \ref{nonsingularCondition}, we note that changing each coordinate of $x_i$ by $\delta$
changes $\sum_{i=1}^m x_ix_i^T$ by at most $d m\delta \max_i \|x_i\|_2$ in Frobenius norm.
As this is much less than $1/d^2$, the minimum eigenvector of $\sum_{i=1}^m x_ix_i^T$ is still large enough after the rounding.
Finally, for Lemma \ref{QBoundsLem}, we note that the argument for Equation~\eqref{QUpperBoundEqn} still applies.
For Equation \eqref{QLowerBoundEqn}, we note that for a vector $z$, $\mathrm{Proj}_V(z) = z-t$,
where $t$ is the unique vector in the range of $S^T$ such that $St = Sz$.
From this, we conclude that $t = S^T (S S^T)^{-1} S z$.
We claim that rounding this does not change the value of $t$
(or, therefore, the value of $\|t-z\|_2^2$) by much.
In particular, it is easy to see that rounding changes $S$ and $S^T$ by $O(md \delta)$ in Frobenius norm.
The effect on $(S S^T)^{-1}$ is more complicated;
but we know that $SS^T = \sum_i x_i x_i^T \succeq \Omega(1/d^2) \, I$.
This and the fact that the rounding changes $SS^T$ by relatively little in terms of Frobenius norm,
suffices to imply that the rounding does not change much the value of $Q(n)$,
for any vector $n$ with coefficients of absolute value $M$.
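The projection identity used in this argument, $\mathrm{Proj}_V(z) = z - S^T (S S^T)^{-1} S z$ for $V = \ker(S)$, can be verified directly (illustrative Python, toy sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 4, 9
S = rng.standard_normal((d, m))       # d x m sample matrix, full row rank a.s.
z = rng.standard_normal(m)

# t = S^T (S S^T)^{-1} S z is the component of z in the row space of S,
# hence Proj_V(z) = z - t for V = ker(S).
t = S.T @ np.linalg.solve(S @ S.T, S @ z)

# Cross-check against an orthonormal basis of ker(S) from the SVD.
_, _, Vt = np.linalg.svd(S)
K = Vt[d:]                            # (m - d) orthonormal rows spanning ker(S)
assert np.allclose(z - t, K.T @ (K @ z))
```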
Having established correctness, we need to bound the runtime.
This is relatively straightforward, as we have to solve problems in dimension $\mathrm{poly}(md)$ with $\mathrm{poly}(md\log(B))$ bits of precision.
In particular, Step~\ref{kernelStep} boils down to row-reduction; Step~\ref{latticeStep} requires computing a projection matrix;
Step~\ref{LLLStep} uses the LLL algorithm; Step~\ref{eigenvalueStep} can be done via an approximate eigenvalue computation;
and Step~\ref{LeastSquaresStep} is least squares. Each of these operations can be performed in time that is polynomial
in the dimension of the problem and in the number of bits of precision required.
This completes the proof of Theorem~\ref{thm:main-ngca-disc}. \qed
\begin{remark}
{\em We remark that our algorithm works with any number $m>d$ samples,
as long as $\epsilon$ is less than $2^{-\Omega( d k^2 m / (m-d))} B^{-\Omega(k m/(m-d))}$
for sufficiently large constants in the big-$\Omega$'s.
For example, one could take $m=d+1$ samples, as long as $\epsilon < 2^{-C'd^2k^2}B^{-C' dk}$.}
\end{remark}
\paragraph{Acknowledgements.}
We thank Sam Hopkins and Aaron Potechin for useful discussions about the low-degree
conjecture and Sum-of-Squares lower bounds.
\bibliographystyle{alpha}
Source: https://www.nature.com/articles/s41598-018-26964-7

# Demonstration of a terahertz pure vector beam by tailoring geometric phase

## Abstract

We demonstrate the creation of a vector beam by tailoring the geometric phase of left- and right-circularly polarized beams. Such a vector beam with a uniform phase has not been demonstrated before because a vortex phase remains in the beam. We focus on vortex phase cancellation to generate vector beams in terahertz regions, and measure the geometric phase of the beam and its spatial distribution of polarization. We conduct proof-of-principle experiments for producing a vector beam with radial polarization and uniform phase at 0.36 THz. We determine the vortex phase of the vector beam to be below 4%, thus highlighting the extendibility and availability of the proposed concept to the super-broadband spectral region from ultraviolet to terahertz. The extended range of our proposed techniques could lead to breakthroughs in the fields of microscopy, chiral nano-materials, and quantum information science.

## Introduction

Vector beams, which tailor the spatial distribution of polarization and phase, have the potential to lead to cutting-edge developments in the fields of laser physics [1], biological imaging [2,3], light-matter interaction [4,5], next-generation quantum communication [6], electron acceleration [7,8], and astronomy [9]. Pure vector beams, which form the basis of vector beams, such as radially and azimuthally polarized beams [10-12], produce a longitudinal electric field [11-13] and an optical Möbius strip [14]. Researchers have also shown that by adjusting the intensity distribution of a pure vector beam with radial polarization, one can control the three-dimensional distribution of the electric field [15]. Moreover, the ability to focus the electric field by a pure vector beam enables a beam spot to be sharpened to sizes smaller than the diffraction limit – an ability that cannot be achieved with linearly polarized beams [13]. Because of these optical properties, pure vector beams are expected to enable super-resolution in microscopy and laser processing [11,13,16]. However, the super-resolution property of pure vector beams is destroyed by the geometric phase induced by cyclic polarization changes [17-19].

When a generated beam is radially polarized, but its phase is spirally distributed as a vortex phase, the beam is referred to as a radially polarized optical vortex [20,21]. Propagation of a beam containing a radially polarized optical vortex is different from that of a radially polarized beam [12]. Such a radially polarized optical vortex shows complex polarization dynamics because its polarization and phase are spatially distributed in three-dimensional free space. For this reason, vortex phase cancellation by use of an optical vortex with opposite vortex phase has been proposed as a method of creating pure vector beams [3,22]. This approach, however, is limited to experimental setups with complex optical alignments and low optical material throughput using polarization elements. To the best of our knowledge, radial polarization converters, which enable conversion to pure vector beams, have been proposed for use in a micro-structured photoconductive antenna [23], a cone lens [24], laser resonators [1], liquid crystal modulators [6,12], sub-wavelength structures [15,17], nonlinear crystals [25], wave plates of form birefringence fabricated by three-dimensional printing [26], and meta-materials [2,27] in several spectral regions from visible to terahertz. Our group also demonstrated an achromatic axially-symmetric wave plate based on internal reflections and the generation of vector beams in the visible [22,28], middle infrared [18], and terahertz [29] spectral regions. To verify details of the pure vector beam generated by use of those radial polarization converters, one must verify that the generated beam removes the vortex phase that can result from geometric phase. Although the vortex phase must be reduced in applications of pure vector beams, few studies have rigorously determined the spatial distribution of phase and polarization of generated vector beams [2,3,7,8,15,17,23], having observed only intensity distributions transmitted through a polarizer. The radial polarization converters used in these studies, moreover, are limited to narrow wavelength bands because broadband vortex phase cancellation has not been previously demonstrated.

In this paper, we specifically show a pure vector beam that eliminates the vortex phase by tailoring geometric phase at 0.36 THz. Our proposed concept of vortex phase cancellation considers the symmetry of the polarization of the light and the asymmetry of the polarization converter. The asymmetry of the radial polarization converter enables one to remove the vortex phase in the generated vector beam. Geometric phase, which is induced by the spatial distribution of polarization in the vector beam, determines whether the generated vector beam has a vortex phase. For this reason, we also propose single-shot determination of the polarization distribution in the generated vector beam. Our work provides insights that could lead to new frontiers in highly important scientific fields involving singular optics [30] and quantum communications [31,32] using pure vector beams, which are expected to represent a breakthrough for surpassing the diffraction limit, high-density transmission of signals, and generating longitudinal electric fields by light. We focus on the proof-of-principle demonstration of a terahertz (THz) pure vector beam by tailoring the geometric phase. In our demonstration, we also show that our concept is not limited to the THz region only and can be extended to different spectral regions.

## Results

### Pure vector beams with radial polarization

We theoretically investigate the spatial distribution of polarization and vortex phase of vector beams. The key concept for reduction of vortex phase in vector beams is the cancellation of vortex phase by tailoring left- and right-hand circularly polarized optical vortices. This cancellation is achieved from the relationship between the symmetry of polarized light and the rotational asymmetry of a polarization converter (see Fig. 1). In the figure, the incident beam, which has the linear polarization described at "A", is converted into output beam "B" via a radial polarization converter in the form of a non-axially-symmetric half-wave plate (Fig. 1a). The incident linear polarization consists of a pair of left and right circular polarizations with uniform phase (Fig. 1b). Considering polarization conversion of the incident circular polarizations by the non-axially-symmetric half-wave plate, left and right circular input polarizations with uniform phase are reversed to give right and left circular output polarization. The phase distributions of the circularly polarized beams are spirally distributed with vortex phase because of the non-axially-symmetric half-wave plate (Fig. 1c). After the circularly polarized vortex beams are combined, the output beam illustrated at "B" becomes a radially polarized beam with uniform phase because the vortex phase is eliminated.

To verify the abovementioned concept, we estimate the phase of the generated vector beam by calculating the geometric phase induced by cyclic changes of the polarization [33]. The changes of the generated beam's polarization state are sketched on a Poincaré sphere (Fig. 1d). When the incident linear polarization (red circle) is converted into radial polarization, the change of the polarization state on the Poincaré sphere traces the equator (blue dashed line), because the radial polarization consists of linear polarizations corresponding with the angle θ. The area enclosed by cyclic changes of the polarization on the Poincaré sphere is zero. As a result, the generated beam becomes a pure vector beam having radial polarization and uniform phase. For linear polarization, which consists of left and right circular polarization, the non-axially-symmetric half-wave plate enables cancellation of the vortex phase.

We calculate the phase retardance generated by internal reflections within the non-axially-symmetric half-wave plate. We investigate the phase retardance along the slope angle of the non-axially-symmetric half-wave plate (Fig. 2a) as well as the THz frequency dependence of the phase retardance (Fig. 2b). Phase retardance is related to the slope angles and the refractive indices n = 1.52 (solid, blue) of high-density polyethylene (HDPE) and n = 1.44 (dashed, red) of polytetrafluoroethylene (PTFE) (see the subsection "Internal Fresnel reflections" in the Methods section). We estimate that total internal reflection produces achromatic phase retardance in the frequency region from 0.1 to 1.6 THz (for more information, refer to the subsections "Non-axially-symmetric half-wave plate" and "Jones calculus" in the Methods section, and Supplementary Notes 1 and 2).
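These retardance values can be reproduced from the internal-reflection formula given in the Methods section. The sketch below is illustrative Python; we read the Methods expression as the retardance of one two-reflection Fresnel rhomb, i.e., 2·tan⁻¹{...} per total internal reflection, and evaluate HDPE (n = 1.52) at the 55° slope angle quoted in the text.

```python
import math

def retardance_per_tir(n, beta_rad):
    """Phase retardance (rad) between orthogonal polarizations from one
    total internal reflection at incidence angle beta inside index n."""
    num = math.sqrt(n ** 2 * math.sin(beta_rad) ** 2 - 1.0)
    den = n * math.sin(beta_rad) * math.tan(beta_rad)
    return 2.0 * math.atan(num / den)

n_hdpe = 1.52                        # HDPE in the THz region (from the text)
beta = math.radians(55.0)            # slope angle quoted in the text
total_deg = math.degrees(4 * retardance_per_tir(n_hdpe, beta))
print(total_deg)                     # close to 180 deg, i.e. half-wave
```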
HDPE generates a retardance of 180° via four internal reflections at the angle of 55°, so as to create the radially polarized beam with uniform phase.

### Beam profile of the vector beam

In our proof-of-principle experiments, we constructed an experimental setup consisting of a vector beam polarization converter and a polarization analyzer (see the subsection "Experimental setup" in the Methods section and Supplementary Note 3). We obtain an image (Fig. 3a) using a pyroelectric camera after the light beam passes through an axially-symmetric wave plate and a THz polarizer. In general, such images likely indicate a radially polarized beam because the transparency angle of the THz polarizer was set at 0°. Details of the spatial distribution of polarization of the generated vector beam, however, are difficult to observe in the image of intensity distribution. To resolve this issue, we develop a full analytical theory to determine the spatial distribution of polarization on the vector beam. For effective analysis of the vector beam, the image was transformed from Cartesian x-y coordinates to polar coordinates as shown in Fig. 3b. From Fig. 3a and b, we can evaluate the polarization distribution of the vector beam because the intensity distribution is modulated along the angle θ. The intensity distribution enables single-shot determination of the polarization on the vector beam [29].

### Spatial distribution of polarization on the vector beam

Rather than performing a conventional polarization analysis, we perform a single-shot determination of the polarization of the vector beam without the effects of fluctuating light sources and the rotation of polarization elements (see the subsection "Single-shot determination of polarization on vector beams" in the Methods section, and Supplementary Note 4). Using the image shown in Fig. 3b, we determine the two-dimensional distribution of the polarization, such as ellipticity (Fig. 4a) and azimuth angle (Fig. 4b). The measured ellipticity and azimuth are consistent with the theory of pure vector beams with radial polarization because the ellipticity is nearly zero and the azimuth varies linearly. For this case, the vector beam is radially polarized. However, we cannot determine whether the vector beam successfully removes the vortex phase because its phase distribution was not known.

To elucidate whether the generated beam reduces the vortex phase, we investigate the phase as well as the polarization of the vector beam. Simultaneously determining both properties of the vector beam is difficult because an identical setup is required for a wavefront sensor and an interferometer. Stokes parameters determined by single-shot determination of the spatial distribution of the polarization on the vector beam are normalized to obtain the geometric phase. The Stokes parameters are distributed on the Poincaré sphere (Fig. 5a), an s1-s2 plane projection (Fig. 5b), an s1-s3 plane projection (Fig. 5c), and an s2-s3 plane projection (Fig. 5d). Evaluating the Poincaré representation, we confirm that the changes of the polarization of the vector beam corresponded to the polarization distribution on the Poincaré sphere shown in Fig. 1d (for more information, see Supplementary Note 4).

### Geometric phase of the vector beam with radial polarization

To demonstrate the reduction of the vortex phase, we evaluate the geometric phase of the experimentally generated beam. The two-dimensional distribution (Fig. 6a) and the radius dependence (Fig. 6b) of the geometric phase are calculated from the cyclic changes of the polarization on the Poincaré sphere (see the subsection "Determination of the geometric phase" in the Methods section, and Supplementary Note 5). The average and maximum of the geometric phase were −0.01 rad and −0.23 rad, respectively. These values of the geometric phase obtained in our result are much smaller than ±6.28 rad, which is that of the radially polarized optical vortex. As a result, the phase of the vector beam was uniformly distributed, because 96% of the vortex phase of the vector beam was eliminated. To the best of our knowledge, this work provides the first demonstration of the tailoring of the geometric phase by a pure vector beam. This approach, however, is limited by the pulse width of the beam because of the influence of group velocity dispersion.

## Discussion

We showed a pure vector beam created by tailoring geometric phase at 0.36 THz. As shown in Fig. 2, the non-axially-symmetric half-wave plate also exhibits calculated achromatic properties of phase retardance. Such achromatic phase retardance results from the frequency dependence of the refractive index, as shown in Fig. 2b. Our experimental results, covering not only the polarization distribution but also the geometric phase, agreed well with the calculated results. In our previous study, we also showed achromatic properties of vectorial vortex beams using an axially-symmetric wave plate at the frequencies of 0.16 and 0.36 THz [29]. For this reason, our proposed device makes it possible to generate achromatic pure vector beams in THz regions.

The range of our proposed approach extends to all spectral regions from ultraviolet to terahertz. Table 1 shows the possible materials for use in the non-axially-symmetric half-wave plate for the different spectral regions. SiO2, BK7, ZnSe, PTFE, HDPE, and Tsurupica enable fabrication of the segmented rhomb in each spectral region. However, for the segmented non-axially-symmetric half-wave plate shown in Fig. 1a, practical wave plate materials are limited for the ultraviolet, visible, and infrared regions because of issues associated with the complex assembly and high processing accuracy. To overcome this drawback, we propose constructing a continuous non-axially-symmetric half-wave plate body as shown in Fig. 7a. In a previous study, a continuous body was realized from form birefringence fabricated by a three-dimensional printer [26]. However, that device is limited by the frequency of the THz light source because of the frequency dependence of form birefringence. On the other hand, our proposed continuous body provides the generation of achromatic pure vector beams by internal Fresnel reflection. Our continuous non-axially-symmetric half-wave plate body consists of two rotational bodies "A" and "B" (Fig. 7b). These optical elements are formed as the volume generated by rotating a Fresnel rhomb through an angle of 90° along the axes zA and zB, respectively. After passing through the continuous non-axially-symmetric half-wave plate, four internal reflections generate the vector beam. We numerically calculate the spatial distribution of polarization [Ex(θ), Ey(θ)]T of the vector beam via Jones calculus. The Jones vectors of Ex(θ) (red solid line) and Ey(θ) (blue dashed line) are shown in Fig. 7c. The spatial distribution of polarization on the vector beam is radially polarized because the typical Jones vectors were represented by [Ex(0°), Ey(0°)]T = (1, 0)T, [Ex(90°), Ey(90°)]T = (0, 1)T, [Ex(180°), Ey(180°)]T = (−1, 0)T, and [Ex(270°), Ey(270°)]T = (0, −1)T. In addition to the spatial distribution of the polarization, the phase of the vector beam is uniformly distributed, as shown in Fig. 7d. This confirms that it should be possible to generate a pure vector beam with radial polarization via the proposed continuous non-axially-symmetric half-wave plate of Fig. 7a. In addition to radial polarization, an azimuthally polarized beam, which is another pure vector beam, can also be generated by changing the polarization angle of the incident linear polarization to 0°. The continuous body of this non-axially-symmetric half-wave plate is suitable for use in the super-broadband spectral region from ultraviolet to THz, as shown in Table 1. We expect to be able to create the continuous body because three-dimensional printing of transparent fused silica glass enables the construction of complex shapes in optical elements [34].

In conclusion, we have proposed and demonstrated a pure vector beam without vortex phase by tailoring the geometric phase of the beam. This paper reports the first demonstration of a pure vector beam with radial polarization achieved by the concept of vortex phase reduction. We also proposed a continuous body extending the segmented non-axially-symmetric half-wave plate. Our technique enables the vector beam to be manipulated in singular optics. Our proposed concept can also be used in conjunction with generic technologies in other fields, such as super-resolution imaging, material science, multiplexing quantum information science, high-energy physics, and even astronomy.

## Methods

### Internal Fresnel reflections

Internal Fresnel reflections are an example of fundamental optical phenomena that result in a phase retardance of
$$\varphi = 4\tan^{-1}\left\{ \sqrt{n^{2}(\lambda)\sin^{2}\beta - 1} \,/\, n(\lambda)\sin\beta\tan\beta \right\}$$
between orthogonal polarizations, where n(λ) and β denote the refractive index of the material and the slope angle of a rhomb, respectively. Under the proposed conditions, β is a constant, while n(λ) depends on the wavelength λ. An output beam acquires the phase retardance φ through total internal reflections [35].

### Non-axially-symmetric half-wave plate

A non-axially-symmetric half-wave plate, which possesses a phase retardance of φ = π, is required for the generation of a pure vector beam with radial polarization and a uniform phase. The fast axis of the non-axially-symmetric half-wave plate is rotated with azimuth θ/2 as a function of the angle θ. In this experiment, we fabricated the non-axially-symmetric half-wave plate from four Fresnel rhomb segments (see the subsection "Jones calculus" in the Methods section and Supplementary Note 1).

### Jones calculus

When a uniform beam with linear polarization is incident onto a non-axially-symmetric half-wave plate acting as a radial polarization converter (see Fig. 8a), the polarization of the vector beam can be calculated via Jones calculus [36-38] to evaluate the spatial distribution of polarization and its phase in the vector beam. The Jones vector of the linear polarization at π/2 is described by Ein = (0, 1)T, as shown in Fig. 8b.
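The action of the plate can be verified with a two-line Jones computation: a half-wave plate whose fast axis sits at θ/2 maps a fixed linear input to a field rotating as θ. The sketch below is illustrative Python using the standard half-wave-plate Jones matrix and a horizontal input; the paper's convention of a vertical input (Ein = (0, 1)T, Fig. 8b) differs only by a fixed offset of the fast-axis origin.

```python
import numpy as np

def hwp(alpha):
    """Jones matrix of a half-wave plate with fast axis at angle alpha."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[c, s], [s, -c]])

# Fast axis rotating as theta/2 around the beam axis. Acting on horizontal
# linear polarization, the output field points along the radial direction
# (cos theta, sin theta) at every azimuth theta.
E_in = np.array([1.0, 0.0])
for t_deg in (0, 90, 180, 270):
    t = np.radians(t_deg)
    E_out = hwp(t / 2) @ E_in
    assert np.allclose(E_out, [np.cos(t), np.sin(t)])
```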
By calculation with the Jones vector and the Jones matrices, the polarization of the vector beam, which is converted by the non-axially-symmetric half-wave plate fabricated in four parts, can be obtained as in Fig. 8c (for more information, refer to Supplementary Note 1).

### Experimental setup

Figure 9a and b show a schematic drawing and a picture of the experimental setup for polarization conversion and polarization analysis of the vector beams. The polarization conversion for the vector beams uses a THz light source, a THz lens (lens diameter d = 30 mm and focal length f = 100 mm), and the non-axially-symmetric half-wave plate with four segments. Figure 9c shows a picture of a non-axially-symmetric half-wave plate. The polarization analysis for the vector beams consists of an axially-symmetric wave plate, a wire-grid polarizer, a THz lens, and a pyroelectric array camera (Pyrocam IV Beam Profiling Camera, Ophir Optronics Solutions Ltd.; elements: 320 × 320, pixel size: 80 µm). See Supplementary Note 3.

### Single-shot determination of polarization on vector beams

The spatial distributions of the Stokes parameters s0(θ), s1(θ), s2(θ), and s3(θ) along the angle θ on the converted input vector beam are modulated by the axially-symmetric wave plate. After passing through a wire-grid polarizer, the intensity distribution of the vector beam, which varies as a function of the angle θ, can be expressed as

$$I(\theta)={s}_{{\rm{C}}0}(\theta)=\frac{1}{2}{s}_{{\rm{A}}0}(\theta)+\frac{1}{4}{s}_{{\rm{A}}1}(\theta)-\frac{1}{2}{s}_{{\rm{A}}3}(\theta)\sin 2\theta+\frac{1}{4}{s}_{{\rm{A}}1}(\theta)\cos 4\theta+\frac{1}{4}{s}_{{\rm{A}}2}(\theta)\sin 4\theta.$$
(1)

The intensity distribution I(θ) contains the full Stokes parameters from sA0(θ) to sA3(θ) in the angular frequency terms: bias, sin 2θ, cos 4θ, and sin 4θ. To isolate each term in frequency space, we introduce the Fourier transform method, which is commonly used for fringe pattern analysis in optical metrology39,40. Using the Fourier transform method, the Fourier spectrum P(k) is expressed as

$$P(k)={\mathcal{F}}^{-1}[I(\theta)]={C}_{0}(k)+{C}_{2}(k-{k}_{2})+{{C}_{2}}^{\ast}(k+{k}_{2})+{C}_{4}(k-{k}_{4})+{{C}_{4}}^{\ast}(k+{k}_{4}),$$
(2)

where k denotes the angular frequency; $${\mathcal{F}}^{-1}$$ represents the inverse Fourier transform operator; C0(k), C2(k), and C4(k) represent Fourier spectra; and a superscript asterisk "*" represents a complex conjugate. By individually extracting the C0(k), C2(k), and C4(k) components in Fourier space, the spatial distributions of the Stokes parameters sA0(θ) − sA3(θ) are determined as

$${s}_{{\rm{A}}0}(\theta)=\frac{1}{2}(4\{{\mathcal{F}}[{C}_{0}(k)]-{s}_{{\rm{A}}1}(\theta)\}),$$
(3a)
$${s}_{{\rm{A}}1}(\theta)=8\Re\{{\mathcal{F}}[{C}_{4}(k)]\},$$
(3b)
$${s}_{{\rm{A}}2}(\theta)=8\Im\{{\mathcal{F}}[{C}_{4}(k)]\},$$
(3c)
$${s}_{{\rm{A}}3}(\theta)=4\Im\{{\mathcal{F}}[{C}_{2}(k)]\},$$
(3d)

where $${\mathcal{F}}$$, $$\Re$$, and $$\Im$$ are the Fourier transform operator and the operators extracting the real and imaginary parts of their argument, respectively. From the spatial distribution of the Stokes parameters, we determine the normalized Stokes parameters, the ellipticity, and the azimuth angle, respectively38 (for more information, refer to Supplementary Note 4).

### Determination of the geometric phase

To evaluate the phase distribution of the generated vector beam, we introduce the geometric phase using the spatial distribution of the Stokes parameters. The geometric phase was first suggested by Berry and Pancharatnam33,41, and it is related to cyclic changes of polarization on a Poincaré sphere with azimuth order ℓ = 0. Here, we can estimate the geometric phase using the changes of the polarization represented by the Stokes parameters (1, −1, 0, 0)T, (1, cos2θ cos2ε, sin2θ cos2ε, sin2ε)T, and (1, cos2θ cos2ε, sin2θ cos2ε, 0)T, where ε is the ellipticity and θ is the azimuth, respectively.
The surface area Ω of a spherical triangle is expressed as $$\Omega(C)=\iint_{C}\sin\theta\,d\varepsilon\,d\theta,$$ where C is the domain of definition (for more information, refer to Supplementary Note 5).

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Naidoo, D. et al. Controlled generation of higher-order Poincaré sphere beams from a laser. Nat. Photonics 10, 327–332 (2016).
2. Backlund, M. P. et al. Removing orientation-induced localization biases in single-molecule microscopy using a broadband metasurface mask. Nat. Photonics 10, 459–462 (2016).
3. Moh, K. J., Yuan, X.-C., Bu, J., Zhu, S. W. & Gao, B. Z. Surface plasmon resonance imaging of cell-substrate contacts with radially polarized beams. Opt. Express 25, 20734 (2008).
4. Toyoda, K., Miyamoto, K., Aoki, N., Morita, R. & Omatsu, T. Using optical vortex to control the chirality of twisted metal nanostructures. Nano Lett. 12, 3645–3649 (2012).
5. Ambrosio, A., Marrucci, L., Borbone, F., Roviello, A. & Maddalena, P. Light-induced spiral mass transport in azo-polymer films under vortex-beam illumination. Nat. Commun. 3, 989 (2012).
6. Wang, J. et al. Terabit free-space data transmission employing orbital angular momentum multiplexing. Nat. Photonics 6, 488–496 (2012).
7. Nanni, E. A. et al. Terahertz-driven linear electron acceleration. Nat. Commun. 6, 8486 (2015).
8. Singh, K. P. & Kumar, M. Electron acceleration by a radially polarized laser pulse during ionization of low density gases. Phys. Rev. ST Accel. Beams 14, 030401 (2011).
9. Mawet, D. et al. The Vector Vortex Coronagraph: Laboratory Results and First Light at Palomar Observatory. Astrophys. J. 709, 53–57 (2010).
10. Bauer, T., Orlov, S., Peschel, U., Banzer, P. & Leuchs, G. Nanointerferometric amplitude and phase reconstruction of tightly focused vector beams. Nat. Photonics 8, 23–27 (2014).
11. Wang, H., Shi, L., Lukyanchuk, B., Sheppard, C. & Chong, C. T. Creation of a needle of longitudinally polarized light in vacuum using binary optics. Nat. Photonics 2, 501–505 (2008).
12. Zhan, Q. Cylindrical vector beams: from mathematical concepts to applications. Adv. Opt. Photon. 1, 1–57 (2009).
13. Zhan, Q. & Leger, J. R. Focus shaping using cylindrical vector beams. Opt. Express 10, 324–331 (2002).
14. Bauer, T. et al. Observation of optical polarization Möbius strips. Science 347, 964–966 (2015).
15. Li, X., Lan, T.-H., Tien, C.-H. & Gu, M. Three-dimensional orientation-unlimited polarization encryption by a single optically configured vectorial beam. Nat. Commun. 3, 998 (2012).
16. Ahmed, M. A. et al. Multilayer polarizing grating mirror used for the generation of radial polarization in Yb:YAG thin-disk lasers. Opt. Lett. 32, 1824 (2007).
17. Beresna, M., Gecevičius, M., Kazansky, P. G. & Gertus, T. Radially polarized optical vortex converter created by femtosecond laser nanostructuring of glass. Appl. Phys. Lett. 98, 201101 (2011).
18. Wakayama, T. et al. Generation of radially polarized high energy mid-infrared optical vortex by use of a passive axially symmetric ZnSe waveplate. Appl. Phys. Lett. 107, 08112 (2015).
19. Milione, G., Sztul, H. I., Nolan, D. A. & Alfano, R. R. Higher-Order Poincaré Sphere, Stokes Parameters, and the Angular Momentum of Light. Phys. Rev. Lett. 107, 053601 (2011).
20. Niv, A., Biener, G., Kleiner, V. & Hasman, E. Manipulation of the Pancharatnam phase in vectorial vortices. Opt. Express 14, 4208 (2006).
21. Khonina, S. N. & Golub, I. Time behavior of focused vector beams. JOSA A 33, 1948 (2016).
22. Wakayama, T. et al. Generation of achromatic, uniform-phase, radially polarized beams. Opt. Express 22, 3306–3315 (2014).
23. Winnerl, S., Zimmermann, B., Peter, F., Schneider, H. & Helm, M. Terahertz Bessel-Gauss beams of radial and azimuthal polarization from microstructured photoconductive antennas. Opt. Express 17, 1571–1576 (2009).
24. Radwell, N., Hawley, R. D., Gotte, J. B. & Franke-Arnold, S. Achromatic vector vortex beams from a glass cone. Nat. Commun. 7, 10564 (2015).
25. Imai, R. et al. Terahertz vector beam generation using segmented nonlinear optical crystals with threefold rotational symmetry. Opt. Express 20, 21896–21904 (2012).
26. Hernandez-Serrano, A. I., Castro-Camus, E. & Lopez-Mago, D. q-plate for the generation of terahertz cylindrical vector beams fabricated by 3D printing. J. Infrared Milli. Terahz. Waves 38, 938–944 (2017).
27. Kruk, S. et al. Broadband highly efficient dielectric metadevices for polarization control. APL Photonics 1, 030801 (2016).
28. Wakayama, T., Komaki, K., Otani, Y. & Yoshizawa, T. Achromatic axially symmetric waveplate. Opt. Express 20, 29260–29265 (2012).
29. Wakayama, T. et al. Determination of the polarization states of an arbitrary polarized terahertz beam: Vectorial vortex analysis. Sci. Rep. 5, 9416 (2015).
30. Soskin, M., Boriskina, S. V., Chong, Y., Dennis, M. R. & Desyatnikov, A. Singular optics and topological photonics. J. Opt. 19, 010401 (2017).
31. Parigi, V. et al. Storage and retrieval of vector beams of light in a multiple-degree-of-freedom quantum memory. Nat. Commun. 6, 7706 (2015).
32. Nagatsuma, T., Ducournau, G. & Renaud, C. C. Advances in terahertz communications accelerated by photonics. Nat. Photonics 10, 371–379 (2016).
33. Berry, M. V. The adiabatic phase and Pancharatnam's phase for polarized light. J. Mod. Opt. 34, 1401–1407 (1987).
34. Kotz, F. et al. Three-dimensional printing of transparent fused silica glass. Nature 544, 337–339 (2017).
35. Fresnel, A. Mémoire sur les modifications que la réflexion imprime à la lumière polarisée [Memoir on the modifications that reflection impresses on polarized light]. Mémoires de l'Académie des sciences de l'Institut de France 11, 373–434 (1832).
36. Goldstein, D. H. Polarized Light, Third Edition (CRC Press, Boca Raton, 2010).
37. Azzam, R. M. A. & Bashara, N. M. Analysis of systematic errors in rotating-analyzer ellipsometers. J. Opt. Soc. Am. 64, 1459–1469 (1974).
38. Chipman, R. A. Polarimetry. Chapter 22 in Handbook of Optics II (McGraw-Hill, New York, 1995).
39. Takeda, M., Ina, H. & Kobayashi, S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72, 156–161 (1982).
40. Oka, K. & Kato, T. Spectroscopic polarimetry with a channeled spectrum. Opt. Lett. 24, 1475–1477 (1999).
41. Pancharatnam, S. Generalized Theory of Interference, and Its Applications. Part I. Coherent Pencils. Proc. Indian Acad. Sci. A 44, 247–262 (1956).

## Acknowledgements

The authors thank Atsushi Sasanuma and Goki Arai for assisting with the experiments and Prof. Nathan Hagen for English proofreading of this paper. Part of this work was performed under the support of MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan) and JSPS KAKENHI (Grants-in-Aid for Scientific Research, Japan): Grant-in-Aid for Scientific Research (C) Number 26420205 and Grant-in-Aid for Challenging Exploratory Research Number 16K14125.

## Author information

### Affiliations

1. School of Clinical Engineering, Faculty of Health and Medical Care, Saitama Medical University, 1397-1, Yamane, Hidaka, Saitama, 350-1241, Japan: Toshitaka Wakayama
2. Department of Electrical and Electronic Engineering, Faculty of Engineering, Utsunomiya University, 7-1-2, Yoto, Utsunomiya, Tochigi, 321-8585, Japan: Takeshi Higashiguchi
3. Center for Optical Research & Education (CORE), Utsunomiya University, 7-1-2, Yoto, Utsunomiya, Tochigi, 321-8585, Japan: Takeshi Higashiguchi & Yukitoshi Otani
4. Waseda Institute for Advanced Study, Waseda University, 3-4-1, Okubo, Shinjuku, Tokyo, 169-8555, Japan: Kazuyuki Sakaue
5. Research Institute for Science and Engineering, Waseda University, 3-4-1, Okubo, Shinjuku, Tokyo, 169-8555, Japan: Kazuyuki Sakaue & Masakazu Washio
6. Department of Optical Engineering, Faculty of Engineering, Utsunomiya University, 7-1-2, Yoto, Utsunomiya, Tochigi, 321-8585, Japan: Yukitoshi Otani

### Contributions

T.W. and T.H. conceived the concept of reducing the vortex phase in the vector beams. T.W. and T.H. designed the experiments. T.W. and T.H. designed, fabricated and characterized the non-axially-symmetric half-wave plate. T.W. and T.H. performed the experiments and analysed the data. T.W., T.H., K.S., M.W. and Y.O. discussed the results. T.H. supervised the project. T.W., T.H., K.S., M.W. and Y.O. contributed to writing the paper.

### Competing Interests

The authors declare no competing interests.

### Corresponding authors

Correspondence to Toshitaka Wakayama or Takeshi Higashiguchi.
\section{Introduction}\label{S0}
As far back as 1935, Bartlett et al \cite{B35} showed that
no ascending power series in the interparticle coordinates $r_1,~r_2$ and $r_{12}$ can be a formal solution of the Schr\"{o}dinger equation
for the $~^1S$-state of helium. Later, Bartlett \cite{B37} argued that the expansion of the helium ground state must include terms containing $\ln (r_1^2+r_2^2)$. Finally, Fock \cite{FOCK} proposed the expansion
\begin{equation}\label{I1}
\Psi(r,\alpha,\theta)=\sum_{k=0}^{\infty}r^k\sum_{p=0}^{[k/2]}\psi_{k,p}(\alpha,\theta)(\ln r)^p,
\end{equation}
where $r=\sqrt{r_1^2+r_2^2}$ denotes the hyperspherical radius, and the hyperspherical angles $\alpha$ and $\theta$ are defined as
\begin{equation}\label{I2}
\alpha=2\arctan \left(r_2/r_1\right),~~~\theta=\arccos\left[(r_1^2+r_2^2-r_{12}^2)/(2r_1r_2)\right].
\end{equation}
The convergence of expansion (\ref{I1}) for the ground state of helium was rigorously studied in Refs.\cite{MORG,LER}.
The method proposed by Fock \cite{FOCK} for investigating the $~^1S$ helium wave functions was generalized \cite{ES1, DEM} for arbitrary systems of charged particles and for states of any symmetry.
The Fock expansion was somewhat generalized \cite{PLUV} to be applicable to any $S$ state, and its first two terms were determined.
The most comprehensive investigation on the methods of derivation and calculation of the angular Fock coefficients (AFC)
$\psi_{k,p}(\alpha,\theta)$
was presented in the works of Abbott, Gottschalk and Maslen \cite{AM1,GAM2,GM3}.
In Ref.\cite{LEZ1}, the methods of calculation of the AFC were developed further. A separation of the AFC into components associated with definite powers of the nucleus charge $Z$ was introduced.
Some of the AFCs, or their components, that had not been calculated previously were derived in Ref.\cite{LEZ1}.
The present paper improves and develops methods and extends the results obtained in the previous work \cite{LEZ1}.
We derive the exact expressions for the edge components of the most complicated AFC, $\psi_{k,0}(\alpha,\theta)$.
We calculate the subcomponent $\psi_{3,0}^{(2e)}$ that was missing in \cite{LEZ1}.
We show how to express some of the complicated subcomponents in terms of elementary functions.
Using the operator \textbf{FindSequenceFunction} of the Wolfram \emph{Mathematica}, we obtain simple explicit representations for some complex mathematical objects under consideration.
To solve the problems mentioned above, we introduce some mathematical concepts that can serve as a basis for further consideration.
It has been proven that the AFCs satisfy (see, e.g., \cite{AM1} or \cite{LEZ1}) the Fock recurrence relation (FRR)
\begin{subequations}\label{I4}
\begin{align}
\left[ \Lambda^2-k(k+4)\right]\psi_{k,p}=h_{k,p},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\label{I4a}\\
h_{k,p}=2(k+2)(p+1)\psi_{k,p+1}+(p+1)(p+2)\psi_{k,p+2}-2 V \psi_{k-1,p}+2 E \psi_{k-2,p},\label{I4b}
\end{align}
\end{subequations}
where $E$ is the energy and $V=V_0+Z V_1$ is the dimensionless Coulomb interaction for the two-electron atom/ion.
The electron-electron interaction $V_0$ and the electron-nucleus interaction $V_1$ are defined as follows:
\begin{equation}\label{I5}
V_0=1/\xi,~~~~~~
V_1=-\left[\csc(\alpha/2)+\sec(\alpha/2)\right],
\end{equation}
where the variable
\begin{equation}\label{I3}
\xi=\sqrt{1-\sin \alpha \cos \theta}.
\end{equation}
The hyperspherical angular momentum operator, projected on $S$ states, is:
\begin{equation}\label{I6}
\Lambda^2=-\frac{4}{\sin^2 \alpha}\left(\frac{\partial}{\partial\alpha}\sin^2\alpha\frac{\partial}{\partial\alpha}+\frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\sin\theta \frac{\partial}{\partial\theta}\right),
\end{equation}
and its eigenfunctions are the hyperspherical harmonics (HH)
\begin{equation}\label{I7}
Y_{kl}(\alpha,\theta)=N_{kl}\sin^l\alpha~C_{k/2-l}^{(l+1)}(\cos\alpha)P_l(\cos\theta),~~~~~~~~~~k=0,2,4,...; l=0,1,2,...,k/2
\end{equation}
where the $C_n^\nu(x)$ and $P_l(z)$ are Gegenbauer and Legendre polynomials, respectively. The normalization constant is
\begin{equation}\label{I8}
N_{kl}=2^ll!\sqrt{\frac{(2l+1)(k+2)(k/2-l)!}{2\pi^3(k/2+l+1)!}},
\end{equation}
so that
\begin{equation}\label{I9}
\int Y_{kl}(\alpha,\theta)Y_{k'l'}(\alpha,\theta)d\Omega=\delta_{kk'}\delta_{ll'},
\end{equation}
where $\delta_{mn}$ is the Kronecker delta, and the appropriate volume element is
\begin{equation}\label{I10}
d\Omega=\pi^2 \sin^2\alpha~d\alpha \sin\theta d\theta.~~~~~~~~~~~~~~~~~\alpha\in [0,\pi],~\theta\in [0,\pi]
\end{equation}
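Although not part of the derivation, the eigenvalue property $\Lambda^2 Y_{kl}=k(k+4)Y_{kl}$ implied by Eqs.(\ref{I6})-(\ref{I8}) is easy to verify by computer algebra. The following Python/SymPy sketch (an illustrative check only; the normalization constant $N_{kl}$ is irrelevant here and omitted) confirms it for the lowest even orders:

```python
import sympy as sp

a, t = sp.symbols('alpha theta')

def Lam2(f):
    # Hyperspherical angular momentum operator projected on S states, Eq. (I6)
    return -4/sp.sin(a)**2*(sp.diff(sp.sin(a)**2*sp.diff(f, a), a)
                            + sp.diff(sp.sin(t)*sp.diff(f, t), t)/sp.sin(t))

def Y(k, l):
    # Unnormalized hyperspherical harmonic, Eq. (I7)
    return sp.sin(a)**l*sp.gegenbauer(k//2 - l, l + 1, sp.cos(a))*sp.legendre(l, sp.cos(t))

# The residual Lambda^2 Y_{kl} - k(k+4) Y_{kl} must vanish identically
for k in (0, 2, 4):
    for l in range(k//2 + 1):
        assert sp.simplify(Lam2(Y(k, l)) - k*(k + 4)*Y(k, l)) == 0
```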
It was shown \cite{LEZ1} that any AFC, $\psi_{k,p}$ can be separated into the independent parts (components)
\begin{equation}\label{I11}
\psi_{k,p}(\alpha,\theta)=\sum_{j=p}^{k-p} \psi_{k,p}^{(j)}(\alpha,\theta) Z^j
\end{equation}
associated with a definite power of $Z$, according to separation of the rhs (\ref{I4b})
\begin{equation}\label{I12}
h_{k,p}(\alpha,\theta)=\sum_{j=p}^{k-p} h_{k,p}^{(j)}(\alpha,\theta) Z^j
\end{equation}
of the FRR.
Accordingly, each of the FRR (\ref{I4}) can be separated into the individual equations (IFRR) for each component:
\begin{equation}\label{I13}
\left[ \Lambda^2-k(k+4)\right]\psi_{k,p}^{(j)}(\alpha,\theta)=h_{k,p}^{(j)}(\alpha,\theta).
\end{equation}
\section{The edge components of the most complicated AFC}\label{S1}
It is well known that the calculation of the logarithmless AFCs $\psi_{k,0}(\alpha,\theta)$ with $k=1,2,3,...$ is the most complicated one.
However, it is easy to show that the edge components $\psi_{k,0}^{(0)}$ and $\psi_{k,0}^{(k)}$ of these AFCs can be calculated straightforwardly.
Indeed, for $p=0$ Eq.(\ref{I4b}) reduces to the form
\begin{equation}\label{I14}
h_{k,0}=2(k+2)\psi_{k,1}+2\psi_{k,2}-2V\psi_{k-1,0}+2E\psi_{k-2,0}.
\end{equation}
Substitution of Eqs.(\ref{I11})-(\ref{I12}) for the $Z$-power separation into Eq.(\ref{I14}) yields, for the right-hand sides of the IFRR (\ref{I13}) with the mentioned edge components,
\begin{equation}\label{I15}
h_{k,0}^{(0)}=-2 V_{0} \psi_{k-1,0}^{(0)} + 2E\psi_{k-2,0}^{(0)},
\end{equation}
\begin{equation}\label{I16}
h_{k,0}^{(k)}=-2V_1 \psi_{k-1,0}^{(k-1)},~~~~~~~~~~~~~~~
\end{equation}
where the angular dependent potentials $V_0$ and $V_1$ are defined by Eq.(\ref{I5}).
It is seen that the rhs (\ref{I15}) and (\ref{I16}) of order $k$ are expressed through the corresponding edge components of orders $k-1$ and $k-2$ (if the latter exists).
Moreover, taking into account that $\psi_{1,0}^{(1)}$ is a function of $\alpha$ (see Eq.(49) of Ref.\cite{LEZ1}), and $\psi_{1,0}^{(0)}$ is a function of $\xi$ (see Eq.(36) of Ref.\cite{LEZ1}), one can conclude that any component $\psi_{k,0}^{(k)}$ must be a function of the single angle $\alpha$, whereas $\psi_{k,0}^{(0)}$ must be a function of the single variable $\xi$ defined by Eq.(\ref{I3}). Representation (\ref{I5}) for the potentials was, of course, used in drawing these conclusions.
There is a specific difference between the derivations of the edge components $\psi_{k,0}^{(0)}$, $\psi_{k,0}^{(k)}$ for even and odd $k$. Therefore, we shall present such calculations in detail for $k=4$ and $k=5$, whereas for $k=6,7,8$ the corresponding results will be presented without derivations.
The general IFRR (\ref{I13}) for $k=4,~p=0,~j=0$ reduces to the form
\begin{equation}\label{1}
\left( \Lambda^2-32\right)\psi_{4,0}^{(0)}=h_{4,0}^{(0)},
\end{equation}
where
\begin{equation}\label{2}
h_{4,0}^{(0)}\equiv -2V_0 \psi_{3,0}^{(0)}+2E \psi_{2,0}^{(0)}=
-\frac{1}{36}\left[\xi^2(E-2)+3(1-2E)^2\right]
\end{equation}
according to relation (\ref{I15}).
The components $\psi_{3,0}^{(0)}$ and $ \psi_{2,0}^{(0)}$ are presented in Table I and Eq.(55) of Ref.\cite{LEZ1}, respectively.
It is seen that the rhs (\ref{2}) of the IFRR (\ref{1}) is a function of a single variable $\xi$.
It was shown in \cite{LEZ1} that in this case the solution of the IFRR (\ref{I13}) with the rhs $h_{k,p}^{(j)}(\alpha,\theta)\equiv \textsl{h}(\xi)$ reduces to solution of the differential equation
\begin{equation}\label{3}
\left(\xi^2-2\right)\Phi_k''(\xi)+\frac{5\xi^2-4}{\xi}\Phi_k'(\xi)-k(k+4)\Phi_k(\xi)=\textsl{h}(\xi),
\end{equation}
where $\Phi_k(\xi)\equiv \psi_{k,p}^{(j)}(\alpha,\theta)$. The particular solution $\Phi_k^{(p)}$ of Eq.(\ref{3}) can be found by the method of variation of parameters in the form
\begin{equation}\label{4}
\Phi_k^{(p)}(\xi)=\frac{1}{(k+2)\sqrt{2}}\left[
\textsl{u}_k(\xi)\int\textsl{v}_k(\xi)f(\xi)d\xi-
\textsl{v}_k(\xi)\int\textsl{u}_k(\xi)f(\xi)d\xi\right],
\end{equation}
where $f(\xi)=\textsl{h}(\xi)\xi^2\sqrt{2-\xi^2}$. The linearly independent solutions of the homogeneous equation associated with Eq.(\ref{3}) are defined by
\begin{equation}\label{5}
\textsl{u}_k(\xi)=\frac{P_{k+3/2}^{1/2}\left(\xi/\sqrt{2}\right)}{\xi\sqrt[4]{2-\xi^2}},~~~~~~~~~~~~
\textsl{v}_k(\xi) =\frac{Q_{k+3/2}^{1/2}\left(\xi/\sqrt{2}\right)}{\xi\sqrt[4]{2-\xi^2}},
\end{equation}
where $P_\nu^\mu(x)$ and $Q_\nu^\mu(x)$ are the associated Legendre functions of the first and second kind, respectively.
The general solution of the inhomogeneous equation (\ref{3}) is certainly of the form
\begin{equation}\label{6}
\Phi_k^{(p)}(\xi)+c_{1,k} \textsl{u}_k(\xi)+c_{2,k} \textsl{v}_k(\xi),
\end{equation}
where the values of the coefficients $c_{1,k}$ and $c_{2,k}$ are defined by the requirements of finiteness and "purity" of the final physical solution.
The first requirement means that any component $\psi_{k,p}^{(j)}(\alpha,\theta)$ of the AFC must be finite at each point of the two-dimensional angular space described by the hyperspherical angles $\alpha\in[0,\pi]$ and $\theta\in[0,\pi]$. The second requirement applies only to even values of $k$, and concerns obtaining a single-valued solution containing no admixture of the HH $Y_{kl}(\alpha,\theta)$.
Turning to the component $\psi_{4,0}^{(0)}$, and simplifying the solutions (\ref{5}) for $k=4$, one obtains
\begin{equation}\label{7}
\textsl{u}_4(\xi)=\frac{2^{3/4}\left[\xi^2(3-2\xi^2)^2-1\right]}{\xi\sqrt{\pi(2-\xi^2)}},~~~~~
\textsl{v}_4(\xi)=-\frac{\sqrt{\pi}}{2^{1/4}}\left(4\xi^4-8\xi^2+3\right).
\end{equation}
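The statement that the functions (\ref{7}) solve the homogeneous counterpart of Eq.(\ref{3}) for $k=4$ can be cross-checked with SymPy. The sketch below (illustrative only) verifies $\textsl{v}_4$ exactly and the singular solution $\textsl{u}_4$ numerically:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
k = 4

# Eq. (7): simplified solutions of the homogeneous counterpart of Eq. (3)
u4 = 2**sp.Rational(3, 4)*(xi**2*(3 - 2*xi**2)**2 - 1)/(xi*sp.sqrt(sp.pi*(2 - xi**2)))
v4 = -sp.sqrt(sp.pi)/2**sp.Rational(1, 4)*(4*xi**4 - 8*xi**2 + 3)

def L(f):
    # Operator on the lhs of Eq. (3)
    return (xi**2 - 2)*sp.diff(f, xi, 2) + (5*xi**2 - 4)/xi*sp.diff(f, xi) - k*(k + 4)*f

assert sp.simplify(L(v4)) == 0                      # polynomial solution: exact check
for val in (sp.Rational(3, 10), sp.Rational(9, 10), sp.Rational(6, 5)):
    assert abs(L(u4).subs(xi, val).evalf()) < 1e-9  # numeric check on 0 < xi < sqrt(2)
```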
Substitution of the representations (\ref{2}) and (\ref{7}) into (\ref{4}) yields
\begin{equation}\label{8}
\Phi_4^{(p)}(\xi)=\frac{\xi^2}{1440}\left[10(2E-1)^2-\xi^2(20 E^2-21 E+7)\right].
\end{equation}
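The particular solution (\ref{8}) can be verified by direct substitution into Eq.(\ref{3}) with the rhs (\ref{2}); a minimal SymPy check:

```python
import sympy as sp

xi, E = sp.symbols('xi E')
k = 4

h = -(xi**2*(E - 2) + 3*(1 - 2*E)**2)/36                          # rhs, Eq. (2)
Phi4 = xi**2*(10*(2*E - 1)**2 - xi**2*(20*E**2 - 21*E + 7))/1440  # Eq. (8)

lhs = ((xi**2 - 2)*sp.diff(Phi4, xi, 2)
       + (5*xi**2 - 4)/xi*sp.diff(Phi4, xi) - k*(k + 4)*Phi4)
assert sp.expand(lhs - h) == 0   # Eq. (3) is satisfied identically in xi and E
```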
It is seen that the particular solution $\Phi_4^{(p)}$, as well as $\textsl{v}_4(\xi)$, are regular over the relevant angular space, whereas
$\textsl{u}_4(\xi)$ is singular at the points $\xi=0~(\alpha=\pi/2,\theta=0)$ and $\xi=\sqrt{2}~(\alpha=\pi/2,\theta=\pi)$.
Hence, first of all, one should set $c_{1,4}=0$ in Eq.(\ref{6}) in order to comply with the finiteness condition.
It is clear that the requirement of "purity" reduces to the orthogonality condition
\begin{equation}\label{9}
\int\psi_{4,0}^{(0)}(\alpha,\theta)Y_{4l}(\alpha,\theta)d\Omega=0.
\end{equation}
Given that $\psi_{4,0}^{(0)}=\Phi_4^{(p)}(\xi)+c_{2,4}\textsl{v}_4(\xi)$, one obtains for the coefficient
\begin{equation}\label{10}
c_{2,4}=-\int\Phi_4^{(p)}(\xi)Y_{40}(\alpha,\theta)d\Omega\left(\int\textsl{v}_4(\xi)Y_{40}(\alpha,\theta)d\Omega\right)^{-1}=
\frac{E(21-20E)-7}{2880\sqrt{\pi}~2^{3/4}}.
\end{equation}
Whence, the final result for the "pure" component is
\begin{equation}\label{10a}
\psi_{4,0}^{(0)}=\frac{60E^2-63E+21+8\xi^2(E-2)}{5760}.
\end{equation}
Note that in order to derive Eq.(\ref{10}) we put $l=0$ in Eq.(\ref{9}). However, putting $l=2$ one obtains the same result, whereas for $l=1$ one obtains an identity.
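The orthogonality condition (\ref{9}) for the final component (\ref{10a}) can also be confirmed by direct integration. The following SymPy sketch (illustrative only; $N_{40}=\pi^{-3/2}$ follows from Eq.(\ref{I8})) evaluates the integral for $l=0$:

```python
import sympy as sp

a, t, E = sp.symbols('alpha theta E')

xi2 = 1 - sp.sin(a)*sp.cos(t)                              # xi^2, Eq. (I3)
psi = (60*E**2 - 63*E + 21 + 8*xi2*(E - 2))/5760           # Eq. (10a)
# Y_40 from Eqs. (I7)-(I8): N_40 = pi^(-3/2), C_2^(1)(cos a), P_0 = 1
Y40 = sp.pi**sp.Rational(-3, 2)*sp.gegenbauer(2, 1, sp.cos(a))

integral = sp.integrate(psi*Y40*sp.pi**2*sp.sin(a)**2*sp.sin(t),
                        (t, 0, sp.pi), (a, 0, sp.pi))      # volume element, Eq. (I10)
assert sp.simplify(integral) == 0                          # "purity" condition, Eq. (9)
```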
Next step is deriving the solution of the IFRR (\ref{I13}) for $k=4,~p=0,~j=4$ which becomes
\begin{equation}\label{11}
\left( \Lambda^2-32\right)\psi_{4,0}^{(4)}=h_{4,0}^{(4)}.
\end{equation}
Expression (\ref{I16}) yields for the rhs of Eq.(\ref{11})
\begin{equation}\label{12}
h_{4,0}^{(4)}\equiv -2V_1 \psi_{3,0}^{(3)}=-
\frac{1}{18}(2+5 \sin \alpha)\left[\tan\left(\frac{\alpha}{2}\right)+\cot\left(\frac{\alpha}{2}\right)+2\right],
\end{equation}
where the component $\psi_{3,0}^{(3)}$ is presented in Table I of Ref.\cite{LEZ1}.
Turning to the variable
\begin{equation}\label{13}
\rho=\tan(\alpha/2),
\end{equation}
one obtains
\begin{equation}\label{14}
h(\rho)\equiv h_{4,0}^{(4)}(\alpha,\theta)=-\frac{(1+\rho)^2(1+5\rho+\rho^2)}{9\rho(1+\rho^2)}.
\end{equation}
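The equivalence of the two representations (\ref{12}) and (\ref{14}) under the substitution (\ref{13}) can be spot-checked numerically, e.g. with SymPy:

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
rho = sp.tan(a/2)                                                # Eq. (13)

h_alpha = -(2 + 5*sp.sin(a))*(sp.tan(a/2) + sp.cot(a/2) + 2)/18  # Eq. (12)
h_rho = -(1 + rho)**2*(1 + 5*rho + rho**2)/(9*rho*(1 + rho**2))  # Eq. (14)

# The two forms agree at any interior point 0 < alpha < pi
for v in (sp.Rational(2, 5), 1, sp.Rational(5, 2)):
    assert abs((h_alpha - h_rho).subs(a, v).evalf()) < 1e-12
```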
It was shown (see Sec.V of Ref.\cite{LEZ1}) that in the case when the rhs $h_{k,p}^{(j)}$ of the IFRR (\ref{I13}) reduces to a function $\textmd{h}(\alpha)\equiv h(\rho)$ of the single variable $\alpha$ (or $\rho$), the solution of Eq.(\ref{I13}) is a function $g(\rho)\equiv \psi_{k,p}^{(j)}(\alpha)$ satisfying the differential equation
\begin{equation}\label{15}
\left(1+\rho^2\right)^2g''(\rho)+2\rho^{-1}\left(1+\rho^2\right)g'(\rho)+k(k+4)g(\rho)=-h(\rho).
\end{equation}
Method of variation of parameters enables us to obtain the particular solution of Eq.(\ref{15}) in the form
\begin{equation}\label{16}
g(\rho)=u_{k}(\rho)\int \frac{v_{k}(\rho)h(\rho)\rho^2}{(\rho^2+1)^3}d\rho-
v_{k}(\rho)\int \frac{u_{k}(\rho)h(\rho)\rho^2}{(\rho^2+1)^3}d\rho,
\end{equation}
where the linearly independent solutions of the homogeneous equation associated with Eq.(\ref{15}) are
\begin{equation}\label{17}
u_{k}(\rho)=\frac{(1+\rho^2)^{k/2+2}}{\rho}~_2F_1\left(\frac{k+3}{2},\frac{k}{2}+1,\frac{1}{2};-\rho^2\right),
\end{equation}
\begin{equation}\label{18}
v_{k}(\rho)=(1+\rho^2)^{k/2+2}~_2F_1\left(\frac{k+3}{2},\frac{k}{2}+2,\frac{3}{2};-\rho^2\right).
\end{equation}
The Gauss hypergeometric function $~_2F_1$ appears in Eqs.(\ref{17}) and (\ref{18}).
The general solution of the inhomogeneous equation (\ref{15}) is defined as
\begin{equation}\label{18a}
g(\rho)+b_{1,k}u_k(\rho)+b_{2,k}v_k(\rho),
\end{equation}
where the coefficients $b_{1,k}$ and $b_{2,k}$ can be determined by the requirements of finiteness and "purity",
as explained earlier.
Turning to the case under consideration, $k=4$, one obtains for the independent solutions of the homogeneous equation:
\begin{equation}\label{19}
u_{4}(\rho)=\frac{(1+\rho^2)^4}{\rho}~_2F_1\left(\frac{7}{2},3,\frac{1}{2};-\rho^2\right)=
\frac{(1-\rho^2)(1-4\rho+\rho^2)(1+4\rho+\rho^2)}{\rho(1+\rho^2)^2},
\end{equation}
\begin{equation}\label{20}
v_{4}(\rho)=(1+\rho^2)^4~_2F_1\left(\frac{7}{2},4,\frac{3}{2};-\rho^2\right)=
\frac{(\rho^2-3)(3\rho^2-1)}{3(1+\rho^2)^2}.~~~~~~~~~~~~~~~~~~~~~~~
\end{equation}
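The hypergeometric-to-elementary reductions in Eqs.(\ref{19}) and (\ref{20}) are easily spot-checked numerically; a small mpmath sketch (illustrative only):

```python
import mpmath as mp

def u4_hyp(r):
    # Eq. (17) with k = 4
    return (1 + r**2)**4/r*mp.hyp2f1(3.5, 3, 0.5, -r**2)

def u4_closed(r):
    # Eq. (19)
    return (1 - r**2)*(1 - 4*r + r**2)*(1 + 4*r + r**2)/(r*(1 + r**2)**2)

def v4_hyp(r):
    # Eq. (18) with k = 4
    return (1 + r**2)**4*mp.hyp2f1(3.5, 4, 1.5, -r**2)

def v4_closed(r):
    # Eq. (20)
    return (r**2 - 3)*(3*r**2 - 1)/(3*(1 + r**2)**2)

for r in (mp.mpf('0.3'), mp.mpf('0.8'), mp.mpf('1.7')):
    assert abs(u4_hyp(r) - u4_closed(r)) < 1e-10
    assert abs(v4_hyp(r) - v4_closed(r)) < 1e-10
```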
Substitution of the representations (\ref{19}), (\ref{20}) and (\ref{14}) into the rhs of Eq.(\ref{16}) yields for the particular solution
\begin{equation}\label{21}
\psi_{4,0}^{(4p)}=\frac{\rho(3+7\rho+3\rho^2)}{54(1+\rho^2)^2}=
\frac{1}{216}(6+7\sin \alpha) \sin \alpha.
\end{equation}
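Direct substitution of (\ref{21}) into Eq.(\ref{15}) with the rhs (\ref{14}) confirms the particular solution; a minimal SymPy check:

```python
import sympy as sp

r = sp.symbols('rho', positive=True)
k = 4

h = -(1 + r)**2*(1 + 5*r + r**2)/(9*r*(1 + r**2))   # Eq. (14)
g = r*(3 + 7*r + 3*r**2)/(54*(1 + r**2)**2)         # Eq. (21)

# Eq. (15): (1+rho^2)^2 g'' + 2(1+rho^2)/rho g' + k(k+4) g = -h
lhs = (1 + r**2)**2*sp.diff(g, r, 2) + 2*(1 + r**2)/r*sp.diff(g, r) + k*(k + 4)*g
assert sp.simplify(lhs + h) == 0
```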
It is seen that the particular solution $g(\rho)$ and the solution $v_4(\rho)$ of the homogeneous equation are regular over the relevant angular space, whereas
$u_4(\rho)$ is singular at the point $\rho=0~(\alpha=0)$.
Hence, one should set $b_{1,4}=0$ in Eq.(\ref{18a}) in order to comply with the finiteness condition.
The requirement of "purity" can be expressed through the relation
\begin{equation}\label{23}
\int\psi_{4,0}^{(4)}(\alpha,\theta)Y_{4l}(\alpha,\theta)d\Omega=0.
\end{equation}
Given that $\psi_{4,0}^{(4)}=g(\rho)+b_{2,4}v_4(\rho)$, one obtains for the coefficient
\begin{equation}\label{24}
b_{2,4}=-\int g(\rho)Y_{40}(\alpha,\theta)d\Omega\left(\int v_4(\rho)Y_{40}(\alpha,\theta)d\Omega\right)^{-1}=
\frac{7}{288}+\frac{2}{45\pi}.
\end{equation}
Whence, the final result for the "pure" component reads
\begin{equation}\label{25}
\psi_{4,0}^{(4)}=\frac{120 \pi \sin \alpha+128 \cos(2\alpha)+105\pi+64}{4320\pi}.
\end{equation}
Note that for $l=1,2$ the orthogonality condition (\ref{23}) reduces to an identity.
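As an end-to-end check (illustrative only), one can substitute the final component (\ref{25}) into the IFRR (\ref{11}) with the rhs (\ref{12}); for a $\theta$-independent function the operator (\ref{I6}) reduces to its $\alpha$-part:

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)

# Eq. (25): the final "pure" component
psi = (120*sp.pi*sp.sin(a) + 128*sp.cos(2*a) + 105*sp.pi + 64)/(4320*sp.pi)
# Eq. (12): rhs of the IFRR (11)
h = -(2 + 5*sp.sin(a))*(sp.tan(a/2) + sp.cot(a/2) + 2)/18

# For a theta-independent function, Lambda^2 f = -(4/sin^2 a) d/da (sin^2 a f')
Lam2psi = -4/sp.sin(a)**2*sp.diff(sp.sin(a)**2*sp.diff(psi, a), a)
res = Lam2psi - 32*psi - h
for v in (sp.Rational(1, 2), sp.Rational(6, 5), sp.Rational(5, 2)):
    assert abs(res.subs(a, v).evalf()) < 1e-10
```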
Putting $k=5,~p=0,~j=0$ in Eq.(\ref{I13}), one obtains
\begin{equation}\label{43}
\left( \Lambda^2-45\right)\psi_{5,0}^{(0)}=h_{5,0}^{(0)},
\end{equation}
where according to relation (\ref{I15})
\begin{equation}\label{44}
h_{5,0}^{(0)}\equiv -2V_0 \psi_{4,0}^{(0)}+2E \psi_{3,0}^{(0)}=
\frac{63 E-21-60E^2+8(2+29E-60E^2)\xi^2+80E(E-2)\xi^4}{2880\xi}.
\end{equation}
We used Eq.(\ref{10a}) for representation of the component $\psi_{4,0}^{(0)}$.
The independent solutions (\ref{5}) of the homogeneous equation associated with Eq.(\ref{3}), for $k=5$ become
\begin{equation}\label{45}
\textsl{u}_5(\xi)=\frac{2^{1/4}(8\xi^6-28\xi^4+28\xi^2-7)}{\sqrt{\pi(2-\xi^2)}},~~~~~~~
\textsl{v}_5(\xi) =\frac{\sqrt{\pi}(1-12\xi^2+20\xi^4-8\xi^6)}{2^{3/4}\xi}.
\end{equation}
Substitution of the representations (\ref{44}) and (\ref{45}) into the particular solution (\ref{4}) of the inhomogeneous equation (\ref{3}) for $k=5$ yields
\begin{equation}\label{46}
\Phi_5^{(p)}=\frac{\xi}{172800}
\left[45(7-21E+20E^2)-5(113-199E+60E^2)\xi^2+2(113-119E+20E^2)\xi^4\right].
\end{equation}
It is seen that the particular solution (\ref{46}) is regular over the relevant angular space, whereas
$\textsl{u}_5(\xi)$ is singular at the point $\xi=\sqrt{2}$, and $\textsl{v}_5(\xi)$ is singular at the point $\xi=0$. Hence, the physical solution of Eq.(\ref{43}) coincides with the particular solution (\ref{46}), that is
\begin{equation}\label{47}
\psi_{5,0}^{(0)}=\Phi_5^{(p)}.
\end{equation}
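The solution (\ref{46})-(\ref{47}) can be verified by direct substitution into Eq.(\ref{3}) with $k=5$ and the rhs (\ref{44}); a minimal SymPy check:

```python
import sympy as sp

xi, E = sp.symbols('xi E')
k = 5

h = (63*E - 21 - 60*E**2 + 8*(2 + 29*E - 60*E**2)*xi**2
     + 80*E*(E - 2)*xi**4)/(2880*xi)                           # Eq. (44)
Phi5 = xi*(45*(7 - 21*E + 20*E**2) - 5*(113 - 199*E + 60*E**2)*xi**2
           + 2*(113 - 119*E + 20*E**2)*xi**4)/172800           # Eq. (46)

lhs = ((xi**2 - 2)*sp.diff(Phi5, xi, 2)
       + (5*xi**2 - 4)/xi*sp.diff(Phi5, xi) - k*(k + 4)*Phi5)
assert sp.simplify(lhs - h) == 0   # Eq. (3) holds identically in xi and E
```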
The general IFRR (\ref{I13}) for $k=5,~p=0,~j=5$ reduces to the form
\begin{equation}\label{48}
\left( \Lambda^2-45\right)\psi_{5,0}^{(5)}=h_{5,0}^{(5)},
\end{equation}
where according to Eq.(\ref{I16})
\begin{equation}\label{49}
h_{5,0}^{(5)}\equiv -2V_1 \psi_{4,0}^{(4)}=
\frac{(1+\rho)\left[64(3-10\rho^2+3\rho^4)+15\pi(1+\rho^2)(7+16\rho+7\rho^2)\right]}
{2160\pi \rho(1+\rho^2)^{3/2}},
\end{equation}
expressed through the variable $\rho$ defined by Eq.(\ref{13}).
According to Eqs.(\ref{17}), (\ref{18}) the linearly independent solutions of the homogeneous equation associated with Eq.(\ref{15}) for $k=5$ can be simplified to the form
\begin{equation}\label{50}
u_{5}(\rho)=\frac{(1+\rho^2)^{9/2}}{\rho}~_2F_1\left(4,\frac{7}{2},\frac{1}{2};-\rho^2\right)=
\frac{1-7\rho^2(3-5 \rho^2+\rho^4)}{\rho(1+\rho^2)^{5/2}},
\end{equation}
\begin{equation}\label{51}
v_{5}(\rho)=(1+\rho^2)^{9/2}~_2F_1\left(4,\frac{9}{2},\frac{3}{2};-\rho^2\right)=
\frac{7-35\rho^2+21\rho^4-\rho^6}{7(1+\rho^2)^{5/2}}.
\end{equation}
Substituting representations (\ref{49})-(\ref{51}) into the rhs of Eq.(\ref{16}), one obtains
for the particular solution of Eq.(\ref{15}) with $k=5$
\begin{eqnarray}\label{52}
\psi_{5,0}^{(5p)}=-\frac{1}{453600\pi \rho(1+\rho^2)^{5/2}}
\left[64(23+161\rho-168\rho^2-700\rho^3+105\rho^4+315\rho^5)+
\right.
\nonumber~~~\\
\left.
15\pi(43+301\rho-168\rho^2-700\rho^3+805\rho^4+735\rho^5)\right]
~~~.
\end{eqnarray}
The physical solution $\psi_{5,0}^{(5)}$ of the IFRR (\ref{48}) must be finite for all values of $0\leq\alpha\leq\pi$ and hence for $\rho\geq0$.
Therefore, let us consider the power series expansions of the particular solution $\psi_{5,0}^{(5p)}(\rho)$ and the individual solutions $u_5(\rho)$ and $v_5(\rho)$ of the corresponding homogeneous equation about $\rho=0$ and $\rho=\infty$. One obtains:
\begin{subequations}\label{53}
\begin{align}
\psi_{5,0}^{(5p)}(\rho)\underset{\rho\rightarrow 0}{=}
-\frac{1}{\rho}\left(\frac{43}{30240}+\frac{46}{14175\pi}\right)-\left(\frac{43}{4320}+\frac{46}{2025\pi}\right)+
O(\rho),~~~~~~~~~~~~~~~~~\label{53a}\\
\psi_{5,0}^{(5p)}(\rho)\underset{\rho\rightarrow \infty}{=}
-\frac{1}{\rho}\left(\frac{7}{288}+\frac{2}{45\pi}\right)-\frac{1}{\rho^2}\left(\frac{23}{864}+
\frac{2}{135\pi}\right)+O\left(\frac{1}{\rho^3}\right)
.~~~~~~~~~~~~~~~~~~\label{53b}
\end{align}
\end{subequations}
\begin{subequations}\label{54}
\begin{align}
u_5(\rho)\underset{\rho\rightarrow 0}{=}
\frac{1}{\rho}-\frac{47\rho}{2}+O(\rho^3),~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\label{54a}\\
u_5(\rho)\underset{\rho\rightarrow \infty}{=}-7+\frac{105}{2\rho^2}+O\left(\frac{1}{\rho^3}\right)
.~~~~~~~~~~~~~~~~~~~~~~~\label{54b}
\end{align}
\end{subequations}
\begin{subequations}\label{55}
\begin{align}
v_5(\rho)\underset{\rho\rightarrow 0}{=}
1-\frac{15\rho^2}{2}+O(\rho^3),~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\label{55a}\\
v_5(\rho)\underset{\rho\rightarrow \infty}{=}-\frac{\rho}{7}+\frac{47}{14\rho}+O\left(\frac{1}{\rho^3}\right)
.~~~~~~~~~~~~~~~~~~~~~~~\label{55b}
\end{align}
\end{subequations}
It is seen that $v_5(\rho)$ is divergent as $\rho\rightarrow \infty$, whereas $u_5(\rho)$ and $\psi_{5,0}^{(5p)}(\rho)$ are singular at the point $\rho=0$.
Thus, in order to comply with the finiteness condition, one should set $b_{2,5}=0$ and
\begin{equation*}
b_{1,5}=\frac{43}{30240}+\frac{46}{14175\pi}
\end{equation*}
in the general solution
\begin{equation*}
\psi_{5,0}^{(5p)}(\rho)+b_{1,5}u_5(\rho)+b_{2,5}v_5(\rho).
\end{equation*}
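The coefficient $b_{1,5}$ and the expansions (53a), (53b) admit a direct numerical cross-check. The short Python sketch below (my own verification, with arbitrary sample points and tolerances) evaluates Eqs. (50) and (52) in elementary form and confirms that the $1/\rho$ poles cancel, leaving the constant term of Eq. (53a), and that the leading $1/\rho$ coefficient of Eq. (53b) is reproduced:

```python
import math

PI = math.pi
b15 = 43/30240 + 46/(14175*PI)          # coefficient fixed by regularity at rho = 0

def u5(rho):
    """Homogeneous solution, Eq. (50), elementary form."""
    return (1 - 7*rho**2*(3 - 5*rho**2 + rho**4)) / (rho*(1 + rho**2)**2.5)

def psi5p(rho):
    """Particular solution psi_{5,0}^{(5p)}, Eq. (52)."""
    br = (64*(23 + 161*rho - 168*rho**2 - 700*rho**3 + 105*rho**4 + 315*rho**5)
          + 15*PI*(43 + 301*rho - 168*rho**2 - 700*rho**3 + 805*rho**4 + 735*rho**5))
    return -br / (453600*PI*rho*(1 + rho**2)**2.5)

# rho -> 0: the 1/rho poles cancel and the constant term of Eq. (53a) remains
const = -(43/4320 + 46/(2025*PI))
for rho in (1e-5, 1e-6):
    assert abs(psi5p(rho) + b15*u5(rho) - const) < 1e-4

# rho -> infinity: leading 1/rho coefficient of Eq. (53b)
assert abs(psi5p(1e4)*1e4 + 7/288 + 2/(45*PI)) < 1e-4
print("expansions (53a)/(53b) confirmed")
```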
The final result expressed in terms of the hyperspherical angle $\alpha$ is
\begin{equation}\label{56}
\psi_{5,0}^{(5)}=-\frac{\left[\cos\left(\frac{\alpha}{2}\right)+\sin\left(\frac{\alpha}{2}\right)\right]
\left[4(32+45\pi)+3(448+155\pi)\cos(2\alpha)+(704+465\pi)\sin \alpha
\right]}{64800\pi}.
\end{equation}
It is clear that, using the technique described above, one can successively calculate the edge components of any given order $k$.
Here we present such components up to $k=8$. They are
\begin{eqnarray}\label{57}
\psi_{6,0}^{(0)}=\frac{1}{29030400}
\left[3007-11361E+16460E^2-10080E^3-
\right.
\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
(4180E^2-12705E+6215)\xi^2+24(113-119E+20E^2)\xi^4\right],~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{58}
\psi_{7,0}^{(0)}=\frac{\xi}{36578304000}\left[630(3007-11361E+16460E^2-10080E^3)+
\right.
~\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
105(30240E^3-141700E^2+164283E-60341)\xi^2-
\right.
\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
14(60480E^3-491860E^2+873789E-430523)\xi^4+
\right.
\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
4(22680E^3-266950E^2+660219E-430523)\xi^6\right],~~~~~~~~~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{59}
\psi_{8,0}^{(0)}=\frac{1}{21069103104000}\left[5(30481920E^4-68266800E^3+72613544E^2-39544113E+8871475)+
\right.
~\nonumber~\\
\left.
40(3250800E^3-13370360E^2+13273113E-4324891)\xi^2-
\right.
\nonumber~~~~~~~~~~~~~~~~~~~\\
\left.
420(71280E^3-556120E^2+934809E-430523)\xi^4+
\right.
\nonumber~~~~~~~~~~~~~~~~~~~\\
\left.
128(22680E^3-266950E^2+660219E-430523)\xi^6\right],~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{60}
\psi_{6,0}^{(6)}=\frac{1}{21772800\pi}
\left[80(448+255\pi)+144(448+155\pi)\cos(2\alpha)+
\right.
\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
315(64+55\pi)\sin \alpha+7(10816+4335\pi)\sin(3\alpha)\right],~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{61}
\psi_{7,0}^{(7)}=-\frac{\left[\cos\left(\frac{\alpha}{2}\right)+\sin\left(\frac{\alpha}{2}\right)\right]}{609638400\pi}
\left[3(585\pi-10304)\sin \alpha+3(58845\pi+142016)\sin(3\alpha)+
\right.
\nonumber~~~~~~~~~~~\\
\left.
8(4485\pi+12992)\cos(2\alpha)+97560\pi+211456\right],~~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{62}
\psi_{8,0}^{(8)}=\frac{1}{1843546521600\pi^2}
\left\{94502912+75\pi(1946944+626787\pi)+
\right.
\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left.
2\left[94502912+3\pi(43273408+12251925\pi)\right]\cos(2\alpha)+4096(46144+19275\pi)\cos(4\alpha)+
\right.
\nonumber~~~~~~~~\\
1008\pi\left[105(448+225\pi)\sin \alpha+(142016+58845\pi)\sin(3\alpha)\right]
\left.
\right\}.~~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
\section{Single-series representation for subcomponent $\psi_{3,0}^{(2e)}$}\label{S2}
It follows from Eqs.(79) and (85) of Ref.\cite{LEZ1} that the subcomponent $\psi_{3,0}^{(2e)}$ corresponding to the rhs
\begin{equation}\label{63}
h_{3,0}^{(2e)}=-\frac{\sin \alpha}{\xi},
\end{equation}
of the IFRR
\begin{equation}\label{64}
(\Lambda^2-21)\psi_{3,0}^{(2e)}=h_{3,0}^{(2e)}
\end{equation}
was missed in Ref.\cite{LEZ1}.
Using the technique described in Sec.V of Ref.\cite{LEZ1}, we have found
the mentioned subcomponent (details can be found in Appendix \ref{SA}) in the form of the single-series representation
\begin{equation}\label{65}
\psi_{3,0}^{(2e)}=\sum_{l=0}^\infty P_l(\cos \theta)(\sin \alpha)^l \lambda_l(\rho).
\end{equation}
For $0\leq\rho\leq1$ functions $\lambda_l(\rho)$ can be written in the form
\begin{equation}\label{66}
\lambda_l(\rho)=\frac{1}{2l+1}\left\{u_{3l}(\rho)\mathcal{V}_{3l}(\rho)-v_{3l}(\rho)
\left[\mathcal{U}_{3l}(\rho)-(2l+1)s_l\right]\right\},
\end{equation}
where
\begin{equation}\label{67}
u_{3l}(\rho)=\frac{\left(\rho^2+1\right)^{l-\frac{3}{2}}}{\rho^{2l+1}}
\left[\frac{(2l+3)(2l+5)}{(2l-3)(2l-1)}\rho^4+\frac{2(2l+5)}{2l-1}\rho^2+1\right],
\end{equation}
\begin{equation}\label{68}
v_{3l}(\rho)=\left(\rho^2+1\right)^{l-\frac{3}{2}}
\left[\frac{(2l-3)(2l-1)}{(2l+3)(2l+5)}\rho^4+\frac{2(2l-3)}{2l+3}\rho^2+1\right],
\end{equation}
\begin{equation}\label{69}
\mathcal{U}_{3l}(\rho)=
-\frac{l(l+1)\left[(\rho^2+1)^4\arctan (\rho)+\rho^7-\rho\right]+(l^2-7l-10)\rho^5-(l^2+9l-2)\rho^3}
{2^l(2l-3)(2l-1)(\rho^2+1)^4},
\end{equation}
\begin{eqnarray}\label{70}
\mathcal{V}_{3l}(\rho)=
~\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
-\left[(-2)^l(l-2)(l-1)(2l+3)(2l+5)\right]^{-1}
\left\{12\left[B_{-\rho^2}(l+1,-3)-B_{-\rho^2}(l+1,-4)\right]+
\right.
~~~\nonumber~\\
\left.
(2l-3)\rho^2\left[2l^2+l-7+(l-2)(2l-1)\rho^2\right]\left[(3-l)B_{-\rho^2}(l+1,-3)-4B_{-\rho^2}(l+1,-4)\right]
\right\}
.~~~~
\end{eqnarray}
Here $B_z(a,b)$ is the incomplete Euler beta function.
It is seen that expression (\ref{70}) cannot be applied directly for $l=1,2$. For these values of $l$, one obtains
\begin{equation}\label{71}
\mathcal{V}_{31}(\rho)=
\frac{1}{140}\left[\frac{3+10\rho^2+11\rho^4-20\rho^6}{(1+\rho^2)^4}+2\ln(1+\rho^2)\right],~~~~~~~~~~
\end{equation}
\begin{equation}\label{72}
\mathcal{V}_{32}(\rho)=
-\frac{1}{84}\left[\frac{5+14\rho^2+9\rho^4-6\rho^6+24\rho^8+6\rho^{10}}{6(1+\rho^2)^4}+\ln(1+\rho^2)\right].
\end{equation}
Using the \emph{Mathematica} operator \textbf{FindSequenceFunction} we have found a simple representation (details can be found in Appendix \ref{SA}) for the coefficient
\begin{equation}\label{73}
s_l=\frac{2^{-l-3}}{(2l-3)(2l-1)(2l+1)}\left[2l(l+1)\left(H_{\frac{l+1}{2}}-H_{\frac{l}{2}}-\pi\right)+2l+3\right],
\end{equation}
being a part of expression (\ref{66}) for $\lambda_l(\rho)$. The functions $H_z$ denote the harmonic numbers.
Recall that for $\rho>1$ one should replace $\rho$ by $1/\rho$ in Eqs.(\ref{66})-(\ref{72}).
\section{Elaboration of some results obtained previously}\label{S3}
In paper \cite{LEZ1} the various components of the AFC were derived in the form of the one-dimensional series with fast convergence.
In particular, the solution of the IFRR
\begin{equation}\label{90}
\left(\Lambda^2-32\right)\psi_{4,1}^{(2d)}=h_{4,1}^{(2d)},~~~~~~~~~~~~~~~~~
\end{equation}
with the rather complicated rhs
\begin{equation}\label{91}
h_{4,1}^{(2d)}=\frac{\pi-2}{3\pi}
\left[\sin\left(\frac{\alpha}{2}\right)+\cos\left(\frac{\alpha}{2}\right) \right]\left[\frac{5}{3\sin \alpha}\xi^3+\left(1-\frac{2}{\sin\alpha}\right)\xi-\frac{1}{\xi}\right],~~
\end{equation}
was represented by single series of the form
\begin{equation}\label{92}
\psi_{4,1}^{(2d)}=\sum_{l=0}^\infty P_l(\cos\theta)(\sin \alpha)^l \tau_l(\rho),
\end{equation}
where the variables $\xi$ and $\rho$ are defined by Eqs.(\ref{I3}) and (\ref{13}), respectively.
It was shown that for $l>2$ function $\tau_l(\rho)$ can be expressed by the formula
\begin{equation}\label{93}
\tau_l(\rho)=\tau_l^{(p)}(\rho)+A_2(l) v_{4l}(\rho),
\end{equation}
where
\begin{equation}\label{94}
\tau_l^{(p)}(\rho)=\frac{1}{2l+1}\left[u_{4l}(\rho)\mathcal{V}_{4l}(\rho)-v_{4l}(\rho)\mathcal{U}_{4l}(\rho)\right],
\end{equation}
\begin{subequations}\label{95}
\begin{align}
u_{4l}(\rho)=\rho^{-2l-1}(\rho^2+1)^{l+4}~_2F_1\left(\frac{7}{2},3-l;\frac{1}{2}-l;-\rho^2\right),~~~~~~~\label{95a}\\
v_{4l}(\rho)=(\rho^2+1)^{l+4}~_2F_1\left(\frac{7}{2},4+l;l+\frac{3}{2};-\rho^2\right).~~~~~~~~~~~~~~~~\label{95b}
\end{align}
\end{subequations}
However, the function $\tau_l^{(p)}(\rho)$ as well as the coefficient $A_2(l)$ were represented in closed form (see Eq.(C14) of Ref.\cite{LEZ1}) only for given values of $l\leq10$.
Here, we present the above-mentioned functions in a few general closed forms which are applicable for any $l\geq3$.
In particular, it is shown in Appendix \ref{SB} that the functions $\mathcal{U}_{4l}$ and $\mathcal{V}_{4l}$ included into the rhs of expression (\ref{94}) can be represented in the form:
\begin{eqnarray}\label{96}
\mathcal{U}_{4l}(\rho)=a_{0l}\frac{8(l-3)!}{15\sqrt{\pi}\Gamma(l+1/2)}
\sum_{m=0}^{l-3}\frac{\Gamma(m+7/2)\Gamma(l-m+1/2)(-1)^m}{m!(l-m-3)!}\times
~~\nonumber~~~~~~~~~~~~~~~~~~~~~~\\
\sum_{n=1}^5 a_{nl}\left(\frac{\rho^{2m+n}-1}{2m+n}+\frac{\rho^{2m+n+1}-1}{2m+n+1}\right).~~~~~~~~~~~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{97}
\mathcal{V}_{4l}(\rho)=\frac{4a_{0l}\Gamma(l+3/2)}{15\sqrt{\pi}}\rho^{2l+3}
\sum_{m=0}^2 \frac{m!}{\left(1+\rho^2\right)^{5-m}}\sum_{k=0}^m \frac{\Gamma(k+7/2)}{k!(m-k)!\Gamma(k+l+3/2)}
\left(-\frac{\rho^2}{1+\rho^2}\right)^k
\times
~~\nonumber~~~~~~\\
\left[\frac{(k+l+3)!\Gamma(l+m+3/2)}{(l+3)!\Gamma(k+l+m+5/2)}b_{2m+1,l}\rho^{2m}~
_2F_1\left(l+m-1,m-\frac{3}{2};k+l+m+\frac{5}{2};-\rho^2\right)
\right.
~~\nonumber~~~~~~\\
\left.
+\frac{b_{2(2-m),l}}{k+l-m+3}\rho^{3-2m}~_2F_1\left(l-2,m-\frac{3}{2};k+l+\frac{3}{2};-\rho^2\right)\right],~~~~~~~~~~~~~~~
\end{eqnarray}
where
\begin{eqnarray}\label{98}
a_{0l}=-\frac{(\pi-2)2^{-l-1}}{3\pi(2l-1)(2l+3)},~~
a_{1l}=\frac{15-4l(l+1)(4l+11)}{(2l-3)(2l+5)},~~
a_{2l}=4l(2l+3),~~
~~\nonumber~~~~~~\\
a_{3l}=2,~~
a_{4l}=4(l+1)(2l-1),~~
a_{5l}=\frac{(2l-1)(4l+5)}{2l+5},~~~~~~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
\begin{equation}\label{99}
b_{0,l}=a_{1l},~~b_{5,l}=a_{5l},~~b_{s,l}=a_{sl}+a_{s+1l}~~~(s=1,2,3,4).
\end{equation}
Moreover, it is shown in Appendix \ref{SB} that all of the Gauss hypergeometric functions contained in Eqs.(\ref{95}) and (\ref{97}) can be expressed through elementary functions. A simple representation of the function $\mathcal{V}_{4l}(\rho)$ through the generalized hypergeometric functions $~_3F_2$ is derived as well.
Making use of the \emph{Mathematica} operator \textbf{FindSequenceFunction}, the following representation was derived (see details in Appendix \ref{SC}) for the factor $A_2(l)$
being a part of the rhs of Eq.(\ref{93}):
\begin{eqnarray}\label{100}
A_2(l)=\frac{(2-\pi)\pi^{-3/2}}{360 l (l-2)\Gamma(l+\frac{1}{2})}\times
~\nonumber~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\left\{
\frac{\left[l(l+1)(688 l^4+1376 l^3-2480 l^2-3168 l+465)+450\right]\Gamma\left(\frac{l-1}{2}\right)\Gamma\left(\frac{l+1}{2}\right)}
{(2l-3)(2l-1)(2l+1)(2l+3)(2l+5)}-\frac{56}{l-1}\left(\frac{l}{2}!\right)^2
\right\}.~~~~~
\end{eqnarray}
Note that Eq.(\ref{100}) is correct only for even $l>2$, whereas $A_2(l)\equiv0$ for odd values of $l$.
The last subcomponent derived in \cite{LEZ1} in the form of a single-series representation was
\begin{equation}\label{101}
\psi_{3,0}^{(2c)}(\alpha,\theta)=\sum_{l=0}^\infty P_l(\cos\theta)(\sin \alpha)^l \phi_l(\rho).
\end{equation}
It is the physical solution of the IFRR
\begin{equation}\label{102}
\left(\Lambda^2-21\right)\psi_{3,0}^{(2c)}=h_{3,0}^{(2c)},
\end{equation}
with the rhs
\begin{equation}\label{103}
h_{3,0}^{(2c)}=-\frac{4\xi}{3\sin \alpha}.
\end{equation}
The function $\phi_l(\rho)$ was obtained in \cite{LEZ1} in the form
\begin{equation}\label{104}
\phi_l(\rho)=\phi_l^{(p)}(\rho)+c_l v_{3l}(\rho),
\end{equation}
where the closed expressions for the functions $\phi_l^{(p)}(\rho)$ were derived in \cite{LEZ1} (see also Appendix \ref{SD}), and the function $ v_{3l}(\rho)$ is defined by Eq.(\ref{68}).
The problem is that the coefficient $c_l$ was obtained in a very complicated integral form.
A simple form of this coefficient can be written as follows:
\begin{equation}\label{105}
c_l=\frac{2(2l+1)-\pi-H_{\frac{l}{2}}+H_{\frac{l-1}{2}}}{6(2l-3)(2l-1)(2l+1)2^l},
\end{equation}
where $H_z$ are the harmonic numbers. Details can be found in Appendix \ref{SD}.
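Numerically evaluating Eq. (105) requires harmonic numbers at half-integer argument. A small Python sketch (an illustration of mine, not from the paper) builds them from the standard identity $H_{n+1/2}=-2\ln 2+2\sum_{k=0}^{n}(2k+1)^{-1}$ and evaluates $c_l$ for sample $l$:

```python
import math
from fractions import Fraction

def H(two_z):
    """Harmonic number H_z for z = two_z/2 (two_z an integer, two_z >= -1)."""
    if two_z % 2 == 0:                         # integer argument
        return float(sum(Fraction(1, k) for k in range(1, two_z//2 + 1)))
    n = (two_z - 1)//2                         # half-integer: z = n + 1/2
    return -2*math.log(2) + 2*sum(1/(2*k + 1) for k in range(n + 1))

def c(l):
    """Coefficient c_l of Eq. (105); H(l) stands for H_{l/2}."""
    return (2*(2*l + 1) - math.pi - H(l) + H(l - 1)) / \
           (6*(2*l - 3)*(2*l - 1)*(2*l + 1)*2**l)

# sanity checks on the half-integer harmonic numbers
assert abs(H(1) - (2 - 2*math.log(2))) < 1e-12   # H_{1/2} = 2 - 2 ln 2
assert H(2) == 1.0                               # H_1 = 1
print(c(3), c(4))
```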
\section{Conclusions}\label{S4}
The individual Fock recurrence relations introduced in \cite{LEZ1} were used to derive explicit expressions for the components $\psi_{k,0}^{(0)}$ and $\psi_{k,0}^{(k)}$ of the AFC $\psi_{k,0}$. Using the methods described in \cite{LEZ1}, the above-mentioned edge components were calculated and presented for $4\leq k\leq8$. However, given that the IFRR for the edge components of order $k$ contain only the edge components of lower order, there is no difficulty in calculating the edge components for arbitrary $k$. Moreover, it was stated that the components $\psi_{k,0}^{(k)}$ are functions of the hyperspherical angle $\alpha$ only, whereas the components $\psi_{k,0}^{(0)}$ are functions of the single variable $\xi$ defined by Eq.(\ref{I3}).
The single-series representation was derived for the subcomponent $\psi_{3,0}^{(2e)}$ missed in \cite{LEZ1}. This subcomponent is the physical solution of the IFRR (\ref{64}) with the rhs of the form (\ref{63}).
The specific coefficient $s_l$, being a part of the mentioned representation, was found in a simple explicit form. This coefficient was derived by proper application of the \emph{Mathematica} operator \textbf{FindSequenceFunction}.
The same method was applied in order to find simple expressions for the coefficients $c_l$ and $A_2(l)$ included in the single-series representations of the subcomponents $\psi_{3,0}^{(2c)}$ and $\psi_{4,1}^{(2d)}$, respectively.
For the latter subcomponent we also derived closed explicit representations through hypergeometric and elementary functions.
\section{Acknowledgment}
The author acknowledges Prof. Nir Barnea for useful discussions.
This work was supported by the PAZY Foundation.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,970 |
"Ideation" is a marketing-industry buzzword that describes the creative process of finding a subject, a title and an angle to write about, and ideation begins with analytics.
Writing is a dream job, yet not for everybody. Some writers are hired to write product descriptions for catalogs, and some end up being J.K. Rowling. Unfortunately, most writers have a better chance of writing product descriptions than they do of ending up best-selling authors.
While successful content writers appear to have a charmed life (they work from home, set their own schedule and work as much or as little as they see fit), the vast majority have considerable difficulty making a living at it. They do not have the skills necessary to succeed, because no matter how talented they are, writing ability alone is simply not enough. So, if you want to become effective as a content writer, you need a full toolbox of marketable skills.
Effective content writers must master different writing styles.
The reason is that each type of writing has its own style. News is delivered AP style: short, informative paragraphs with the meat of the story at the top. Blogging is personable, friendly and frequently opinionated. Ad copy is short and persuasive. White papers are long; they describe a problem and provide the solution. In any case, every one of these categories is content, and each style a writer masters makes them more valuable and in demand.
Effective content writers don't pick random subjects.
Understand your audience. Marketers call it creating a "buyer persona." If you know who your readers are, you can write what they want to read. You write for your audience: not for yourself, not for your company, not for your brand.
Look at the competition. What successful content are others in your industry sharing? A competitive content audit gives you a huge amount of data, not just about what your rivals are sharing, but about who is linking to their content, blogging about it, tweeting it out and posting it elsewhere.
Craft a smart title. Once you have keyword, competitor and reader knowledge, take your time, pick your subject and craft a title that will intrigue readers. The title compels people to read, or not. The most important words on your post are the title and the meta description.
Effective content writers are original.
It's your reputation. Every post with your name on it ought to be original. That probably sounds crazy, with the huge number of people writing about similar subjects, but it's easier than it seems. Every skilled writer can bring a unique voice, an alternate perspective or new light to an exhausted subject.
Plagiarized content is bad for SEO, bad for your employers and much worse for you. Protect your reputation and your career by taking precautions. Before you submit your work, use an online program to check for plagiarism. With all the content out there, it's easy to accidentally duplicate writing.
Successful content writers know SEO, HTML, CSS, and WordPress.
Don't panic; you only need a few basics. WordPress themes have varying levels of automated capability, and once in a while the only way to make your content appear the way you want will be to dive into the text/HTML tab and adjust the code to tweak a title tag or fix a spacing issue. It's worth your time to learn the fundamentals.
Up-to-date SEO knowledge is also indispensable. Search engine algorithms change constantly, and writers need to keep up. One thing stays constant: high quality is always in demand. If you can write deep content from a remarkable point of view, you'll be popular.
Successful content writers are social media specialists.
Name recognition is essential, and social media puts everything you need within your grasp. Build your audience, meet publishers and talk to industry experts. When your writing is published, the fun has only just begun. The more active you are on social media, the more likely your followers will be to recommend your content. Successful content writers are active, approachable and friendly.
So, think again before declaring "success." It stops being about words on paper when "content" is added to "writer." Content writers are marketing specialists, SEO experts, on-page coders and social media butterflies. With the right skill set, you'll succeed and find that yours is the best job on the planet.
"redpajama_set_name": "RedPajamaC4"
} | 1,112 |
Q: CSS layout: block below previous block

How do I make the third block appear below the first? Right now the third block appears below the second.
Demo http://jsfiddle.net/SdR6e/1/
HTML
<!DOCTYPE html>
<html>
<body>
<div style="width: 400px;">
<div class="semiblock" style="height: 200px;">
First
</div>
<div class="semiblock" style="height: 100px;">
Second
</div>
<div class="semiblock" style="height: 200px;">
Third
</div>
<div class="semiblock" style="height: 200px;">
Fourth
</div>
</div>
</body>
</html>
CSS
.semiblock {
border: 1px solid black;
float: left;
margin: 0;
width: 198px;
}
I need this:
A: Try to insert clear:both like this:
DEMO
HTML
<!DOCTYPE html>
<html>
<body>
<div style="width: 400px;">
<div class="semiblock" style="height: 200px;">
First
</div>
<div class="semiblock" style="height: 100px;">
Second
</div>
<div class="clear">
</div>
<div class="semiblock" style="height: 200px;">
Third
</div>
<div class="semiblock" style="height: 200px;">
Fourth
</div>
</div>
</body>
</html>
CSS:
.semiblock {
border: 1px solid black;
float: left;
margin: 0;
width: 198px;
}
.clear {
clear:both;
}
A: Here is the solution: http://jsfiddle.net/SdR6e/2/
Use clear:both; when you want the next element to appear below a floated element.
If you use this with multiple blocks, then write it in a class and add that class to the target elements.
.clearall{
clear:both;
}
A: The method of Alessandro Minoccheri is good (+1), but you can also just add a "clear:left" value to your third block:
<!DOCTYPE html>
<html>
<body>
<div style="width: 400px;">
<div class="semiblock" style="height: 200px;">
First
</div>
<div class="semiblock" style="height: 100px;">
Second
</div>
<div class="semiblock" style="height:200px;clear:left;">
Third
</div>
<div class="semiblock" style="height: 200px;">
Fourth
</div>
</div>
</body>
</html>
A: Please use the HTML below with the same CSS. I have also updated your fiddle; please check it using the URL below.
http://jsfiddle.net/SdR6e/11/
<div style="width: 400px;">
<div class="semiblock" style="height: 200px;">
First
</div>
<div class="semiblock" style="height: 100px;">
Second
</div><div style='clear:both;'></div>
<div class="semiblock" style="height: 200px;">
Third
</div>
<div class="semiblock" style="height: 200px;">
Fourth
</div>
</div>
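A side note beyond the original answers: with this two-column float layout, the manual clearing element can be avoided entirely by clearing every odd `.semiblock` (the 1st, 3rd, 5th, ...), assuming the wrapper markup shown in the question:

```css
/* Each odd block starts a new row, so no extra clearing divs are needed */
.semiblock:nth-child(odd) {
    clear: left;
}
```

This keeps the HTML unchanged as more blocks are added.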
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 321 |
package org.lirazs.gbackbone.reflection.client;
public interface EnumType<T> extends ClassType<T> {
public EnumConstant[] getEnumConstants();
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,226 |
Vol 28 No 2 (2022): In Progress
Seismic vulnerability of reinforced concrete bridges in Pakistan
DOI 10.3846/jcem.2022.15854
Submitted: Oct 29, 2021
Muhammad Khalid Hafiz, Department of Civil Engineering, University of Engineering and Technology, Taxila, Pakistan; Qaiser-uz-Zaman Khan, Department of Civil Engineering, University of Engineering and Technology, Taxila, Pakistan; Sohaib Ahmad, Works and Services Organization, Islamabad, Pakistan
Different researchers have performed seismic hazard assessment studies for Pakistan using fault sources, which differ from the Building Code of Pakistan (BCP 2007) with diverse standard deviations. The results of these seismic hazard studies indicate that the BCP requires gross revision considering micro- and macro-level investigations. The recent earthquakes in Pakistan also damaged bridge structures, and some studies have been conducted by different researchers to investigate the capacity of existing bridges.
Most of the bridge stock in Pakistan has been designed assuming seismic loads of 2%, 4% and 6% of dead loads, following the West Pakistan Code of Practice for Highway Bridges. The capacity of eight selected real bridges, two from each of seismic zones 2A, 2B, 3 and 4, is checked against BCP demands. Static and dynamic analyses were performed and the piers were checked for elastic limits. It is established that the piers are deficient in capacity and that the bridges in zone 2A are generally less vulnerable, whereas the bridges in zones 2B, 3 and 4 are vulnerable at medium to very high levels. Hence, an in-depth analytical vulnerability study of the bridge stock, particularly in high-risk zones, needs to be conducted on priority, and appropriate seismic retrofitting schemes need to be proposed.
Keyword : hazard, vulnerability, seismic, bridges, piers, tectonics, Building Code of Pakistan
Hafiz, M. K., Khan, Q.- uz-Z., & Ahmad, S. (2022). Seismic vulnerability of reinforced concrete bridges in Pakistan. Journal of Civil Engineering and Management, 28(2), 93–105. https://doi.org/10.3846/jcem.2022.15854
Ali, S. M., Khan, A. N., Rehman, S., & Reinhorn, A. M. (2011). A survey of damages to bridges in Pakistan after the major earthquake of 8 October 2005. Earthquake Spectra, 27(4), 947–970. https://doi.org/10.1193/1.3650477
American Concrete Institute. (2004). Building code requirements for structural concrete and commentary (ACI 318-05). USA.
American Institute of Steel Construction. (2005). Seismic provisions for structural steel buildings (ANSI/AISC 341-05). USA.
American Society of Civil Engineers. (1994). Minimum design loads for buildings and other structures (ANSI/ASCE 7-93). USA.
American Society of Civil Engineers. (2006). Minimum design loads for buildings and other structures (ASCE/SEI 7-05). USA.
Earthquake Reconstruction and Rehabilitation Authority. (2006a). Annual review 2005–2006. Prime Minister Secretariat (Public), Islamabad, Pakistan.
Earthquake Reconstruction and Rehabilitation Authority. (2006b). "Build back better" reconstruction and rehabilitation strategy, transport (roads & bridges) sector. Prime Minister Secretariat (Public), Islamabad, Pakistan.
Economic Adviser's Wing, Finance Division. (2019). Pakistan economic survey 2018–2019. Government of Pakistan, Islamabad, Pakistan.
Global Seismic Hazard Assessment Program. (1999). http://www.seismo.ethz.ch/GSHAP/
Hashash, Y. M. A., Kim, B., Olson, S. M., & Ahmad, I. (2012). Seismic hazard analysis using discrete faults in Northwestern Pakistan: Part I – methodology and evaluation. Journal of Earthquake Engineering, 16, 963–994. https://doi.org/10.1080/13632469.2012.681423
Highway Department, Government of West Pakistan Lahore. (1967). Code of practice, highway bridges.
Information Management and Mine Action Programs. (2013). Earthquake 2013 atlas of Balochistan: Districts Awaran, Kech, Panjgur, Khuzdar, & Washuk. Islamabad, Pakistan.
International Conference of Building Officials. (1997). Uniform building code. California, USA.
Khan, R. A., Kumar, M., Ahmed, M., Rafi, M. M., & Lodi, S. H. (2015). Earthquake damage assessment of bridges in Karachi. NED University Journal of Research, 12(3), 45–61.
Khan, S. A., Pilakoutas, K., Hajirasouliha, I., Guadagnini, M., Mulyani, R., Ahmadi, R., & Elwaeli, W. (2017). Seismic risk assessment for developing countries: Pakistan as a case study. Earthquake Engineering and Engineering Vibration, 17(4), 787–804. https://doi.org/10.1007/s11803-018-0476-3
Ministry of Housing and Works. (2007). Building code of Pakistan (BCP). Government of Pakistan, Islamabad.
MonaLisa, Khwaja, A. A., & Jan, M. Q. (2007). Seismic hazard assessment of the NW Himalayan Fold-and-Thrust Belt, Pakistan, using probabilistic approach. Journal of Earthquake Engineering, 11, 257–301. https://doi.org/10.1080/13632460601031243
National Disaster Management Authority. (2019). Mirpur Earthquake 2019 (Situation report No. 08). Islamabad, Pakistan.
Pakistan Meteorological Department, & NORSAR (Norway). (2006). Seismic hazard analysis and zonation for the northern areas of Pakistan and Kashmir.
\section{INTRODUCTION}
Stars in orbit around the black hole in the center of the Milky Way,
hereafter \sgra, have been tracked for more than a decade, providing a
measure of the black hole mass (Genzel et al.\ 2010; Ghez et
al.\ 2012). The constraints have been steadily improving with the
first measurement of a fully closed orbit for the star S2 (see, e.g.,
Ghez et al.\ 2008; Gillessen et al.\ 2009) as well as with the
discovery of additional stars (S0-16, S0-102 and S0-104) in
orbits that probe the black-hole spacetime within a few thousand
gravitational radii (Meyer et al.\ 2012).
Precise astrometric observations of stars in close orbits around
\sgra\ may lead to the detection of orbital precession due to general
relativistic frame dragging, measuring the spin of the black hole,
and testing the no-hair theorem (Will 2008). Such measurements
will be complementary to those that will be achieved with the Event
Horizon Telescope (Fish \& Doeleman 2009; Johannsen \& Psaltis 2010)
as well as to timing observations of pulsars in orbit around the black
hole (Pfahl \& Loeb 2004; Liu et al.\ 2012).
Future instruments, such as GRAVITY, an adaptive-optics assisted
interferometer on the VLT (Eisenhauer et al.\ 2011), will track
stellar orbits with a single pointing astrometric accuracy of $\simeq
10-200~\mu$arcsec, for stars as faint as $m_{\rm K}=16.3-18.8$ in a
crowded field (Stone et al. 2012). At this resolution, the biggest
challenge in measuring the fundamental properties of \sgra\ with
stellar orbits will be ensuring that a particular measurement is not
affected adversely by astrophysical complications.
A number of studies have explored the effects of non-gravitational
forces exerted on the orbiting stars by other objects in the same
environment. Merritt et al.\ (2010) and Sadeghian \& Will (2011)
investigated the perturbative effects of the stellar cluster on the
orbits of individual stars and found that they are negligible compared
to the general relativistic effects inside $\sim$1~mpc$\simeq 5\times
10^3$ gravitational radii. Psaltis (2012) studied the interaction of
the orbiting stars with the ambient gas and showed that hydrodynamic
drag and star-wake interactions are negligible inside $\sim
10^5$~gravitational radii.
In this paper, we study the deviations of the stellar orbits from
test-particle trajectories that are introduced by the fact that stars
are not point particles but {\em (i)\/} may lose mass in strong winds
and {\em (ii)} may be tidally deformed. We calculate the range of
orbital parameters for which orbital perturbations due to the stellar
winds and tides do not preclude the measurement of the black-hole spin
and quadrupole moment and, therefore, testing of the no-hair theorem.
\section{Characteristic Timescales}
We start by comparing the characteristic timescales for orbital
precession due to general relativistic effects to those of orbital
perturbations due to stellar winds and to tidal forces. Hereafter, we
set the mass of the black hole to $4\times 10^6 M_\odot$ and its
distance to 8.4~kpc. We also denote by $M_{\rm BH}$ the mass of the
black hole, by $M_{\rm S}$ the mass of the star, and by $a$ and $e$
the semi-major axis and eccentricity of the stellar orbit. With these
definitions, the Newtonian period of a stellar orbit is
\begin{eqnarray}
P&=&2\pi\left(\frac{a^3}{GM_{\rm BH}}\right)^{1/2}\nonumber\\
&=&123.8\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{3/2}~\mbox{s}\;.
\end{eqnarray}
\subsection{Dynamical Timescales}
General relativistic corrections to Newtonian gravity affect the
orbits of stars around \sgra\ in, at least, three ways.
First, eccentric orbits precess on the orbital plane (periapsis
precession). The characteristic timescale for this precession is
(Merritt et al.\ 2010)
\begin{eqnarray}
t_{\rm S}&=&\frac{P}{6}\frac{c^2 a}{GM_{\rm BH}}\left(1-e^2\right)\nonumber\\
&=&20.63\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{5/2}\left(1-e^2\right)~\mbox{s}\;.
\end{eqnarray}
Second, orbits with angular momenta that are not parallel to the
spin angular momentum of the black hole precess because of frame
dragging. The characteristic timescale for this precession is
(Merritt et al.\ 2010)
\begin{eqnarray}
t_{\rm J}&=&\frac{P}{4\chi}
\left[\frac{c^2a\left(1-e^2\right)}{G M_{\rm BH}}\right]^{3/2}\nonumber\\
&=&
30.95\chi^{-1}\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{3}\left(1-e^2\right)^{3/2}~\mbox{s}
\;,\nonumber\\
\end{eqnarray}
where $\chi$ is the spin of the black hole.
Finally, tilted orbits also precess because of the quadrupole moment
of the spacetime. The characteristic timescale for this precession is
(Merritt et al.\ 2010)
\begin{eqnarray}
t_{\rm Q}&=&\frac{P}{3\vert q\vert}
\left[\frac{c^2a\left(1-e^2\right)}{G M_{\rm BH}}\right]^2\nonumber\\
&=&
41.26\vert q\vert^{-1}\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{7/2}\left(1-e^2\right)^{2}~\mbox{s}
\;,\nonumber\\
\end{eqnarray}
where $q$ is the quadrupole moment of the black-hole spacetime. If the
spacetime of the black hole satisfies the no-hair theorem, then
$q=-\chi^2$.
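As a quick numerical cross-check (our illustration, not part of the original derivation), the prefactors quoted in the four expressions above can be reproduced with a few lines of Python; the values of $G$, $c$, and $M_\odot$ used below are standard but are assumptions of this sketch:

```python
import math

# Assumed physical constants (SI units).
G = 6.674e-11          # gravitational constant
c = 2.998e8            # speed of light
M_sun = 1.989e30       # solar mass in kg

M_bh = 4e6 * M_sun     # fiducial mass of Sgr A*

# Orbital period for a semi-major axis of one gravitational radius,
# a = GM/c^2; this is the prefactor of Eq. (1).
P0 = 2 * math.pi * G * M_bh / c**3
print(P0)       # ~123.8 s

# The precession prefactors of Eqs. (2)-(4) are P0/6, P0/4, and P0/3.
print(P0 / 6)   # ~20.6 s  (periapsis precession)
print(P0 / 4)   # ~31.0 s  (frame dragging)
print(P0 / 3)   # ~41.3 s  (quadrupole)
```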
The three timescales for a spinning Kerr black hole ($\chi=0.3$,
$q=-\chi^2$) and for orbits with two different eccentricities are
shown in Figure~1 as a function of the orbital semi-major axis.
\subsection{Wind mass loss}
The angular momentum of a star in orbit around the black hole is
\begin{equation}
J=M_{\rm S}\left(GM_{\rm BH}a\right)^{1/2}
\end{equation}
(assuming here for simplicity a circular orbit). If the star is
losing mass in a wind at a rate $\dot{M}_{\rm w}$, then its orbit
will evolve according to
\begin{equation}
\frac{\dot{a}}{a}=
2\frac{\dot{J}}{J}-2\frac{\dot{M}_{\rm w}}{M_{\rm S}}\;.
\end{equation}
Assuming that the wind is carrying a fraction $\eta$ of the orbital
angular momentum, i.e.,
\begin{equation}
\dot{J}_{\rm w}=\eta \dot{M}_{\rm w}\left(GM_{\rm BH}a\right)^{1/2}
\end{equation}
then the rate of change of the orbital separation becomes
\begin{equation}
\frac{\dot{a}}{a}=2(1-\eta)\frac{\dot{M}_{\rm w}}{M_{\rm S}}\;.
\end{equation}
In other words, the timescale for orbital evolution due to the presence
of the wind is
\begin{equation}
\tau_{\rm w}\equiv\left\vert\frac{a}{\dot{a}}\right\vert=
\left[\frac{1}{2(1-\eta)}\right]
\frac{M_{\rm S}}{\dot{M}_{\rm w}}\;,
\end{equation}
or
\begin{equation}
\tau_{\rm w,-7}=1.6\times 10^{15}\left(\frac{1}{1-\eta}\right)
\left(\frac{M_{\rm S}}{10 M_\odot}\right)
\left(\frac{\dot{M}_{\rm w}}{10^{-7} M_\odot~\mbox{yr}^{-1}}\right)^{-1}~\mbox{s}\;,
\nonumber\\
\end{equation}
where we have used the subscript ``-7'' to denote the exponent in the
wind mass loss rate.
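The quoted value is easy to verify; the following sketch (an illustration of ours, with the year-to-second conversion as the only assumed input) evaluates $\tau_{\rm w}$ for the fiducial parameters $M_{\rm S}=10\,M_\odot$, $\dot{M}_{\rm w}=10^{-7}\,M_\odot$~yr$^{-1}$, and $\eta=0$:

```python
# Evaluate the wind-driven orbital-evolution timescale tau_w for the
# fiducial parameters used in the text (eta = 0).
sec_per_yr = 3.156e7   # assumed seconds-per-year conversion

M_S = 10.0             # stellar mass, in solar masses
Mdot_w = 1e-7          # wind mass-loss rate, in solar masses per year
eta = 0.0

tau_w_yr = M_S / (2.0 * (1.0 - eta) * Mdot_w)  # in years
tau_w_s = tau_w_yr * sec_per_yr                # in seconds
print(tau_w_s)   # ~1.6e15 s, matching the quoted value
```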
This characteristic timescale is compared to the dynamical timescales
in Figure~1, for a $10~M_\odot$ star and for a wind mass-loss rate of
$10^{-7} M_\odot$~yr$^{-1}$, which is consistent with current
observations of the star S2 in orbit around \sgra\ (Martins et
al.\ 2008). The effect of wind mass loss becomes negligible with
respect to the frame-dragging-induced precession of the orbital planes
for orbits within $\sim 30,000$ gravitational radii, and with respect
to the quadrupole-induced precession of the orbital planes for orbits
within $\sim 4,000$ gravitational radii.
\begin{figure}[t]
\psfig{file=f1.eps,width=3.5in}
\caption{Different timescales that are relevant to the evolution of
orbits of stars in the vicinity of \sgra, as a function of their
semi-major axes. The red line shows the periods of the orbits. The
blue lines show the timescales for the precession of the periapsis
($t_{\rm S}$), for the precession of the orbital plane due to frame
dragging ($t_{\rm J}$), and for the precession of the orbital frame
due to the quadrupole moment of the spacetime ($t_{\rm Q}$); the
black-hole spin is taken to be $\chi=0.3$ and solid and dashed lines
correspond to eccentricities of 0.5 and 0.8, respectively. The green
line ($t_{\rm w,-7}$) shows the characteristic timescale for orbital
evolution of a $10 M_\odot$ star due to the presence of a stellar
wind at a mass loss rate of $10^{-7} M_\odot$~yr$^{-1}$, for
$\eta$ set equal to zero.}
\label{fig:winds}
\end{figure}
\subsection{Tidal Dissipation of Orbital Energy}
\begin{figure}[t]
\psfig{file=f2a.eps,width=3.5in}
\psfig{file=f2b.eps,width=3.5in}
\caption{The blue lines show the dynamical timescales, as in
Figure~1. The red lines show the characteristic timescale for
orbital evolution due to the tidal dissipation of the orbital
energy, for two different values of the eccentricity. The vertical
segments of the red lines indicate the semi-major axes at which the
stars are tidally disrupted at periastron. The two panels correspond
to a 20~$M_\odot$ and a 10~$M_\odot$ star. In both cases, the
black-hole spin is taken to be equal to $\chi=0.3$.}
\label{fig:tides}
\end{figure}
The tidal deformations excited at each periastron passage transfer
some of the orbital energy into modes within the volume of the star
(see Alexander 2006 for a review of stellar processes around
\sgra). Since the orbital energy loss is proportional to the number
of passages (Li \& Loeb 2012), we can use the approach of Press \&
Teukolsky (1977) to estimate the rate of dissipation of orbital energy
as
\begin{equation}
\frac{\Delta E}{\Delta t}\simeq
\left(\frac{GM_{\rm S}^2}{PR_{\rm S}}\right)
\left(\frac{M_{\rm BH}}{M_{\rm S}}\right)^2
\sum_{l=2,3,...} \left(\frac{R_{\rm S}}{R_{\rm p}}\right)^{2l+2}T_l(\eta)\;.
\end{equation}
Here $R_{\rm p}=a(1-e)$ is the periastron distance, $R_{\rm S}$ is the
radius of the star, and $T_l(\eta)$ are appropriate dimensionless
functions of the quantity
\begin{equation}
\eta\equiv \left(\frac{M_{\rm S}}{M_{\rm S}+M_{\rm BH}}\right)^{1/2}
\left(\frac{R_{\rm p}}{R_{\rm S}}\right)^{3/2}
\end{equation}
that describe the excitation of modes with different spherical
harmonic index $l$.
In detail,
\begin{equation}
T_l(\eta) = 2 \pi^2 \sum_{n, m} |Q_{nl}|^2 |K_{nlm}|^2\;,
\label{e:E0n}
\end{equation}
where $n$ is the mode order and $m$ is the other spherical harmonic
index. The excited modes have $l>1$ and $-l<m<l$. The
coefficient $K_{nlm}$ represents the coupling to the orbit,
\begin{equation}
K_{nlm} = \frac{W_{lm}}{2\pi} \int_{-\infty}^{\infty} dt \left[
\frac{R_p}{r(t)}\right]^{l+1} \exp \left\{ i \left[ \omega_n t
+ m \Phi(t) \right] \right\} ,
\end{equation}
where $r(t)$ is the instantaneous distance between the star and \sgra,
$\omega_n$ is the mode frequency, $\Phi (t)$ is the true anomaly, and
\begin{eqnarray}
W_{lm} &=& (-1)^{(l+m)/2}\left[\frac{4 \pi}{(2 l+1)}(l-m)!(l+m)!\right] ^{1/2}
\nonumber\\
&&\qquad
\left[2^l\left(\frac{l-m}{2}\right)!\left(\frac{l+m}{2}\right)!\right]^{-1}\;.
\end{eqnarray}
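As an illustrative check (ours, not from the original text), the coefficients $W_{lm}$ for the dominant $l=2$ modes can be evaluated directly; the convention that $W_{lm}=0$ for odd $l+m$, standard in the Press \& Teukolsky formalism, is assumed here:

```python
import math

def W(l, m):
    """Tidal coupling coefficient W_lm; taken to be zero when l + m is
    odd (assumed convention, since the sign factor is then undefined)."""
    if (l + m) % 2 != 0:
        return 0.0
    sign = (-1) ** ((l + m) // 2)
    num = math.sqrt(4.0 * math.pi / (2 * l + 1)
                    * math.factorial(l - m) * math.factorial(l + m))
    den = 2**l * math.factorial((l - m) // 2) * math.factorial((l + m) // 2)
    return sign * num / den

print(W(2, 0))    # -sqrt(pi/5)    ~ -0.7927
print(W(2, 2))    #  sqrt(3*pi/10) ~  0.9708
print(W(2, 1))    #  0.0 (odd l + m)
```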
The tidal overlap integral $Q_{nl}$ represents the coupling of the
tidal potential to a given mode, i.e.,
\begin{equation}
Q_{nl} = \int_0^1 R^2 dR \rho(R) l R^{l-1} [\xi_{nl}^{\cal R} +
(l+1)\xi_{nl}^{\cal S}]\;,
\end{equation}
where $\rho(R)$ is the stellar density profile as a function of radius
$R$ and $\xi (R) = [\xi_{nl}^{\cal R} (R) \hat{e}_R + \xi_{nl}^{\cal
S} (R) R \nabla] Y_{lm} (\theta, \phi) $ is the mode eigenfunction,
with $\xi_{nl}^{\cal R}$ and $\xi_{nl}^{\cal S}$ being its radial and
poloidal components, respectively. We obtain the appropriate stellar
density profile from the MESA code (Paxton et al.\ 2011) and compute
the mode eigenfunctions with the ADIPLS code (Christensen-Dalsgaard
2008).
Because the energy gain in each passage depends on
$({R_S}/{R_p})^{2l+2}$ and the values of $Q_{nl}$ and
$K_{nlm}$ are similar for modes with different values of $l$, the
quadrupole ($l=2$) modes gain the most energy during the tidal
excitation (the $l=0$ and $l=1$ modes are not excited). For this reason,
we focus, hereafter, on the $l=2$ modes.
The characteristic timescale for orbital evolution due to tidal dissipation
is
\begin{eqnarray}
t_{\rm d}&\equiv& \frac{E}{\Delta E/\Delta t}\nonumber\\
&=&\frac{\pi R_{\rm S}}{c}
\left(\frac{G M_{\rm BH}}{c^2 R_{\rm S}}\right)^6
\left(\frac{M_{\rm S}}{M_{\rm BH}}\right)
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{13/2}(1-e)^6T_2^{-1}\nonumber\\
&=&1.37\times 10^{-4}\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)^{5}
\left(\frac{R_{\rm S}}{10~R_\odot}\right)^{-5}
\left(\frac{M_{\rm S}}{20~M_\odot}\right)\nonumber\\
&&\qquad
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{13/2}\left(1-e\right)^{6}
T_2^{-1}(\eta)~\mbox{s}\;,
\end{eqnarray}
and is shown in Figure~\ref{fig:tides} for two main-sequence stars
with masses $10 M_\odot$ and $20 M_\odot$.
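The numerical prefactor in the last line of the expression for $t_{\rm d}$ can be cross-checked as follows (our illustration, with standard assumed constants):

```python
import math

# Assumed constants (SI units).
G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8

M_bh = 4e6 * M_sun
M_S = 20 * M_sun
R_S = 10 * R_sun

r_g = G * M_bh / c**2   # gravitational radius of the black hole

# Prefactor of t_d: pi R_S / c * (G M_BH / c^2 R_S)^6 * (M_S / M_BH).
prefactor = (math.pi * R_S / c) * (r_g / R_S)**6 * (M_S / M_bh)
print(prefactor)   # ~1.37e-4 s
```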
If the star at periastron reaches inside the tidal radius
\begin{equation}
R_{\rm t}=R_{\rm S}\left(\frac{M_{\rm BH}}{M_{\rm S}}\right)^{1/3}\;,
\end{equation}
it gets disrupted. For simplicity, we ignore here the fact that, if
the periastron distance is smaller than $4-5$ times the tidal radius,
the repeated heating of the star at each passage will make it
vulnerable to tidal disruption (Li \& Loeb 2012). Requiring $R_{\rm
p}\ge R_{\rm t}$ sets a lower limit on the semi-major axis of the
stellar orbit, i.e.,
\begin{eqnarray}
\left(\frac{ac^2}{GM_{\rm BH}}\right)&\ge& \frac{68.9}{1-e}
\left(\frac{R_{\rm S}}{10~R_\odot}\right)\nonumber\\
&&\qquad
\left(\frac{M_{\rm BH}}{4\times 10^6~M_\odot}\right)^{-2/3}
\left(\frac{M_{\rm S}}{20~M_\odot}\right)^{-1/3}\;.
\end{eqnarray}
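The numerical coefficient in this inequality can likewise be reproduced (illustrative sketch of ours; constants assumed):

```python
# Tidal-disruption limit on the semi-major axis, in gravitational radii.
G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8

M_bh = 4e6 * M_sun
M_S = 20 * M_sun
R_S = 10 * R_sun

r_g = G * M_bh / c**2

# Tidal radius R_t = R_S (M_BH / M_S)^(1/3), expressed in units of r_g;
# requiring the periastron to exceed R_t gives the e = 0 limit below.
a_min = (R_S / r_g) * (M_bh / M_S) ** (1.0 / 3.0)
print(a_min)   # ~68.9 gravitational radii
```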
The tidal limit is shown as the vertical portion of the red lines in
Figure~\ref{fig:tides}. At orbital separations larger than this limit,
the tidal evolution of the stellar orbits is never fast enough to
compete with the precession of the orbital planes due to frame
dragging. On the other hand, for stars with semi-major axes only a
few times larger than the tidal limit, the orbital-plane precession
due to the quadrupole moment of the black hole will be masked by the
orbital evolution due to tidal effects.
\begin{figure}[t]
\psfig{file=f3.eps,width=3.5in}
\caption{The two blue curves show the loci of orbital parameters for
stars around \sgra\ at which the timescale of orbital-plane
precession due to the quadrupole moment of the black hole ($t_{\rm
Q}$) is equal to the orbital evolution timescale due to stellar
winds ($t_{\rm w,-7}$) or due to tides ($t_{\rm d}$). In order
for stars to follow nearly test-particle trajectories, their
orbital parameters have to lie between the two curves. The blue
shaded area shows the range of orbital parameters for which frame
dragging will be detectable with GRAVITY at a signal-to-noise ratio
of 5, assuming a range of astrometric accuracies between
  $10-200~\mu$arcsec. The red shaded area shows the range of orbital
parameters that lead to the tidal disruption of the star at
periapsis. All curves are for a black-hole spin of $\chi=0.3$ and a
10~$M_\odot$ star. The three filled circles show the orbital
parameters of the three stars nearest to \sgra\ that are presently
known.}
\label{fig:orbits}
\end{figure}
\section{DISCUSSION}
We explored whether deviations of the orbits of stars around \sgra\
from test particle trajectories due to stellar winds and tides may
compromise the measurements of relativistic
effects. Figure~\ref{fig:orbits} summarizes our results for an
illustrative case of a $10~M_\odot$ star and a black-hole spin of
$\chi=0.3$. The two blue curves in this figure show the combinations
of semi-major axes and orbital eccentricities for which the timescale
of orbital plane precession due to the quadrupole moment of the black
hole is equal to the orbital evolution timescale due to the wind-mass
loss ($t_{\rm w}=t_{\rm Q}$) and due to tides ($t_{\rm d}=t_{\rm
Q}$). In order for a stellar orbit not to be affected significantly by
either of the two effects, its parameters need to be in between the
two curves.
For comparison, we calculate the signal-to-noise ratio at which the
precession of the orbital plane of a star due to frame dragging will
be detected, in the near future, using the adaptive-optics assisted
interferometer GRAVITY. Following Weinberg et al.\ (2005), we write
the signal-to-noise ratio as
\begin{equation}
S=\frac{8\pi\chi}{a^{1/2}(1+e)^{1/2}(1-e)^{3/2}}
\left(\frac{G M_{\rm BH}}{Dc^2}\right)^{3/2}
\frac{N_{\rm orb} {\cos\psi}}{\delta \theta}\;,
\end{equation}
where $D$ is the distance to the black hole, $N_{\rm orb}$ is the
number of orbits monitored, $\cos\psi$ is the inclination of the
orbit, and $\delta\theta$ is the astrometric accuracy of each
measurement. Assuming that we monitor a particular orbit for a time
$\Delta T$, we can rewrite this expression as
\begin{eqnarray}
S&=&\frac{9\times 10^6\cos\psi}{(1+e)^{1/2}(1-e)^{3/2}}
\left(\frac{\chi}{0.3}\right)
\left(\frac{\Delta T}{10~\mbox{yr}}\right)
\left(\frac{D}{8.4~\mbox{kpc}}\right)^{-1}\nonumber\\
&&\qquad\qquad\left(\frac{\delta\theta}{10~\mu\mbox{arcsec}}\right)^{-1}
\left(\frac{ac^2}{GM_{\rm BH}}\right)^{-2}\;.
\end{eqnarray}
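To illustrate how this expression is used (an example of ours, not from the original text), the snippet below evaluates $S$ for a hypothetical orbit with $a=3000\,GM_{\rm BH}/c^2$ and $e=0.8$, keeping the fiducial values of the remaining parameters:

```python
def snr(a_rg, e, cospsi=1.0, chi=0.3, dT_yr=10.0, D_kpc=8.4, dtheta_uas=10.0):
    """Signal-to-noise ratio of frame dragging, from the numerical form
    above; a_rg is the semi-major axis in units of GM_BH / c^2."""
    return (9e6 * cospsi / ((1.0 + e)**0.5 * (1.0 - e)**1.5)
            * (chi / 0.3) * (dT_yr / 10.0)
            / (D_kpc / 8.4) / (dtheta_uas / 10.0) / a_rg**2)

print(snr(3000.0, 0.8))                    # ~8.3: above the S = 5 threshold
print(snr(3000.0, 0.8, dtheta_uas=200.0))  # ~0.4: too faint a star
```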
The astrometric accuracy of GRAVITY is expected to be $\sim
200~\mu$arcsec for a faint star of $m_{\rm K}\simeq 18.8$ and $\sim
10~\mu$arcsec for a brighter star of $m_{\rm K}=16.3$. Requiring a
signal-to-noise ratio of 5 for this range of astrometric accuracies
and for the typical parameters used in the above equation places an
upper limit on the semi-major axes of orbits as a function of their
eccentricity. This range of upper limits is shown as the blue-shaded
region in Figure~\ref{fig:orbits}.
For all but the most eccentric orbits for which GRAVITY will be able
to detect orbital-plane precession due to frame dragging, neither
stellar winds nor tides by themselves preclude the measurement of the
quadrupole moment of the black hole. On the other
hand, for highly eccentric orbits ($e>0.96$), the tidal dissipation of
orbital energy for massive stars occurs at similar timescales as the
orbital-plane precession due to the quadrupole moment of the black
hole. As a result, it needs to be taken into account as a possible
source of systematic uncertainties in measuring the quadrupole
moment of the black hole and in testing the no-hair theorem.
\acknowledgements
We thank S.\ Gillessen for his comments on the manuscript and F.\ \"Ozel
for many constructive discussions and comments. DP acknowledges the support
of the NSF CAREER award AST-0746549. This work also was supported in
part by NSF grant AST-0907890 and NASA grants NNX08AL43G and
NNA09DB30A (for A.L.).
Clean beauty products are increasingly sought by the discerning consumer. Indeed, last year the most Googled beauty company was a clean beauty one. But what exactly is clean beauty? The generally accepted definition is that clean beauty products are those created without any suspected or proven toxic ingredients. These types of products are hopefully sourced through ethical means, and are designed for the health of both the environment and the body. Clean beauty products should be made with clear principles, including using the best nature has to offer and the elimination of any ingredients with the potential for harm or questionable data regarding safety. For some clean beauty formulators, that means using only organic ingredients or only all-natural (non-laboratory derived) ingredients. However, at Synergie Skin, the emphasis is on creating clean beauty products by combining the best of nature with the latest cosmeceutical science – because that way we can incorporate natural ingredients with the latest scientific advancements in skincare. Combining the best of both worlds makes our products stand out for what they are: clean beauty that is easy on your skin but tough on your particular skincare concerns. We call that principle 'Clean Science'®, and we believe it's even better than clean beauty.
Synergie's Clean Science products provide premier rejuvenation through scientific breakthroughs and cutting-edge technology. The ingredients are combined to change, protect, and nurture your skin. There are no harmful or questionable ingredients and the products are not tested on animals. Using clean beauty products helps you feel incredible about yourself while protecting the environment. This is much more complicated than mixing a lot of ingredients in a bottle. The synergy and chemistry of the ingredients must be understood, extensive trials and research must be conducted prior to the products being released on the open market, and customers and practitioners must be educated regarding the proper way to use the products to ensure the best possible skin care results.
Clean beauty products do not use artificial colors and fragrances, questionable ingredients, or cheap fillers. A high-quality product protects your skin from the accumulation of toxins. This means what is left out of the product is just about as important as what is included. The manufacturing of the products is not saturated with chemicals because they are created to be clean. The ingredients are free of chemicals and the method used for processing is much cleaner to promote healthier skin and a healthier planet.
One of the biggest issues in the beauty industry is the lack of government regulations. Beauty products are not regulated by the FDA (with the exception of color additives). This enables companies to use any products they choose even if they are not beneficial for your skin. This includes unnecessary fillers and dangerous chemicals or irritants. Even products containing only natural ingredients can be harmful if used incorrectly. This can cause allergies, irritations, breakouts, and redness. For example, talcum is a natural ingredient, but it has recently been associated with contamination by asbestos (another natural, but very toxic, ingredient) and has also been linked to ovarian cancer. These concerns are not an issue with clean beauty products because they do not contain any harmful ingredients; and if your clean beauty products are also Clean Science ones, they have proven capability to bring about change and so are excellent for repairing, rejuvenating, and refining your skin.
Clean products acknowledge the fact that your skin is your largest organ. This is incredibly important when you consider how much your skin goes through every day and the way your skin functions. Your skin absorbs much of the products you use on it and a lot of what your skin comes in contact with. (Note that not all the products you use on your skin actually get absorbed. For example, certain molecules that are promoted by some skin care companies as being beneficial to your skin are actually too big to be absorbed. So, they look good on the label but don't actually have much benefit.) This can be either good or bad. When you are using beauty products on your skin that contain harmful chemicals, you are potentially damaging it – and other parts of your body. This is scary because many of the chemicals so commonly used for beauty products have never even been tested. This means you have no way of knowing what kind of health issue they may be responsible for causing. Clean products should at the very least not cause your skin or body harm, and if they're formulated according to Clean Science, they will also rejuvenate your skin instead of causing damage.
The majority of common beauty products often contain hundreds of different chemicals. Many of these chemicals absorb right into your skin every time you use the products. They travel directly to your bloodstream and can have a negative impact on other organs. Using clean products means you know exactly what you are putting on your skin. Any concerns regarding the impact of unknown chemicals and what they may do to your skin are eliminated because research has been conducted to ensure the products are compatible and safe for your skin. It is also important to know clean makeup products do not use synthetic fragrances, which are filled with a lot of different and often unknown chemicals.
Synthetic fragrances as found in so many skincare and even makeup products, are especially problematic. One of the many reasons clean products are so important is because according to the FDA, the actual ingredients used for fragrances do not have to be revealed by the company as they are considered to be trade secrets. The idea is to prevent a company from stealing the exact fragrance of another company to help increase their sales. Although this is understandable, it does raise several different issues. The trade secret clause enables companies to use any chemicals they choose in their products just by stating the ingredients are a part of the fragrance used for the product. The company is never obligated to tell the consumers what the ingredients in the fragrance are. You will not see the ingredients listed on the packaging and you will never know what ingredients make up the blanket term "artificial fragrance".
Unless you are using clean beauty products, you will never be certain how many chemicals were used to make the fragrance or know the identity of these chemicals. These chemicals may be something you are allergic to or they may be extremely harmful to both your body and skin. (It's no surprise that artificial fragrances are one of the most common causes of skin sensitivities.) These concerns are completely eliminated by using clean and natural products. They will not use any chemicals for the fragrance that are capable of causing you any harm. You will also have the peace of mind in knowing exactly what you are using and placing on your skin.
As you would expect from a company with almost 15 years' history, Synergie's Clean Science skincare produces amazing results. But in addition to skincare, Synergie also manufactures a full line of clean beauty makeup products. What's more, every single Synergie makeup product uses ingredients designed to benefit your skin, such as broad-spectrum, physical sunblocks like non-nano titanium dioxide and zinc oxide contained in the foundations, or vitamin C in the lipsticks. This is why Synergie labels its clean makeup as "intelligent makeup". It does so much more than simply providing color, contouring and cover: it provides essential benefits for your skin too. Both Synergie Skin (skincare) and Synergie Minerals (makeup) are available from Skin Elegance in the US, and originate from Melbourne, Australia, where they are formulated, manufactured and packaged.
Synergie's Clean Science® intelligent makeup line encompasses mineral foundations in three different formulations (Mineral Whip compact cream-to-powder, BB Flawless liquid mineral, and Second Skin Crush loose powder). Zinc oxide in levels as high as 50 percent is used to formulate these foundations for a broad-spectrum sunblock. The clean cosmetics include products for the eyes, lips, and cheeks. All of these products employ natural ingredients, contain active ingredients such as vitamins or antioxidants for the proper treatment of your skin, and offer a wide selection of different benefits. The products have been formulated to provide immediate cover after you have completed a clinical treatment such as microneedling, peels, IPL/laser, or microdermabrasion. There are no harsh chemicals or questionable ingredients such as synthetic fragrances or colors, polyethylene glycol, or parabens that have the potential to harm your health.
Along with Synergie Minerals makeup, Skin Elegance offers excellent clean beauty products from Synergie Skin that combine the benefits of natural ingredients with the latest cosmeceutical science. For example, SuperSerum+ is an anti-aging serum that contains a combination of hydrolyzed tomato skin, marjoram, aloe vera and neroli along with saccharides and peptides to trigger the production of hyaluronic acid, collagen, and epidermal skin cells. One of its peptides mimics a human protein that has the capability of repairing damage to the DNA and reversing cell aging. The product additionally helps decrease wrinkles by mimicking muscle relaxers. SuperSerum+ is exceptionally effective for improving the appearance of aging skin and decreasing the look of fine lines when used once or twice per day – and it's a perfect example of the advantages of Clean Science over plain old clean beauty (when the latter doesn't include proven, active cosmeceuticals).
SuperSerum+ is just one example of Synergie's highly active Clean Science skincare products. The company also produces essential vitamin serums for everyday use, including Ultimate A serum (retinol), Vitamin B serum (containing the amazing multi-tasker niacinamide), and SupremaC+, a brand new vitamin C serum formulated with Synergie's proprietary CMF Triacid Complex™.
In addition to these serums, Synergie makes many other serums targeted toward particular skincare concerns, such as acne, dry skin, and pigmentation. It also has a range of moisturizers that cater to different skin needs. Überzinc, Synergie's essential daily moisturizer, is both a moisturizer and a broad-spectrum sunblock to protect against both UVB and UVA rays. It contains 21% zinc oxide, so is a physical sunscreen that will reflect all UV light and remain on top of your skin; furthermore, it does not contain absorbable and potentially harmful 'chemical sunscreens', which have been banned in some places because of their damaging effects on marine life. Überzinc also includes pure green tea extract to neutralize harmful free radicals. What's more, besides its sun blocking effects, zinc oxide is anti-inflammatory and non-irritant. This makes it the perfect solution for sensitive, irritated, or acneic skin, or for use after a clinical treatment.
The Skin Elegance website has information all about these products. If you are interested in learning more about what the clean beauty products from Synergie that are available at Skin Elegance can do for you, feel free to visit our website today!
Dasycnemia naparimalis is a species of snout moth in the genus Dasycnemia. It was described by William James Kaye in 1925 and is known from Trinidad.
References
Moths described in 1925
Chrysauginae | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,698 |
Who was Pierre La Mure?
Pierre Lamure was born in Nice, France, on June 15, 1909. He gained fame during the 1930s as a correspondent for a French newspaper, a position which brought him to the United States in 1937.
In addition to his newspaper writing, he also wrote books, receiving the Strassburger Prize in 1939 for his biographies of Thomas Edison and John D. Rockefeller, both of which were written in French.
After coming to the U.S., Lamure, his wife Dorothy, and their daughter, Lynn, settled in Palos Verdes. Once there, the civic-minded naturalized citizen worked for the founding of the city of Palos Verdes Estates, which incorporated in December 1939.
The Lamures lived at 1501 Chelsea Road in a house that became known by the name "Lost Horizon." Dorothy Lamure had a knack for decorating and furnishing houses, and the Lamures owned and re-sold several PVE properties. After they sold the "Lost Horizon" house and moved to a home on Via Elevada, the new owners later leased it to Ethel Barrymore.
Pierre Lamure works at his desk in his Palos Verdes Estates home in this undated photo.
On January 31, 1941, Lamure and his wife revived the Palos Verdes Bulletin, a twice-monthly publication that had been founded originally in 1924, died out during the Depression and then was revived for a year in 1932 before failing again.
Its 15-month third incarnation from 1941 to 1942 reflected the Lamures' boundless energy and gave Pierre an outlet for his sometimes volatile views. He was often critical of local governing authorities such as the PVE City Council, the Homes Association and the Art Jury in its pages, frequently using his "Among Ourselves" column to tweak them.
While the Lamures were living at the Via Elevada house in the 1940s, Pierre began working on his most famous work, a biographical novel about French artist Henri de Toulouse-Latrec titled Moulin Rouge. Mildred Beckstrand, a friend of the Lamures at the time, recalled in a 1964 interview that she spent many evenings at the Lamure house listening to Pierre read chapters of his work-in-progress aloud to groups of friends.
By the time Random House published Moulin Rouge in 1950, the Lamures had left Palos Verdes, and Pierre Lamure's name had become Pierre la Mure on the book jacket. The book was a huge success, and was translated into at least 18 languages.
In 1952, the first film version, left, starring Jose Ferrer and Zsa Zsa Gabor and directed by John Huston was released. It was nominated for seven Oscars and won two, for best art direction and best costume design, the same two awards that Baz Luhrmann's 2001 remake "Moulin Rouge!" won, coincidentally. That version starred Nicole Kidman, John Leguizamo and Ewan McGregor.
Lamure went on to write other successful biographical novels, including Beyond Desire (about composer Felix Mendelssohn) and Claire De Lune (about composer Claude Debussy.
His final book, The Private Life of Mona Lisa, was published in 1976. On December 15, 1976, Pierre Lamure died in Los Angeles at the age of 67.
"Pierre La Mure, American Writer (1909-1976)," Contemporary Authors Online, Detroit: Gale Publishing, 1998.
The Palos Verdes Story, by Delane Morgan, The Palos Verdes Review, 1982.
Photos of Pierre Lamure from The Palos Verdes Story.
This 1895 photo shows the former Pacific Salt Works buildings after the plant had been bought and shut down by Liverpool Salt Works. The salt lake is partially visible at left.
The first local industry in Redondo Beach began with the sale by Manuel Dominguez of 215 acres of the Rancho San Pedro to merchants Henry Allanson and William Johnson of Los Angeles for $500 on December 15, 1854.
The centerpiece of the land sale was a spring-fed salt lake about 200 yards wide and 600 yards long located about 200 yards inland from the ocean. The natural salt lake was near Redondo Beach's northern border with Hermosa Beach, in the area where the AES Redondo Beach Generating Station now stands.
Allanson and Johnson formed the Pacific Salt Works, and began manufacturing salt by using wood-fired boilers to hasten evaporation. Though their product reportedly was excellent, the local Los Angeles market wasn't large enough to support them at the time, and shipping it out was not economically feasible due to high costs.
The rival Liverpool Salt Works actually could ship its product from the Salton Sea to Los Angeles by train more cheaply than the Pacific Salt Works could by wagon from Redondo Beach.
The pair gave up on the business in 1862, selling it to another L.A. businessman, Frances Mellus, who operated the Pacific Salt Works until 1881, when Liverpool Salt Works bought Pacific and shut it down.
An unidentified gentleman stands near the abandoned Pacific Salt Works office building in 1901.
By 1901, most of the equipment had been removed. The buildings remained abandoned for several decades, until they were finally all torn down by 1924.
The land was used for electric power stations and substations after the salt works closed down. In 1948, Southern California Edison opened its major south Bay power plant there. AES bought the facility in 1998.
In 1941, the Old Salt Lake site became California State Historical Landmark Number 373. On March 27, 1955, a granite marker denoting the salt lake was placed at the southeast corner of Harbor Drive and Yacht Club Way, near what is now the AES plant.
A granite marker placed near the site of the Old Salt Lake stands near the AES power plant in Redondo Beach.
Isabel Henderson established the first public library in Torrance in 1913 in the parlor of her house at 1804 Gramercy Avenue.
The collection consisted of 300 books donated by her cousin, Jared Sydney Torrance, the city's founder.
By September 1914, the collection had increased to 510 books and 4 magazine subscriptions. There were 193 library card holders and circulation to date for the year was 1,935. The library was open Mondays, Wednesdays and Fridays from 2 p.m. to 4 p.m., and from 7 p.m. to 9 p.m.
Henderson held the post of city librarian for 22 years. During that time, the library had several homes, including a vacated school building on Cabrillo Avenue, the former Dominguez Land Co. office at Cabrillo and El Prado Avenue and, finally, a cottage on El Prado.
Her daughter, Mrs. Dorothy Jamieson, took over as librarian in 1935, and also held the post for 22 years. Isabel Henderson died in 1936, the same year that the library's first permanent home, a Works Progress Administration project, opened at 1345 Post Avenue.
The Torrance Public Library's first permanent home was in this building on Post Avenue downtown, which now houses the Torrance Historical Society.
The library operated independently in its first year before joining the new Los Angeles County library system in 1914. In 1935, the city contracted with the county library to provide services for the Torrance library. Under the agreement, Torrance owned the buildings and furnishings, but the county owned the books and supplied library staff members.
As early as 1958, the city considered ending ties with the county and establishing its own independent library system. But it was not until April 18, 1967 that the city's voters passed a $2.35 million bond issue to finance its establishment.
In addition, the city broke ground for its new central library the next year, on Sunday, March 31, 1968. (See clip, above. Click to enlarge.) The $1.4 million structure had been in the planning stages for years as part of the city's new civic center complex. Construction began on the site just east of the Victor Bensted Plunge on Torrance Boulevard.
Construction crews work on the Torrance Civic Center Library building in this 1970 Daily Breeze file photo.
The new Torrance Civic Center Library at 3301 Torrance Boulevard was dedicated on Sunday, April 4, 1971. West fought to have a basement added on to the original plans, in order to have room for expansion.
That's exactly the space that was used 30 years later when the basement was converted into the audiovisual department in a $1.2 million project. The remodeled library reopened on February 22, 2001, after the completion of the first major renovation in the library's history.
Another major remodeling costing $1.8 million began in 2008 and was finished in 2009. After the central library remodeling was completed, each of the city's five branches – Isabel Henderson (named for the city's first librarian), El Retiro, Walteria, Southeast and North Torrance – also underwent remodeling. The final branch to be revamped, North Torrance, is currently closed and scheduled to reopen in June 2010.
Another major change occurred on December 11, 1996, when the main library was renamed the Katy Geissert Civic Center Library, after the city's first woman council member and mayor. Geissert, a driving force during the library's conversion to an independent entity back in 1967, died in July 2000.
Historic Torrance: A Pictorial History of Torrance, California, by Dennis F. Shanahan and Charles Elliott Jr., Legends Press, 1984.
Images of America: Old Torrance Olmsted Districts, by Bonnie Mae Barnard and Save Historic Old Torrance, Arcadia Publishing, 2005.
News Notes of California Libraries, Vol. 9, No. 1, Jan.-Sept. 1914, California State Library/California State Printing Office, 1915.
The year 2010 marks the 50th anniversary of the opening of Providence Little Company of Mary Medical Center in Torrance.
The hospital's unusual name comes from the Sisters of the Little Company of Mary, the congregation of nuns founded by Mary Potter in Nottingham, England in 1877. The "little company" refers to "the faithful followers who stayed with Mary, the Mother of Jesus, at the foot of the cross throughout his suffering and death," according to the hospital's website.
The congregation's stated mission is to help for the poor, sick, suffering and dying. The Sisters worked extensively in Europe in its early years. A Chicago businessman visiting Rome was so impressed with the Congregation's care of his terminally ill wife that he offered to sponsor the group in the U.S. Three nuns established the order in Chicago in 1893 as a result.
Mary Potter died in 1913. In 1988, Pope John Paul II gave her "Venerable" status, a step on her way to eventual canonization as a saint.
The Sisters of the Little Company of Mary arrived in Torrance in 1957 to begin planning and fundraising for a new hospital to be built there. Hospital board co-chairman Sam Levy headed the fundraising campaign, which raised $500,000 from donations. Some federal funds were used, but no new bond issues or taxes were used to finance the effort.
The general contractor Steed Bros. broke ground on construction of the new five-story, four-wing structure on December 19, 1957. The completed Little Company of Mary Hospital at 4101 Torrance Boulevard had 150 beds, and was engineered to accommodate three additional stories for future expansions. It cost approximately $3.5 million to build.
Dedication ceremonies for the new hospital were held on Saturday, December 12, 1959. His Eminence James Frances Cardinal McIntyre blessed the facility. Among the estimated 1500 other attendees were County Supervisor Burton Chace, Torrance Mayor Albert Isen and many other mayors and chamber of commerce presidents from throughout the South Bay.
The first patients were admitted to the hospital on Sunday, January 3, 1960. Dr. John A. Wilson, M.D., of Manhattan Beach headed its medical staff of 175 physicians.
The hospital has expanded many times over the years. A new convent and chapel opened in August 1966. Major expansions also took place in 1969 and and 1979. after which the hospital's bed count increased to 263.
Little Company of Mary Hospital in 1965. Daily Breeze file photo.
In 1992, San Pedro Peninsula Hospital became a part of the Little Company of Mary south Bay health network. Bay Harbor Hospital in Harbor City was added in 1997. In 1998, Little Company began leasing the former South Bay Hospital in Redondo Beach, transforming it into the Beach Cities Amublatory Care Center.
On September 1, 1999, Little Company of Mary began its affiliation with the Seattle-based Sisters of Providence Health System. By merging with Providence's Burbank facilities, the Providence Health Care System Southern California Region was formed.
Former Little Company of Mary chiefs of staff, right, line up with shovels at the ready during the groundbreaking ceremony for the Hannon Tower expansion on Sept. 20, 1999.
Another major expansion of Little Company's Torrance hospital occurred in 2002 with the opening of the $55 million state-of-the-art Hannon Tower. The five-story, 122,000-square-foot building now houses about two-thirds of the hospital's beds.
In April 2007, Little Company purchased the 6.7-acre parcel where the Daily Breeze had operated for more than 40 years, with plans to demolish the old Breeze building and construct new medical office buildings and a parking garage there.
On January 1, 2009, the facility officially changed its name to Providence Little Company of Mary Medical Center Torrance. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,427 |
\section{Introduction}
\label{sec:intro}
Intermittent demand, with its many periods of zero demand, is ubiquitous in practice. Over half of inventory consists of spare parts, for which demand is typically intermittent \citep{nikolopoulos2021we}. Given the high purchase and shortage costs associated with intermittent demand applications, accurate forecasts can support improved inventory management in fields such as manufacturing, aerospace and the military \citep{babai2019new,jiang2020intermittent}.
What makes intermittent demand challenging to forecast is that there are two sources of uncertainty: the sporadic demand occurrence, and the demand arrival timing. The seminal work on intermittent demand forecasting \citep{croston1972forecasting} proposed to forecast the sizes of demand and the inter-demand intervals separately. Several subsequent studies built on this idea. For example, the Syntetos-Boylan Approximation (SBA) proposed by \citet{syntetos2005accuracy} delivers approximately unbiased estimates and constitutes the benchmark in subsequently proposed methodologies for intermittent demand forecasting. However, \citet{croston1972forecasting}'s method and SBA only update demand sizes and intervals after a demand occurrence, which makes them unresponsive in periods of zero demand and hence ill-suited to situations of inventory obsolescence. To overcome this shortcoming, \citet{teunter2011intermittent} proposed a new method, referred to as Teunter-Syntetos-Babai (TSB), which updates the demand probability instead of the demand interval. TSB has been shown to have good empirical performance for demands subject to linear and sudden obsolescence \citep{babai2014intermittent}.
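The update logic of these estimators can be made concrete with a short sketch. The following Python function is an illustrative implementation of Croston's method with an optional SBA bias correction; the function name and the default smoothing parameter \texttt{alpha} are our own choices, not prescribed by the original papers:

```python
import numpy as np

def croston_forecast(y, alpha=0.1, sba=False):
    """One-step-ahead forecast with Croston's method (optionally SBA).

    Demand sizes and inter-demand intervals are smoothed separately
    and only updated in periods with non-zero demand.
    """
    y = np.asarray(y, dtype=float)
    nonzero = np.flatnonzero(y > 0)
    if len(nonzero) == 0:
        return 0.0
    z = y[nonzero[0]]            # smoothed demand size
    p = float(nonzero[0] + 1)    # smoothed inter-demand interval
    interval = 1
    for t in range(nonzero[0] + 1, len(y)):
        if y[t] > 0:
            z = alpha * y[t] + (1 - alpha) * z
            p = alpha * interval + (1 - alpha) * p
            interval = 1
        else:
            interval += 1
    forecast = z / p
    if sba:                      # Syntetos-Boylan bias correction
        forecast *= 1 - alpha / 2
    return forecast
```

Note that the smoothed quantities are updated only in periods with positive demand, which is precisely the behaviour that motivates TSB in the presence of obsolescence.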
The above-mentioned forecasting methods for intermittent demand are all parametric methods, which assume that future demand follows a statistical distribution whose unknown parameters can be estimated from historical data. Instead, non-parametric intermittent demand methods directly estimate the empirical distribution from past data, with no need for any assumption of a standard probability distribution. Bootstrapping methods, together with overlapping and non-overlapping temporal aggregation methods, dominate the research field of non-parametric intermittent demand forecasting \citep{willemain2004new, hasni2019performance, hasni2019spare, boylan2021intermittent,boylan2016performance}.
In particular, temporal aggregation is a promising approach to forecast intermittent demand, in which a lower-frequency time series can be aggregated to a higher-frequency time series. Latent characteristics of the demand, such as trend and seasonality, appear at higher levels of aggregation. \citet{nikolopoulos2011aggregate} first introduced temporal aggregation to intermittent demand forecasting and proposed the Aggregate–Disaggregate Intermittent Demand Approach (ADIDA). To tackle the challenge of determining the optimal aggregation level, \citet{petropoulos2015forecast} considered combinations of forecasts from multiple temporal aggregation levels simultaneously. This approach is called Intermittent Multiple Aggregation Prediction Algorithm (IMAPA). The overall results in their work suggested that combinations of forecasts from different frequencies led to improved forecasting performance.
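To illustrate the aggregate--forecast--disaggregate idea behind ADIDA, the following Python sketch aggregates a series into non-overlapping buckets, forecasts at the aggregate level with simple exponential smoothing, and disaggregates the forecast equally. The aggregation level \texttt{m}, the smoothing parameter \texttt{alpha}, and the equal-weight disaggregation are illustrative assumptions of our own:

```python
import numpy as np

def ses(y, alpha=0.1):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

def adida_forecast(y, m=3, alpha=0.1):
    """Aggregate into non-overlapping buckets of size m, forecast at the
    aggregate level with SES, and disaggregate the forecast equally."""
    y = np.asarray(y, dtype=float)
    n_buckets = len(y) // m
    trimmed = y[len(y) - n_buckets * m:]   # drop the oldest remainder
    aggregated = trimmed.reshape(n_buckets, m).sum(axis=1)
    return ses(aggregated, alpha) / m      # equal-weight disaggregation
```

IMAPA follows by repeating this procedure over several aggregation levels and combining the resulting forecasts.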
Recently, some attention has been paid to applying machine learning approaches to improve forecasting accuracy for intermittent demand, such as neural networks \citep{lolli2017single}, support vector
machines \citep{kaya2018intermittent, jiang2020intermittent}, and so on.
Although intermittent demand forecasting has seen notable research advances in recent decades \citep{nikolopoulos2011aggregate,petropoulos2015forecast,kourentzes2021elucidate}, there is still substantial scope for improvement \citep{nikolopoulos2021we}. For example, limited attention has been given to combination schemes for intermittent demand forecasting.
The literature indicates that forecast combination can improve forecast accuracy in modelling fast-moving time series \citep{bates1969combination,de2000review,kang2021diversity}. In this study, we expand on the study by \cite{petropoulos2015forecast} and aim to examine whether forecast combination improves the forecasts of intermittent demand.
The contributions of our work are:
1) a systematic evaluation of classical forecast combination methods in the context of intermittent demand forecasting, 2) a novel combination framework for intermittent demand, which can determine optimal combination weights for each time series automatically, 3) the investigation of the effects of time series features and forecast diversity on forecast combinations for intermittent demand, and
4) the evaluation of forecast accuracy and inventory-related measures based on a large set of real series and error measurements.
The rest of the paper is organized as follows. Section \ref{sec:combination} reviews a
series of forecast combination methods used in this work. Section \ref{sec:DIVIDE}
proposes a generalized forecast combination framework for intermittent demand and
discusses error metrics. Section \ref{sec:Simulation} performs a comparison of the
proposed approaches and some classic combination methods through a simulation
study. Further, in Section \ref{sec:Application}, we apply our framework to real data
and offer results based on forecast accuracy and inventory-related measures. Section \ref{sec:Discussion} provides discussions and Section \ref{sec:conclusion} concludes the paper.
\section{A review of forecast combinations}
\label{sec:combination}
Combining forecasts from different methods or models has been shown to perform well in practice. The Simple Average (SA) has proved to be a hard-to-beat forecast combination method \citep{clemen1989combining, stock2004combination, lichtendahl2020some}, which simply combines forecasts with an equal weight of $1/M$, where $M$ is the number of forecasting methods. \citet{clemen1989combining} reviewed over two hundred articles and concluded
that SA should be used as a benchmark when proposing more complex weighting schemes. \citet{palm1992combine} emphasized that SA
could reduce the variance of forecasts and avoid the uncertainty of weight estimation. The
phenomenon that SA outperforms more complicated combination methods is referred to as the ``forecast combination puzzle'' \citep{stock2004combination,
smith2009simple,claeskens2016forecast}.
Because SA is sensitive to extreme values, some attention has been paid to more robust combination schemes, including the median and trimmed means \citep{stock2004combination, lichtendahl2020some, PETROPOULOS2020110}. \citet{jose2008simple} studied two simple robust methods, trimmed and Winsorized means, and showed that they improved the combined forecasts. Simple combination schemes based on the mean and median are easy to calculate and avoid parameter estimation error. However, there is still no consensus on whether the mean or the median of individual forecasts performs better.
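These simple schemes reduce to elementary operations on the vector of individual forecasts; for illustration (the default trimming fraction here is an assumption, not a recommended value):

```python
import numpy as np

def simple_combinations(forecasts, trim=0.2):
    """Combine a vector of individual forecasts by SA, median, and trimmed mean."""
    f = np.sort(np.asarray(forecasts, dtype=float))
    k = int(np.floor(trim * len(f) / 2))       # forecasts trimmed from each tail
    trimmed = f[k:len(f) - k] if k > 0 else f
    return {
        "mean": f.mean(),
        "median": np.median(f),
        "trimmed_mean": trimmed.mean(),
    }
```

With an outlying forecast in the pool, the median and trimmed mean remain stable while the simple average is pulled toward the extreme value.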
In the field of intermittent demand forecasting, forecast combination methods have been largely overlooked. To the best of our knowledge, only SA has been applied to improve intermittent demand forecasting \citep{petropoulos2015forecast}. Recently, the organisers of the M5 competition \citep{makridakis2020m5}, which focused on forecasting retail sales with a large number of intermittent time series, used SA as one of the combination benchmarks.
To further investigate the value of forecast combinations, a number of scholars over the past half century have focused on finding optimal weights for combining different forecasting models. The seminal work by \citet{bates1969combination} proposed the idea of weighted forecast combinations. \citet{newbold1974experience} continued this stream of research and investigated more forecasting models and multiple forecast horizons. In their work, a weighted combination can be expressed as a linear function such that
\begin{equation*}
\label{eq:yhat_c}
\hat{y}_{T+1}^{c}=\sum\limits_{i=1}^{M}{{{w}_{i,T+1}}{{{\hat{y}}}_{i,T+1}}}={\mathbf{w}_{T+1}'}{\mathbf{\hat{y}}_{T+1}},
\end{equation*}
where $\mathbf{\hat{y}}_{T+1}$ is the column vector of forecasts at time $T+1$ generated from $M$ forecasting models, and ${\mathbf{w}}$ is the column vector of weights.
\citet{bates1969combination} produced an unbiased
combined forecast by minimizing the error variance, in which the combination weights can be determined by
\begin{equation}
\label{eq:wt_op}
{\mathbf{w}_{T+1}}=\frac{{{\Sigma }^{-1}}\mathbf{1}}{{\mathbf{1}}'{{\Sigma }^{-1}}\mathbf{1}},
\end{equation}
where $\mathbf{1}$ is an $m$-dimensional column vector of ones, and $\Sigma$ is the covariance matrix of forecast errors. The method is represented as \textit{Optimal} hereafter.
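Given an estimate of the error covariance matrix from historical forecast errors, the weights in Equation (\ref{eq:wt_op}) can be computed directly; a minimal Python sketch:

```python
import numpy as np

def optimal_weights(errors):
    """Bates-Granger 'Optimal' weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
    with Sigma estimated from a (T x M) matrix of historical forecast errors."""
    errors = np.asarray(errors, dtype=float)
    sigma = np.cov(errors, rowvar=False)     # M x M error covariance estimate
    ones = np.ones(sigma.shape[0])
    s_inv_one = np.linalg.solve(sigma, ones)
    return s_inv_one / (ones @ s_inv_one)
```

Note that the resulting weights sum to one but are not restricted to be non-negative, so strongly correlated errors can produce negative weights.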
As the covariance matrix $\Sigma$ in Equation (\ref{eq:wt_op}) is often unknown in practice,
\citet{bates1969combination} provided five methods to weight the individual forecasts
based on their historical performance. These five alternatives, expressed as \textit{BG1}, \textit{BG2},
\textit{BG3}, \textit{BG4} and \textit{BG5}, can be formulated as follows:
\begin{equation}
\label{eq:optimal}
\begin{split}
{{w}_{i,T+1}^{BG1}}&=\frac{({{E}_{i,T+1}})^{-1}}{\sum\nolimits_{i=1}^{M}{{({{E}_{i,T+1}})^{-1}}}}, ;\\
\mathbf{w}_{T+1}^{BG2}&=\frac{{{{\hat{\Sigma }}}_{T+1}^{-1}}\mathbf{1}}{{\mathbf{1}}'{{{\hat{\Sigma }}}_{T+1}^{-1}}\mathbf{1}}, {{\left( {{{\hat{\Sigma }}}_{T+1}} \right)}_{ij}}={{\nu }^{-1}}\sum\nolimits_{t=T-\nu+1 }^{T}{{{e}_{it}}{{e}_{jt}}};\\
{{w}_{i,T+1}^{BG3}}&=\alpha {{w}_{i,T}}+\left( 1-\alpha \right)\frac{{({{E}_{i,T+1}})^{-1}}}{\sum\nolimits_{i=1}^{M}{{({{E}_{i,T+1}})^{-1}}}}, ~ 0<\alpha<1;\\
w_{i,T+1}^{BG4}&=\frac{({{S}_{i,T+1}})^{-1}}{\sum\nolimits_{i=1}^{M}{({{S}_{i,T+1}})^{-1}}}, ~ {{S}_{i,T+1}}=\sum\nolimits_{t=1}^{T}{{{\gamma }^{t}}{{\left( {{e}_{it}} \right)}^{2}}}, ~ \gamma \ge 1;\\
\mathbf{w}_{T+1}^{BG5}&=\frac{{{{\hat{\Sigma }}}_{T+1}^{-1}}\mathbf{1}}{{\mathbf{1}}'{{{\hat{\Sigma }}}_{T+1}^{-1}}\mathbf{1}}, {{\left( {{{\hat{\Sigma }}}_{T+1}} \right)}_{ij}}=\frac{\sum\nolimits_{t=1}^{T}{{{\gamma }^{t}}{{e}_{it}}{{e}_{jt}}}}{\sum\nolimits_{t=1 }^{T}{{\gamma }^{t}}}.
\end{split}
\end{equation}
where $e_{it}$ is the forecast error of the $i$-th model at time $t$,
and $E_{i,T+1}=\sum\nolimits_{t=T-\nu+1}^{T}{{\left( {{e}_{it}} \right)}^{2}}$ is the sum of squared errors over the most recent $\nu$ observations.
The last four methods are variations of \textit{BG1}.
\textit{BG2} considers cross-correlations between the residuals from the fitted models, and \textit{BG3} contains an autocorrelation pattern. \textit{BG4} and \textit{BG5} give most weight to the
forecast which has performed best in the recent past with the aid of a parameter $\gamma$.
Through an empirical investigation based on 80 monthly time series, \citet{newbold1974experience} found that the methods \textit{BG1}, \textit{BG3} and \textit{BG4}, which ignored the correlation of forecast errors, outperformed \textit{BG2} and \textit{BG5}, which considered the correlation.
\citet{granger1984improved} further investigated regression-based approaches to obtain linear combinations. They demonstrated that the specification with a constant term and unrestricted weights performed best. The combination weights can be estimated by Ordinary Least Squares (OLS).
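This regression-based combination amounts to an OLS fit of the observed values on the individual forecasts plus a constant; an illustrative sketch:

```python
import numpy as np

def ols_combination(individual_forecasts, actuals):
    """Estimate a constant plus unrestricted combination weights by OLS.
    `individual_forecasts` is a (T x M) matrix; returns (intercept, weights)."""
    X = np.column_stack([np.ones(len(actuals)), individual_forecasts])
    beta, *_ = np.linalg.lstsq(X, np.asarray(actuals, dtype=float), rcond=None)
    return beta[0], beta[1:]
```

The estimated intercept absorbs any common bias in the individual forecasts, while the weights are free to be negative or to sum to something other than one.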
Linear combination has a long and successful history in forecasting. However, the issue
related to determining the best set of forecasting models to combine is also worthy of
attention. Lasso-based methods can address this by performing selection and shrinking coefficients toward zero \citep{tibshirani1996regression}. \citet{diebold2019machine} proposed a
variant of Lasso, partially-egalitarian LASSO (peLASSO), which set the weights of some
forecasting methods to zero and shrunk the survivors toward equality. They provided an
empirical assessment to forecast Eurozone GDP
growth and found that peLASSO outperformed SA and the median
\citep{diebold2019machine}.
Nonetheless, the above-mentioned forecast combination methods simply estimate the
combining weights for each time series separately. Recent studies indicate that global
methods that use all the time series to estimate the combining weights show puzzlingly good
performance in forecasting fast-moving time series \citep{montero2021principles}. One main
stream is feature-based forecast combinations. For example, \citet{monteromanso2020fforma:}
developed FFORMA (Feature-based FORecast Model Averaging), which used 42 features to
estimate the optimal combination weights based on a meta-learning algorithm.
Different feature-based combination approaches applied different time series features to improve forecasting performance \citep{wang2009rule,petropoulos2014horses,talagala2021fformpp}.
The selection of appropriate features mainly depends on expert knowledge. Intermittent demand series in particular contain a large number of zeros and irregular patterns, which makes the feature extraction procedure more difficult. \citet{theodorou2021exploring} proposed a methodological approach for feature extraction and selection tailored to the M5 dataset. Building on the FFORMA framework \citep{monteromanso2020fforma:}, \citet{kang2021diversity} used the diversity of the forecasting models as the only feature. Accounting for the diversity of the forecasts not only improved the forecast accuracy but also simplified the modelling process and reduced the computational complexity.
The potential of time series features and the diversity of the forecasts has not been investigated when producing forecast combinations for intermittent demand. In our work, we extract a set of time
series features tailored for intermittent demand and calculate the diversity based on a
pool of intermittent demand forecasting models. To this end, an intermittent demand
forecast combination framework can be constructed by using eXtreme Gradient Boosting
(XGBoost)~\citep{chen2016xgboost}, which we present in Section~\ref{sec:DIVIDE}.
\section{Forecast combination for intermittent demand}
\label{sec:DIVIDE}
\subsection{Time series features for intermittent demand}
We consider nine comprehensible time series features for intermittent demand forecasting,
which are listed in Table \ref{tab: fide_features}. The selected features are based on previous studies by \citet{Kourentzes2016tsintermittent}, \citet{christ2018time}, and \citet{Hara2021feasts}. We describe them as follows.
\begin{itemize}
\item ${F}_{1}$, ${F}_{2}$: The average Inter-Demand Interval (IDI) and squared
Coefficient of Variation ($\text{CV}^2$) are the two most popular attributes to describe
demand patterns, which are used in the SBC classification scheme for intermittent demand
\citep{syntetos2005categorization,kostenko2006note}.
\item ${F}_{3}$: Entropy-based measures have been applied to quantify the regularity and
unpredictability of time-series data \citep{kang2017visualising,
theodorou2021exploring}. We use the approximate entropy in this paper, which can be
computed by the \texttt{TSEntropies} \citep{TSEntropies} package in R. A relatively
small value of ${F}_{3}$ indicates that the demand series exhibits more regularity and is therefore more forecastable.
\item ${F}_{4}$, ${F}_{5}$: These two features describe the proportions of specific observations. The percentage of zero values (${F}_{4}$) and the percentage of values far from the mean (${F}_{5}$) are simple measures of intermittency and volatility, respectively.
\item ${F}_{6}$: This feature provides the coefficient of a linear least-squares
regression, which measures the linear trend of the variances from a series of chunks.
The length of chunks is a parameter in ${F}_{6}$, which determines how many
demand values are in each chunk. For the monthly data in the following experiments, we set this parameter to 12.
\item ${F}_{7}$: Given two quantiles, ${F}_{7}$ measures the consecutive changes of the series inside the fixed corridor defined by them. The quantiles of concern in this paper are the 40\% and 100\% quantiles.
\item ${F}_{8}$, ${F}_{9}$: In the field of intermittent demand, except for the
intermittency and volatility, the risk of obsolescence is another factor to bring
difficulty to forecasting \citep{teunter2011intermittent, babai2019new}. The last two
features capture the recent demand pattern. ${F}_{8}$ and ${F}_{9}$ focus on the sum of
squares of the last chunk and the recent zero demand, respectively. The number of chunks
in ${F}_{8}$ is a tuning parameter, which is set to 4 in this paper.
\end{itemize}
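Several of these features follow directly from their definitions. The following Python sketch computes ${F}_{1}$, ${F}_{2}$, ${F}_{4}$ and ${F}_{9}$ for a demand series; the remaining features require the entropy and chunking routines referenced above, and note that slightly different conventions for counting inter-demand intervals exist in the literature:

```python
import numpy as np

def intermittent_features(y):
    """Compute four of the listed intermittent demand features."""
    y = np.asarray(y, dtype=float)
    nonzero = y[y > 0]
    idx = np.flatnonzero(y > 0)
    intervals = np.diff(idx) if len(idx) > 1 else np.array([1.0])
    cv2 = (nonzero.std(ddof=1) / nonzero.mean()) ** 2 if len(nonzero) > 1 else 0.0
    trailing_zeros = len(y) - 1 - (idx[-1] if len(idx) else -1)
    return {
        "IDI": intervals.mean(),                   # F1: mean inter-demand interval
        "CV2": cv2,                                # F2: squared CV of non-zero demand
        "Percent.zero": np.mean(y == 0),           # F4
        "Prop.zero.end": trailing_zeros / len(y),  # F9
    }
```

For example, a series ending in a long run of zeros yields a large ${F}_{9}$, flagging a possible risk of obsolescence.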
Based on the nine time series features tailored for intermittent demand, we put forward a feature-based forecast combination approach, the \emph{Feature-based Intermittent DEmand forecasting} (FIDE).
\begin{table}
\footnotesize
\centering
\caption{Description of nine time series features selected for intermittent demand data.}
\label{tab: fide_features}
\begin{tabular}{lp{9cm}l}
\toprule
Feature name & Description & Value \\
\midrule
${F}_{1}$: IDI & Intermittency feature that averages the inter-demand intervals. & $\left[ \left. 1,\infty \right) \right.$ \\
${F}_{2}$: CV$^2$
& Coefficient of variation squared of non-zero demand indicating the intermittent volatility. & $\left[ \left. 0,\infty \right) \right.$ \\
${F}_{3}$: Entropy
& Approximate entropy that is used as a regularity measure. & $(0,1)$ \\
${F}_{4}$: Percent.zero & Intermittency feature that calculates the percentage of observations in the series that are zero values. & $(0,1)$ \\
${F}_{5}$: Percent.away.mean & The percentage of values that are more than sigma away from the mean to measure the volatility. & $\left[ \left. 0,1 \right) \right.$ \\
${F}_{6}$: Linear.chunk.var & Volatility feature that utilizes linear least-squares regression for variances of the time series that were aggregated over chunks. & $\left( -\infty ,\infty \right)$ \\
${F}_{7}$: Change.quantile
& Volatility feature that counts the average, absolute value of consecutive changes of the series inside a fixed corridor given by the quantiles. & $\left[ \left. 0,\infty \right) \right.$ \\
${F}_{8}$: Ratio.last.chunk & The sum of squares of the last chunk out of all chunks expressed as a ratio with the sum of squares over the whole series. & [0,1] \\
${F}_{9}$: Prop.zero.end
& The proportion of data which ends with zero. & [0,1] \\
\bottomrule
\end{tabular}%
\end{table}%
\subsection{Diversity for intermittent demand}
In this paper, we extend the work of \citet{kang2021diversity} and propose a forecast combination method for intermittent demand based on forecast diversity, the \emph{DIVersity based Intermittent DEmand forecasting} (DIVIDE). The scaled diversity between any two forecasting methods is defined as:
\begin{equation}
\label{eq:Dij}
{{DIV}_{ij}}=\frac{\frac{1}{H}\sum\nolimits_{h=1}^{H}{{{\left( {{{\hat{y}}}_{ih}}-{{{\hat{y}}}_{jh}} \right)}^{2}}}}{{{\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{\left| {{y}_{t}} \right|} \right)}^{2}}},
\end{equation}
where $H$ is the forecast horizon, ${{\hat{y}}}_{ih}$ is the $h$-th step forecast
generated from the $i$-th forecasting model, and
$\left\{ {{y}_{t}},t=1,2,\cdots ,T \right\}$ is a series of observed values.
The main merits of applying the diversity to intermittent demand forecasting are twofold.
The first aspect is simplicity, as the calculation only depends on forecasting values,
with no need to compute a separate set of features. The second is general
applicability. The diversity can be obtained straightforwardly from intermittent demand forecasts
and comprehended easily by forecasters, regardless of the inventory management
problem. Nevertheless, time series features need to be customized based on the specific
application scenarios.
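As a sketch, Equation \eqref{eq:Dij} can be computed directly from two forecast vectors and the observed series (illustrative Python, not the authors' implementation):

```python
def scaled_diversity(fc_i, fc_j, y):
    """DIV_ij of Eq. (eq:Dij): mean squared difference between the H-step
    forecasts of methods i and j, scaled by the squared mean absolute
    level of the observed series y_1..y_T."""
    H = len(fc_i)
    msd = sum((a - b) ** 2 for a, b in zip(fc_i, fc_j)) / H
    scale = (sum(abs(v) for v in y) / len(y)) ** 2
    return msd / scale
```

The scaling in the denominator makes diversities comparable across series with different demand levels.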
\subsection{Generalized forecast combination framework}
To produce forecast combinations for intermittent demand, we first need to define a suitable pool of forecasting methods. The pool includes traditional forecasting models, namely Naive, seasonal Naive (sNaive), Simple Exponential Smoothing (SES), Moving Averages (MA), AutoRegressive Integrated Moving Average (ARIMA) and ExponenTial Smoothing (ETS), as well as intermittent demand forecasting methods, namely Croston's method (CRO), optimized Croston's method (optCro), SBA, TSB, ADIDA and IMAPA. Implementations of these methods are available in the \texttt{forecast} \citep{Hyndman2020forecast} and \texttt{tsintermittent} \citep{Kourentzes2016tsintermittent} packages in R.
Then, given a forecast error metric, the
optimization objective for the FIDE and DIVIDE is
\begin{equation}
\label{eq:MIN_fide}
\underset{{w}_{F}}{\mathop{\arg \min }}\,\sum\limits_{n=1}^{N}{\sum\limits_{i=1}^{M}{w{{\left( {{F}_{n}} \right)}_{i}}}}\times erro{{r}_{n,i}},
\end{equation}
\begin{equation}
\label{eq:MIN_divide}
\underset{{w}_{D}}{\mathop{\arg \min }}\,\sum\limits_{n=1}^{N}{\sum\limits_{i=1}^{M}{w{{\left( {{D}_{n}} \right)}_{i}}}}\times erro{{r}_{n,i}},
\end{equation}
where ${F}_{n}$ is the feature vector, and ${D}_{n}$ is the diversity vector of the $n$-th time series. $erro{{r}_{n,i}}$ is the forecast error of the $i$-th model for the $n$-th time series. A simple form of the error is the traditional absolute or squared error measure. To meet the actual application needs, we can also use a comprehensive error by adding some other measures that can reflect inventory problems, such as stockout. A series of error metrics for intermittent demand forecasting are discussed in the next subsection. The optimal combination weights ${{w}_{F}}$ and ${{w}_{D}}$ can be estimated by mapping the features or the diversity to the errors based on the \texttt{M4metalearning} package by \citet{monteromanso2020fforma:}.
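The objectives in Equations \eqref{eq:MIN_fide} and \eqref{eq:MIN_divide} share the same form: a weighted sum of per-model errors, where the weights are produced as a function of the features or of the diversity. A hypothetical sketch of this loss:

```python
def combination_loss(weights, errors):
    """Weighted sum of forecast errors over N series and M pool methods:
    sum_n sum_i w_{n,i} * error_{n,i}. `weights[n][i]` is the weight the
    meta-learner assigns to model i on series n (each row sums to one),
    and `errors[n][i]` is the corresponding forecast error."""
    return sum(w * e
               for w_row, e_row in zip(weights, errors)
               for w, e in zip(w_row, e_row))
```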
In the training phase, we generate forecasts based on the intermittent demand forecasting pool of methods and calculate errors required in the objective function. In FIDE, we compute the features tailored for intermittent demand and learn the relationship between the features and combination weights based on XGBoost algorithm. In DIVIDE, the intermittent demand forecasting combination model can be obtained based on the diversity of different forecast methods. In the forecasting phase, we calculate the features of the new time series and the diversity of the respective forecasts, and get the combination weights through the pre-trained XGBoost model for FIDE and DIVIDE respectively. Finally, we average point forecasts from different methods in the pool based on the optimal weights to achieve the combined forecast results. The flowchart of the proposed framework is presented in Figure \ref{flowchart}.
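In an FFORMA-style meta-learner, the gradient-boosting model emits one raw score per pool method for each series; a softmax transform then turns these into non-negative weights that sum to one, and the final forecast is the weighted average of the pool forecasts. A hypothetical sketch of these last two steps:

```python
import math

def scores_to_weights(scores):
    """Softmax over the meta-learner's raw scores (one per pool method),
    yielding non-negative combination weights that sum to one."""
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def combine(forecasts, weights):
    """Weighted average of point forecasts: forecasts[i][h] is the
    h-step-ahead forecast of the i-th pool method."""
    horizon = len(forecasts[0])
    return [sum(w * f[h] for w, f in zip(weights, forecasts))
            for h in range(horizon)]
```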
The advantages of the proposed framework are as follows:
(1) a diverse forecasting pool, consisting of intermittent demand forecasting methods and traditional time series forecasting models;
(2) a customizable objective function depending on actual inventory management requirements;
(3) the selection of comprehensible time series features especially for intermittent demand; and
(4) the use of the diversity of the forecasts.
The above advantages allow us to propose a generalized intermittent demand forecasting combination framework by considering both features and the diversity to learn from other time series and estimate optimal combination weights.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=\textwidth]{figures/flowchart.png}
\end{tabular}
\end{center}
\caption
{
The flowchart for the proposed intermittent demand forecasting combination framework. }
\label{flowchart}
\end{figure}
\subsection{Evaluation metrics for intermittent forecasting}
\label{sec:error-metric}
In previous studies, various forecasting evaluation metrics have been used for intermittent demand. The chosen metric of forecast errors may influence the ranked performance of the forecasting methods. \citet{silver1998inventory} pointed out that no single metric was
universally best. Since the groundbreaking intermittent demand method by \citet{croston1972forecasting} was proposed, a large number of scholars began to examine the forecasting performance of \citet{croston1972forecasting}'s method and its variants by comparing them with traditional time series methods.
\citet{syntetos2005accuracy} evaluated the simple Moving Average (MA), single Exponential Smoothing (ES), Croston method \citep{croston1972forecasting} and a modified Croston method \citep{syntetos2001bias} based on 3000 Stock Keeping Units (SKUs). They used Mean Error (ME),
scaled Mean Error (sME), Relative
Geometric Root Mean Squared Error (RGRMSE) and Percentage Better (PBt) to evaluate the forecasting results. \citet{syntetos2006stock} continued their work by focusing on the stock control performance and used two percentage accuracy measures, PBt and Average Percentage Regret (APR).
\citet{teunter2009forecasting} pointed out that some traditional error measures, such as Mean Absolute Deviation (MAD),
Mean Square Error (MSE) and RGRMSE, were not suitable for intermittent demand forecasting, because they preferred zero forecasts. Therefore, \citet{teunter2009forecasting} tried to measure service
and inventory levels under the order-up-to policy and showed that Croston-type
approaches outperformed others (MA, ES, and bootstrapping) based on the Royal Air Force (RAF) dataset. \citet{wallstrom2010evaluation} discussed a series of forecasting error
measurements especially for intermittent demand and split them into two categories, traditional (accuracy) and bias error measurements. As traditional measures, \citet{wallstrom2010evaluation} considered MSE, MAD, symmetric Mean Absolute Percentage Error (sMAPE). As
bias error measures, they examined the Cumulated Forecast Error (CFE), Number Of Shortages (NOS), and Periods In Stock (PIS).
Further, they
suggested that complementary measures should be considered based on Principal Components Analysis (PCA).
Given that traditional error metrics favor distorted forecasts, bias measures need to be also considered when evaluating forecasts for intermittent demands.
\citet{kourentzes2014intermittent} proposed two novel cost functions for parameter optimization of intermittent demand methods, the Mean Squared Rate (MSR) and the Mean Absolute Rate (MAR) errors.
He evaluated parameter and model selection results based on two accuracy metrics. The first is the Mean Absolute Scaled Error (MASE), which was suggested to be the standard measure for the data with different scales and zero values \citep{hyndman2006another}. The second is the scaled Absolute Periods In Stock (sAPIS), which is a scale-independent variant of PIS.
\citet{petropoulos2015forecast} tracked five measurements to assess the performance of different intermittent demand approaches, including sME, scaled Mean Absolute Error (sMAE), scaled Mean Squared Error (sMSE), scaled Mean Periods In Stock (sMPIS), and scaled Mean Absolute Periods In Stock (sMAPIS). We sum up the aforementioned research in Table \ref{tab:literature}.
\begin{table}%
\centering
\caption{Summary of intermittent demand forecasting error measures.}
\begin{tabular}{lp{4cm}p{3cm}}
\toprule
Literature & Error measure & Data used \\
\midrule
\citet{syntetos2005accuracy} & ME, sME, RGRMSE, PBt & Monthly demand histories of 3000 SKUs \\
\citet{syntetos2006stock} & PBt, APR & Same as above \\
\citet{teunter2009forecasting} & MAD, MSE, RGRMSE & RAF dataset \\
\citet{wallstrom2010evaluation} & MSE, MAD, sMAPE, CFE, PIS, NOS & Daily demand data of 72 items from a company \\
\citet{kourentzes2014intermittent} & MASE, sAPIS & A simulated dataset and 3000 real demand of automotive spare parts \\
\citet{petropoulos2015forecast} & sME, sMAE, sMSE, sMPIS,
sMAPIS & RAF dataset \\
\bottomrule
\end{tabular}%
\label{tab:literature}
\end{table}%
There is still no consensus regarding the most appropriate measure to evaluate intermittent demand forecasts. In our analysis, we consider the two categories of error metrics
\citep{wallstrom2010evaluation} and adopt some scaled measures. Let ${{y}_{t}}$ be the
demand and ${{\hat{y}}_{t}}$ be the demand forecast at time $t$ for each time series.
Four traditional error measures are presented in Table \ref{tab:class1error}. Minimizing these error measures leads the proposed methods to prefer zero forecasts and results in
under-stocking. Thus, we also consider three bias error measures as shown in Table
\ref{tab:class2error}. To achieve a balance between improving
forecast accuracy and reducing shortage, we add a penalty for NOS to the optimization
objective function and propose a novel form of $error_{n,i}$:
\begin{equation}
\label{eq:error_nos}
erro{{r}_{n,i}}={{MASE}_{n,i}}+\alpha \cdot NO{{S}_{n,i}},
\end{equation}
where the penalty factor $\alpha$ is non-negative and can be adjusted according to actual inventory cost and shortage cost.
\begin{table}%
\centering
\caption{Traditional (accuracy) error measures used in intermittent demand forecasting.}
\begin{tabular}{ll}
\toprule
Error measure & Definition \\
\midrule
sMAE & $\frac{1}{H}\sum\nolimits_{h=1}^{H}{\left| {{y}_{T+h}}-{{{\hat{y}}}_{T+h}} \right|/\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{{{y}_{t}}} \right)}$ \\
sMSE & $\frac{1}{H}{{\sum\nolimits_{h=1}^{H}{\left( \left( {{y}_{T+h}}-{{\hat{y}}_{T+h}} \right)/\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{{{y}_{t}}} \right) \right)}}^{2}}$ \\
MASE & $\frac{1}{H}\sum\nolimits_{h=1}^{H}{\left|
{{y}_{T+h}}-{{\hat{y}}_{T+h}} \right|/\left(
\frac{1}{T-m}\sum\nolimits_{t=m+1}^{T}{\left| {{y}_{t}}-{{y}_{t-m}}
\right|} \right)}$
\\
sMAPIS & $\left| \sum\nolimits_{h=1}^{H}{\sum\nolimits_{j=1}^{h}{\left( {{y}_{T+j}}-{{\hat{y}}_{T+j}} \right)}} \right|/\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{{{y}_{t}}} \right)$ \\
\bottomrule
\multicolumn{2}{p{10cm}}{\footnotesize Note: $m$ is the length of periodicity
and $H$ is the forecasting horizon.}
\end{tabular}%
\label{tab:class1error}
\end{table}%
\begin{table}%
\centering
\caption{Bias error measures used in intermittent demand forecasting.}
\begin{tabular}{ll}
\toprule
Measure & Definition \\
\midrule
sME & $\frac{1}{H}\sum\nolimits_{h=1}^{H}{\left( {{y}_{T+h}}-{{\hat{y}}_{T+h}} \right)/\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{{{y}_{t}}} \right)}$ \\
sMPIS & $\left( -\sum\nolimits_{h=1}^{H}{\sum\nolimits_{j=1}^{h}{\left( {{y}_{T+j}}-{{\hat{y}}_{T+j}} \right)}} \right)/\left( \frac{1}{T}\sum\nolimits_{t=1}^{T}{{{y}_{t}}} \right)$ \\
NOS & $\sum\nolimits_{h = 1}^H \mathcal{I}\left((\sum\nolimits_{j=1}^{h}{ {{y}_{T+j}}-{{{\hat{y}}}_{T+j}})} > 0\right)$ \\
\bottomrule
\multicolumn{2}{p{0.7\textwidth}}{\footnotesize Note: $\mathcal{I}(\cdot)$ is the
indicator function.}
\end{tabular}%
\label{tab:class2error}
\end{table}%
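As a worked illustration of the measures above, the following hypothetical Python implements MASE, NOS and the NOS-penalized error of Equation \eqref{eq:error_nos} (a sketch, assuming a seasonal period $m$ and plain lists for the series):

```python
def mase(y_train, y_test, fc, m=1):
    """MASE (Table class1error): out-of-sample MAE scaled by the
    in-sample MAE of the seasonal naive forecast with period m."""
    num = sum(abs(a - f) for a, f in zip(y_test, fc)) / len(y_test)
    den = sum(abs(y_train[t] - y_train[t - m])
              for t in range(m, len(y_train))) / (len(y_train) - m)
    return num / den

def nos(y_test, fc):
    """NOS (Table class2error): number of periods in which cumulative
    demand exceeds cumulative forecast, i.e. a shortage occurs."""
    cum, count = 0.0, 0
    for a, f in zip(y_test, fc):
        cum += a - f
        count += int(cum > 0)
    return count

def penalized_error(y_train, y_test, fc, alpha=0.1):
    """error_{n,i} = MASE_{n,i} + alpha * NOS_{n,i} (Eq. error_nos)."""
    return mase(y_train, y_test, fc) + alpha * nos(y_test, fc)
```

Larger values of `alpha` trade forecast accuracy for fewer shortages, mirroring the penalty-factor analysis later in the paper.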
\section{Simulation studies}
\label{sec:Simulation}
\subsection{Simulated data}
In our work, we adopt the SBC classification scheme for demand patterns \citep{syntetos2005categorization,kostenko2006note} based on two attributes: IDI and $\text{CV}^2$.
\citet{petropoulos2014horses} investigated four possible determinants (IDI, $\text{CV}^2$, the number of observations and the forecasting horizon) of forecasting accuracy through simulations on intermittent data. They found that as IDI and $\text{CV}^2$ increased, the accuracy of all the investigated forecasting models decreased. Additionally, the number of observations had a small influence, and the forecasting horizon had almost no effect on forecasting performance in their research.
Therefore, in this paper, we generate the simulated data based on the three factors (IDI, $\text{CV}^2$ and the number of observations). Each factor is varied over four levels as shown in Table \ref{tab:simulation}. The forecast horizon is set to 12 periods for each time series. Considering 64 ($4^3$) combinations and 1,000 series at each level, we generate in total 64,000 simulated time series in the following experiment. Simulated data with the required characteristics can be generated using the \texttt{tsintermittent} \citep{Kourentzes2016tsintermittent} package in R.
Figure \ref{fig:simulation_data} demonstrates the distribution of the simulated data, which covers all four categories in the SBC classification scheme. To this end, we can compare the performance of the aforementioned forecast combination methods on intermittent demand data from the different categories.
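A minimal data-generating sketch in Python (an illustration of the idea only, not the \texttt{tsintermittent} routine used here): demand occurs with probability $1/\text{IDI}$ per period, and non-zero sizes are drawn from a gamma distribution, for which $\text{CV}^2 = 1/\text{shape}$.

```python
import random

def simulate_intermittent(n, idi, cv2, mean_size=5.0, seed=0):
    """Generate n periods of intermittent demand with target IDI and CV^2."""
    rng = random.Random(seed)
    shape = 1.0 / cv2                 # gamma CV^2 = 1/shape
    scale = mean_size / shape         # gamma mean = shape * scale
    return [rng.gammavariate(shape, scale) if rng.random() < 1.0 / idi else 0.0
            for _ in range(n)]
```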
\begin{table}%
\centering
\caption{Levels of the three factors considered for generating the simulated data.}
\begin{tabular}{lllll}
\toprule
\multirow{2}[3]{*}{Factors} & \multicolumn{4}{c}{Level} \\
\cmidrule(lr){2-5}
& 1 & 2 & 3 & 4 \\
\midrule
IDI & 1.00 & 1.32 & 2.00 & 4.00 \\
$\text{CV}^2$ & 0.25 & 0.49 & 1.00 & 2.00 \\
No. of observations & 84 & 108 & 132 & 156 \\
\bottomrule
\end{tabular}%
\label{tab:simulation}
\end{table}%
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.9\textwidth]{figures/simulationdata.png}
\end{tabular}
\end{center}
\caption{The hexagonal-bin heat map of the simulated data. The data density is denoted by the color depth. The x-axis and y-axis show the logarithmic transform of IDI and $\text{CV}^2$ respectively. Red lines indicate the boundaries of the different categories. Class 1 contains time series with low intermittence and high variability, Class 2 those with high intermittence and high variability, Class 3 those with low intermittence and low variability, and Class 4 those with high intermittence and low variability.}
\label{fig:simulation_data}
\end{figure}
\subsection{Forecasting results}
In the following experiment, we investigate the forecasting performance of
combination methods for intermittent demand. For each
simulated time series, the last 12 observations constitute the forecasting period. We
consider simple combinations (SA, Median), an error-based combination (Optimal),
performance-based combinations (BG1, BG4), Lasso-based combinations and combinations by
learning (FFORMA, FIDE, DIVIDE). From \citet{newbold1974experience}, we retain only BG1 and
BG4, as these two simple methods have proved more successful than the alternatives in
previous studies. The forecasting accuracy in
this experiment is measured by two scaled absolute errors, MASE and sMAE. The gains associated with reducing the
bias are generally considered to be more beneficial for inventory-related decisions compared
to the decrease of squared errors in literature
\citep{kourentzes2019another,sanders2009quantifying}.
For local combination methods, we perform a rolling origin evaluation to generate 12 out-of-sample forecasts, and the obtained forecast errors are used to calculate combination weights for Optimal, BG1 and BG4 methods.
In particular, the estimation of combination weights requires some conditions, such that the covariance matrix is non-singular and the training data can be standardized. To this end, we exclude some series in the second and fourth categories with high intermittence, which contain a lot of continuous zero demands in the tail of training period. For global combination methods, the model training setup in this paper is consistent with FFORMA. We suggest referring to \citet{monteromanso2020fforma:}'s work for details.
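The rolling-origin step for the local methods can be sketched as follows (hypothetical Python; `fit_forecast` stands in for any pool method, mapping a history to a one-step-ahead forecast):

```python
def rolling_origin_errors(y, fit_forecast, h=12):
    """For each of the last h origins, refit on the history up to the
    origin and record the absolute one-step-ahead error; such errors
    feed the weight estimation of Optimal, BG1 and BG4."""
    T = len(y)
    return [abs(y[t] - fit_forecast(y[:t])) for t in range(T - h, T)]
```

For example, `rolling_origin_errors(y, lambda hist: hist[-1])` evaluates the naive method.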
Table \ref{tab:simulation_results} presents the forecasting accuracy of different methods based on the simulated data.
We observe that the best individual methods are IMAPA and SBA, which were proposed especially for intermittent demand forecasting. Additionally, IMAPA performs the best in most cases. The traditional combination methods (SA, Median, Optimal, BG1, BG4) perform worse than the corresponding best individual models. However, the Lasso method, which applies a logarithmic transform before regression, significantly improves the forecasting accuracy. Indeed, this simple combination method yields the best MASE and sMAE for the time series with low intermittence and high variability in Class 1.
The proposed methods based on intermittent demand features and the diversity consistently outperform the best individual
models. FIDE shows clear superiority over FFORMA, which relies on 42 traditional time series features. Furthermore, DIVIDE offers the best forecasting results overall. Especially for the data from Classes 2 and 4 with high intermittence, the benefits of the proposed methods over FFORMA are more pronounced. Therefore, the simulation results in Table \ref{tab:simulation_results} provide good evidence for the importance of intermittent demand features and the diversity in estimating combination weights.
\begin{table}
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Forecasting accuracy of different methods based on the simulated data. The results for different classifications and the whole data set are presented. For each column, the smallest
values from both individual methods and combination methods are marked in bold.}
\begin{tabular}{l ccccc ccccc}
\toprule
Method & \multicolumn{5}{c}{sMAE} & \multicolumn{5}{c}{MASE} \\
\cmidrule(lr){2-6} \cmidrule(lr){7-11}
& Class 1 & Class 2 & Class 3 & Class 4 & All
& Class 1 & Class 2 & Class 3 & Class 4 & All \\
\midrule
Naive & 1.1364 & 1.6166 & 0.7199 & 1.3921 & 1.3134
& 1.0083 & 1.0366 & 1.0171 & 1.0144 & 1.0216\\
SES & 0.8775 & 1.3829 & 0.5372 & 1.2075 & 1.0933
& 0.7715 & 0.8728 & 0.7572 & 0.8656 & 0.8296\\
MA & 0.9054 & 1.4166 & 0.5504 & 1.2171 & 1.1167
& 0.7958 & 0.8941 & 0.7763 & 0.8740 & 0.8482\\
ARIMA & 0.8690 & 1.3775 & 0.5305 & 1.2006 & 1.0866
& 0.7634 & 0.8677 & 0.7475 & 0.8593 & 0.8228\\
ETS & 0.8837 & 1.3802 & 0.5328 & 1.2026 & 1.0919
& 0.7767 & 0.8697 & 0.7512 & 0.8608 & 0.8277\\
CRO & 0.8822 & 1.3860 & 0.5380 & 1.2092 & 1.0961
& 0.7753 & 0.8731 & 0.7584 & 0.8657 & 0.8308\\
optCro & 0.8747 & 1.3780 & 0.5332 & 1.2040 & 1.0893
& 0.7687 & 0.8678 & 0.7521 & 0.8612 & 0.8253\\
SBA & \textbf{0.8619} & 1.3772 & 0.5295 & 1.2029 & 1.0852
& \textbf{0.7571} & 0.8673 & 0.7462 & 0.8605 & 0.8213\\
TSB & 0.8703 & 1.3699 & 0.5311 & 1.1987 & 1.0837
& 0.7645 & 0.8631 & 0.7485 & 0.8579 & 0.8212\\
ADIDA & 0.8775 & 1.3759 & 0.5371 & 1.2017 & 1.0893
& 0.7715 & 0.8677 & 0.7572 & 0.8613 & 0.8267\\
IMAPA & 0.8648 & \textbf{1.3659} & \textbf{0.5289} & \textbf{1.1949} & \textbf{1.0797}
& 0.7597 & \textbf{0.8604} & \textbf{0.7451} & \textbf{0.8554} & \textbf{0.8180}\\
\midrule
SA & 0.8826 & 1.3868 & 0.5399 & 1.2073 & 1.0963
& 0.7764 & 0.8757 & 0.7613 & 0.8655 & 0.8325\\
Median & 0.8687 & 1.3643 & 0.5322 & 1.1968 & 1.0810
& 0.7636 & 0.8598 & 0.7503 & 0.8569 & 0.8198\\
Optimal& 0.8742 & 1.3868 & 0.5320 & 1.2030 & 1.0921
& 0.7682 & 0.8744 & 0.7498 & 0.8613 & 0.8273\\
BG1 & 0.8784 & 1.3757 & 0.5358 & 1.2013 & 1.0891
& 0.7723 & 0.8689 & 0.7554 & 0.8611 & 0.8270\\
BG4 & 0.8761 & 1.3810 & 0.5350 & 1.2025 & 1.0907
& 0.7703 & 0.8715 & 0.7542 & 0.8616 & 0.8274\\
Lasso & \textbf{0.8039} & 1.2264 & 0.5565 & 1.0512 & 0.9829
& \textbf{0.7127} & 0.8052 & 0.7791 & 0.7618 & 0.7688\\
FFORMA & 0.8511 & 1.2172 & 0.5290 & 1.2020 & 1.0231
& 0.7490 & 0.7752 & 0.7453 & 0.8610 & 0.7852\\
\textbf{FIDE} & 0.8554 & 1.1498 & 0.5297 & \textbf{1.0329} & 0.9583
& 0.7512 & 0.7285 & 0.7464 & \textbf{0.7545} & 0.7429\\
%
\textbf{DIVIDE}& 0.8397 & \textbf{1.1257} & \textbf{0.5285} & 1.0355 & \textbf{0.9461}
& 0.7396 & \textbf{0.7187} & \textbf{0.7444} & 0.7569 & \textbf{0.7368}\\
\bottomrule
\end{tabular}%
\label{tab:simulation_results}%
\end{table}%
Figure \ref{boxplot} presents the distributions of combination weights based on our FIDE and DIVIDE methods respectively. The boxplots in the left and right panels show similar patterns. This indicates that intermittent demand features and diversity are two types of effective inputs for the forecast combination model to learn and estimate optimal weights.
For the time series from different classifications, the weight distributions of the individual models show obvious variation. However, Classes 2 and 4 with high intermittence have similar distributions, and the same is true of Classes 1 and 3 with low intermittence. Therefore, intermittence is a crucial factor affecting intermittent demand forecasting performance.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=\textwidth]{figures/weights.pdf}
\end{tabular}
\end{center}
\caption
{The distributions of combination weights based on our FIDE and DIVIDE methods. The weights of forecasting models for the time series from different classifications are presented respectively. }
\label{boxplot}
\end{figure}
\section{Empirical evaluation}
\label{sec:Application}
\subsection{Real dataset}
The RAF dataset used for the empirical evaluation has been previously investigated in the literature \citep{kourentzes2021elucidate,petropoulos2015forecast,teunter2009forecasting}.
It contains 5000 monthly time series, with 84 observations each. Figure \ref{rafdata} describes the distribution of the dataset, with the same x-axis and y-axis as Figure \ref{fig:simulation_data}. We can observe that the time series exhibit high intermittence and fall only into Classes 2 and 4 of the SBC scheme \citep{kostenko2006note}.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.8\textwidth]{figures/rafdata.pdf}
\end{tabular}
\end{center}
\caption
{The distribution of the RAF dataset. The x-axis and y-axis show the logarithmic transform of IDI and $\text{CV}^2$ respectively. Red lines indicate the boundaries of the different categories. }
\label{rafdata}
\end{figure}
In the following experiment, we produce 12-step-ahead point forecasts. To this end, the first 72 periods constitute the in-sample set to train the forecast combination model for the proposed methods. The following 12 periods are treated as the out-of-sample set, which is used to evaluate the forecasting performance of different combination methods.
A series of forecasting error measurements for intermittent demand have been reviewed in
Section~\ref{sec:error-metric}. We consider both traditional errors and bias errors to
give a comprehensive description of point forecasting performance. For traditional errors,
we calculate two absolute errors sMAE and MASE, a squared error sMSE, and a cumulated
error sMAPIS. For bias errors, we compute two relative errors sME and sMPIS, and a
shortage-based error NOS, which is always nonnegative.
\subsection{Forecasting performance}
Firstly, we use traditional forecast errors in the optimal objective function as shown in
Table~\ref{tab:class1error}. We compare our methods with individual models, SA, Median and
FFORMA, and the results are presented in Table \ref{tab:pointforecast}. Other combination
methods in Table \ref{tab:simulation_results} are omitted here, being inferior to the simple combination based on the RAF dataset. For each
column, the optimal objectives in FFORMA and the proposed methods depend on the relevant
error measures. We observe that:
\begin{itemize}
\item According to sMAE and MASE, the best individual model is Naive. This is because
a zero forecast is optimal for the majority of out-of-sample periods that exhibit high intermittence. However, Naive yields the worst performance based on sMAPIS, which indicates the
adverse impact of under-stocks. Therefore, we need to consider complementary metrics to
evaluate forecast accuracy.
\item The DIVIDE method achieves the best forecast accuracy based on sMAE, MASE and
sMAPIS. However, it is worse than the best individual model based on sMSE, as the
weighted average loss function in the proposed framework is not suitable for quadratic
errors, due to the sensitivity of sMSE in the presence of outliers and high variability.
\item Combinations based on the diversity and intermittent demand features consistently
outperform FFORMA based on traditional time series features for all four measures. In
the field of intermittent demand forecasting, the time series features designed for
fast-moving data are not appropriate to describe intermittent data. Our chosen time series features for intermittent demand appear to be more appropriate to tackle this challenge. Furthermore, diversity, that only relates to predicted values, is a simple and robust choice for feature-based forecast combinations with improved forecasting accuracy.
\end{itemize}
\begin{table}
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Forecasting performance of different methods. For each error measure (column), the smallest value is marked in bold. The last column is the average rank of all
methods based on the four metrics. }
\begin{tabular}{l ccccc }
\toprule
Method & sMAE & sMSE & MASE & sMAPIS & Average rank \\
\midrule
Naive & 1.4569 & 66.7696 & 0.8491 & 108.5635 & 10.25 \\
sNaive & 1.6546 & 69.3216 & 0.9706 & 94.2955 & 13.00 \\
SES & 1.6615 & 59.9634 & 0.9657 & 95.7405 & 13.00 \\
MA & 1.6780 & 59.6904 & 0.9865 & 99.2663 & 14.25 \\
ARIMA & 1.6650 & 57.5444 & 0.9474 & 77.7958 & 8.75 \\
ETS & 1.6693 & 57.8585 & 0.9493 & 79.5946 & 11.50 \\
CRO & 1.7504 & 57.2558 & 0.9971 & 79.5944 & 11.75 \\
optCro & 1.7483 & 57.2632 & 0.9957 & 79.4407 & 11.25 \\
SBA & 1.7394 & \textbf{57.2542} & 0.9907 & 79.0423 & 9.75 \\
TSB & 1.6923 & 57.3153 & 0.9655 & 78.0195 & 9.50 \\
ADIDA & 1.5832 & 57.6764 & 0.9094 & 77.7058 & 6.50 \\
IMAPA & 1.6266 & 57.2964 & 0.9238 & 74.8702 & 4.75 \\
\midrule
SA & 1.6551 & 57.7310 & 0.9509 & 79.1409 & 10.00 \\
Median & 1.6283 & 57.3461 & 0.9251 & 75.6345 & 6.50
\\
FFORMA & 1.1406 & 57.4036 & 0.6396 & 80.8106 & 7.00 \\
\textbf{FIDE} & 0.9342 & 57.3391 & 0.5285 & 73.2410 & 3.25 \\
\textbf{DIVIDE} & \textbf{0.9323} & 57.3097 & \textbf{0.5281} & \textbf{69.4176} & \textbf{2.00} \\
\bottomrule
\end{tabular}%
\label{tab:pointforecast}%
\end{table}%
\subsection{Inventory analysis}
The proposed methods achieve appealing forecasting accuracy by minimizing traditional errors. However, an optimal objective based on traditional errors closely tracks the median of the intermittent demand distribution, which is often zero. Therefore, FIDE and DIVIDE will inevitably lead to under-stocking. As low absolute errors and shortage reduction cannot be achieved simultaneously, we modify the objective function with the novel error in Equation~\eqref{eq:error_nos}. To reflect inventory and shortage clearly, we introduce two bias measures, sME and sMPIS. The sME reports the point forecast bias; positive values indicate stock-outs. The sMPIS adds the dimension of time via cumulative summation; positive values imply left-over stock, the opposite of sME.
Figure \ref{5.3} presents the forecasting performance in terms of MASE, NOS, sMPIS and sME for different penalty factors in the objective function of our methods. We use MASE as a representative measure of forecast accuracy, and NOS, sMPIS and sME to assess the inventory status.
The penalty factor ranges from 0 to 0.5; when it is 0, the objective function is based on MASE alone. As shown in Figure \ref{5.3}, as the penalty factor increases, under-stocking decreases gradually, but the absolute error MASE increases. FIDE and DIVIDE show similar patterns in Figure \ref{5.3}, while the range of the evaluation measures varies across the other methods. Therefore, our proposed methods are flexible and can balance shortage reduction against forecast accuracy. Furthermore, the rate of change for DIVIDE is greater than that for FIDE, which indicates that the diversity is more effective than time series features in intermittent demand forecast combination. The exact choice of penalty factor depends on the application context, and forecast managers can adjust the parameter depending on inventory management goals.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=\textwidth]{figures/5.3.png}
\end{tabular}
\end{center}
\caption
{ Forecasting performance of MASE, NOS, sMPIS and sME with different penalty factors in the objective function of our methods. The red and blue solid lines denote proposed DIVIDE and FIDE respectively, which consider the penalty of under-stocking in the objective function. The forecasting errors of other methods are fixed and indicated by dotted lines for comparison.
}
\label{5.3}
\end{figure}
\section{Discussion}
\label{sec:Discussion}
Forecast combinations have been proven to improve forecast accuracy in recent years. In this paper, we focus on a specific forecasting field, intermittent demand, which is widespread in spare parts and inventory management.
To the best of our knowledge, our study
is the first to analyze forecast combination methods systematically in the field of intermittent demand forecasting.
Most importantly, we are the first to apply time series features explicitly related to intermittent demand data and the diversity for estimating combination weights for intermittent demand forecasts.
Our proposed approaches offer superior forecasting accuracy compared to individual methods for intermittent demand forecasting, as well as to existing combination schemes. The good performance of our approaches is verified both through a simulation study and a large-scale empirical evaluation.
Nine time series features customized for intermittent demand are used in the proposed framework, which are selected based on their interpretability, the convenience of computation and the dispersion degree of distributions.
Based on the experiments in this paper, ${F}_{9}$ is the most important feature in training the forecast combination model, and ${F}_{8}$ ranks second. Both ${F}_{8}$ and ${F}_{9}$ extract characteristics of recent demand. Therefore, the pattern of recent data is crucial in determining the optimal
forecast combination for intermittent demand.
Beyond the results shown in Sections \ref{sec:Simulation} and \ref{sec:Application}, we investigate the effect of two factors on the proposed framework: intermittent demand classes and the methods in the combination pool.
Given the SBC classification scheme \citep{syntetos2005categorization,kostenko2006note}, we implement the proposed framework
by training the forecast combination model for each class individually versus training for the whole data set.
The forecasting results show no significant difference between the two settings.
The quality of forecast combination has been demonstrated to depend on the individual forecasts as well as the diversity between forecasts \citep{lemke2010meta, kourentzes2019another, kang2021diversity}. Therefore, defining an appropriate forecasting pool is one of the most crucial steps in forecast combination exercises. We study the effect of three pooling algorithms based on the proposed framework, which are forecast islands proposed by \citet{kourentzes2019another}, a screened method from \citet{lichtendahl2020some}, and a Lasso-based method by \citet{diebold2019machine}.
Through a series of experiments, we find that none of
the pooling methods significantly improves forecasting performance. The proposed framework automatically reduces the weights of some methods to very small values, and can therefore be regarded as a generalized pooling method customized for each time series.
Therefore, unless the pool of forecasting methods is so large that the computation process becomes very time-consuming, there is no need to add modelling complexity by implementing explicit pooling approaches on top of our proposed framework.
The good performance of our proposed framework is attributed to: (1)
defining a forecasting pool suitable for intermittent demand, which consists of several intermittent demand forecasting methods and traditional time series forecasting methods; (2) applying diversity and time-series features to determine the optimal combination weights automatically; (3) customizing the objective function based on the inventory management plan in practice.
Based on the results presented in this paper, DIVIDE
that uses diversity outperforms FIDE, which is based on time-series features. The process of extracting the diversity is independent of historical data, making it more attractive for intermittent demand forecasting, where the training set is limited, especially with regard to positive demand values.
\section{Conclusion}
\label{sec:conclusion}
This paper focused on forecast combinations for intermittent demand. We reviewed a series of forecasting methods and evaluation measures, and investigated the performance of some existing forecast combination methods for intermittent demand.
We proposed using time-series features and diversity to improve the performance of intermittent demand forecast combinations, and proposed a generalized framework which can automatically determine the optimal combination weights.
We conducted a simulation exercise and an empirical investigation based on real-life data to analyze the forecast accuracy and gain insights related to the inventory performance of different combination approaches.
The results of the simulation study showed that simple combination and local weighted combination methods do not perform better than the best individual method for intermittent data. The proposed forecast combination framework notably outperforms the others, which confirms the value of diversity and time-series features in the context of intermittent demand forecasting. In the empirical evaluation, we utilized both traditional errors and bias-related errors to describe the performance of point forecasts comprehensively. Based on a modified objective function, the proposed FIDE and DIVIDE approaches offer flexibility between forecast accuracy and the reduction of shortages.
However, we recognise the lack of explicit inventory performance metrics in the current study. \citet{petropoulos2019inventory} combined financial, operational, and service metrics to form a holistic measure for inventory control objectives. \citet{ducharme2021forecasting} focused on stock-out events and proposed a novel metric called Next Time Under Safety Stock. Such utility measures are important to achieve a direct link with inventory holding costs and service levels in operation management. Future research should focus on further analyzing the performance of our proposed framework to this direction. Another limitation of this paper is the lack of estimating the uncertainty of the intermittent demand forecasts, either in terms of quantiles or probability distributions. Nevertheless, these are important for determining optimal safety stock levels in supply chain management \citep{trapero2019quantile}. Therefore, another avenue for further work could be to extend our approach to probability forecast combinations for intermittent demand.
\section*{Acknowledgments}
Yanfei Kang is supported by the National Natural Science Foundation of China (No. 72171011). Feng Li is supported by the Beijing Universities Advanced Disciplines Initiative (No. GJJ2019163) and the Emerging Interdisciplinary Project of CUFE.
\bibliographystyle{tfcad}
Our REALTORS® can help you see inside every Noblesville Township, Hamilton County, Indiana house listed by any real estate agent or broker. When you find a home you'd like to see, submit a showing request or call us.
Get Noblesville Township automated home sale updates.
Social Search Is a Win-Win For Marketers
A few days ago, you could practically hear marketers and SEO geniuses alike salivating at the new possibilities of social searches. Google recently included relevant Twitter updates in search results, and last week Facebook and Bing partnered up in an agreement which allows the search engine to produce results based on "Likes" made on Facebook by the searcher's friends. Joe Devine pondered the implications this could have for SEO professionals over at Mashable.com on Monday. Some have blogged about social searches being a coup for analysts, while still others have wondered if social search has crossed a line into creepy consumerism spying.
For marketers, though, social searches finally may provide a glimpse into the effectiveness of our social media marketing strategies that previously relied on one-part guesswork and one-part blind faith.
While working on a social media campaign for a local small business, I was thrilled to find its name in tweets and on Facebook while doing a routine search engine trolling. My elation wasn't solely based on our campaign being successful, but the ease of having all the social media mentions in one place. This is a necessary time saver for marketers and freelance PR wizards who keep a delicate balance between working and watching YouTube videos for, um, research. These social search results also can reassure a client who still hasn't surrendered to social media marketing: Proof of effectiveness is now in black and white.
Additionally, social searches can give us a look at what brands are hitting social media marketing campaigns out of the park. If there are 100 tweets about a new Pepsi campaign on Google's first three pages, for example, it's safe to say the effort has been a success. If, however, a social media campaign has generated no search engine buzz, we'll be able to tell right away where the glitches are. Mainly, social search legitimizes the importance of social media and the brand-generated conversations that take place on sites like Facebook and Twitter. Our customers' voices soon will wind up on coveted Google searches – and that, in and of itself, is invaluable marketing gold.
\section{Introduction}
Following Buchweitz \cite{Bu} and Orlov \cite{Orl} the singular category $\EuScript D_{\mathop{\mathrm{sg}}\nolimits}(A)$ of an associative algebra $A$ is the localisation of the bounded derived category of $A$ at the full subcategory of compact objects. For more details see e.g. \cite{Zim}. In the recent paper \cite{Wang}, we defined the singular Hochschild cohomology groups
$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^i(A, A)$ of an associative $k$-algebra $A$ over a commutative base ring $k$, as morphisms from $A$ to $A[i]$
in the singular category $\EuScript D_{\mathop{\mathrm{sg}}\nolimits}(A\otimes_k A^{\mathop{\mathrm{op}}\nolimits})$. That is, for any $i\in \mathbb{Z}$,
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^i(A, A):=\mathop{\mathrm{Hom}}\nolimits_{\EuScript D_{\mathop{\mathrm{sg}}\nolimits}(A\otimes_k A^{\mathop{\mathrm{op}}\nolimits})}(A, A[i]),$$
where we denote as usual by $[i]$ the $i$-th iterate of the suspension functor.
Moreover, we proved that the singular Hochschild cohomology ring $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$
is a Gerstenhaber algebra if $A$ is a $k$-algebra such that $A$ is projective as a $k$-module (cf. \cite[Theorem 4.2]{Wang}). The singular Hochschild cohomology group $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^i(A, A)$ can be computed by the bar resolution of $A$, roughly speaking,
it is realized as the colimit of an inductive system associated to the bar resolution (cf. \cite[Proposition 3.1]{Wang}).
A radical square zero algebra is a finite dimensional algebra over a field $k$ such that
the square of its Jacobson radical is zero.
From Gabriel's theorem it follows that every radical square zero algebra over an algebraically closed field $k$ is Morita equivalent to a path algebra modulo relations of the form
$kQ/\langle Q_2\rangle$, where $Q$ is a finite quiver and $Q_2$ is the set of all paths
in $Q$ of length 2.
We follow \cite{Cib1, Cib2, Cib3} and \cite{San, San2} to compute the singular Hochschild cohomology of radical square zero algebras. We will study the dimension of $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^i(A, A)$ and the Gerstenhaber algebra structure on $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$.
Recall that in \cite{Cib1},
Cibils constructs a reduced bar resolution (cf. Lemma \ref{lemma1cib} below) of $A$ as an $A$-$A$-bimodule for any finite
dimensional $k$-algebra $A$, where $k$ is a perfect field. By this projective resolution,
Cibils gave a very combinatorial construction of the Hochschild cohomology groups $\mathop{\mathrm{HH}}\nolimits^*(A, A)$
in the case of a radical square zero algebra $A$. Following Cibils's construction, in \cite{San, San2} S\'{a}nchez
studied the Lie module structure on the Hochschild cohomology groups $\mathop{\mathrm{HH}}\nolimits^*(A, A)$
over the Lie algebra $\mathop{\mathrm{HH}}\nolimits^1(A, A)$. In this paper, we will generalize these results to
the singular Hochschild cohomology $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$ of a radical square zero algebra $A$.
We also give examples for algebras $A$ whose Hochschild cohomology (or singular Hochschild cohomology)
do not admit a BV algebra structure (cf. Remarks \ref{rem-conter} and \ref{rem-conter1}).
This paper is organized as follows.
Section 2 is devoted to recall some notions and results from
\cite{Cib1, Cib2, Cib3}. In Section 3, we develop the general theory
on the singular Hochschild cohomology groups $\mathop{\mathrm{HH}}\nolimits^i_{\mathop{\mathrm{sg}}\nolimits}(A, A)$
for any radical square zero algebra $A$. In Section 4, we consider examples of the radical square zero algebra $A$ associated to the $c$-crown quiver. The dimension of $\mathop{\mathrm{HH}}\nolimits^i_{\mathop{\mathrm{sg}}\nolimits}(A, A)$
and the Gerstenhaber algebra structure on $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$ will be computed explicitly.
Section 5 considers the radical square zero algebra $A$ associated to the two loops quiver. We give a description on the Gerstenhaber algebra structure on $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$. In Section 6, we give a PROP interpretation
for the Gerstenhaber bracket in the case of radical square zero algebras. This generalizes the construction in Section 5. In Appendix A,
we use PROP theory to construct a new Lie algebra structure on $\mathop{\mathfrak{gl}}\nolimits_{\infty}(k)$.
In this paper, we frequently use some notions on quivers without recalling. We refer to
\cite{Cib1, Cib3, Boe, Mich, Zim} for details.
\section*{Acknowledgement} This work is part of the author's PhD thesis. I would like to thank my PhD supervisor
Alexander Zimmermann for introducing this interesting topic and for his many valuable suggestions for improvement. I also would like to thank Huafeng Zhang for
many useful discussions during this project.
\section{Background and Notation}
In this section, we assume that $k$ is a field and $A$ is a finite dimensional $k$-algebra.
Recall that the Jacobson radical $\mathop{\mathrm{rad}}\nolimits(A)$ is defined as the intersection of all maximal left ideals of $A$. Wedderburn-Malcev theorem says that there exists a semi-simple subalgebra $E$ of $A$ such that $A\cong E\oplus \mathop{\mathrm{rad}}\nolimits(A)$.
Let $Q=(Q_0, Q_1, s, t)$ be a finite (i.e. $|Q_0\cup Q_1|<+\infty$) and connected quiver.
For $n\in\mathbb{Z}_{\geq 0}$, we define $Q_n$ to be the set of all paths in $Q$ of length $n$. The trivial path is denoted by $e_i$ for a vertex $i$ in $Q_0$. We consider the path algebra $kQ$ of $Q$ over the field $k$. As a vector space,
$$kQ=\bigoplus_{n\in\mathbb{Z}_{\geq 0}} kQ_n.$$
Let $p$ and $q$ be two paths in $Q$, then their product $pq$ is the concatenation of the paths $p$ and $q$ if $t(q)=s(p)$ and $0$ otherwise.
Then the radical square zero algebra of $Q$ is defined as
$$A_Q:=kQ/\langle Q_2 \rangle,$$
where $\langle Q_2\rangle$ is the two-sided ideal in $kQ$ generated by $Q_2,$ the set of paths of length 2.
Hence $A_Q$ is a finite-dimensional $k$-algebra with the Jacobson radical $\mathop{\mathrm{rad}}\nolimits(A_Q)=kQ_1$ and the Wedderburn-Malcev decomposition is $A_Q=kQ_0\oplus kQ_1$, where $E=kQ_0$ is
a commutative semi-simple algebra, which is isomorphic to $$\bigoplus_{e_n\in Q_0} ke_n.$$ We remark that $\mathop{\mathrm{rad}}\nolimits^2(A_Q)=0$.
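The smallest case of this construction can be written out explicitly (a minimal example of our own, for illustration):

```latex
If $Q$ consists of a single vertex with one loop $x$, then $kQ\cong k[x]$ and
$$A_Q=kQ/\langle Q_2\rangle\cong k[x]/(x^2),$$
a two-dimensional algebra with $k$-basis $\{e, x\}$, semi-simple part $E=ke$
and $\mathop{\mathrm{rad}}\nolimits(A_Q)=kx$.
```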
For a finite-dimensional $k$-algebra $A$ (not necessarily, radical square zero algebra) with a Wedderburn-Malcev decomposition $A=E\oplus \mathop{\mathrm{rad}}\nolimits(A)$, we have a projective resolution of the $A$-$A$-bimodule $A$. We abbreviate in the sequel
$$a_{i, j}:=a_i\otimes a_{i+1}\otimes \cdots\otimes a_j$$ for $i<j$.
\begin{lemma}[Lemma 2.1 \cite{Cib1}]\label{lemma1cib}
Let $A$ be a finite dimensional $k$-algebra and let $$A=E\oplus \mathop{\mathrm{rad}}\nolimits(A)$$ be a Wedderburn-Malcev decomposition of $A$. Put $r:=\mathop{\mathrm{rad}}\nolimits(A)$. Then the following is a projective resolution of the $A$-$A$-bimodule $A$:
\begin{equation}\label{equ-proj-res}
\xymatrix{
\mathcal{R}(A):\cdots\ar[r]^-{d_{i+1}} & A\otimes_E r^{\otimes_E i}\otimes_E A \ar[r]^-{d_i} & \cdots\ar[r]^-{d_2} & A\otimes_E r\otimes_E A\ar[r]^-{d_1} &
A\otimes_E A \ar[r]^-{\epsilon} & A\ar[r] & 0
}
\end{equation}
where $$\epsilon(a\otimes b)=ab$$
and
\begin{equation*}
\begin{split}
d_i(a\otimes x_{1, i}\otimes b)=&ax_1\otimes x_{2, i}\otimes b+\sum_{j=1}^{i-1}(-1)^ja\otimes x_{1, j-1}\otimes x_jx_{j+1}\otimes x_{j+2, i}\otimes b+\\
& (-1)^ia\otimes x_{1, i-1}\otimes x_ib
\end{split}
\end{equation*}
where all tensor products $\otimes$ represent $\otimes_E$.
\end{lemma}
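To fix signs, it may help to record the two lowest-degree differentials explicitly (our own unwinding of the formula above, writing $\otimes$ for $\otimes_E$):

```latex
d_1(a\otimes x_1\otimes b)=ax_1\otimes b-a\otimes x_1b,
\qquad
d_2(a\otimes x_1\otimes x_2\otimes b)
=ax_1\otimes x_2\otimes b-a\otimes x_1x_2\otimes b+a\otimes x_1\otimes x_2b.
```

In particular, if $\mathop{\mathrm{rad}}\nolimits^2(A)=0$, the middle term of $d_2$ vanishes, since $x_1x_2\in r^2=0$.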
\begin{prop}[Proposition 2.2. \cite{Cib1}]\label{prop2.2}
Let $A$ be a finite-dimensional $k$-algebra and $E$ be a subalgebra of $A$ such that $A=E\oplus \mathop{\mathrm{rad}}\nolimits(A)$ and let $M$ be a $A$-$A$-bimodule.
Then the Hochschild cohomology vector spaces $\mathop{\mathrm{HH}}\nolimits^i(A, M)$ are the cohomology groups of the complex:
\begin{equation}\label{equ-proj-res1}
\xymatrix{
0\ar[r] & M^E\ar[r]^-{\delta} & \mathop{\mathrm{Hom}}\nolimits_{E-E}(r, M)\ar[r]^-{\delta} & \mathop{\mathrm{Hom}}\nolimits_{E-E}(r\otimes_E r, M)\ar[r]^-{\delta} & \\
& \cdots\ar[r]^-{\delta} & \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E i}, M) \ar[r]^-{\delta} & \cdots
}
\end{equation}
where we denote $$r:=\mathop{\mathrm{rad}}\nolimits(A),$$
$$M^E:=\{m\in M\ |\ sm=ms \mbox{ for all $s\in E$}\},$$
for $m\in M^E$ and $x\in \mathop{\mathrm{rad}}\nolimits(A)$,
$$\delta(m)(x)=mx-xm$$
and for $f\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_Ei}, M)$,
\begin{equation*}
\begin{split}
\delta(f)(x_{1, i+1})=x_1f(x_{2, i+1})+\sum_{j=1}^{i} (-1)^{j} f(x_{1, j-1} \otimes x_jx_{j+1}\otimes x_{j+2, i+1})
+(-1)^{i+1} f(x_{1, i}) x_{i+1},
\end{split}
\end{equation*}
where all tensor products $\otimes$ represent $\otimes_E$.
\end{prop}
\begin{rem}\label{rem-twoterms}
If $A$ is a radical square zero algebra (i.e. $\mathop{\mathrm{rad}}\nolimits^2(A)=0$),
then the differential $\delta(f)$ just has two terms, namely, we have
$$\delta(f)(x_{1, i+1})=x_1f(x_{2, i+1})+(-1)^{i+1} f(x_{1, i})x_{i+1}.$$
Next we will apply the projective resolution $\mathcal{R}(A)$ in Lemma \ref{lemma1cib} to compute the singular Hochschild cohomology of
radical square zero algebras.
\end{rem}
\section{General theory for radical square zero algebras}\label{section3}
Now we fix a finite and connected quiver $Q=(Q_0, Q_1, s, t)$ and let
$A:=kQ/\langle Q_2\rangle$ be the radical square zero algebra of $Q$ over a field $k$.
\begin{lemma}[Lemma 2.1 \cite{Cib3}]\label{lemma2.1}
Let $k$ be a field. Let $r=kQ_1$ be the Jacobson radical of $A$ and $E=kQ_0$.
Then $r^{\otimes_E n}$ has a basis given by $Q_n$, the set of paths of length $n$.
\end{lemma}
\begin{rem}
Let $p_i\in r$, then
$$p_1\otimes_E \cdots \otimes_E p_n\neq 0\in r^{\otimes_E n}$$ if and only if
for $1\leq i \leq n-1$, $s(p_i)=t(p_{i+1}),$ that is, $p_1p_2\cdots p_n$ is a path of length $n$ in $kQ$.
\end{rem}
We follow Cibils to introduce the notation $X//Y$ for two sets $X$ and $Y$ of paths in $kQ$. Namely,
$$X//Y=\{(\gamma, \gamma')\in X\times Y \ | \ \mbox{such that $s(\gamma)=s(\gamma')$ and $t(\gamma)=t(\gamma')$.} \}$$
For instance, $Q_n//Q_0$ is the set of oriented cycles of length $n$. In the following, we will denote by $k(B)$, the vector space spanned
by a given set $B$.
\begin{lemma}[Lemma 2.2 \cite{Cib3}]\label{lemma2.2}
For the algebra $A:=kQ/\langle Q_2\rangle$, the vector space $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E n}, A)$ is isomorphic to $k(Q_n//Q_0)\oplus k(Q_n//Q_1)$.
\end{lemma}
Recall the projective resolution $\mathcal{R}(A)$ defined in Lemma \ref{lemma1cib}.
We denote the image of the $i$-th differential $$\xymatrix{A\otimes_E r^{\otimes_E i} \otimes_E A\ar[r]^-{d_i}&
A\otimes_E r^{\otimes_E i-1}\otimes_E A}$$
by $\Omega^{i}(A)$. In particular, $\Omega^0(A)=A$.
Then we have the following lemma analogous to Lemmas \ref{lemma2.1} and \ref{lemma2.2} above.
\begin{lemma}\label{lemma-new}
Let $p\in \mathbb{Z}_{\geq 0}$ and let $\Omega^p(A)$ be the image of the $p$-th differential in the projective resolution $\mathcal{R}(A)$.
Then $\Omega^p(A)$ has a basis given by $Q_{p}\cup Q_{p+1}$ and
the vector space $$\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$$ is isomorphic to $k(Q_m//Q_{p})\oplus k(Q_m// Q_{p+1})$.
\end{lemma}
{\it Proof.}\hspace{2ex} The proof is analogous to those of Lemmas \ref{lemma2.1} and \ref{lemma2.2}. For the reader's convenience,
we give it here. Since $A\cong r\oplus E$, we have
\begin{equation*}
\begin{split}
A\otimes_E r^{\otimes_E p}\otimes_E A \cong& r\otimes_E r^{\otimes_E p}\otimes_E r\bigoplus
E\otimes_E r^{\otimes_E p}\otimes_E r\bigoplus\\
&r\otimes_E r^{\otimes_E p}\otimes_E E\bigoplus E\otimes_E r^{\otimes_E p}\otimes_E E.
\end{split}
\end{equation*}
So we have the following isomorphism of $k$-linear vector spaces
$$A\otimes_E r^{\otimes_E p}\otimes_E A\cong k(Q_{p+2}\cup eQ_{p+1}\cup Q_{p+1}e\cup eQ_pe),$$
where we use the symbol $e$ to distinguish $eQ_{p+1}$ from $Q_{p+1}e$; more precisely,
$eQ_{p+1}$ is the basis corresponding to $E\otimes_Er^{\otimes_E p}\otimes_E r$,
$Q_{p+1}e$ is the one corresponding to $r\otimes_Er^{\otimes_E p}\otimes_E E$ and
$eQ_pe$ corresponds to the basis of $E\otimes_E r^{\otimes_E p}\otimes_E E$.
Moreover, we have the following commutative diagram,
\begin{equation*}
\xymatrix{
A\otimes_Er^{\otimes_E p-1}\otimes_E A\ar[d]^{\cong}\ar[r]^-{d_{p-1}} & A\otimes_E r^{\otimes_E p-2}\otimes_EA\ar[d]^-{\cong}\\
k(Q_{p+1}\cup eQ_{p}\cup Q_{p}e\cup eQ_{p-1}e) \ar[r]^-{\overline{d}_{p-1}} &k(Q_{p}\cup eQ_{p-1}\cup Q_{p-1}e\cup eQ_{p-2}e)},
\end{equation*}
where
\begin{equation*}
\overline{d}_{p-1}=\left(\begin{smallmatrix} 0 & 0& 0&0\\ \mathop{\mathrm{id}}\nolimits & 0 & 0 & 0\\ (-1)^{p-1} \mathop{\mathrm{id}}\nolimits & 0 & 0 & 0\\ 0 & (-1)^{p-1} \mathop{\mathrm{id}}\nolimits & \mathop{\mathrm{id}}\nolimits & 0 \end{smallmatrix} \right).
\end{equation*}
Since $\mathcal{R}(A)$ is exact, the image of the $p$-th differential equals the kernel of the $(p-1)$-th differential.
Let us consider the kernel of $\overline{d}_{p-1}$.
Note that
$$\overline{d}_{p-1}(\gamma)=0$$
for any $\gamma\in Q_{p+1}$
and
$$\overline{d}_{p-1}(e\gamma_p+(-1)^p\gamma_pe)=0,$$
for any $\gamma_p\in Q_p$.
We also note that
$$\overline{d}_{p-1}(x)=0$$
if and only if
$$x=\sum_{\gamma\in Q_{p+1}} a_{\gamma} \gamma+ \sum_{\gamma'\in Q_p} b_{\gamma'} (e\gamma'+(-1)^p\gamma'e)$$
where $a_{\gamma}, b_{\gamma'}\in k$. So the kernel of the $(p-1)$-th differential has a basis given by $(Q_{p+1}\cup Q_p)$.
Thus we have
\begin{equation*}
\xymatrix{
\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A)) \ar[r]^-{\cong} & \mathop{\mathrm{Hom}}\nolimits_{E-E}(k(Q_m), k(Q_p\cup Q_{p+1}))\ar[r]^-{\cong}
& k(Q_m// Q_p)\oplus k(Q_m//Q_{p+1}).
}
\end{equation*}
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
Now let us translate the differential $\delta$ in Proposition \ref{prop2.2} through these linear isomorphisms. Let
$$D_{m, p}: k(Q_m//Q_p)\rightarrow k(Q_{m+1}//Q_{p+1})$$
defined by the following formula
\begin{equation}\label{equ-defnD}
D_{m, p}(\gamma_m, \gamma'_p):=\sum_{a\in Q_1}(a\gamma_m, a\gamma'_p)+(-1)^{p+m+1} \sum_{a\in Q_1} (\gamma_m a, \gamma'_pa).
\end{equation}
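Since $D_{m,p}$ is defined purely combinatorially, its kernel can be checked by machine in small cases. The following Python sketch (ours, not part of the text; all names are illustrative) works over $k=\mathbb{Q}$ for the two loops quiver considered in Section 5 — one vertex, arrows $a, b$, so every pair of paths is parallel and $Q_m//Q_p=Q_m\times Q_p$. It realizes $D_{m,p}$ as an integer matrix and computes $\dim\mathop{\mathrm{Ker}}\nolimits(D_{m,p})$; the outputs agree with Proposition \ref{prop-com} below.

```python
from fractions import Fraction
from itertools import product

# Two-loop quiver: one vertex, arrows 'a' and 'b'; paths are words over the arrows.
ARROWS = ("a", "b")

def paths(n):
    """All paths of length n, encoded as strings over the arrow alphabet."""
    return ["".join(w) for w in product(ARROWS, repeat=n)]

def D_matrix(m, p):
    """Matrix of D_{m,p}: k(Q_m//Q_p) -> k(Q_{m+1}//Q_{p+1}),
    D(g, b) = sum_a (ag, ab) + (-1)^{p+m+1} sum_a (ga, ba)."""
    dom = [(g, b) for g in paths(m) for b in paths(p)]
    cod = [(g, b) for g in paths(m + 1) for b in paths(p + 1)]
    idx = {pair: i for i, pair in enumerate(cod)}
    sign = (-1) ** (p + m + 1)
    M = [[0] * len(dom) for _ in range(len(cod))]
    for j, (g, b) in enumerate(dom):
        for a in ARROWS:
            M[idx[(a + g, a + b)]][j] += 1      # prepend an arrow
            M[idx[(g + a, b + a)]][j] += sign   # append an arrow, with sign
    return M

def rank(M):
    """Rank over Q via Gaussian elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pivot = M[r][col]
        M[r] = [x / pivot for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def kernel_dim(m, p):
    """dim Ker(D_{m,p}) = (number of parallel pairs) - rank."""
    M = D_matrix(m, p)
    return len(M[0]) - rank(M)
```

For example, `kernel_dim(1, 1)` returns $1$ — the kernel is spanned by $(a,a)+(b,b)$, matching part (1) of Proposition \ref{prop-com} — while `kernel_dim(1, 2)` returns $0$, so $D_{1,2}$ is injective, matching part (2).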
\begin{prop}\label{prop-new}
We have the following commutative diagram, where the vertical maps are given by the linear isomorphisms of Lemma \ref{lemma-new},
\begin{equation}\label{1}
\xymatrix@C=4pc{
\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))\ar[d]^{\cong} \ar[r]^-{\delta} & \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m+1}, \Omega^p(A))\ar[d]^{\cong}\\
k(Q_m//Q_p)\oplus k(Q_m//Q_{p+1}) \ar[r]^-{\left( \begin{smallmatrix} 0 & D_{m,p} \\0 &0 \end{smallmatrix}\right)}& k(Q_{m+1}//Q_p)\oplus k(Q_{m+1}//Q_{p+1}).
}
\end{equation}
\end{prop}
{\it Proof.}\hspace{2ex}
Let $(\gamma_m, \gamma'_{p+1})\in Q_{m}//Q_{p+1}$.
Then under the vertical isomorphism, $(\gamma_m, \gamma'_{p+1})$ corresponds to the element
$\eta_{(\gamma_m, \gamma'_{p+1})}\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$ defined as follows.
For any $\alpha_m\in Q_m$,
\begin{equation*}
\eta_{(\gamma_m, \gamma_{p+1}')}(\alpha_m):=
\begin{cases}
\gamma_{p+1}' & \mbox{if $\alpha_m=\gamma_m$;}\\
0 & \mbox{otherwise}
\end{cases}
\end{equation*}
Then from Remark \ref{rem-twoterms}, it follows that
$$\delta(\eta_{(\gamma_m, \gamma_{p+1}')})=0.$$
Similarly, let $(\gamma_m, \gamma_{p}')\in Q_m//Q_p$.
The vertical isomorphism sends $(\gamma_m, \gamma_{p}')$ to
the element $\eta_{(\gamma_m, \gamma_{p}')}\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$,
which is defined as follows,
\begin{equation*}
\eta_{(\gamma_m, \gamma_{p}')}(\alpha_m):=
\begin{cases}
e\gamma_{p}' +(-1)^p\gamma_p'e & \mbox{if $\alpha_m=\gamma_m$;}\\
0 & \mbox{otherwise.}
\end{cases}
\end{equation*}
Hence from Remark \ref{rem-twoterms} we have, for any $\alpha_{m+1}\in Q_{m+1}$,
\begin{equation*}
\delta(\eta_{(\gamma_m, \gamma_{p}')})(\alpha_{m+1})=
\begin{cases}
a\gamma_p' & \mbox{if $\alpha_{m+1}=a\gamma_m$ for some $a\in Q_1$;}\\
(-1)^{p+m+1} \gamma_p'a & \mbox{if $\alpha_{m+1}=\gamma_ma$ for some $a\in Q_1$;}\\
0 & \mbox{otherwise}.
\end{cases}
\end{equation*}
Therefore, we have verified the commutativity of Diagram (\ref{1}).
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}\label{rem-coho}
From Proposition \ref{prop-new}, it follows that
\begin{equation}\label{equ-isom}
\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\cong \frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})}\oplus\mathop{\mathrm{Ker}}\nolimits(D_{m,p}).
\end{equation}
Recall that the connecting homomorphism (cf. \cite[Proposition 4.7]{Wang}),
$$\theta_{m, p}:\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\rightarrow \mathop{\mathrm{HH}}\nolimits^{m+1}(A, \Omega^{p+1}(A))$$
is defined as follows,
\begin{equation}\label{equa-theta}
\theta_{m,p}(f)(a_{1, m+1})=-f(a_{1, m})\otimes_E a_{m+1}.
\end{equation}
\end{rem}
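To illustrate the isomorphism (\ref{equ-isom}) (a small example of our own; we assume $\mathrm{char}(k)\neq 2$), let $Q$ be the quiver with one vertex and one loop $a$, so that each $Q_m//Q_p$ is the singleton $\{(a^m, a^p)\}$. Formula (\ref{equ-defnD}) then reads:

```latex
D_{m,p}(a^m, a^p)=\bigl(1+(-1)^{p+m+1}\bigr)\,(a^{m+1}, a^{p+1}),
```

so $D_{m,p}=0$ when $m+p$ is even, and $D_{m,p}$ is an isomorphism when $m+p$ is odd. Hence (\ref{equ-isom}) gives $\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\cong k$ for all $m, p\geq 1$: the summand $\mathop{\mathrm{Ker}}\nolimits(D_{m,p})$ contributes when $m+p$ is even, and the quotient summand when $m+p$ is odd. This is consistent with the well-known computation of the Hochschild cohomology of $k[x]/(x^2)$ when $\mathrm{char}(k)\neq 2$.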
\begin{lemma}\label{lemma-limitnew}
Let $A$ be the radical square zero $k$-algebra of a quiver $Q$ over a field $k$. Then under the isomorphism (\ref{equ-isom}) above, we have the following commutative diagram.
\begin{equation}\label{diagram3}
\xymatrix@C=5pc{
\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A)) \ar[r]^-{\theta_{m,p}}\ar[d]^{\cong} & \mathop{\mathrm{HH}}\nolimits^{m+1}(A, \Omega^{p+1}(A))\ar[d]^{\cong}\\
\frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})}\oplus\mathop{\mathrm{Ker}}\nolimits(D_{m,p})\ar[r]^-{\left( \begin{smallmatrix} E_{m, p+1} & 0\\ 0 &E_{m, p} \end{smallmatrix}\right)} & \frac{k(Q_{m+1}//Q_{p+2})}{\mathop{\mathrm{Im}}\nolimits(D_{m, p+1})}\oplus\mathop{\mathrm{Ker}}\nolimits(D_{m+1,p+1})
}
\end{equation}
where $$E_{m, p+1}: \frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})}\rightarrow \frac{k(Q_{m+1}//Q_{p+2})}{\mathop{\mathrm{Im}}\nolimits(D_{m, p+1})}$$ is defined as,
\begin{equation*}
E_{m, p+1}(\gamma_m, \gamma_{p+1}'):=-\sum_{a\in Q_1} (\gamma_ma,\gamma_{p+1}'a).
\end{equation*}
\end{lemma}
{\it Proof.}\hspace{2ex} First, let us prove that $E_{m, p+1}$ is well-defined on $ \frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})}$
and $E_{m, p}$ is well-defined on $\mathop{\mathrm{Ker}}\nolimits(D_{m, p})$.
Let $(\gamma_{m-1}, \gamma_p')\in Q_{m-1}//Q_p$. We need to prove that
$$E_{m, p+1}(D_{m-1, p}((\gamma_{m-1}, \gamma_p'))\in \mathop{\mathrm{Im}}\nolimits(D_{m, p+1}).$$
Indeed, we have
\begin{equation*}
\begin{split}
E_{m, p+1}(D_{m-1, p}((\gamma_{m-1}, \gamma_p'))&=\sum_{a\in Q_1} E_{m, p+1}(a\gamma_{m-1}, a\gamma_p')+(-1)^{p+m}
\sum_{a\in Q_1} E_{m, p+1}(\gamma_{m-1}a, \gamma_p'a)\\
&=\sum_{a, b\in Q_1} -(a\gamma_{m-1}b, a\gamma_p'b)+(-1)^{p+m+1}\sum_{a, b\in Q_1}
(\gamma_{m-1}ab, \gamma_p'ab)\\
&=-\sum_{a\in Q_1} D_{m, p+1}(\gamma_{m-1}a, \gamma_{p}'a).
\end{split}
\end{equation*}
Similarly, we can also prove that $E_{m, p}$ is well-defined on $\mathop{\mathrm{Ker}}\nolimits(D_{m, p})$.
Let $(\gamma_m, \gamma_{p+1}')\in Q_m//Q_{p+1}$,
then by the vertical isomorphism, $(\gamma_m, \gamma_{p+1}')$ is sent to
an element in $\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))$ represented by the element
$$\eta_{(\gamma_m, \gamma_{p+1}')}\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A)),$$
where $\eta_{(\gamma_m, \gamma_{p+1}')}$ is defined in the proof of Proposition \ref{prop-new}.
From Formula (\ref{equa-theta}), it follows that
\begin{equation*}
\theta_{m, p}(\eta_{(\gamma_m, \gamma_{p+1}')})(\alpha_{m+1})=
\begin{cases}
-\gamma_{p+1}'a & \mbox{if $\alpha_{m+1}=\gamma_m a$ for some $a\in Q_1$;}\\
0 & \mbox{otherwise,}
\end{cases}
\end{equation*}
for any $\alpha_{m+1}\in Q_{m+1}$.
Similarly, let $$z:=\sum_{(\gamma_m, \gamma_p')\in (Q_m//Q_p)} x_{(\gamma_m, \gamma_p')} (\gamma_m, \gamma_p')$$
be an element in $\mathop{\mathrm{Ker}}\nolimits(D_{m, p})$.
Then $z$ is sent to an element in $\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))$ represented by
$$\eta_z:=\sum_{(\gamma_m, \gamma_p')\in (Q_m//Q_p)} x_{(\gamma_m, \gamma_p')} \eta_{(\gamma_m, \gamma_p')}\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A)). $$
Formula (\ref{equa-theta}) implies
\begin{equation*}
\theta_{m, p}(\eta_z)(\alpha_{m+1})=
\begin{cases}
-\gamma_p'a & \mbox{if $\alpha_{m+1}=\gamma_ma$ for some $a\in Q_1$;}\\
0 & \mbox{otherwise,}
\end{cases}
\end{equation*}
for any $\alpha_{m+1}\in Q_{m+1}$.
Therefore, we have verified the commutativity of Diagram (\ref{diagram3}).
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{prop}
Let $k$ be a field. Let $A$ be the radical square zero $k$-algebra of a quiver $Q$. Then for $m\in\mathbb{Z}$, we have
\begin{equation}\label{prop-isom}
\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)\cong \lim_{\substack{\longrightarrow\\p\in\mathbb{Z}_{> 0}\\ m+p>0}}\frac{k(Q_{m+p}//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m+p-1, p})}\oplus \lim_{\substack{\longrightarrow\\p\in\mathbb{Z}_{> 0}\\ m+p>0}} \mathop{\mathrm{Ker}}\nolimits(D_{m+p, p}).
\end{equation}
\end{prop}
{\it Proof.}\hspace{2ex} From \cite[Proposition 3.1]{Wang},
it follows that
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)\cong \lim_{\substack{\longrightarrow\\p\in\mathbb{Z}_{> 0}\\ m+p>0}}\mathop{\mathrm{HH}}\nolimits^{m+p}(A, \Omega^p(A)).$$
So from Lemma \ref{lemma-limitnew}, we have
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)\cong \lim_{\substack{\longrightarrow\\p\in\mathbb{Z}_{> 0}\\ m+p>0}}\frac{k(Q_{m+p}//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m+p-1, p})}\oplus \lim_{\substack{\longrightarrow\\p\in\mathbb{Z}_{> 0}\\ m+p>0}} \mathop{\mathrm{Ker}}\nolimits(D_{m+p, p}).$$
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{cor}
Let $k$ be a field. Let $Q$ be a finite and connected quiver and $A$ be its radical
square zero $k$-algebra. If $Q$ has no oriented cycles, then
for any $m\in \mathbb{Z}$, we have
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)=0.$$
\end{cor}
{\it Proof.}\hspace{2ex} If $Q$ has no oriented cycles, then for $p\gg0$,
$$k(Q_{m+p}//Q_{p+1})=0.$$ Hence the right hand side of the isomorphism
(\ref{prop-isom}) vanishes. So $\mathop{\mathrm{HH}}\nolimits^m_{\mathop{\mathrm{sg}}\nolimits}(A, A)=0$.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}
In general, for $m, p\in \mathbb{Z}_{>0}$, the map $D_{m, p}$ is not injective. We are interested in the case when $D_{m, p}$ is injective.
\end{rem}
\begin{defn}\label{defn-crown}
Let $c\in \mathbb{Z}_{\geq 0}$. A $c$-crown is a quiver with $c$ vertices
cyclically labelled by the cyclic group of order $c$, and $c$ arrows
$a_0, \cdots, a_{c-1}$ such that $s(a_i)=i$ and $t(a_i)=i+1$. For
instance, a $1$-crown is a loop, and a $2$-crown is the two-way quiver.
\end{defn}
Now let us consider a finite and connected quiver $Q=(Q_0, Q_1, s, t)$ without sources or sinks (cf. e.g.
\cite{Mich}), that is, for any vertex $e\in Q_0$, there
exist arrows $p, q\in Q_1$ such that $s(p)=t(q)=e$.
For instance, $c$-crowns and the following quivers have no sources or sinks.
$$
\begin{tikzcd}
\bullet \ar[%
,loop
,out=135
,in=225
,distance=4em
]{}{a} \ar[
, loop
,out=315
,in=45
,distance=4em]{}{b}
& & \bullet \arrow[ loop, out=135, in=225, distance=4em]{}{a} \ar{r}&\bullet \arrow{r} & \bullet \arrow[loop, out=315,in=45, distance=4em ]{}{b}
\end{tikzcd}$$
\begin{prop}\label{prop-com}
Let $k$ be a field. Let $Q=(Q_0, Q_1, s, t)$ be a finite and connected quiver without sources or sinks. Suppose that $Q$ is not a crown. Then we have the following two cases for $m,p\in \mathbb{Z}_{>0}$:
\begin{enumerate}
\item If $m=p$, then we have
$\mathop{\mathrm{Ker}}\nolimits(D_{m, m})$ is a one-dimensional $k$-vector space with a basis $\sum_{\gamma_m\in Q_{m}} (\gamma_m, \gamma_m)$.
\item If $m\neq p$, then the map
$$D_{m, p}: k(Q_m//Q_p)\rightarrow k(Q_{m+1}//Q_{p+1})$$ is injective.
\end{enumerate}
\end{prop}
{\it Proof.}\hspace{2ex} This proof is analogous to the one of \cite[Theorem 3.1]{Cib3}. Suppose that $$x=\sum_{(\gamma_m, \beta_p)\in Q_m//Q_p}x_{(\gamma_m, \beta_p)}
(\gamma_m, \beta_p)$$
is an element in $\mathop{\mathrm{Ker}}\nolimits(D_{m, p})$, where $x_{(\gamma_m, \beta_p)}\in k$. Let us fix an element
$(\gamma_m, \beta_p)$ such that $x_{(\gamma_m, \beta_p)}\neq 0$.
We consider the contribution of $x_{(\gamma_m, \beta_p)}(\gamma_m, \beta_p)$
in $D_{m, p}(x)$, which is
\begin{equation}\label{equ-contribution}
x_{(\gamma_m, \beta_p)}\left(\sum_{a\in Q_1t(\gamma_m)}(a\gamma_m, a\beta_p)+(-1)^{p+m+1} \sum_{a\in s(\gamma_m)Q_1} (\gamma_m a, \beta_pa)\right).
\end{equation}
Then from (\ref{equ-contribution}), we have the following observation.
\begin{obs}\label{obs}
If $x_{(\gamma_m, \beta_p)}\neq 0$, then for $1\leq i\leq \min\{m, p\}$,
$$f_i(\gamma_m)=f_i(\beta_p)$$ and
$$l_i(\gamma_m)=l_i(\beta_p),$$
where $f_i(\gamma)$ and $l_i(\gamma)$ denote the first $i$ arrows and last $i$
arrows of a path $\gamma$ respectively.
\end{obs}
Suppose $m=p$.
From Observation \ref{obs} above, it follows that
$x_{(\gamma_m, \beta_m)}\neq 0$ implies $\gamma_m=\beta_m$. Hence
we can write $x\in \mathop{\mathrm{Ker}}\nolimits(D_{m,m})$ as follows,
$$x=\sum_{(\gamma_m, \gamma_m)\in Q_m//Q_p}x_{(\gamma_m, \gamma_m)}
(\gamma_m, \gamma_m).$$
We claim that, for any $(\gamma_m, \gamma_m), (\gamma_m', \gamma_m')\in
Q_{m}//Q_m$,
$$x_{(\gamma_m, \gamma_m)}=x_{(\gamma_m', \gamma_m')}.$$
Indeed, we define an equivalence relation on $Q_m$ as follows.
For $$\gamma_m:=a_1\cdots a_m, \quad \gamma_m':=b_1\cdots b_m\in Q_m,$$ we say that
$$\gamma_m\sim \gamma_m'$$ if we have
$$a_{i}=b_{i+1}$$ for $1\leq i \leq m-1,$ or
$$b_{i}=a_{i+1}$$ for $1\leq i\leq m-1$.
Then we can extend $\sim$, by taking its reflexive and transitive closure, to an equivalence relation $\sim_{c}$ on $Q_m$.
Observe that if $\gamma_m\sim_c \gamma_m'$, then
$$x_{(\gamma_m, \gamma_m)}= x_{(\gamma_m', \gamma_m')}.$$
So it is sufficient to show that for any $\gamma_m, \gamma_m'\in Q_m$,
$$\gamma_m\sim_c \gamma_m'.$$
Since $Q$ is a finite and connected quiver without sources or sinks, we have the following two observations.
\begin{obs}\label{obs-3.14}
If $\gamma_m$ intersects $\gamma_m'$ (that is,
there exists a vertex $e\in Q_0$ such that
$e\in \gamma_m$ and $e\in \gamma_m'$),
then $$\gamma_m\sim_c \gamma_m'.$$
\end{obs}
\begin{obs}\label{obs-3.15}
Given any path $\beta$ of length smaller than $m$, we can extend $\beta$ to
a path of length $m$.
\end{obs}
Now let us prove $\gamma_m\sim_c\gamma_m'$ for any $\gamma_m, \gamma_m'\in Q_m$ by
the two observations above. Since $Q$ is finite and connected,
there exists a non-oriented path connecting $\gamma_m$ and $\gamma_m'$.
\begin{equation} \label{dia3.14}
\xymatrix{
\gamma_m:& \bullet_1 \ar[r] & \bullet_2\ar[dl]^{a}\ar[r] &\cdots\ar[r] & \bullet_m \\
&\bullet& &&\bullet\ar[d]&&\\
& && \bullet\ar[ull]\ar[ur] &\bullet\ar[dll]&&\\
\gamma_m': & \bullet_1'\ar[r] & \bullet_2'\ar[r] &\cdots\ar[r] & \bullet_m'\\
}
\end{equation}
Let us use induction on the length $l(\beta)$ of the (non-oriented) path $\beta$ connecting
$\gamma_m$ and $\gamma_m'$.
It is clear for $l(\beta)=0$ by Observation \ref{obs-3.14}. Assume that
$$\gamma_m\sim_c\gamma_m'$$ for any $\gamma_m, \gamma_m'\in Q_m$ such that
there exists a path of length smaller than $l$ between them.
Now by Observation \ref{obs-3.15}, the arrow $a$ in Diagram (\ref{dia3.14}) can
be extended to a path $\gamma_m^a$ of length $m$. From Observation
(\ref{obs-3.14}) again it follows that $$\gamma_m\sim_c \gamma_m^a.$$
Note that the length of the (non-oriented) path connecting $\gamma_m'$ and $\gamma_m^a$ is
$(l-1)$, thus by induction hypothesis, we have
$$\gamma_m^a\sim_c \gamma_m'.$$
Since $\sim_c$ is transitive, we have $$\gamma_m\sim_c \gamma_m'$$
for any $\gamma_m, \gamma_m'\in Q_m$.
Therefore we have $$x\in k\left(\sum_{\gamma_m\in Q_{m}} (\gamma_m, \gamma_m)\right)$$
if $x\in \mathop{\mathrm{Ker}}\nolimits(D_{m,m}).$
On the other hand, for any $\lambda\in k$, we observe that
$$D_{m,m}(\lambda \sum_{\gamma_m\in Q_m}(\gamma_m, \gamma_m))=0.$$
Indeed,
\begin{equation*}
\begin{split}
D_{m, m}(\lambda \sum_{\gamma_m\in Q_m}(\gamma_m, \gamma_m))&=\lambda \sum_{a\in Q_1s(\gamma_m)}\sum_{\gamma_m\in
Q_m}(a\gamma_m, a\gamma_m)-\lambda \sum_{a\in t(\gamma_m)Q_1}\sum_{\gamma_m\in Q_m}
(\gamma_ma, \gamma_ma)\\
&=\lambda \sum_{\gamma_{m+1}\in Q_{m+1}}(\gamma_{m+1}, \gamma_{m+1})-\lambda
\sum_{\gamma_{m+1}\in Q_{m+1}}(\gamma_{m+1}, \gamma_{m+1})\\
&=0.
\end{split}
\end{equation*}
Therefore $$\mathop{\mathrm{Ker}}\nolimits(D_{m, m})=k\left(\sum_{\gamma_m\in Q_{m}} (\gamma_m, \gamma_m)\right).$$
Suppose $m\neq p$. Without loss of generality, we may assume that $m<p$. Then there will be two cases.
\begin{enumerate}
\item If $p\geq 2m$, let us write $\gamma_m=a_1a_2\cdots a_m$ (possibly $a_i=a_j$ for some $i, j\in\{1, 2, \cdots, m\}$). If $x_{(\gamma_m, \beta_p)}\neq 0$, then from Observation \ref{obs} it follows that
$$\beta_p=a_1\cdots a_mb_1b_2\cdots b_{p-2m} a_1\cdots a_m$$ is an oriented cycle (possibly with self-intersection).
\begin{equation}\label{equ-qui}
\begin{tikzcd}
\bullet \ar{r}{a_1}& \bullet \ar{r}{a_2} & \bullet \cdots\bullet\ar{r}{a_m} & \bullet \ar{d}{b_1} &\\
\bullet \ar{u}{b_{p-2m}} & &&\bullet e\ar{dl}{b_2}\ar{r}{c_1} &\bullet\ar{r}{c_2}& \bullet \cdots& \\
\bullet\ar{u}{b_{p-2m-1}}&\bullet \cdots\bullet\ar{l}{b_{p-2m-2}} & \bullet\ar{l}{b_3}&
\end{tikzcd}
\end{equation}
Since $Q$ is not a crown, there exists a vertex $e$ such that there is another arrow $c_1$ starting at $e$
with $c_1\neq b_2$ (or there is another arrow $c_1$ ending at $e$ with $c_1\neq b_1$, respectively) (cf. Diagram (\ref{equ-qui})).
Let us consider only the first case, namely where there is an arrow $c_1$ starting at $e$ and $c_1\neq b_2$; the other case is analogous.
From (\ref{equ-contribution}), we obtain that
$$(-1)^{p+m+1}x_{(\gamma_m, \beta_p)}(a_1\cdots a_mb_1, a_1\cdots a_mb_1\cdots b_{p-2m}a_1\cdots a_m b_1)$$
is a nonzero summand of $D_{m,p}(x)$.
Since $D_{m, p}(x)=0$, we get that
$x_{(\gamma_m', \beta_p')}\neq 0,$
where $$(\gamma_m',\beta_p'):=(a_2\cdots a_mb_1, a_2\cdots a_mb_1\cdots b_{p-2m}a_1\cdots a_m b_1).$$
Thus it follows that
$$(-1)^{p+m+1}x_{(\gamma_m', \beta_p')}(a_2\cdots a_mb_1c_1, a_2\cdots a_mb_1\cdots b_{p-2m}a_1\cdots a_m b_1c_1)$$
is a nonzero summand of $D_{m,p}(x).$
By this induction process, we have that
$$x_{(c_1c_2\cdots c_{m-1}c_m, b_2\cdots b_{p-2m}a_1\cdots a_mb_1c_1\cdots c_{m-1}c_m)}\neq 0.$$
Since $c_1\neq b_2$, from Observation \ref{obs} it follows that
$$x_{(c_1c_2\cdots c_{m-1}c_m, b_2\cdots b_{p-2m}a_1\cdots a_mb_1c_1\cdots c_{m-1}c_m)}=0.$$
This is a contradiction. Therefore, we have $x=0$.
For the remaining cases, we can apply an analogous induction process to
show that $x=0$.
\item If $m< p< 2m$, the proof is similar to that of Case 1. From Observation \ref{obs},
it follows that $\gamma_m$ is an oriented cycle (possibly with self-intersection).
\begin{equation}\label{equ-qui2}
\begin{tikzcd}
e\bullet \ar{r}{a_1} & \bullet\ar{r}{a_2} & \bullet \ar{d}{a_3} &\\
\bullet \ar{u}{a_{r}} &\bullet\cdots\bullet \ar{l}{a_{r-1}} &\bullet\ar{r}{c_1}\ar{l}{a_4}& \bullet \cdots& \\
\end{tikzcd}
\end{equation}
Suppose $s(\gamma_m)=t(\gamma_m)=e$ in Diagram (\ref{equ-qui2}).
Since $Q$ is not a crown, there exists an arrow $c_1$ such that $c_1\neq a_4$, as shown in Diagram (\ref{equ-qui2}).
Then by the induction process similar to the one in Case 1,
we can show that $x=0$.
\end{enumerate}
Therefore, we have proved that $$\mathop{\mathrm{Ker}}\nolimits(D_{m,p})=0$$ for $m\neq p$. So the proof is completed.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
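To illustrate statement (1) of Proposition \ref{prop-com} in the smallest non-crown case, consider the two-loop quiver pictured at the beginning of this section (one vertex with loops $a$ and $b$) and take $m=p=1$; the following direct computation (ours, not part of the proof above) recovers the kernel:

```latex
% Basis of k(Q_1//Q_1): (a,a), (a,b), (b,a), (b,b). All compositions are
% nonzero since there is only one vertex. Applying the definition of D_{1,1}:
\begin{align*}
D_{1,1}(a,a) &= (ba,ba)-(ab,ab), & D_{1,1}(b,b) &= (ab,ab)-(ba,ba),\\
D_{1,1}(a,b) &= (aa,ab)+(ba,bb)-(aa,ba)-(ab,bb), &
D_{1,1}(b,a) &= (ab,aa)+(bb,ba)-(ba,aa)-(bb,ab).
\end{align*}
% The images of (a,b) and (b,a) involve basis elements appearing nowhere else,
% so their coefficients vanish, and the first two lines force the coefficients
% of (a,a) and (b,b) to agree. Hence
% Ker(D_{1,1}) = k<(a,a)+(b,b)> = k(\sum_{\gamma_1\in Q_1}(\gamma_1,\gamma_1)).
```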
\begin{prop}\label{prop-singular}
Let $k$ be a field. Let $Q$ be a finite and connected quiver without sources or sinks and $A$ be its radical square
zero $k$-algebra. Suppose that $Q$ is not a crown. Then we have the following:
\begin{enumerate}
\item For $m, p\in\mathbb{Z}_{>0}$,
\begin{equation*}
\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))=
\begin{cases}
\frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})} & \mbox{if $m\neq p$},\\
\frac{k(Q_m//Q_{m+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1,m})}\oplus \mathop{\mathrm{Ker}}\nolimits(D_{m, m}) & \mbox{if $m=p$}.
\end{cases}
\end{equation*}
\item The connecting homomorphism
$$\theta_{m, p}: \mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\rightarrow \mathop{\mathrm{HH}}\nolimits^{m+1}(A, \Omega^{p+1}(A))$$
is injective for any $m, p\in \mathbb{Z}_{>0}$.
\item For any $m\in \mathbb{Z}$, the singular Hochschild cohomology $\mathop{\mathrm{HH}}\nolimits^m_{\mathop{\mathrm{sg}}\nolimits}(A, A)$ has a filtration
$$\cdots\subset \mathop{\mathrm{HH}}\nolimits^{m+p}(A, \Omega^p(A))\subset \mathop{\mathrm{HH}}\nolimits^{m+p+1}(A, \Omega^{p+1}(A))\subset \cdots$$
Moreover, this filtration respects the Gerstenhaber algebra structure, that is, for $m, n\in \mathbb{Z}$,
$$\mathop{\mathrm{HH}}\nolimits^{m+p}(A, \Omega^p(A))\cup\mathop{\mathrm{HH}}\nolimits^{n+q}(A, \Omega^q(A))\subset \mathop{\mathrm{HH}}\nolimits^{m+p+n+q}(A, \Omega^{p+q}(A)),$$
$$[\mathop{\mathrm{HH}}\nolimits^{m+p}(A, \Omega^p(A)), \mathop{\mathrm{HH}}\nolimits^{n+q}(A, \Omega^q(A))]\subset \mathop{\mathrm{HH}}\nolimits^{m+p+n+q-1}(A, \Omega^{p+q}(A)).$$
\end{enumerate}
\end{prop}
{\it Proof.}\hspace{2ex} It is sufficient to verify that $\theta_{m,p}$ is injective for $m,p\in\mathbb{Z}_{>0}$.
Recall that we have the long exact sequence
\begin{equation*}
\xymatrix{
\cdots\ar[r] &\mathop{\mathrm{HH}}\nolimits^m(A, A\otimes r^{\otimes p}\otimes A)\ar[r]^-{d} &\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\ar[r]^-{\theta_{m, p}} & \mathop{\mathrm{HH}}\nolimits^{m+1}(A, \Omega^{p+1}(A))\ar[r]& \cdots
}
\end{equation*}
where $d$ is induced by
the differential in the resolution $\mathcal{R}(A)$ (cf. Lemma \ref{lemma1cib}).
Hence $\theta_{m, p}$ is injective if and only if
$d=0$. Now let us show that $d=0$ for any $m, p\in\mathbb{Z}_{>0}$.
Note that $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, A\otimes_Er^{\otimes_E p}\otimes_E A)$ has a decomposition
with respect to the decomposition $A\cong E\oplus r$. Namely,
\begin{equation*}
\begin{split}
\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, A\otimes_Er^{\otimes_E p}\otimes_E A)\cong&\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, r\otimes_Er^{\otimes_E p}\otimes_E r)\\
&\bigoplus\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, E\otimes_Er^{\otimes_E p}\otimes_E r)\\
&\bigoplus\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, r\otimes_Er^{\otimes_E p}\otimes_E E)\\
&\bigoplus\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, E\otimes_Er^{\otimes_E p}\otimes_E E),
\end{split}
\end{equation*}
hence $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, A\otimes_Er^{\otimes_E p}\otimes_E A)$ has a basis $$S_{m,p}:=(Q_m// Q_{p+2})\cup (Q_m//eQ_{p+1})\cup (Q_m//Q_{p+1}e)\cup (Q_m//Q_{p}),$$
where the symbol $e$ is used to distinguish between $(Q_m//eQ_{p+1})$ and $(Q_m//Q_{p+1}e)$; more precisely,
$(Q_m//eQ_{p+1})$ is the basis corresponding to $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, E\otimes_Er^{\otimes_E p}\otimes_E r)$ and
$(Q_m//Q_{p+1}e)$ is the one corresponding to $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, r\otimes_Er^{\otimes_E p}\otimes_E E)$.
Moreover, we can write down the differential.
\begin{equation}\label{diffen-equ}
\xymatrix{
\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, A\otimes_Er^{\otimes_E p}\otimes_E A)\ar[d]^{\cong}\ar[r]^-{\delta}&\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m+1}, A\otimes_Er^{\otimes_E p}\otimes_E A)\ar[d]^{\cong}\\
k(S_{m, p}) \ar[r]_-{\left( \begin{smallmatrix} 0 & D_{m, p+1}&D_{m,p+1}' &0\\0& 0 & 0 & E_{m, p}\\0& 0 & 0 & E_{m, p}'\\
0 & 0 & 0 & 0 \end{smallmatrix}\right)} & k(S_{m+1, p})
}\end{equation}
where
\begin{equation*}
\begin{split}
D_{m, p+1}(\gamma_m, e\beta_{p+1}):=&\sum_{a\in Q_1}(a\gamma_m, a\beta_{p+1});\\
D_{m, p+1}'(\gamma_m, \beta_{p+1}e):=&\sum_{a\in Q_1}(-1)^{m+1}(\gamma_ma, \beta_{p+1}a);\\
E_{m, p}(\gamma_m, \beta_{p}):=& \sum_{a\in Q_1}(a\gamma_m, ea\beta_{p});\\
E_{m, p}'(\gamma_m,\beta_{p}):=& \sum_{a\in Q_1}(-1)^{m+1}(\gamma_ma, \beta_{p}ae).
\end{split}
\end{equation*}
Now suppose $$x\in \mathop{\mathrm{HH}}\nolimits^m(A, A\otimes r^{\otimes p}\otimes A)$$
is a nonzero element.
Then from Diagram (\ref{diffen-equ}), we can write $x$ as follows,
\begin{equation*}
\begin{split}
x=&\sum_{(\gamma_m, \beta_{p+2})\in Q_m//Q_{p+2}} x_{(\gamma_m, \beta_{p+2})}(\gamma_m, \beta_{p+2})+\sum_{(\gamma_m, \beta_{p+1})\in Q_m//Q_{p+1}} x_{(\gamma_m, e\beta_{p+1})}(\gamma_m, e\beta_{p+1})
+\\
&\sum_{(\gamma_m, \beta_{p+1})\in Q_m//Q_{p+1}} x_{(\gamma_m, \beta_{p+1}e)}(\gamma_m, \beta_{p+1}e).
\end{split}
\end{equation*}
Now $\delta(x)=0$ is equivalent to
\begin{equation}\label{equ-contri}
\sum_{\substack{ (\gamma_m, \beta_{p+1})\in (Q_m//Q_{p+1}) \\a\in Q_1}} \left( x_{(\gamma_m, e\beta_{p+1})}(a\gamma_m, a\beta_{p+1})
+(-1)^{p+m+1}x_{(\gamma_m, \beta_{p+1}e)}(\gamma_ma, \beta_{p+1}a)\right)=0.
\end{equation}
We have the following observation.
\begin{obs}\label{obs3}
Suppose that $$x_{(\gamma_m, e\beta_{p+1})}\neq 0.$$
Then
$t(\gamma_m)=t(\beta_{p+1})$.
Moreover, for those $a\in Q_1$ such that $a\gamma_{m}\neq 0$,
$$x_{(af_{m-1}(\gamma_{m}), af_{p}(\beta_{p+1})e)}=(-1)^{p+m}x_{(\gamma_m, e\beta_{p+1})},$$
where we denote by $f_{m-1}(\gamma_{m})$ the path formed by the first $m-1$ arrows in $\gamma_m$.
Similarly, suppose that
$$x_{(\gamma'_m, \beta'_{p+1}e)}\neq 0.$$
Then $s(\gamma'_m)=s(\beta'_{p+1}).$ Moreover, for those $a\in Q_1$ such that $\gamma_m'a\neq 0$,
$$x_{(l_{m-1}(\gamma_{m}')a, el_{p}(\beta_{p+1}')a)}=(-1)^{m+p}x_{(\gamma_m', \beta_{p+1}'e)},$$
where we denote by $l_{m-1}(\gamma'_{m})$ the path formed by the last $m-1$ arrows in $\gamma'_m$.
\end{obs}
Now let us compute
\begin{equation*}
\begin{split}
d(x)=\sum x_{(\gamma_m, e\beta_{p+1})}(\gamma_m, \beta_{p+1})
+\sum (-1)^p x_{(\gamma_m, \beta_{p+1}e)}(\gamma_m, \beta_{p+1})\in\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A)).
\end{split}
\end{equation*}
We claim that $$d(x)=(-1)^{m}\sum_{(\gamma_m, e\beta_{p+1})} x_{(\gamma_m, e\beta_{p+1})}D_{m-1, p}(f_{m-1}(\gamma_{m}), f_p(\beta_{p+1})).$$
Indeed, we have that
\begin{eqnarray*}
\lefteqn{
\sum_{(\gamma_m, e\beta_{p+1})} x_{(\gamma_m, e\beta_{p+1})}D_{m-1, p}(f_{m-1}(\gamma_{m}), f_p(\beta_{p+1}))}\\
&=&\sum_{\substack{ (\gamma_m,e \beta_{p+1}), \\a\in Q_1}} x_{(\gamma_m, e\beta_{p+1})}(af_{m-1}(\gamma_{m}), af_p(\beta_{p+1}))+\\
&&(-1)^{p+m}\sum_{\substack{ (\gamma_m,e \beta_{p+1}), \\a\in Q_1}}x_{(\gamma_m, e\beta_{p+1})}(f_{m-1}(\gamma_{m})a, f_p(\beta_{p+1})a)\\
&=&\sum_{\substack{ (\gamma_m,e \beta_{p+1}), \\a\in Q_1}}(-1)^{p+m} x_{(af_{m-1}(\gamma_{m}), af_p(\beta_{p+1})e)}(af_{m-1}(\gamma_{m}), af_p(\beta_{p+1}))+\\
&&(-1)^m\sum_{\substack{ (\gamma_m,e \beta_{p+1}), \\a\in Q_1}}x_{(f_{m-1}(\gamma_m)a, ef_p(\beta_{p+1})a)}(f_{m-1}(\gamma_{m})a, f_p(\beta_{p+1})a)\\
&=&(-1)^md(x),
\end{eqnarray*}
where the second identity comes from Observation \ref{obs3}.
So it follows that $d(x)=0$ in $\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))$. Therefore we have shown that $\theta_{m, p}$ is injective.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
As a corollary, we have the following result.
\begin{cor}\label{cor-cup}
Let $Q$ be a finite and connected quiver without sources or sinks and $A$ be its radical square
zero $k$-algebra over a field $k$. Suppose that $Q$ is not a crown. Then for any $m, n, p, q\in\mathbb{Z}_{>0}$ such that $m\neq p$ and $n\neq q$,
$$\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\cup\mathop{\mathrm{HH}}\nolimits^n(A, \Omega^q(A))=0.$$
In particular, for $m, n\in \mathbb{Z}$ and $mn\neq 0$,
$$\mathop{\mathrm{HH}}\nolimits^m_{\mathop{\mathrm{sg}}\nolimits}(A, A)\cup \mathop{\mathrm{HH}}\nolimits^n_{\mathop{\mathrm{sg}}\nolimits}(A, A)=0.$$
\end{cor}
{\it Proof.}\hspace{2ex} From Proposition \ref{prop-singular}, it follows that
$$\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))=\frac{k(Q_m//Q_{p+1})}{\mathop{\mathrm{Im}}\nolimits(D_{m-1, p})},$$
for $m\neq p$. Note that we have the following identity on the level
of chains, for $m\neq p$ and $n\neq q$,
$$k(Q_m//Q_{p+1})\cup k(Q_n//Q_{q+1})=0.$$
Hence on the level of cohomology groups, we have for $m\neq p$ and $n\neq q$,
$$\mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A))\cup \mathop{\mathrm{HH}}\nolimits^n(A, \Omega^q(A))=0.$$
In particular, $$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)\cup \mathop{\mathrm{HH}}\nolimits^n_{\mathop{\mathrm{sg}}\nolimits}(A, A)=0,$$
for $mn\neq 0$.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}\label{rem-conter}
From Corollary \ref{cor-cup} above, it follows that in general, $$(\mathop{\mathrm{HH}}\nolimits^*(A, A), \cup, [\cdot, \cdot])$$ and $$(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A), \cup, [\cdot, \cdot])$$ carry no BV algebra structure since
the cup products vanish; however, the Lie brackets do not vanish in general (cf. Section \ref{two loops quiver}).
\end{rem}
In the rest of this section, we will consider the Gerstenhaber bracket $[\cdot,\cdot]$ (cf. \cite{Wang}) on the total space $$\bigoplus_{m, p\in \mathbb{Z}_{\geq 0}}\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A)).$$
First let us recall the construction of the Gerstenhaber bracket $[\cdot, \cdot]$ on $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$.
Let $k$ be a field and $A$ be a finite dimensional $k$-algebra (not necessarily radical square zero) with a Wedderburn-Malcev decomposition $A:=E\oplus r$.
For $m\in \mathbb{Z}_{>0}$ and $p\in \mathbb{Z}_{\geq 0}$, denote $$C^m(A, \Omega^p(A)):=\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A)).$$
Then we have a double complex $C^*(A, \Omega^*(A))$. Take two elements
$f\in C^m(A, \Omega^p(A))$ and $g\in C^n(A, \Omega^q(A))$.
Denote
\begin{equation*}
f\bullet_i g:=
\begin{cases}
d((f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes q})(\mathop{\mathrm{id}}\nolimits^{\otimes i-1}\otimes g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes m-i})\otimes 1)&\mbox{if} \ 1\leq i\leq m, \\
d((\mathop{\mathrm{id}}\nolimits^{\otimes -i}\otimes f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes q+i})(g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes m-1})\otimes 1) & \mbox{if} \ -q\leq i \leq -1,
\end{cases}
\end{equation*}
where $\otimes$ represents $\otimes_E$.
We also denote
\begin{equation*}
f\bullet g:=\sum_{i=1}^m(-1)^{p+q+(i-1)(q-n-1)}f\bullet_i g +\sum_{i=1}^q(-1)^{p+q+i(p-m-1)}f\bullet_{-i} g.
\end{equation*}
Then we define
\begin{equation}\label{defn-bracket}
[f, g]:=f\bullet g-(-1)^{(m-p-1)(n-q-1)} g\bullet f.
\end{equation}
Note that
$$[f, g]\in C^{m+n-1}(A, \Omega^{p+q}(A)).$$
Then from Proposition 4.6 in \cite{Wang}, it follows that $[\cdot, \cdot]$ defines a differential graded Lie algebra
structure on the total complex
$$\bigoplus_{m\in \mathbb{Z}_{>0}, p\in \mathbb{Z}_{\geq 0}}C^m(A, \Omega^p(A))$$
and thus $[\cdot, \cdot]$ defines a graded Lie algebra structure on the cohomology groups
$$\bigoplus_{m\in\mathbb{Z}_{>0}, p\in \mathbb{Z}_{\geq 0}} \mathop{\mathrm{HH}}\nolimits^m(A, \Omega^p(A)).$$
Let $A$ be a radical square zero algebra with a Wedderburn-Malcev decomposition $A=E\oplus r$.
Then given $$f\in \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$$ and $$g\in\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E n}, \Omega^{q}(A)),$$ the Gerstenhaber bracket $[\cdot, \cdot]$ can be written as
\begin{equation}\label{equa-ger}
\begin{split}
[f, g](a_{1, m+n-1})=&\sum_{i=1}^m (-1)^{(i-1)(q-n-1)}(f\otimes \mathop{\mathrm{id}}\nolimits)(a_{1, i-1}\otimes
g(a_{i, i+n-1})\otimes a_{i+n, m+n-1})+\\
&\sum_{i=1}^q(-1)^{i(p-m-1)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes f)(g(a_{1, n})\otimes a_{n+1, m+n-1})-
(-1)^{(m-p-1)(n-q-1)}\\
&\sum_{i=1}^{n}(-1)^{(i-1)(p-m-1)}(g\otimes \mathop{\mathrm{id}}\nolimits)(a_{1, i-1}\otimes f(a_{i, i+m-1})\otimes a_{i+m, m+n-1})-\\
&(-1)^{(m-p-1)(n-q-1)}\sum_{i=1}^p(-1)^{i(q-n-1)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes g)(f(a_{1,m})\otimes a_{m+1, m+n-1}),
\end{split}
\end{equation}
for any $a_{1, m+n-1}\in r^{\otimes_E m+n-1}$.
In particular, for $m=n=1$, we have
\begin{equation}\label{equa-ger2}
\begin{split}
[f, g](a)=& \sum_{i=0}^q(-1)^{ip}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes f)(g(a))-(-1)^{pq}\sum_{i=0}^p(-1)^{iq}
(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes g)(f(a)).
\end{split}
\end{equation}
Next we consider the radical square zero algebra for a quiver $Q$.
First, let us follow \cite{San} and \cite{San2} to introduce the notation $\diamond$:
Given two paths $\alpha\in Q_m$ and $\beta\in Q_n$, suppose that
\begin{equation*}
\begin{split}
\alpha&=a_1a_2\cdots a_m,\\
\beta&=b_1b_2\cdots b_n.
\end{split}
\end{equation*}
where $a_i, b_j\in Q_1$. For $i=1, \cdots, m$, if $a_i//\beta$, we denote by $$\alpha\diamond_i\beta$$ the path in $Q_{m+n-1}$
obtained by replacing the arrow $a_i$ by the path $\beta$. Namely, we define
\begin{equation*}
\alpha\diamond_i\beta:=
\begin{cases}
a_1\cdots a_{i-1}b_1\cdots b_na_{i+1}\cdots a_{m}, & \mbox{if $a_i// \beta$},\\
0 & \mbox{otherwise.}
\end{cases}
\end{equation*}
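For instance (our own small example, not from the cited sources), if $\alpha=a_1a_2a_3\in Q_3$ and $\beta=b_1b_2\in Q_2$ with $a_2//\beta$, then

```latex
\alpha\diamond_2\beta = a_1\,b_1b_2\,a_3 \in Q_4,
% i.e. the arrow a_2 has been replaced by the parallel path b_1b_2,
% so in general \alpha\diamond_i\beta has length m+n-1.
```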
\begin{lemma}\label{lemma-bracket-quiver}
Let $k$ be a field. Let $Q$ be any finite and connected quiver. Then via the linear isomorphism in Lemma \ref{lemma-new},
we have the following
\begin{enumerate}
\item Let $$(x, y)\in k(Q_1//Q_1)\subset\mathop{\mathrm{Hom}}\nolimits_{E-E}(r, A)$$ and $$(\gamma_m, \beta_{p+1})\in k(Q_{m}//Q_{p+1})\subset\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^{p}(A)).$$
Then $$[(x, y), (\gamma_m, \beta_{p+1})]=\sum_{i=1}^{p+1}\delta_{b_i, x}(\gamma_m, \beta_{p+1}\diamond_iy)-\sum_{i=1}^m\delta_{a_i, y}(\gamma_{m}\diamond_i x, \beta_{p+1}),$$
where $a_i, b_i$ are the $i$-th arrow in $\gamma_m$ and $\beta_{p+1}$,
respectively.
\item Let $$(x, \gamma_{p+1})\in k(Q_1//Q_{p+1})\subset\mathop{\mathrm{Hom}}\nolimits_{E-E}(r, \Omega^p(A))$$ and $$(y, \beta_{q+1})\in k(Q_1//Q_{q+1})\subset\mathop{\mathrm{Hom}}\nolimits_{E-E}(r, \Omega^q(A)).$$
Then
\begin{equation*}
\begin{split}
[(x, \gamma_{p+1}), (y, \beta_{q+1})]=&\sum_{i=1}^{q+1}(-1)^{(i-1)p}\delta_{x, b_i}(y, \beta_{q+1}\diamond_i \gamma_{p+1})-\\
&(-1)^{pq}\sum_{i=1}^{p+1}(-1)^{(i-1)q}\delta_{y, a_i} (x, \gamma_{p+1}\diamond_i \beta_{q+1}),
\end{split}
\end{equation*}
where $a_i, b_i$ are the $i$-th arrow in $\gamma_{p+1}$ and $\beta_{q+1}$,
respectively.
\end{enumerate}
\end{lemma}
{\it Proof.}\hspace{2ex} This is a direct consequence of Formula (\ref{equa-ger}).
\begin{rem}
In general, the formula for the Gerstenhaber bracket $[\cdot, \cdot]$ is quite complicated.
However, we can use PROP theory to describe it in Section \ref{section6}.
\end{rem}
\section{$c$-crown}
We shall use the notion of a $c$-crown from Definition \ref{defn-crown}. We
will study particular $c$-crowns in the following subsections.
In this section, we assume that the base field $k$ is not of characteristic two.
\subsection{The case: $c=1$}
Let us first consider the $1$-crown (i.e. the one loop quiver).
$$
\begin{tikzcd}
Q:=\bullet \arrow[loop right]{r}{a}
\end{tikzcd}$$
Its radical square zero algebra is $A=k[a]/(a^2)$, the algebra of
dual numbers. Since $A$ is a commutative symmetric algebra, from \cite[Corollary
6.4.1]{Bu}, we have that
\begin{equation*}
\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^m(A, A)=
\begin{cases}
\mathop{\mathrm{HH}}\nolimits^m(A, A) & \mbox{for $m>0$,}\\
\mathop{\mathrm{Tor}}\nolimits_{-m-1}^{A^e}(A, A) & \mbox{for $m<-1$.}
\end{cases}
\end{equation*}
\begin{prop}[Proposition 3.4 \cite{Cib3}]\label{Prop-cib3}
Let $Q$ be the one loop quiver and $A$ be its radical square zero algebra over a field $k$.
Assume that $k$ is not of characteristic two.
Then for every $n>0$, we have
$$\dim \mathop{\mathrm{HH}}\nolimits^n(A, A)=1.$$
\end{prop}
\begin{rem}\label{rem-cib3}
Since $\mathop{\mathrm{HH}}\nolimits^m(A, A)\cong \mathop{\mathrm{Tor}}\nolimits_m^{A^e}(A, A)^*$, we also have
$$\dim \mathop{\mathrm{Tor}}\nolimits_m^{A^e}(A, A)=1.$$
\end{rem}
\begin{lemma}
For any $n\in \mathbb{Z}$, we have
$$\dim \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^n(A, A)=1.$$
\end{lemma}
{\it Proof.}\hspace{2ex} From Proposition \ref{Prop-cib3} and Remark \ref{rem-cib3}, it is sufficient to show that
$$\dim \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^0(A, A)=\dim \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{-1}(A, A)=1.$$
Recall that from \cite[Corollary
6.4.1]{Bu}, we have an exact sequence,
\begin{equation*}
\xymatrix{
0 \ar[r] & \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{-1}(A, A)\ar[r] & \mathop{\mathrm{HH}}\nolimits_0(A, A)\ar[d]^-{\cong} \ar[r] & \mathop{\mathrm{HH}}\nolimits^0(A, A)\ar[d]^-{\cong} \ar[r] & \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^0(A, A)\ar[r] & 0\\
& & k[a]/(a^2)\ar[r]^-{\mu_a} & k[a]/(a^2)
}
\end{equation*}
where $\mu_a$ denotes multiplication by $a$.
Hence we have $$\dim \mathop{\mathrm{Ker}}\nolimits(\mu_a)=\dim \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{-1}(A, A)=1$$
and $$\dim \mathop{\mathrm{coker}}\nolimits(\mu_a)=\dim \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^0(A, A)=1.$$
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}
The graded Lie algebra structure on the positive side (i.e. the Hochschild cohomology $\mathop{\mathrm{HH}}\nolimits^*(A, A)$) of $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$
has been investigated in \cite{San}.
Next we will describe the graded Lie algebra structure on the negative side.
\end{rem}
Recall that we have for $m\in\mathbb{Z}_{\geq 0}$,
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{-m}(A, A)\cong \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))$$ since
$A$ is a symmetric algebra. From Remark \ref{rem-coho}, it follows that
$$\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))\cong \frac{k(Q_1//Q_{m+2})}{\mathop{\mathrm{Im}}\nolimits(D_{0, m+1})}\oplus\mathop{\mathrm{Ker}}\nolimits(D_{1, m+1}).$$
Recall that
$$D_{i, m+1}: k(Q_i//Q_{m+1})\rightarrow k(Q_{i+1}//Q_{m+2})$$
is defined as follows,
$$D_{i, m+1}(\gamma_i, \beta_{m+1}):=\sum_{a\in Q_1} (a\gamma_i, a\beta_{m+1})+(-1)^{i+m}\sum_{a\in Q_1}(\gamma_i
a, \beta_{m+1}a).$$
Note that if $m$ is odd,
then $$D_{0, m+1}=0$$ and $D_{1, m+1}$ is a bijection.
Similarly if $m$ is even,
then $D_{0, m+1}$ is a bijection and $$D_{1, m+1}=0.$$
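Indeed, both dichotomies follow from a direct check (spelled out here for convenience): since $Q_1=\{a\}$, each $Q_i//Q_j$ has the single basis element $(a^i, a^j)$, and

```latex
D_{0, m+1}(e,\, a^{m+1}) = \bigl(1+(-1)^{m}\bigr)(a,\, a^{m+2}), \qquad
D_{1, m+1}(a,\, a^{m+1}) = \bigl(1+(-1)^{m+1}\bigr)(a^{2},\, a^{m+2}),
% so over a field of characteristic different from two, exactly one of the
% two maps vanishes and the other is a bijection, according to the parity of m.
```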
Hence we have
\begin{equation}\label{equ-isom3}
\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))=
\begin{cases}
k(Q_1//Q_{m+2}) & \mbox{if $m$ is odd},\\
k(Q_1//Q_{m+1}) & \mbox{if $m$ is even}.
\end{cases}
\end{equation}
\begin{prop}\label{prop-Gerb}
Let $Q$ be the one loop quiver
$$
\begin{tikzcd}
Q:=\bullet \arrow[loop right]{r}{a}
\end{tikzcd}$$
and $A$ be its radical square zero algebra over
a field $k$. Assume that $k$ is not of characteristic two. Then for $m, n\in\mathbb{Z}_{\geq 1}$, we have the following cases:
\begin{enumerate}
\item If both $m$ and $n$ are odd, then we have the following commutative diagram,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{[\cdot, \cdot]} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+2}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+2}) \times k(Q_1//Q_{n+2}) \ar[r]^-{\{\cdot, \cdot\}} & k(Q_1//Q_{m+n+3})
}
\end{equation*}
where the bracket $\{\cdot,\cdot\}$ is defined as follows,
$$\{(a, a^{m+2}), (a, a^{n+2})\}=(n-m)(a, a^{m+n+3}).$$
\item If both $m$ and $n$ are even, then we have the following commutative diagram,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{[\cdot, \cdot]} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+2}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+1}) \times k(Q_1//Q_{n+1}) \ar[r]^-{\{\cdot, \cdot\}} & k(Q_1//Q_{m+n+3})
}
\end{equation*}
where $$\{(a, a^{m+1}), (a, a^{n+1})\}=0.$$
\item If $m$ is even and $n$ is odd, then the following diagram commutes,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{[\cdot, \cdot]} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+2}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+1}) \times k(Q_1//Q_{n+2}) \ar[r]^-{\{\cdot, \cdot\}} & k(Q_1//Q_{m+n+2})
}
\end{equation*}
where $$\{(a, a^{m+1}), (a, a^{n+2})\}:=-m(a, a^{m+n+2}).$$
\end{enumerate}
\end{prop}
{\it Proof.}\hspace{2ex}
The assertion (1) comes from Lemma \ref{lemma-bracket-quiver}.
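For the reader's convenience, here is the computation behind assertion (1) spelled out: apply Lemma \ref{lemma-bracket-quiver}(2) with $x=y=a$, $\gamma_{p+1}=a^{m+2}$ and $\beta_{q+1}=a^{n+2}$, where $p=m+1$ and $q=n+1$ are even since $m$ and $n$ are odd, so every sign in the formula equals $+1$:

```latex
\begin{align*}
[(a, a^{m+2}), (a, a^{n+2})]
&= \sum_{i=1}^{n+2}(-1)^{(i-1)(m+1)}(a, a^{m+n+3})
 - (-1)^{(m+1)(n+1)}\sum_{i=1}^{m+2}(-1)^{(i-1)(n+1)}(a, a^{m+n+3})\\
&= (n+2)(a, a^{m+n+3}) - (m+2)(a, a^{m+n+3})
 = (n-m)(a, a^{m+n+3}).
\end{align*}
```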
Now let us prove the assertion (2).
Recall that $(a, a^{m+1})$ represents the element
in $\mathop{\mathrm{Hom}}\nolimits_{E-E}(r, \Omega^{m+1}(A))$, which sends
$a$ to $$ea^{m+1}+(-1)^{m+1}a^{m+1}e\in \Omega^{m+1}(A).$$
From Formula (\ref{equa-ger2}) it follows that
$$[(a, a^{m+1}), (a, a^{n+1})]\in k(Q_1//Q_{m+n+2})\subset \mathop{\mathrm{Hom}}\nolimits_{E-E}(r, \Omega^{m+n+2}(A)).$$
So from Formula (\ref{equ-isom3}), we have
$$[(a, a^{m+1}), (a, a^{n+1})]=0$$ in
$\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+2}(A))$.
It remains to verify the assertion (3).
From Formula (\ref{equa-ger2}) again, we have
\begin{equation*}
\begin{split}
[(a, a^{m+1}), (a, a^{n+2})](a)=&\sum_{i=0}^{n+1}(-1)^{i(m+1)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes (a, ea^{m+1}+(-1)^{m+1}a^{m+1}e))(a^{n+2})-\\
&\sum_{i=0}^{m+1}(-1)^{(i+m+1)(n+1)} (\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes (a, a^{n+2}))(ea^{m+1}+(-1)^{m+1}a^{m+1}e)\\
&=-m(ea^{m+n+2}-a^{m+n+2}e),
\end{split}
\end{equation*}
hence we have
$$[(a, a^{m+1}), (a, a^{n+2})]=-m(a, a^{m+n+2})\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+2}(A)).$$
Therefore, we have completed the proof.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
From \cite[Corollary 5.20]{Wang} it follows that
$(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{*}(A, A), \cup, [\cdot, \cdot], \Delta)$ is a BV algebra. Now let us describe the BV algebra structure in this concrete example.
First, we can write the formula for the Connes B-operator.
\begin{lemma}\label{lemma-bv-1}
Let $Q$ be the one loop quiver and $A$ be its radical square zero algebra.
Then for $m\in\mathbb{Z}_{\geq 0}$, we have
\begin{enumerate}
\item If $m$ is even, then we have the following commutative diagram,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))\ar[d]^{\cong} \ar[r]^-{B}& \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+2}(A))\ar[d]^{\cong}\\
k(Q_1//Q_{m+1}) \ar[r]^{\Delta=0} & k(Q_1// Q_{m+3})
}
\end{equation*}
\item If $m$ is odd, then the following diagram commutes,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))\ar[d]^{\cong} \ar[r]^-{B}& \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+2}(A))\ar[d]^{\cong}\\
k(Q_1//Q_{m+2}) \ar[r]^{\Delta} & k(Q_1// Q_{m+2}),
}
\end{equation*}
where $\Delta$ is defined as follows,
$$\Delta( (a, a^{m+2}))=m(a, a^{m+2}).$$
\end{enumerate}
\end{lemma}
{\it Proof.}\hspace{2ex} The proof is completely analogous to the one of Proposition \ref{prop-Gerb}.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
Similarly, we can also write down the formula for the cup product $\cup$.
\begin{lemma}\label{lemma-cup-1}
Let $Q$ be the one loop quiver and $A$ be its radical square zero $k$-algebra, where $k$
is not of characteristic two.
Then we have the following cases for $m, n\in \mathbb{Z}_{>0}$,
\begin{enumerate}
\item If both $m$ and $n$ are odd, then we have the following commutative diagram,
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{\cup} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+1}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+2}) \times k(Q_1//Q_{n+2}) \ar[r]^-{0} & k(Q_1//Q_{m+n+1})
}
\end{equation*}
where we used the connecting isomorphism $$\theta_{1, m+n+1}:\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+1}(A))
\rightarrow \mathop{\mathrm{HH}}\nolimits^2(A, \Omega^{m+n+2}(A)).$$
\item If both $m$ and $n$ are even, then
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{\cup} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+1}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+1}) \times k(Q_1//Q_{n+1}) \ar[r]^-{\cup'} & k(Q_1//Q_{m+n+1})
}
\end{equation*}
where $$(a, a^{m+1})\cup'(a, a^{n+1})=(a, a^{m+n+1}).$$
\item If $m$ is even and $n$ is odd, then
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^1(A,\Omega^{m+1}(A))\times \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{n+1}(A))\ar[d]^{\cong}\ar[r]^-{\cup} & \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+n+1}(A))\ar[d]^-{\cong}\\
k(Q_1//Q_{m+1}) \times k(Q_1//Q_{n+2}) \ar[r]^-{\cup'} & k(Q_1//Q_{m+n+2})
}
\end{equation*}
where $$(a, a^{m+1})\cup'(a, a^{n+2})=-(a, a^{m+n+2}).$$
\end{enumerate}
\end{lemma}
{\it Proof.}\hspace{2ex} The proof is completely analogous to the one of Proposition \ref{prop-Gerb}.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}\label{rem-witt}
Let us recall the Witt algebra $\mathcal{W}$ (cf. e.g. \cite{Zim}). As a vector space,
$$\mathcal{W}=\bigoplus_{n\in\mathbb{Z}}k\langle L_n\rangle,$$
where $k\langle L_n\rangle$ is a one-dimensional $k$-vector space with a basis $L_n.$
The Lie bracket is defined
$$[L_m, L_n]=(m-n)L_{m+n}.$$
Clearly, the even part of $\mathcal{W}$, $$\mathcal{W}^{even}:=\bigoplus_{m\in \mathbb{Z}} k\langle L_{2m}\rangle,$$
is a Lie subalgebra of $\mathcal{W}$, since $[L_{2m}, L_{2n}]=2(m-n)L_{2(m+n)}\in \mathcal{W}^{even}$.
Let us construct a natural representation $\mathcal{M}$ of $\mathcal{W}$. As a vector space,
$$\mathcal{M}=\bigoplus_{n\in\mathbb{Z}} k\langle M_n\rangle,$$
where $k\langle M_n\rangle $ is a one-dimensional $k$-vector space with a basis $M_n$.
Define the action of $L_m$ on $M_n$ as follows: $$[L_m, M_n]:=-nM_{m+n}.$$ Then one can check that
it induces a representation on $\mathcal{M}$ of $\mathcal{W}$. Denote
$$\mathcal{M}^{even}:=\bigoplus_{n\in\mathbb{Z}}k\langle M_{2n} \rangle.$$
Trivially, $\mathcal{M}^{even}$ is a representation of $\mathcal{W}^{even}$.
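For instance, the Lie module axiom can be checked directly on the basis elements: for $m, n, p\in \mathbb{Z}$,
\begin{equation*}
[[L_m, L_n], M_p]=-(m-n)pM_{m+n+p}=p(n+p)M_{m+n+p}-p(m+p)M_{m+n+p}=[L_m, [L_n, M_p]]-[L_n, [L_m, M_p]].
\end{equation*}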
We can give a geometric interpretation of the Witt algebra $\mathcal{W}$ and
its representation $\mathcal{M}$. Recall that $\mathcal{W}$ can be considered as the Lie algebra of vector
fields with Laurent polynomial coefficients, i.e. those of the form
$$f(t)\frac{d}{dt},$$
with $f(t)\in k[t, t^{-1}]$. Then the elements $$L_n:=-t^{n+1}\frac{d}{dt}, \quad n\in \mathbb{Z},$$
form a basis of $\mathcal{W}$. It is straightforward to verify that
$$[L_m, L_n]=(m-n)L_{m+n}.$$
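Indeed, for any $f(t)\in k[t, t^{-1}]$, the two second order terms $t^{m+n+2}\frac{d^2f}{dt^2}$ cancel and
\begin{equation*}
[L_m, L_n](f)=t^{m+1}\frac{d}{dt}\Big(t^{n+1}\frac{df}{dt}\Big)-t^{n+1}\frac{d}{dt}\Big(t^{m+1}\frac{df}{dt}\Big)=(n-m)t^{m+n+1}\frac{df}{dt}=(m-n)L_{m+n}(f).
\end{equation*}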
Note that $\mathcal{M}$ can be considered as the Laurent polynomial ring $k[t, t^{-1}]$. Clearly it has a basis $$M_n:=t^n$$ for $n\in \mathbb{Z}$ and
$\mathcal{W}$ acts on $\mathcal{M}$ by derivations, that is,
for any $g(t)\in k[t, t^{-1}]$, we define
$$[L_m, g(t)]:= -t^{m+1} \frac {dg(t)}{dt}.$$
Hence we have,
$$[L_m, M_n]=-nt^{m+n}=-nM_{m+n}.$$
It is straightforward to verify that this action defines a Lie module
structure on $\mathcal{M}$ over the Witt algebra $\mathcal{W}$.
On the other hand, note that $\mathcal{M}$ is a commutative algebra and
we can also define an action of $\mathcal{M}$ on $\mathcal{W}$ as follows.
For $f(t)\in \mathcal{M}$ and $g(t)\frac{d}{dt}\in \mathcal{W}$,
$$f(t)\cdot g(t)\frac{d}{dt}:=f(t)g(t)\frac{d}{dt}\in \mathcal{W}.$$
In particular, we have
$$M_n\cdot L_m=L_{m+n}.$$
Clearly, by this action $\mathcal{W}$ is a module over the commutative
algebra $\mathcal{M}$.
Moreover, we can construct a BV algebra $(\mathcal{M}\times \mathcal{W}, \cup,
[\cdot, \cdot], \Delta)$ as follows.
The grading is
\begin{equation*}
(\mathcal{M}\times \mathcal{W})_n:=
\begin{cases}
M_m & \mbox{if $n=2m$;}\\
L_m & \mbox{if $n=2m+1$.}
\end{cases}
\end{equation*}
As a graded commutative algebra,
$$(\mathcal{M}\times \mathcal{W}, \cup)\cong (\mathcal{M} \ltimes \mathcal{W}, \cdot)$$
and as a graded Lie algebra,
$$(\mathcal{M}\times \mathcal{W}, [\cdot, \cdot])\cong (\mathcal{M}\rtimes \mathcal{W}, [\cdot, \cdot]).$$
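Explicitly, on the generators the product and the bracket are given by
\begin{equation*}
M_m\cup M_n=M_{m+n},\quad M_m\cup L_n=L_{m+n},\quad L_m\cup L_n=0,
\end{equation*}
and
\begin{equation*}
[L_m, L_n]=(m-n)L_{m+n},\quad [L_m, M_n]=-nM_{m+n},\quad [M_m, M_n]=0.
\end{equation*}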
The BV operator is defined as follows:
\begin{equation*}
\begin{split}
\Delta_{2m}(M_{m}):&=0,\\
\Delta_{2m+1}(L_{m}):&=-mM_{m}.
\end{split}
\end{equation*}
Then one can check that $(\mathcal{M}\times \mathcal{W}, \cup, [\cdot,\cdot], \Delta)$
is a BV algebra.
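For instance, with the sign convention
$$[a, b]=-(-1)^{|a|}\big(\Delta(a\cup b)-\Delta(a)\cup b-(-1)^{|a|}a\cup \Delta(b)\big)$$
(the global sign may differ in other references), the BV identity can be verified on the generators: since $L_m\cup L_n=0$ and $\Delta(M_n)=0$, we have
\begin{equation*}
\begin{split}
-(-1)^{|L_m|}\big(\Delta(L_m\cup L_n)-\Delta(L_m)\cup L_n-(-1)^{|L_m|}L_m\cup \Delta(L_n)\big)&=mL_{m+n}-nL_{m+n}=[L_m, L_n],\\
-(-1)^{|L_m|}\big(\Delta(L_m\cup M_n)-\Delta(L_m)\cup M_n\big)&=-(m+n)M_{m+n}+mM_{m+n}=[L_m, M_n],
\end{split}
\end{equation*}
where $|L_m|=2m+1$ denotes the degree of $L_m$.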
Similarly, we can also construct a BV algebra $(\mathcal{M}^{even}\times \mathcal{W}^{even},
\cup, [\cdot, \cdot], \Delta)$.
The grading is
\begin{equation*}
(\mathcal{M}^{even}\times \mathcal{W}^{even})_n=
\begin{cases}
k\langle L_{n-1}\rangle & \mbox{if $n$ is odd},\\
k\langle M_{n}\rangle & \mbox{if $n$ is even}.
\end{cases}
\end{equation*}
As a graded commutative algebra,
$$(\mathcal{M}^{even}\times \mathcal{W}^{even}, \cup)\cong(\mathcal{M}^{even}\ltimes \mathcal{W}^{even}, \cdot)$$
and as a graded Lie algebra,
$$(\mathcal{M}^{even}\times \mathcal{W}^{even}, [\cdot, \cdot])\cong
(\mathcal{M}^{even}\rtimes \mathcal{W}^{even}, [\cdot, \cdot]).$$
The BV operator is defined by
\begin{equation*}
\begin{split}
\Delta_{2m}(M_{2m}):&=0,\\
\Delta_{2m+1}(L_{2m}):&=-2mM_{2m}.
\end{split}
\end{equation*}
\end{rem}
Now let us describe the Gerstenhaber algebra structure on $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$.
Denote, for $m\in \mathbb{Z}$,
\begin{equation*}
\mathbb{L}_{2m}:=\mathop{\mathrm{HH}}\nolimits^{2m+1}_{\mathop{\mathrm{sg}}\nolimits}(A, A)
\end{equation*}
and
\begin{equation*}
\mathbb{M}_{2m}:= \mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{2m}(A, A).
\end{equation*}
Then we have the following proposition.
\begin{prop}\label{prop-de-ger}
Let $Q$ be the one loop quiver
$$
\begin{tikzcd}
Q:=& \bullet \ar[loop, in=135, out=225, distance=3em]{}{a}
\end{tikzcd}$$
and $A$ be its radical square zero algebra over
a field $k$, where $k$ is not of characteristic two. Then we have a Lie algebra isomorphism
$$(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{odd}(A, A)[1], [\cdot, \cdot])\cong(\mathcal{W}^{even}, [\cdot, \cdot])$$
and an isomorphism of commutative algebras
$$(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{even}(A, A), \cup)\cong (\mathcal{M}^{even}, \cdot).$$
Moreover, $(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A), \cup, [\cdot, \cdot], \Delta)$ is isomorphic to the BV algebra $(\mathcal{M}^{even}\times \mathcal{W}^{even}, \cup, [\cdot,\cdot], \Delta).$
\end{prop}
{\it Proof.}\hspace{2ex} The statements are a direct consequence of Proposition \ref{prop-Gerb} and Lemmas \ref{lemma-bv-1} and
\ref{lemma-cup-1}.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\subsection{The case: $c\geq 2$}
In this section, we fix $Q$ to be a $c$-crown with $c\geq 2$.
Denote by $A$ its radical square zero $k$-algebra, where $k$ is not of characteristic two.
\begin{prop}[Proposition 3.3 \cite{Cib3}]\label{prop3.3}
Let $Q$ be a $c$-crown with $c\geq 2$. Let $n$ be an even multiple of $c$. Then
$$\dim_k \mathop{\mathrm{HH}}\nolimits^n(A, A)=\dim_k\mathop{\mathrm{HH}}\nolimits^{n+1}(A, A)=1.$$
The cohomology vanishes in all other degrees.
\end{prop}
Next let us consider the singular Hochschild cohomology $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$.
Note that $A$ is a self-injective algebra (but not a symmetric algebra).
From \cite[Corollary
6.4.1]{Bu}, it follows that for $m\geq 1,$ $$\mathop{\mathrm{HH}}\nolimits^m_{\mathop{\mathrm{sg}}\nolimits}(A, A)\cong \mathop{\mathrm{HH}}\nolimits^m(A, A).$$
For the negative side of $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$, we have for $m>1$,
$$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{-m}(A, A)\cong \mathop{\mathrm{Tor}}\nolimits^{A^e}_{m-1}(A, \mathop{\mathrm{Hom}}\nolimits_{A^e}(A, A^e))\cong \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A)).$$
From Remark \ref{rem-coho}, we have that
\begin{equation*}
\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{m+1}(A))\cong \frac{k(Q_1//Q_{m+2})}{\mathop{\mathrm{Im}}\nolimits(D_{0, m+1})}\oplus \mathop{\mathrm{Ker}}\nolimits(D_{1, m+1}).
\end{equation*}
Recall that
\begin{equation*}
\begin{split}
D_{0, m+1}(e, \gamma_{m+1})&=\sum_{a\in Q_1}(a, a\gamma_{m+1})
+(-1)^m\sum_{a\in Q_1}(a, \gamma_{m+1}a),\\
D_{1, m+1}(x, \gamma_{m+1})&=\sum_{a\in Q_1}(ax, a\gamma_{m+1})+
(-1)^{m+1}\sum_{a\in Q_1}(xa, \gamma_{m+1}a).
\end{split}
\end{equation*}
Analogously to Proposition \ref{prop3.3} above, we have the following proposition.
\begin{prop}
Let $Q$ be a $c$-crown with $c\geq 2$. If $m$ is an even multiple of $c$,
then $$\dim_k \mathop{\mathrm{HH}}\nolimits^{m}_{\mathop{\mathrm{sg}}\nolimits}(A, A)=\dim_k\mathop{\mathrm{HH}}\nolimits^{m+1}_{\mathop{\mathrm{sg}}\nolimits}(A, A)=1.$$
The singular Hochschild cohomology vanishes in all other degrees.
\end{prop}
The Lie algebra structure on the positive side of $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$ has been obtained
in \cite{San}. Similarly, we obtain the following proposition for the negative side of $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$.
\begin{prop}
Let $Q$ be a $c$-crown with $c\in 2\mathbb{Z}_{>0}$ and $A$ be its radical square zero algebra over a field $k$, where $k$ is not of characteristic two. Denote by $\gamma$ the oriented cycle (i.e. $\gamma=a_1a_2\cdots a_c$).
Then we have the following cases:
\begin{enumerate}
\item Let $(a, a\gamma^p)\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cp}(A))$ and $(a, a\gamma^q)\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cq}(A))$ be nontrivial elements, respectively, where $a\in Q_1$.
Then $$[(a, a\gamma^p), (a, a\gamma^q)]=(q-p) (a, a\gamma^{p+q}).$$
\item Let $$x:=\sum_{a\in Q_1}(a, a\gamma^p)\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cp+1}(A))$$ and $$y:=\sum_{a\in Q_1}(a, a\gamma^q)\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cq+1}(A))$$ be nontrivial elements, respectively,
then $$[x, y]=0.$$
\item Let $$x:=\sum_{a\in Q_1}(a, a\gamma^p)\in \mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cp+1}(A))$$ and $(b, b\gamma^q)\in\mathop{\mathrm{HH}}\nolimits^1(A, \Omega^{cq}(A))$ be nontrivial elements, respectively,
then $$[x, (b, b\gamma^q)]=-p\sum_{a\in Q_1}(a, a\gamma^{p+q}).$$
\end{enumerate}
\end{prop}
Let us denote, for $p\in \mathbb{Z}$,
\begin{equation*}
\mathbb{L}_p:=\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{cp+1}(A, A)
\end{equation*}
and
\begin{equation*}
\mathbb{M}_p:=\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^{cp}(A, A).
\end{equation*}
Then we have the following proposition.
\begin{prop}\label{equ-bv00}
Let $Q$ be a $c$-crown with $c\in2\mathbb{Z}_{>0}$ and $A$ be its radical
square zero $k$-algebra. Assume that $k$ is not of characteristic two. Then
$$(\mathbb{L}:=\bigoplus_{i\in\mathbb{Z}} \mathbb{L}_i, [\cdot, \cdot])$$ is isomorphic to the Witt algebra $\mathcal{W}$ and
$$(\mathbb{M}:=\bigoplus_{i\in \mathbb{Z}}\mathbb{M}_i, \cup)$$ is isomorphic to the graded commutative algebra $\mathcal{M}$.
Moreover, $(\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A), \cup, [\cdot, \cdot])$
is isomorphic to the Gerstenhaber algebra $(\mathcal{M}\times \mathcal{W}, \cup, [\cdot, \cdot])$.
\end{prop}
{\it Proof.}\hspace{2ex} The proof is completely analogous to the one of Proposition \ref{prop-de-ger}.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}
From Remark \ref{rem-witt}, it follows that $(\mathcal{M}\times \mathcal{W}, \cup, [\cdot, \cdot])$
is a BV algebra. Hence
the singular Hochschild cohomology $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$ has also a BV algebra
structure. In fact, this is a corollary
of \cite[Theorem 0.1]{LZZ} and \cite[Corollary 3]{Vol} since in this case, the Nakayama automorphism of $A$ has finite order.
\end{rem}
\section{Two loops quiver}\label{two loops quiver}
In this section, we consider the two loops quiver $Q$, namely,
$$\begin{tikzcd}
Q:= & \bullet \ar[%
,loop
,out=135
,in=225
,distance=4em
]{}{a} \ar[
, loop
,out=315
,in=45
,distance=4em]{}{b}
\end{tikzcd}
$$
and its radical square zero algebra is $$A\cong k[x, y]/(x^2, y^2, xy),$$
where $k$ is a field of characteristic zero.
Recall that
\begin{prop}[Proposition 4.4. \cite{San2}]\label{prop-san}
Assume that $Q$ is the two loops quiver with the loops $a$ and $b$. Let $A$ be its radical square zero algebra over a field $k$ of characteristic zero. Then $\mathop{\mathrm{HH}}\nolimits^1(A, A)\cong k(Q_1//Q_1)$ and the elements
in $k(Q_1// Q_1)$
\begin{equation*}
\begin{split}
H:=&(a, a)-(b, b),\\
E:=&(b,a),\\
F:=&(a, b)
\end{split}
\end{equation*}
generate a copy of the Lie algebra $\mathfrak{sl}_2(k)$ in $\mathop{\mathrm{HH}}\nolimits^1(A, A)$. Moreover, the Lie algebra $\mathop{\mathrm{HH}}\nolimits^1(A, A)$ is isomorphic to $\mathop{\mathfrak{sl}}\nolimits_2\times k$, where
$$I:=(a, a)+(b, b)$$ is a non-zero element such that $[I, \mathop{\mathrm{HH}}\nolimits^1(A, A)]=0$.
\end{prop}
\begin{rem}
Denote the arrow $a$ by $1$ and the arrow $b$ by $2$, and set
$$(x, y):=E_{y, x},$$
where $\{x, y\}\subset\{1, 2\}$.
Then we have
\begin{equation*}
\begin{split}
H=&E_{1, 1}-E_{2, 2},\\
E=&E_{1, 2},\\
F=&E_{2, 1},
\end{split}
\end{equation*}
where the $E_{i, j}$ form the standard basis of $\mathop{\mathfrak{gl}}\nolimits_2(k)$, defined as follows,
$$E_{i, j}(e_k)=\delta_{i, k} e_j.$$
In \cite{San2}, the author also gave a description
of $\mathop{\mathrm{HH}}\nolimits^m(A, A)$ as a Lie module over $\mathop{\mathrm{HH}}\nolimits^1(A, A)$, for
any $m\in \mathbb{Z}_{>0}$. Next we will completely describe the Gerstenhaber
algebra structures on $\mathop{\mathrm{HH}}\nolimits^*(A, A)$ and $\mathop{\mathrm{HH}}\nolimits^*_{\mathop{\mathrm{sg}}\nolimits}(A, A)$.
\end{rem}
To describe the Gerstenhaber algebra structure on $\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$, let us first work in a more general setting. Let $V$ be a finite-dimensional vector space over a field $k$. Define, for $m, p\in \mathbb{Z}_{> 0}$,
$$T^{m, p}(V):=T^{m, p}_0(V)\oplus T^{m, p}_1(V),$$
where $$T_0^{m, p}(V):=\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m},V^{\otimes p})$$
and $$T_1^{m, p}(V):=\mathop{\mathrm{Hom}}\nolimits_{k}(V^{\otimes m},
V^{\otimes p+1}),$$
and denote
$$T^{*, *}(V):=\bigoplus_{m, p\in\mathbb{Z}_{> 0}} T^{m, p}(V).$$
We will define a bracket $\{\cdot, \cdot\}$ on $T^{*, *}(V)$ as follows.
For $f\in T_1^{m, p}(V)$ and $g\in T_1^{n, q}(V)$, we define
$\{f, g\}\in T_1^{m+n-1, p+q}(V)$ as follows,
\begin{equation}\label{defn-bracket-T}
\begin{split}
\{f, g\}:=&\sum_{i=1}^m(-1)^{(i-1)(q-n-1)}(f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes q})(\mathop{\mathrm{id}}\nolimits^{\otimes i-1}\otimes g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes m-i})+\\
&\sum_{i=1}^{q-1}(-1)^{i(p-m-1)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes q-i})(g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes m-1})-\\
&\sum_{i=1}^n(-1)^{(n-q+i)(p-m-1)}(g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes p})(\mathop{\mathrm{id}}\nolimits^{\otimes i-1}
\otimes f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes n-i})-\\
&\sum_{i=1}^{p-1}(-1)^{(m-p+i-1)(q-n-1)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}
\otimes g\otimes\mathop{\mathrm{id}}\nolimits^{\otimes p-i})(f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes n-1}).
\end{split}
\end{equation}
For $f\in T_0^{m, p}(V)$ and $g\in T_1^{n, q}(V)$, we define
$\{f, g\}\in T_0^{m+n-1, p+q}(V)$ as follows,
\begin{equation*}
\begin{split}
\{f, g\}:=&\sum_{i=1}^{m-1}(-1)^{i(q-n-1)}(f\otimes
\mathop{\mathrm{id}}\nolimits^{\otimes q})(\mathop{\mathrm{id}}\nolimits^{\otimes i-1}\otimes g\otimes \mathop{\mathrm{id}}\nolimits^{\otimes m-i})-\\
& \sum_{i=1}^{p-2}(-1)^{(q-n-1)(m-p+i)}(\mathop{\mathrm{id}}\nolimits^{\otimes i}\otimes g\otimes
\mathop{\mathrm{id}}\nolimits^{\otimes p-i-1})(f\otimes \mathop{\mathrm{id}}\nolimits^{\otimes n-1}).
\end{split}
\end{equation*}
Define $\{g, f\}\in T_0^{m+n-1, p+q}(V)$ as follows,
\begin{equation*}
\begin{split}
\{g, f\}:=-(-1)^{(m-p-1)(n-q-1)} \{f, g\}.
\end{split}
\end{equation*}
Define $\{f, g\}=0$ if $f\in T_0^{m, p}(V)$ and $g\in T_0^{n, q}(V)$.
Clearly, $\{\cdot,\cdot\}$ is skew-symmetric, that is,
for $f\in T^{m, p}(V)$ and $g\in T^{n, q}(V)$,
$$\{f, g\}=-(-1)^{(m-p-1)(n-q-1)}\{g, f\}.$$
For $m, p\in\mathbb{Z}_{>0}$, we have the following canonical embedding,
\begin{equation*}
\begin{tabular}{ccccc}
$\theta_{m, p}$: &$T_1^{m,p}(V)$& $\rightarrow$ & $T_1^{m+1, p+1}(V)$\\
&$f$&$\mapsto$&$-f\otimes \mathop{\mathrm{id}}\nolimits_V.$\\
\end{tabular}
\end{equation*}
Similarly, we also have the following morphism for $m, p\in \mathbb{Z}_{>0}$,
\begin{equation*}
\begin{tabular}{ccccc}
$\phi_{m-1, p}$: &$T_0^{m-1,p}(V)$& $\rightarrow$ & $T_1^{m, p}(V)$\\
&$f$&$\mapsto$&$\mathop{\mathrm{id}}\nolimits_V\otimes f+(-1)^{p+m}f\otimes \mathop{\mathrm{id}}\nolimits_V.$\\
\end{tabular}
\end{equation*}
Then for any $p\in \mathbb{Z}_{\geq 0}$, we have a complex
\begin{equation}\label{equa-com}
\xymatrix@C=3pc{
\cdots\ar[r] & T^{m,p}(V) \ar[r]^-{\left( \begin{smallmatrix} 0 & 0\\ \phi_{m,p} &0 \end{smallmatrix}\right)}& T^{m+1, p}(V)\ar[r]^-{\left( \begin{smallmatrix} 0 & 0\\ \phi_{m+1,p} &0 \end{smallmatrix}\right)} & T^{m+2, p}(V)\ar[r] &\cdots.
}
\end{equation}
\begin{rem}
If $m\neq p$, then $\phi_{m, p}$ is injective and if $m=p$, $\mathop{\mathrm{Ker}}\nolimits(\phi_{m,m})$
is a one-dimensional $k$-vector space with a basis $\{\mathop{\mathrm{id}}\nolimits_{V^{\otimes m}}\}$.
Let us denote the homology group of the complex (\ref{equa-com}) by
$$K^{m, p}(V):= \mathop{\mathrm{Ker}}\nolimits(\phi_{m, p})\oplus \frac{T_1^{m, p}(V)}{\mathop{\mathrm{Im}}\nolimits(\phi_{m-1,p})}.$$
Then we have
\begin{equation*}
K^{m, p}(V)=
\begin{cases}
\frac{ T_1^{m, p}(V)}{\mathop{\mathrm{Im}}\nolimits(\phi_{m-1, p})} &
\mbox{if $m\neq p$};\\
k\langle\mathop{\mathrm{id}}\nolimits_{V^{\otimes m}}\rangle\oplus
\frac{T_1^{m, m}(V)}{\mathop{\mathrm{Im}}\nolimits(\phi_{m-1,m})} & \mbox{if $m=p$}.
\end{cases}
\end{equation*}
\end{rem}
\begin{lemma}\label{lemma4.6}
For $m, p\in \mathbb{Z}_{\geq 0}$, $\theta_{m, p}$ induces a morphism (still denoted by $\theta_{m,p}$),
\begin{equation*}
\theta_{m, p}: K^{m, p}(V)\rightarrow K^{m+1, p+1}(V).
\end{equation*}
\end{lemma}
\begin{rem}
For $m=p$, we define $$\theta_{m, m}|_{\mathop{\mathrm{Ker}}\nolimits(\phi_{m, m})}:=\mathop{\mathrm{id}}\nolimits.$$
We have an inductive system
\begin{equation*}
\xymatrix@C=3pc{
\cdots \ar[r] & K^{m, p}(V)\ar[r]^-{\theta_{m, p}} & K^{m+1, p+1}(V)\ar[r]^-{\theta_{
m+1, p+1}} & \cdots
}
\end{equation*}
Let us denote the colimit of this inductive system by
$K_{\mathop{\mathrm{sg}}\nolimits}^{m-p}(V)$ and we denote
$$K_{\mathop{\mathrm{sg}}\nolimits}^*(V):=\bigoplus_{n\in \mathbb{Z}} K_{\mathop{\mathrm{sg}}\nolimits}^n(V).$$
\end{rem}
Now let us go back to the two loops quiver $Q$. Recall that $$A=r\oplus E$$ is the Wedderburn-Malcev decomposition of $A$, where $r=kQ_1$ and $E=kQ_0$. As before, we relabel the loops $a$ and $b$ by $1$ and $2$, respectively:
$$\begin{tikzcd}
Q:= & \bullet \ar[%
,loop
,out=135
,in=225
,distance=4em
]{}{1} \ar[
, loop
,out=315
,in=45
,distance=4em]{}{2}
\end{tikzcd}
$$
Let $V:= k\oplus k$ and let $e_1, e_2$ be the canonical basis of $V$. Then $$\mathop{\mathrm{End}}\nolimits(V)\cong \mathop{\mathfrak{gl}}\nolimits_2(k).$$
We can identify $k(Q_m//Q_p)$ with $\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p})$
as follows,
\begin{equation}\label{formula-cor}
\begin{tabular}{ccccc}
&$k(Q_m//Q_p)$& $\rightarrow$ & $\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p})$\\
&$(x_1\cdots x_m, y_1\cdots y_p)$&$\mapsto$&$\delta_{e_{x_1}\otimes \cdots\otimes e_{x_m}, e_{y_1}\otimes \cdots \otimes e_{y_p}}$\\
\end{tabular}
\end{equation}
where $$x_i, y_j\in\{1, 2\}$$ for $1\leq i\leq m$ and $1\leq j\leq p,$
and $\delta_{e_{x_1}\otimes \cdots\otimes e_{x_m}, e_{y_1}\otimes \cdots \otimes e_{y_p}}$ is defined
as follows,
\begin{equation*}
\delta_{e_{x_1}\otimes \cdots\otimes e_{x_m}, e_{y_1}\otimes \cdots \otimes e_{y_p}}(e_{x_1'}\otimes \cdots\otimes e_{x_m'})=
\begin{cases}
e_{y_1}\otimes \cdots \otimes e_{y_p} & \mbox{if $x_i=x_i'$ for $1\leq i \leq m$};\\
0 &\mbox{otherwise}.
\end{cases}
\end{equation*}
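For example, the parallel pair $(12, 21)\in Q_2//Q_2$ corresponds to the map $\delta_{e_1\otimes e_2,\, e_2\otimes e_1}\in \mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes 2}, V^{\otimes 2})$, which sends the basis tensor $e_1\otimes e_2$ to $e_2\otimes e_1$ and annihilates $e_1\otimes e_1$, $e_2\otimes e_1$ and $e_2\otimes e_2$.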
Hence we have the following isomorphism for $m, p\in \mathbb{Z}_{\geq 0}$.
\begin{equation*}
\begin{tabular}{ccccc}
$F_{m,p}:$ &$k(Q_m//Q_p)\oplus k(Q_m//Q_{p+1})$& $\rightarrow$ & $T^{m, p}(V).$\\
\end{tabular}
\end{equation*}
\begin{prop}\label{prop-PROP-two}
Let $Q$ be the two loops quiver and $A$ be its radical square zero algebra over a field $k$ of characteristic zero. Then
\begin{enumerate}
\item for any $p\in\mathbb{Z}_{\geq0}$, we have a chain homomorphism,
\begin{equation*}
\xymatrix@C=2.5pc{
\cdots\ar[r] & k(Q_m//Q_p)\oplus k(Q_m//Q_{p+1})\ar[d]^{F_{m, p}} \ar[r]^-{\left( \begin{smallmatrix} 0 & 0\\ D_{m,p} &0 \end{smallmatrix}\right)}& k(Q_{m+1}//Q_p)\oplus k(Q_{m+1}//Q_{p+1})\ar[d]^{F_{m+1, p}}\ar[r] &\cdots\\
\cdots\ar[r] & T^{m,p}(V) \ar[r]^-{\left( \begin{smallmatrix} 0 & 0\\ \phi_{m,p} &0 \end{smallmatrix}\right)}& T^{m+1, p}(V)\ar[r] & \cdots
}
\end{equation*}
where we recall that $D_{m,p}$ is defined in Proposition \ref{prop-new}. As a consequence,
$F_{m, p}$ induces an isomorphism for $m, p\in \mathbb{Z}_{\geq 0}$,
$$F_{m, p}: \mathop{\mathrm{HH}}\nolimits^{m}(A, \Omega^p(A))\rightarrow K^{m, p}(V).$$
\item The following diagram commutes, for $m, p\in \mathbb{Z}_{>0}$.
\begin{equation*}
\xymatrix{
\mathop{\mathrm{HH}}\nolimits^{m}(A, \Omega^p(A)) \ar[d]_-{F_{m, p}}\ar[r]^-{\theta_{m, p}} & \mathop{\mathrm{HH}}\nolimits^{m+1}(A, \Omega^{p+1}(A))\ar[d]^-{F_{m+1, p+1}}\\
K^{m, p}(V)\ar[r]^-{\theta_{m, p}}& K^{m+1, p+1}(V)
}
\end{equation*}
As a consequence, $\theta_{m, p}$ is injective for $m, p\in\mathbb{Z}_{\geq 0}$.
\item The following diagram commutes for $m, n\in \mathbb{Z}_{>0}$ and $ p, q \in \mathbb{Z}_{\geq 0}$
\begin{equation*}
\xymatrix{
\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))\times \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E n}, \Omega^q(A)) \ar[d]_-{\cong}\ar[r]^-{[\cdot,\cdot]} &\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m+n-1}, \Omega^{p+q}(A))\ar[d]_-{\cong}\\
T^{m, p}(V) \times T^{n, q}(V) \ar[r]^{\{\cdot,\cdot\}} & T^{m+n-1, p+q}(V)
}
\end{equation*}
where $[\cdot,\cdot]$ is the Gerstenhaber bracket defined by Formula (\ref{defn-bracket}) in Section \ref{section3}, $\{\cdot, \cdot\}$
is the Lie bracket defined in (\ref{defn-bracket-T}),
and we identify $$\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_Em}, \Omega^p(A))$$ with
$$k(Q_m//Q_p)\oplus k(Q_m//Q_{p+1})$$ for $m, p\in \mathbb{Z}_{\geq 0}$ by
Lemma \ref{lemma-new}.
\end{enumerate}
\end{prop}
{\it Proof.}\hspace{2ex} The proofs of Assertions (1) and (2) are straightforward from the definitions. Let us
verify Assertion (3). Let $$(x_1 \cdots x_m, y_1 \cdots y_{p+1})
\in \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$$ and $$(a_1 \cdots
a_n, b_1\cdots b_{q+1})\in \mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E n}, \Omega^q(A)).$$
By Formula (\ref{formula-cor}), they correspond, respectively, to the elements
$$\delta_{e_{x_1}\otimes \cdots \otimes e_{x_m}, e_{y_1}\otimes
\cdots \otimes e_{y_{p+1}}}\in T^{m ,p }$$ and
$$ \delta_{e_{a_1}\otimes \cdots \otimes e_{a_{n}}, e_{b_1}\otimes
\cdots \otimes e_{b_{q+1}}}\in T^{n, q}.$$
Then under this correspondence, we observe that
Formulas (\ref{formula-cor}) and (\ref{equa-ger}) have the same form.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
As a consequence, we obtain the following corollary.
\begin{cor}\label{cor-last}
Let $V$ be a finite-dimensional vector space of dimension two over a field $k$ of characteristic zero. Then
\begin{enumerate}
\item $T^{*,*}(V)[1]$ is a differential graded Lie algebra, where the grading
is defined as follows, for $n\in \mathbb{Z}$,
$$T^{*, *}(V)_n:=\bigoplus_{\substack{m, p\in\mathbb{Z}_{\geq 0}\\ m-p=n}} T^{m , p}(V)$$
and the differential is induced by
$$\xymatrix@C=3pc{
T^{m,p}(V) \ar[r]^-{\left( \begin{smallmatrix} 0 & 0\\ \phi_{m,p} &0 \end{smallmatrix}\right)}& T^{m+1, p}(V).
}$$
In particular, $K^{*, *}(V)$ is a $\mathbb{Z}$-graded Lie algebra with the induced
grading and induced Lie bracket.
\item The morphism in Lemma \ref{lemma4.6}
$$\theta_{m, p}:K^{m, p}(V)\rightarrow K^{m+1, p+1}(V)$$
induces a module homomorphism of graded Lie algebras,
$$\theta: K^{*,*}(V)\rightarrow K^{*, *}(V),$$
that is,
$$\theta([f, g])=[\theta(f), g]=-(-1)^{(m-p-1)(n-q)}[f, \theta(g)].$$
Hence $K^*_{\mathop{\mathrm{sg}}\nolimits}(V)$ is a graded Lie algebra which is isomorphic to
$\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A)$, where $A$ is the radical square zero algebra of the two loops quiver $Q$.
\end{enumerate}
\end{cor}
{\it Proof.}\hspace{2ex} This is an immediate consequence of Proposition \ref{prop-PROP-two}.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}\label{rem-conter1}
Since the cup product of $\mathop{\mathrm{HH}}\nolimits^*(A, A)$ vanishes except in degree zero, while the Lie bracket does not vanish in general,
it is impossible to endow $(\mathop{\mathrm{HH}}\nolimits_{\mathop{\mathrm{sg}}\nolimits}^*(A, A), \cup, [\cdot,\cdot])$ or $(\mathop{\mathrm{HH}}\nolimits^*(A, A), \cup, [\cdot, \cdot])$ with a BV algebra structure.
\end{rem}
\section{A PROP interpretation of the Gerstenhaber bracket}\label{section6}
In this section, we will give an interpretation of the Gerstenhaber bracket from the point of view of PROP theory, in the case of radical square zero algebras. In fact, this interpretation generalizes the construction in Section \ref{two loops quiver}.
\subsection{Definition of PROP}
Let us start with the definition of a PROP. For more details on PROPs and operad theory, we refer the reader to \cite{Fio, KaMa, Lod, LoVa, Mar, Val}. Let $k$ be a commutative base ring and denote by $k$-$\mathop{\mathrm{Mod}}\nolimits$ the category of $k$-modules. Note that $k$-$\mathop{\mathrm{Mod}}\nolimits$ is a tensor category with the tensor product $\otimes_k$. For simplicity, we write $\otimes_k$ as $\otimes$ in this section.
\begin{defn}
A strict $k$-linear tensor category (strict $k$-linear monoidal category)
is a triple $(\EuScript C, \otimes, {\bf 1})$,
where $\EuScript C$ is a $k$-linear category, ${\bf 1}$ a distinguished object and
$$\otimes: \EuScript C\times \EuScript C\rightarrow \EuScript C$$ is a $k$-linear functor,
satisfying
$$(X\otimes Y)\otimes Z=X\otimes (Y\otimes Z)$$
and $$X\otimes {\bf 1}=X={\bf 1}\otimes X$$
for any $X, Y, Z\in \EuScript C$.
\end{defn}
\begin{defn}\label{defn-PROP}
A $k$-linear non-symmetric PROP is a strict monoidal category $\mathcal{P}=(P, \odot, {\bf 1})$
enriched over $k$-$\mathop{\mathrm{Mod}}\nolimits$ such that
\begin{enumerate}
\item the objects are indexed by the set $\mathbb{Z}_{\geq 0}$ and
\item the tensor product satisfies $$m\odot n=m+n,$$ for any $m, n\in \mathbb{Z}_{\geq 0}$ (hence the unit ${\bf 1}$ equals $0$).
\end{enumerate}
\end{defn}
\begin{rem}
For a (non-symmetric) PROP $\mathcal{P}$, denote $$\mathcal{P}(m, n):=\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P}}(m, n).$$
Each $\mathcal{P}(m,n)$ is a $k$-module. Therefore
a PROP is a collection $$\mathcal{P}=\{\mathcal{P}(m,n)\}_{m, n\in\mathbb{Z}_{\geq 0}}$$ of
$k$-modules, together with two types of compositions:
\begin{enumerate}
\item (horizontal composition)
$$\odot: \mathcal{P}(m_1, n_1)\otimes_k\cdots\otimes_k \mathcal{P}(m_s, n_s)\rightarrow \mathcal{P}(m_1+\cdots+ m_s,
n_1+\cdots+n_s)$$
induced by the tensor product $\odot$ of $P$, for all $m_1, \cdots,
m_s, n_1, \cdots, n_s\geq 0$, and
\item (vertical composition)
$$\circ: \mathcal{P}(m, n)\otimes_k \mathcal{P}(n, p)\rightarrow \mathcal{P}(m, p),$$
given by the categorical composition, for all $m, n, p\geq 0$.
\end{enumerate}
These two compositions satisfy the following compatibility condition:
For any $f_i\in \mathcal{P}(m_i, p_i)$ and $g_i \in \mathcal{P}(p_i, q_i)$, where $i=1, 2$,
$$(g_1\circ f_1)\odot (g_2\circ f_2)=(g_1\odot g_2)\circ (f_1\odot f_2).$$
We remark that in Definition \ref{defn-PROP}, the category $k$-$\mathop{\mathrm{Mod}}\nolimits$ can be
replaced by any arbitrary symmetric monoidal category. Given a (non-symmetric) PROP
$\mathcal{P}$, we can construct an operad $\mathcal{O}_{\mathcal{P}}$ as follows (cf. Page 8, \cite{Fio}):
For $n\in\mathbb{Z}_{\geq 0}$, we define $$\mathcal{O}_{\mathcal{P}}(n):=\mathcal{P}(n, 1),$$
and the structural product map is the following composition,
$$\mathcal{P}(n, 1)\otimes \mathcal{P}(j_1, 1)\otimes \cdots \otimes \mathcal{P}(j_n, 1)\rightarrow \mathcal{P}(n, 1)\otimes \mathcal{P}(j_1+\cdots
+j_n, n)\rightarrow \mathcal{P}(j_1+\cdots+j_n, 1)$$
for $n, j_1, \cdots, j_n\in \mathbb{Z}_{\geq 0}$.
\end{rem}
\begin{expl}\label{expl}
\begin{enumerate}
\item The endomorphism PROP $\mathop{\mathrm{End}}\nolimits_V$ of a $k$-module $V$ is the following collection
$$\mathop{\mathrm{End}}\nolimits_V=\{\mathop{\mathrm{End}}\nolimits_V(m, n)\}_{m,n\geq 0}$$
where $\mathop{\mathrm{End}}\nolimits_V(m, n)$ is the space of $k$-linear maps $\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes n}).$ The horizontal composition is induced
by the tensor product $\otimes_k$ and the vertical
composition is induced by the ordinary composition of $k$-linear maps. The associated operad $\mathcal{O}_{\mathop{\mathrm{End}}\nolimits_V}$ is the linear operad of $V$, that is, for $n\in \mathbb{Z}_{\geq 0}$,
$$\mathcal{O}_{\mathop{\mathrm{End}}\nolimits_V}(n):=\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes n}, V),$$
with the natural structural product map.
\item The direct sum PROP of a $k$-module $V$ is defined as a PROP by the following collection
$$\mathcal{P}_{\oplus}:=\{\mathop{\mathrm{Hom}}\nolimits_k(V^{\oplus m}, V^{\oplus n})\}_{m, n\geq 0}, $$
where $V^{\oplus 0}:=0$. The horizontal composition is
induced by the direct sum in $k$-$\mathop{\mathrm{Mod}}\nolimits$,
$$\oplus: \mathop{\mathrm{Hom}}\nolimits_{k}(V^{\oplus m_1}, V^{\oplus n_1})\otimes \cdots \otimes \mathop{\mathrm{Hom}}\nolimits_k(V^{\oplus m_s}, V^{\oplus n_s})\rightarrow \mathop{\mathrm{Hom}}\nolimits_{k}(V^{\oplus m_1+\cdots +m_s}, V^{\oplus n_1+\cdots+ n_s})$$
for any $m_1, \cdots, m_s, n_1,\cdots, n_s\in \mathbb{Z}_{\geq 0}$.
The vertical composition is induced by the ordinary composition of $k$-linear maps.
In Appendix A, we apply this PROP in the case of $V:=k$ to give a new Lie algebra structure
on $\mathop{\mathfrak{gl}}\nolimits_{\infty}(k)$.
\item The $r$-multiple loops quiver PROP is defined by the following collection
$$\mathcal{P}_r:=\{\mathcal{P}_r(m, p)\}_{m,p\in \mathbb{Z}_{\geq 0}},\quad \mathcal{P}_r(m, p):=\mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p-1})\oplus \mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p}),$$
where $V$ is a $k$-vector space of dimension $r$ and we use the notation $$V^{\otimes -1}:=0.$$ The horizontal composition is
$$\mu: \mathcal{P}_r(m, p)\otimes_k \mathcal{P}_r(n, q) \rightarrow \mathcal{P}_r(m+n, p+q),$$
where
\begin{equation*}
\mu(f\otimes g):=
\begin{cases}
0 & \mbox{if $f\in \mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p-1})$ and
$g\in \mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes n}, V^{\otimes q-1})$},\\
f\otimes_k g & \mbox{otherwise}.
\end{cases}
\end{equation*}
The vertical composition is defined as follows,
$$\nu: \mathcal{P}_r(m, p)\otimes_k \mathcal{P}_r(p, n) \rightarrow \mathcal{P}_r(m, n),$$
where
\begin{equation*}
\nu(f\otimes g):=
\begin{cases}
0& \mbox{if $f\in \mathop{\mathrm{Hom}}\nolimits_k(V^{\otimes m}, V^{\otimes p-1})$},\\
g\circ f & \mbox{otherwise}.
\end{cases}
\end{equation*}
It is straightforward to verify that $\mathcal{P}_r$ is a well-defined
PROP. Note that when $r=2$, $\mathcal{P}_2$ corresponds to the construction of
Section \ref{two loops quiver}.
\end{enumerate}
\end{expl}
\subsection{A Lie bracket on PROP}
Recall that in \cite{KaMa}, the authors proved that the positive total space $$\bigoplus_{n\in \mathbb{Z}_{>0}}\mathcal{O}(n)$$
of a (non-symmetric) operad $\mathcal{O}$ is endowed with a $\mathbb{Z}$-graded Lie algebra structure.
Next we will extend this result to a PROP $\mathcal{P}$. Namely, we will show that there is a $\mathbb{Z}$-graded Lie algebra
structure on the positive total space $$\bigoplus_{m,p\in \mathbb{Z}_{>0}} \mathcal{P}(m, p)$$ of a (non-symmetric) PROP $\mathcal{P}$.
Let us denote the identity element (with respect to the categorical composition) in $\mathcal{P}(1, 1)$ by $$\mathop{\mathrm{id}}\nolimits\in \mathcal{P}(1, 1).$$
Let $m, n, p, q\in \mathbb{Z}_{>0}$. Let $f\in \mathcal{P}(m, p)$ and $g\in \mathcal{P}(n, q)$. Define
\begin{equation}
f\star_i g:=
\begin{cases}
(f\odot \mathop{\mathrm{id}}\nolimits^{\odot q-1})\circ (\mathop{\mathrm{id}}\nolimits^{\odot i-1}\odot g\odot \mathop{\mathrm{id}}\nolimits^{\odot m-i})& \mbox{for $1\leq i\leq m,$}\\
(\mathop{\mathrm{id}}\nolimits^{\odot-i}\odot f\odot \mathop{\mathrm{id}}\nolimits^{\odot q-1+i})\circ (g\odot \mathop{\mathrm{id}}\nolimits^{\odot m-1}) & \mbox{for $-q+1\leq i\leq -1$}.
\end{cases}
\end{equation}
Then we denote
\begin{equation}\label{equ-star}
f\star g:=\sum_{i=1}^m(-1)^{(i-1)(q-n)} f\star_i g+\sum_{i=1}^{q-1}(-1)^{i(p-m)} f\star_{-i} g
\end{equation}
and the Lie bracket is defined as follows,
$$[f, g]:=f\star g-(-1)^{(m-p)(n-q)} g\star f.$$
Note that $$[f, g]\in \mathcal{P}(m+n-1, p+q-1).$$
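To illustrate the signs in (\ref{equ-star}), consider for instance $f\in \mathcal{P}(2, 1)$ and $g\in \mathcal{P}(1, 2)$, that is, $m=2$, $p=1$, $n=1$ and $q=2$. Then
$$f\star g=\sum_{i=1}^{2}(-1)^{(i-1)(2-1)} f\star_i g+\sum_{i=1}^{1}(-1)^{i(1-2)} f\star_{-i} g=f\star_1 g-f\star_2 g-f\star_{-1} g,$$
where each summand lies in $\mathcal{P}(2+1-1, 1+2-1)=\mathcal{P}(2, 2)$.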
In the following, we will follow \cite{MeVa, Val, Val2} to give the planar graph presentations of elements in $\mathcal{P}$ and the operation $\star$.
Let $f\in \mathcal{P}(m, p)$. We associate to $f$ the following graph:
\begin{equation}
\xymatrix@C=1pc{
\ar[dr]_-{1} &\ar[d]^-{2} &\ldots &\ar[dll]^-{m}\\
& \bullet^{f} \ar[drr]^{p}\ar[d]^{2}\ar[dl]_{1}\\
&&\cdots& &\\
}
\end{equation}
where the inputs of the vertex $\bullet^{f}$ are labelled by integers $\{1, \cdots, m\}$ from left to right and the outputs are labelled by integers $\{1, \cdots, p\}$ from left to right.
Let $g\in \mathcal{P}(n, q)$.
For $1\leq i\leq m$, we define the graph presentation of the operation $f\star_i g$ as follows.
\begin{equation}\label{graph1}
\xymatrix@C=1pc{
\ar[dd]_-{1}&\ar[dd]_-{2} &\ldots & \ar[dd]_-{i-1}&\ar[dr]_-{i} & \ar[d]^-{i+1} & \ldots&\ar[dll]^-{i+n-1}&\ar[dd]_-{i+n}& \ldots& \ar[dd]_-{m+n-1} \\
& & & & & \bullet^{g}\ar[dl]_-{i}\ar[d]^-{m}\ar[drr]^-{i+p-1}& & \\
\ar[drr] &\ar[dr] & \cdots&\ar[dl] &\ar[dll]& \cdots \ar[dlll] & \cdots &\ar[dd]_-{q+1} &\ar[dd]^-{q+2} &\cdots &\ar[dd]^{q+p-1}\\
& &\bullet^{f}\ar[dl]_-{1}\ar[d]^-{2}\ar[drr]^-{q} & & & \\
&& &\cdots & & & \cdots & & & \cdots & & &
}
\end{equation}
Similarly, for $1\leq i\leq q-1$, we define
the graph presentation of $f\star_{-i} g$ as follows:
\begin{equation}\label{graph2}
\xymatrix@C=1pc{
\ar[rd]^-{1}& \ar[d]^-{2} & \ldots & \ar[dll]^{n}& & & \ar[dd]_-{n+1} & \ldots & \ar[dd]_-{m+n-q+i}& \ar[dd] & \cdots & \ar[dd]^-{n+m-1} \\
&\bullet^g \ar[dl]_-{1}\ar[dr]_-{i}\ar[drr]_-{i+1}\ar[drrrr]^-{q}&\\
\ar[dd]_-{1} & \cdots & \ar[dd]_-{i} & \ar[d] & \cdots & \ar[dll] & \ar[dlll] & \cdots & \ar[dlllll] & \ar[dd]^-{i+p+1}& \cdots& \ar[dd]^-{p+q-1}&\\
& & & \bullet^f \ar[d]_-{i+1} \ar[drrrr]_-{i+p} &&& & \\
& \cdots& & &\cdots &&&&&&&&
}
\end{equation}
\begin{rem}
Analogous to \cite{MeVa}, we use the following graph to present $f\star g$:
$$\vcenter{\xymatrix{*+[F-,]{\ g \ } \ar@{-}[d] \\ *+[F-,]{\ f \ } & }}.$$
Namely, it is the sum of all graphs presented in Diagram (\ref{graph1}) and (\ref{graph2}).
\end{rem}
\begin{prop}\label{prop-PROP}
Let $k$ be a commutative ring. Let $\mathcal{P}$ be a $k$-linear (non-symmetric) PROP.
Then the Lie bracket $[\cdot,\cdot]$ defined above gives a $\mathbb{Z}$-graded Lie algebra structure on the positive total space $$\mathcal{P}:=\bigoplus_{m, p\in \mathbb{Z}_{> 0}} \mathcal{P}(m, p)$$
where the $\mathbb{Z}$-grading is defined as follows: for $n\in \mathbb{Z}$,
$$\mathcal{P}_n:=\bigoplus_{\substack{m, p\in\mathbb{Z}_{> 0}\\ m-p=n}}\mathcal{P}(m, p).$$
In particular, the following natural embedding of the total space is a homomorphism of graded Lie algebras
$$\mathcal{O}_{\mathcal{P}}\rightarrow \mathcal{P},$$
where the Lie bracket of $\mathcal{O}_{\mathcal{P}}$ is defined in \cite{KaMa}.
\end{prop}
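Note that the bracket is indeed compatible with this grading: for $f\in \mathcal{P}(m, p)$ and $g\in \mathcal{P}(n, q)$ we have $[f, g]\in \mathcal{P}(m+n-1, p+q-1)$ and
$$(m+n-1)-(p+q-1)=(m-p)+(n-q),$$
so that $[\mathcal{P}_a, \mathcal{P}_b]\subseteq \mathcal{P}_{a+b}$ for all $a, b\in \mathbb{Z}$.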
{\it Proof.}\hspace{2ex}
The proof is completely analogous to the one of \cite[Proposition 4]{MeVa}.
Let $f\in \mathcal{P}(m, p), g\in \mathcal{P}(n, q), h\in \mathcal{P}(l, r)$.
Then $(f\star g)\star h$ is spanned by graphs of the following forms:
\begin{equation*}
\vcenter{\xymatrix@R=8pt@C=8pt{*+[F-,]{\ h \ } \ar@{-}[ddr] &&\\
&&*+[F-,]{\ g \ } \ar@{-}[dl] \\
& *+[F-,]{\ f \ } & }} \quad , \quad
\vcenter{\xymatrix@R=8pt@C=8pt{ && *+[F-,]{\ h \ } \ar@{-}[dl] \ar@{-}[dl]\\
& *+[F-,]{\ g \ } \ar@{-}[dl] \\
*+[F-,]{\ f \ } & & }}
\quad \textrm{or} \quad
\vcenter{\xymatrix@R=8pt@C=8pt{*+[F-,]{\ h \ } \ar@{-}[dr] \ar@{-}[dr] \ar@{-}[dd] && \\
& *+[F-,]{\ g \ } \ar@{-}[dl] \\
*+[F-,]{\ f \ } & & }}\ .
\end{equation*}
Similarly, $f\star (g\star h)$ is spanned by graphs of the following forms:
$$\vcenter{\xymatrix@R=8pt@C=8pt{& *+[F-,]{\ h \ } & \\
&&*+[F-,]{\ g\ }
\ar@{-}[ul] \\
*+[F-,]{\ f \ } \ar@{-}[uur]}} \quad , \quad
\vcenter{\xymatrix@R=8pt@C=8pt{ && *+[F-,]{\ h \ } \ar@{-}[dl] \\
& *+[F-,]{\ g\ } \ar@{-}[dl] \\
*+[F-,]{\ f\ } & & }} \quad \textrm{or} \quad
\vcenter{\xymatrix@R=8pt@C=8pt{*+[F-,]{\ h\ } \ar@{-}[dr] \ar@{-}[dd] && \\
& *+[F-,]{\ g \ } \ar@{-}[dl] \\
*+[F-,]{\ f \ } & & }} \ . $$
Note that graphs of the same type appear with the same coefficient in $(f\star g)\star h$ and in $f\star (g\star h)$, so
the difference $(f\star g)\star h-f\star (g\star h)$ is
spanned by
$$\vcenter{\xymatrix@R=8pt@C=8pt{& *+[F-,]{\ h \ } & \\
&&*+[F-,]{\ g\ }
\ar@{-}[ul] \\
*+[F-,]{\ f \ } \ar@{-}[uur]}} \quad , \quad
\vcenter{\xymatrix@R=8pt@C=8pt{*+[F-,]{\ h \ } \ar@{-}[ddr] &&\\
&&*+[F-,]{\ g \ } \ar@{-}[dl] \\
& *+[F-,]{\ f \ } && }}\ .$$
Let us compare the coefficients of those graphs.
Note that the coefficient of $(f\star g)\star h-f\star (g\star h)$ in the Jacobi identity is
$(-1)^{(m-p)(l-r)}$ and the one of $(g\star f)\star h-g\star( f\star h)$ is
$-(-1)^{(m-p)(n-q+l-r)}.$
So the coefficient of the following graph from $(f\star g)\star h-f\star (g\star h)$
$$\vcenter{\xymatrix@R=8pt@C=8pt{& *+[F-,]{\ h \ } & \\
&&*+[F-,]{\ g\ }
\ar@{-}[ul] \\
*+[F-,]{\ f \ } \ar@{-}[uur]}}\ $$
is $(-1)^{(m-p)(l-r)}$ and
the coefficient of the following graph from $(g\star f)\star h-g\star( f\star h)$
$$\vcenter{\xymatrix@R=8pt@C=8pt{& *+[F-,]{\ h \ } & \\
&&*+[F-,]{\ f\ }
\ar@{-}[ul] \\
*+[F-,]{\ g \ } \ar@{-}[uur]}}\ $$
is $-(-1)^{(m-p)(n-q+l-r)}.$
On the other hand, we also note that
these two graphs have the same type up to coefficient $(-1)^{(m-p)(n-q)}$.
Note that
$$(-1)^{(m-p)(l-r)}-(-1)^{(m-p)(n-q)} (-1)^{(m-p)(n-q+l-r)}=0,$$
hence the sum of these two graphs is zero in the Jacobi identity. Similarly, we can check that the other graphs
will be cancelled by comparing the coefficients. Therefore we obtain the Jacobi identity.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}
The graphs above have a different meaning from the ones in \cite{MeVa} although they have the same shapes. In general, the $\mathbb{Z}$-graded
Lie algebra structure on the positive total space of $\mathcal{P}$ is also different from the one constructed in \cite[Proposition 4]{MeVa}, since the graphs presented in (\ref{graph1}) and
(\ref{graph2}) are not connected.
But these two $\mathbb{Z}$-graded Lie algebra structures induce exactly the same $\mathbb{Z}$-graded Lie algebra structure on the positive total space of $\mathcal{O}_{\mathcal{P}}$ constructed in
\cite{KaMa}.
\end{rem}
\subsection{An interpretation of Gerstenhaber bracket}
Let us go back to radical square zero algebras. Let $k$ be a field. Let $Q$ be a finite and connected quiver, $A$ be its radical square zero
algebra over $k$ and $$A=E\oplus r$$ be the Wedderburn-Malcev decomposition.
Recall that we have the following space for $m, p\in \mathbb{Z}_{\geq 0}$
$$\mathcal{P}_A(m, p+1):=\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))$$
where $\Omega^p(A)$ is the $p$-th kernel in the projective resolution $\mathcal{R}(A)$ (cf. Lemma \ref{lemma1cib}).
Let us denote $$\mathcal{P}_A(m, 0):=\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, E)$$ for $m\geq 0$.
Recall that from Lemma \ref{lemma-new}, we have the following isomorphism for $m, p\in\mathbb{Z}_{\geq 0}$,
$$\mathop{\mathrm{Hom}}\nolimits_{E-E}(r^{\otimes_E m}, \Omega^p(A))\cong k(Q_m//Q_p\cup Q_{p+1}).$$
Next we will give a $k$-linear PROP structure on the collection
$$\{\mathcal{P}_A(m, p)\}_{m, p\in \mathbb{Z}_{\geq 0}}.$$
First, let us define the horizontal composition as follows, for $m, p, n, q\in \mathbb{Z}_{\geq 0}$,
\begin{equation*}
\xymatrix{
\mathcal{P}_A(m, p+1)\otimes_k \mathcal{P}_A(n, q+1) \ar[r] \ar[d]^{\cong}& \mathcal{P}_A(m+n, p+q+2)\ar[d]^-{\cong}\\
k(Q_m//Q_p\cup Q_{p+1})\otimes_k k(Q_n//Q_q\cup Q_{q+1})\ar[r]^-{\mu} &k(Q_{m+n}//Q_{p+q+1}\cup Q_{p+q+2})
}
\end{equation*}
where $\mu$ is defined in the following way,
\begin{equation*}
\mu((\alpha, \beta)\otimes_k(\alpha', \beta')):=
\begin{cases}
0 & \mbox{if $(\alpha, \beta)\in k(Q_m//Q_p)$ and $(\alpha', \beta')\in k(Q_n//Q_q)$},\\
(\alpha\otimes_E\alpha', \beta\otimes_E \beta') &\mbox{otherwise}.\\
\end{cases}
\end{equation*}
The vertical composition is defined as follows, for $m, p, n\in \mathbb{Z}_{\geq 0}$,
\begin{equation*}
\xymatrix{
\mathcal{P}_A(m, p+1)\otimes_k \mathcal{P}_A(p+1, n+1) \ar[r] \ar[d]^{\cong}& \mathcal{P}_A(m, n+1)\ar[d]^-{\cong}\\
k(Q_m//Q_p\cup Q_{p+1}) \otimes_k k(Q_{p+1}//Q_{n}\cup Q_{n+1})\ar[r]^-{\nu} & k(Q_m//Q_n\cup Q_{n+1})
}
\end{equation*}
where $\nu$ is defined in the following way,
\begin{equation*}
\nu((\alpha, \beta)\otimes_k(\beta', \gamma')):=
\delta_{\beta,\beta'} (\alpha, \gamma').
\end{equation*}
From Proposition \ref{prop-PROP}, it follows that
the positive total space $$\bigoplus_{m, p\in\mathbb{Z}_{> 0}} \mathcal{P}_A(m, p)$$
is a $\mathbb{Z}$-graded Lie algebra.
\begin{prop}
Let $Q=(Q_0, Q_1, s, t)$ be a finite connected quiver and $A$ be its radical square zero algebra over a field $k$.
Then the Lie algebra structure on the positive total space
$$\bigoplus_{m, p\in\mathbb{Z}_{> 0}} \mathcal{P}_A(m, p)$$
coincides with the Gerstenhaber Lie algebra structure defined in (\ref{defn-bracket}) of Section \ref{section3}.
\end{prop}
{\it Proof.}\hspace{2ex} It is clear that these two Lie brackets coincide on the positive total space of
$\mathcal{P}_A$.
\hspace*{\fill}\mbox{$\rule{1ex}{1.4ex}$}
\begin{rem}
Section 4 provides a concrete example in the case of the two-loops quiver.
Similarly, for a quiver $Q$ with $r$ loops ($r\geq 2$),
$$\begin{tikzcd}
Q:= & \bullet \ar[%
,loop
,out=25
,in=67
,distance=4em
]{}{1}
\ar[
,loop
,out=70
,in=135
,distance=4em
]{}{r}
\ar[
, loop
,in=20
,out=315
,distance=4em]{}{2}
\ar[
,loop
,in=305
,out=235
,distance=4em]{}{\cdots}
\end{tikzcd}
$$
let $A$ be its radical
square zero algebra, then the PROP $\mathcal{P}_A$ constructed above is
isomorphic to the PROP $\mathcal{P}_r$ defined in Example \ref{expl}.
\end{rem}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,208 |
Čabar is a town in the Republic of Croatia, in Primorje-Gorski Kotar County. According to the 2011 census, the town had 3,770 inhabitants, while the settlement itself had 412 inhabitants.
Geography
The town of Čabar lies in western Croatia, at the far north of Primorje-Gorski Kotar County. The area of the town of Čabar in fact corresponds to the territory of the former Čabar municipality, which includes the five larger settlements of the former municipality (Čabar, Prezid, Tršće, Gerovo and Plešce) as well as some smaller places and hamlets.
Population
According to the 2001 census, the area of the town of Čabar had 4,387 inhabitants. The population has been steadily declining in recent years, owing to poor transport connections and difficult living conditions.
1991 census
In the 1991 census, the settlement of Čabar had 597 inhabitants, of the following ethnic composition:
History
The history of the town of Čabar reaches back to Roman times, when a Roman wall was built at the site of present-day Prezid as a defence against the Germanic tribes. In the Middle Ages, Čabar is most closely associated with the name of Petar Zrinski, who, next to the Castle, built a nail and screw factory in Čabar, that is, a forge, whose products were shipped onward through the port of Bakar. As for the industrial revolution, Vilim Vilhar is also significant for the area of the town of Čabar: he built one of the first steam-powered sawmills at Milanov vrh near Prezid.
Economy
The town has a developed timber industry, as well as tourism (especially winter sports).
Monuments and landmarks
Castle of Petar Zrinski
Tunnel from the Castle of Petar Zrinski to the Tropetarska stena rock
The first forge in Croatia
Church of St. Anthony of Padua
Church of St. Vitus
The old Roman wall (Limes)
Education
The town has two schools:
Elementary school "Petar Zrinski"
Secondary school "Vladimir Nazor"
Culture
Vilim Svečnjak Gallery
The Vesel House in Prezid
Monument to King Tomislav on Trg kralja Tomislava square in Čabar
References
Literature
External links
Čabar
Towns in Croatia
WikiProject Geography/Settlements in Croatia
Populated places in Croatia
Populated places in Primorje-Gorski Kotar County
Gorski Kotar
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,472 |
Q: Constrain order of parameters in R JAGS I am puzzled by a simple question in R JAGS. I have, for example, 10 parameters: d[1], d[2], ..., d[10]. It is intuitive from the data that they should be increasing. So I want to put a constraint on them.
Here is what I tried to do, but it gives the error message "Node inconsistent with parents":
model{
...
for (j in 1:10){
d.star[j]~dnorm(0,0.0001)
}
d=sort(d.star)
}
Then I tried this:
d[1]~dnorm(0,0.0001)
for (j in 2:10){
d[j]~dnorm(0,0.0001)I(d[j-1],)
}
This worked, but I don't know if this is the correct way to do it. Could you share your thoughts?
Thanks!
A: If you are ever uncertain about something like this, it is best to just simulate some data to determine if the model structure you suggest works (spoiler alert: it does).
Here is the model that I used:
cat('model{
d[1] ~ dnorm(0, 0.0001) # intercept
d[2] ~ dnorm(0, 0.0001)
for(j in 3:11){
d[j] ~ dnorm(0, 0.0001) I(d[j-1],)
}
for(i in 1:200){
y[i] ~ dnorm(mu[i], tau)
mu[i] <- inprod(d, x[i,])
}
tau ~ dgamma(0.01,0.01)
}',
file = "model_example.R")
And here are the data I simulated to use with this model.
library(runjags)
library(mcmcplots)
# intercept with sorted betas
set.seed(161)
betas <- c(1,sort(runif(10, -5,5)))
# make covariates, 1 for intercept
x <- cbind(1,matrix(rnorm(2000), nrow = 200, ncol = 10))
# deterministic part of model
y_det <- x %*% betas
# add noise
y <- rnorm(length(y_det), y_det, 1)
data_list <- list(y = as.numeric(y), x = x)
# fit the model
mout <- run.jags('model_example.R',monitor = c("d", "tau"), data = data_list)
Following this, we can plot out the estimates and overlay the true parameter values
caterplot(mout, "d", reorder = FALSE)
points(rev(c(1:11)) ~ betas, pch = 18, cex = 0.9)
The black points are the true parameter values, the blue points and lines are the estimates. Looks like this setup does fine so long as there are enough data to estimate all of those parameters.
A: It looks like there is a syntax error in the first implementation. Just try:
model{
...
for (j in 1:10){
d.star[j]~dnorm(0,0.0001)
}
d[1:10] <- sort(d.star) # notice d is indexed.
}
and compare the results with those of the second implementation. According to the documentation, these are both correct, but it is advised to use the function sort.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,030 |
CAMERA_MIN_DIST = 0.1
CAMERA_MAX_DIST = 6
MOVE_SPEED = 23 -- Movement speed as world units per second
MOVE_SPEED_X = 2.5 -- Movement speed for isometric maps
MOVE_SPEED_SCALE = 1 -- Scaling factor based on tiles' aspect ratio
LIFES = 3
zoom = 2 -- Speed is scaled according to zoom
demoFilename = ""
character2DNode = nil
function CreateCollisionShapesFromTMXObjects(tileMapNode, tileMapLayer, info)
-- Create rigid body to the root node
local body = tileMapNode:CreateComponent("RigidBody2D")
body.bodyType = BT_STATIC
-- Generate physics collision shapes from the tmx file's objects located in "Physics" layer
for i=0, tileMapLayer:GetNumObjects() -1 do
local tileMapObject = tileMapLayer:GetObject(i) -- Get physics objects (TileMapObject2D)
local objectType = tileMapObject.objectType
-- Create collision shape from tmx object
local shape
if objectType == OT_RECTANGLE then
shape = tileMapNode:CreateComponent("CollisionBox2D")
local size = tileMapObject.size
shape.size = size
if info.orientation == O_ORTHOGONAL then
shape.center = tileMapObject.position + size / 2
else
shape.center = tileMapObject.position + Vector2(info.tileWidth / 2, 0)
shape.angle = 45 -- If our tile map is isometric then shape is losange
end
elseif objectType == OT_ELLIPSE then
shape = tileMapNode:CreateComponent("CollisionCircle2D") -- Ellipse is built as a circle shape as there's no equivalent in Box2D
local size = tileMapObject.size
shape.radius = size.x / 2
if info.orientation == O_ORTHOGONAL then
shape.center = tileMapObject.position + size / 2
else
shape.center = tileMapObject.position + Vector2(info.tileWidth / 2, 0)
end
elseif objectType == OT_POLYGON then
shape = tileMapNode:CreateComponent("CollisionPolygon2D")
elseif objectType == OT_POLYLINE then
shape = tileMapNode:CreateComponent("CollisionChain2D")
else break
end
if objectType == OT_POLYGON or objectType == OT_POLYLINE then -- Build shape from vertices
local numVertices = tileMapObject.numPoints
shape.vertexCount = numVertices
for i=0, numVertices - 1 do
shape:SetVertex(i, tileMapObject:GetPoint(i))
end
end
shape.friction = 0.8
if tileMapObject:HasProperty("Friction") then
shape.friction = ToFloat(tileMapObject:GetProperty("Friction"))
end
end
end
function CreateCharacter(info, createObject, friction, position, scale)
character2DNode = scene_:CreateChild("Imp")
character2DNode.position = position
character2DNode:SetScale(scale)
local animatedSprite = character2DNode:CreateComponent("AnimatedSprite2D")
local animationSet = cache:GetResource("AnimationSet2D", "Urho2D/imp/imp.scml")
animatedSprite.animationSet = animationSet
animatedSprite.animation = "idle"
animatedSprite:SetLayer(3) -- Put character over tile map (which is on layer 0) and over Orcs (which are on layer 1)
--
local body = character2DNode:CreateComponent("RigidBody2D")
body.bodyType = BT_DYNAMIC
body.allowSleep = false
local shape = character2DNode:CreateComponent("CollisionCircle2D")
shape.radius = 1.1 -- Set shape size
shape.friction = friction -- Set friction
shape.restitution = 0.1 -- Slight bounce
if createObject then
character2DNode:CreateScriptObject("Character2D") -- Create a ScriptObject to handle character behavior
end
-- Scale character's speed on the Y axis according to tiles' aspect ratio (for isometric only)
MOVE_SPEED_SCALE = info.tileHeight / info.tileWidth
end
function CreateTrigger()
local node = scene_:CreateChild("Trigger") -- Clones will be renamed according to object type
local body = node:CreateComponent("RigidBody2D")
body.bodyType = BT_STATIC
local shape = node:CreateComponent("CollisionBox2D") -- Create box shape
shape.trigger = true
return node
end
function CreateEnemy()
local node = scene_:CreateChild("Enemy")
local staticSprite = node:CreateComponent("StaticSprite2D")
staticSprite.sprite = cache:GetResource("Sprite2D", "Urho2D/Aster.png")
local body = node:CreateComponent("RigidBody2D")
body.bodyType = BT_STATIC
local shape = node:CreateComponent("CollisionCircle2D") -- Create circle shape
shape.radius = 0.25 -- Set radius
return node
end
function CreateOrc()
local node = scene_:CreateChild("Orc")
node.scale = character2DNode.scale -- Use same scale as player character
local animatedSprite = node:CreateComponent("AnimatedSprite2D")
-- Get scml file and Play "run" anim
local animationSet = cache:GetResource("AnimationSet2D", "Urho2D/Orc/Orc.scml")
animatedSprite.animationSet = animationSet
animatedSprite.animation = "run"
animatedSprite:SetLayer(2) -- Make orc always visible
local body = node:CreateComponent("RigidBody2D")
local shape = node:CreateComponent("CollisionCircle2D") -- Create circle shape
shape.radius = 1.3 -- Set shape size
shape.trigger = true
return node
end
function CreateCoin()
local node = scene_:CreateChild("Coin")
node:SetScale(0.5)
local animatedSprite = node:CreateComponent("AnimatedSprite2D")
-- Get scml file and Play "idle" anim
local animationSet = cache:GetResource("AnimationSet2D", "Urho2D/GoldIcon.scml")
animatedSprite.animationSet = animationSet
animatedSprite.animation = "idle"
animatedSprite:SetLayer(2)
local body = node:CreateComponent("RigidBody2D")
body.bodyType = BT_STATIC
local shape = node:CreateComponent("CollisionCircle2D") -- Create circle shape
shape.radius = 0.32 -- Set radius
shape.trigger = true
return node
end
function CreateMovingPlatform()
local node = scene_:CreateChild("MovingPlatform")
node.scale = Vector3(3, 1, 0)
local staticSprite = node:CreateComponent("StaticSprite2D")
staticSprite.sprite = cache:GetResource("Sprite2D", "Urho2D/Box.png")
local body = node:CreateComponent("RigidBody2D")
body.bodyType = BT_STATIC
local shape = node:CreateComponent("CollisionBox2D") -- Create box shape
shape.size = Vector2(0.32, 0.32) -- Set box size
shape.friction = 0.8 -- Set friction
return node
end
function PopulateMovingEntities(movingEntitiesLayer)
-- Create enemy, Orc and moving platform nodes (will be cloned at each placeholder)
local enemyNode = CreateEnemy()
local orcNode = CreateOrc()
local platformNode = CreateMovingPlatform()
-- Instantiate enemies and moving platforms at each placeholder (placeholders are Poly Line objects defining a path from points)
for i=0, movingEntitiesLayer:GetNumObjects() -1 do
-- Get placeholder object (TileMapObject2D)
local movingObject = movingEntitiesLayer:GetObject(i)
if movingObject.objectType == OT_POLYLINE then
-- Clone the moving entity node and position it at placeholder point
local movingClone = nil
local offset = Vector2.ZERO
if movingObject.type == "Enemy" then
movingClone = enemyNode:Clone()
offset = Vector2(0, -0.32)
elseif movingObject.type == "Orc" then
movingClone = orcNode:Clone()
elseif movingObject.type == "MovingPlatform" then
movingClone = platformNode:Clone()
else
break
end
movingClone.position2D = movingObject:GetPoint(0) + offset
-- Create script object that handles entity translation along its path (load from file)
local mover = movingClone:CreateScriptObject("LuaScripts/Utilities/2D/Mover.lua", "Mover")
-- Set path from points
mover.path = CreatePathFromPoints(movingObject, offset)
-- Override default speed
if movingObject:HasProperty("Speed") then
mover.speed = movingObject:GetProperty("Speed")
end
end
end
-- Remove nodes used for cloning purpose
enemyNode:Remove()
orcNode:Remove()
platformNode:Remove()
end
function PopulateCoins(coinsLayer)
-- Create coin (will be cloned at each placeholder)
local coinNode = CreateCoin()
-- Instantiate coins to pick at each placeholder
for i=0, coinsLayer:GetNumObjects() -1 do
local coinObject = coinsLayer:GetObject(i) -- Get placeholder object (TileMapObject2D)
local coinClone = coinNode:Clone()
coinClone.position2D = coinObject.position + coinObject.size / 2 + Vector2(0, 0.16)
end
-- Init coins counters
local character = character2DNode:GetScriptObject()
character.remainingCoins = coinsLayer.numObjects
character.maxCoins = coinsLayer.numObjects
-- Remove node used for cloning purpose
coinNode:Remove()
end
function PopulateTriggers(triggersLayer)
-- Create trigger node (will be cloned at each placeholder)
local triggerNode = CreateTrigger()
-- Instantiate triggers at each placeholder (Rectangle objects)
for i=0, triggersLayer:GetNumObjects() -1 do
local triggerObject = triggersLayer:GetObject(i) -- Get placeholder object (TileMapObject2D)
if triggerObject.objectType == OT_RECTANGLE then
local triggerClone = triggerNode:Clone()
triggerClone.name = triggerObject.type
triggerClone:GetComponent("CollisionBox2D").size = triggerObject.size
triggerClone.position2D = triggerObject.position + triggerObject.size / 2
end
end
end
function Zoom(camera)
if input.mouseMoveWheel then
zoom = Clamp(camera.zoom + input.mouseMoveWheel * 0.1, CAMERA_MIN_DIST, CAMERA_MAX_DIST)
camera.zoom = zoom
end
if input:GetKeyDown(KEY_PAGEUP) then
zoom = Clamp(camera.zoom * 1.01, CAMERA_MIN_DIST, CAMERA_MAX_DIST)
camera.zoom = zoom
end
if input:GetKeyDown(KEY_PAGEDOWN) then
zoom = Clamp(camera.zoom * 0.99, CAMERA_MIN_DIST, CAMERA_MAX_DIST)
camera.zoom = zoom
end
end
function CreatePathFromPoints(object, offset)
local path = {}
for i=0, object.numPoints -1 do
table.insert(path, object:GetPoint(i) + offset)
end
return path
end
function CreateUIContent(demoTitle)
-- Set the default UI style and font
ui.root.defaultStyle = cache:GetResource("XMLFile", "UI/DefaultStyle.xml")
local font = cache:GetResource("Font", "Fonts/Anonymous Pro.ttf")
-- We create in-game UIs (coins and lifes) first so that they are hidden by the fullscreen UI (we could also temporary hide them using SetVisible)
-- Create the UI for displaying the remaining coins
local coinsUI = ui.root:CreateChild("BorderImage", "Coins")
coinsUI.texture = cache:GetResource("Texture2D", "Urho2D/GoldIcon.png")
coinsUI:SetSize(50, 50)
coinsUI.imageRect = IntRect(0, 64, 60, 128)
coinsUI:SetAlignment(HA_LEFT, VA_TOP)
coinsUI:SetPosition(5, 5);
local coinsText = coinsUI:CreateChild("Text", "CoinsText")
coinsText:SetAlignment(HA_CENTER, VA_CENTER)
coinsText:SetFont(font, 24)
coinsText.textEffect = TE_SHADOW
coinsText.text = character2DNode:GetScriptObject().remainingCoins
-- Create the UI for displaying the remaining lifes
local lifeUI = ui.root:CreateChild("BorderImage", "Life")
lifeUI.texture = cache:GetResource("Texture2D", "Urho2D/imp/imp_all.png")
lifeUI:SetSize(70, 80)
lifeUI:SetAlignment(HA_RIGHT, VA_TOP)
lifeUI:SetPosition(-5, 5);
local lifeText = lifeUI:CreateChild("Text", "LifeText")
lifeText:SetAlignment(HA_CENTER, VA_CENTER)
lifeText:SetFont(font, 24)
lifeText.textEffect = TE_SHADOW
lifeText.text = LIFES
-- Create the fullscreen UI for start/end
local fullUI = ui.root:CreateChild("Window", "FullUI")
fullUI:SetStyleAuto()
fullUI:SetSize(ui.root.width, ui.root.height)
fullUI.enabled = false -- Do not react to input, only the 'Exit' and 'Play' buttons will
-- Create the title
local title = fullUI:CreateChild("BorderImage", "Title")
title:SetMinSize(fullUI.width, 50)
title.texture = cache:GetResource("Texture2D", "Textures/HeightMap.png")
title:SetFullImageRect()
title:SetAlignment(HA_CENTER, VA_TOP)
local titleText = title:CreateChild("Text", "TitleText")
titleText:SetAlignment(HA_CENTER, VA_CENTER)
titleText:SetFont(font, 24)
titleText.text = demoTitle
-- Create the image
local spriteUI = fullUI:CreateChild("BorderImage", "Sprite")
spriteUI.texture = cache:GetResource("Texture2D", "Urho2D/imp/imp_all.png")
spriteUI:SetSize(238, 271)
spriteUI:SetAlignment(HA_CENTER, VA_CENTER)
spriteUI:SetPosition(0, - ui.root.height / 4)
-- Create the 'EXIT' button
local exitButton = ui.root:CreateChild("Button", "ExitButton")
exitButton:SetStyleAuto()
exitButton.focusMode = FM_RESETFOCUS
exitButton:SetSize(100, 50)
exitButton:SetAlignment(HA_CENTER, VA_CENTER)
exitButton:SetPosition(-100, 0)
local exitText = exitButton:CreateChild("Text", "ExitText")
exitText:SetAlignment(HA_CENTER, VA_CENTER)
exitText:SetFont(font, 24)
exitText.text = "EXIT"
SubscribeToEvent(exitButton, "Released", "HandleExitButton")
-- Create the 'PLAY' button
local playButton = ui.root:CreateChild("Button", "PlayButton")
playButton:SetStyleAuto()
playButton.focusMode = FM_RESETFOCUS
playButton:SetSize(100, 50)
playButton:SetAlignment(HA_CENTER, VA_CENTER)
playButton:SetPosition(100, 0)
local playText = playButton:CreateChild("Text", "PlayText")
playText:SetAlignment(HA_CENTER, VA_CENTER)
playText:SetFont(font, 24)
playText.text = "PLAY"
SubscribeToEvent(playButton, "Released", "HandlePlayButton")
-- Create the instructions
local instructionText = ui.root:CreateChild("Text", "Instructions")
instructionText:SetFont(font, 15)
instructionText.textAlignment = HA_CENTER -- Center rows in relation to each other
instructionText.text = "Use WASD keys or Arrows to move\nPageUp/PageDown/MouseWheel to zoom\nF5/F7 to save/reload scene\n'Z' to toggle debug geometry\nSpace to fight"
instructionText:SetAlignment(HA_CENTER, VA_CENTER)
instructionText:SetPosition(0, ui.root.height / 4)
-- Show mouse cursor
input.mouseVisible = true
end
function HandleExitButton()
engine:Exit()
end
function HandlePlayButton()
-- Remove fullscreen UI and unfreeze the scene
if ui.root:GetChild("FullUI", true) then
ui.root:GetChild("FullUI", true):Remove()
scene_.updateEnabled = true
else
-- Reload the scene
ReloadScene(true)
end
-- Hide Instructions and Play/Exit buttons
ui.root:GetChild("Instructions", true).text = ""
ui.root:GetChild("ExitButton", true).visible = false
ui.root:GetChild("PlayButton", true).visible = false
-- Hide mouse cursor
input.mouseVisible = false
end
function SaveScene(initial)
local filename = demoFilename
if not initial then
filename = demoFilename .. "InGame"
end
scene_:SaveXML(fileSystem:GetProgramDir() .. "Data/Scenes/" .. filename .. ".xml")
end
function ReloadScene(reInit)
local filename = demoFilename
if not reInit then
filename = demoFilename .. "InGame"
end
scene_:LoadXML(fileSystem:GetProgramDir().."Data/Scenes/" .. filename .. ".xml")
-- After loading we have to reacquire the character scene node, as it has been recreated
-- Simply find by name as there's only one of them
character2DNode = scene_:GetChild("Imp", true);
if character2DNode == nil then
return
end
-- Set what value to use depending whether reload is requested from 'PLAY' button (reInit=true) or 'F7' key (reInit=false)
local character = character2DNode:GetScriptObject()
local lifes = character.remainingLifes
    local coins = character.remainingCoins
if reInit then
lifes = LIFES
coins = character.maxCoins
end
-- Update lifes UI and value
local lifeText = ui.root:GetChild("LifeText", true)
lifeText.text = lifes
character.remainingLifes = lifes
-- Update coins UI and value
local coinsText = ui.root:GetChild("CoinsText", true)
coinsText.text = coins
character.remainingCoins = coins
end
function SpawnEffect(node)
local particleNode = node:CreateChild("Emitter")
particleNode:SetScale(0.5 / node.scale.x)
local particleEmitter = particleNode:CreateComponent("ParticleEmitter2D")
particleEmitter.effect = cache:GetResource("ParticleEffect2D", "Urho2D/sun.pex")
end
function PlaySound(soundName)
local soundNode = scene_:CreateChild("Sound")
local source = soundNode:CreateComponent("SoundSource")
source:Play(cache:GetResource("Sound", "Sounds/" .. soundName))
end
function CreateBackgroundSprite(info, scale, texture, animate)
local node = scene_:CreateChild("Background")
node.position = Vector3(info.mapWidth, info.mapHeight, 0) / 2
node:SetScale(scale)
local sprite = node:CreateComponent("StaticSprite2D")
sprite.sprite = cache:GetResource("Sprite2D", texture)
SetRandomSeed(time:GetSystemTime()) -- Randomize from system clock
sprite.color = Color(Random(0, 1), Random(0, 1), Random(0, 1), 1)
-- Create rotation animation
if animate then
local animation = ValueAnimation:new()
animation:SetKeyFrame(0, Variant(Quaternion(0, 0, 0)))
animation:SetKeyFrame(1, Variant(Quaternion(0, 0, 180)))
animation:SetKeyFrame(2, Variant(Quaternion(0, 0, 0)))
node:SetAttributeAnimation("Rotation", animation, WM_LOOP, 0.05)
end
end
-- Create XML patch instructions for screen joystick layout specific to this sample app
function GetScreenJoystickPatchString()
return
"<patch>" ..
" <remove sel=\"/element/element[./attribute[@name='Name' and @value='Button0']]/attribute[@name='Is Visible']\" />" ..
" <replace sel=\"/element/element[./attribute[@name='Name' and @value='Button0']]/element[./attribute[@name='Name' and @value='Label']]/attribute[@name='Text']/@value\">Fight</replace>" ..
" <add sel=\"/element/element[./attribute[@name='Name' and @value='Button0']]\">" ..
" <element type=\"Text\">" ..
" <attribute name=\"Name\" value=\"KeyBinding\" />" ..
" <attribute name=\"Text\" value=\"SPACE\" />" ..
" </element>" ..
" </add>" ..
" <remove sel=\"/element/element[./attribute[@name='Name' and @value='Button1']]/attribute[@name='Is Visible']\" />" ..
" <replace sel=\"/element/element[./attribute[@name='Name' and @value='Button1']]/element[./attribute[@name='Name' and @value='Label']]/attribute[@name='Text']/@value\">Jump</replace>" ..
" <add sel=\"/element/element[./attribute[@name='Name' and @value='Button1']]\">" ..
" <element type=\"Text\">" ..
" <attribute name=\"Name\" value=\"KeyBinding\" />" ..
" <attribute name=\"Text\" value=\"UP\" />" ..
" </element>" ..
" </add>" ..
"</patch>"
end | {
"redpajama_set_name": "RedPajamaGithub"
} | 3,059 |
{"url":"https:\/\/gamedev.stackexchange.com\/questions\/154555\/objects-desync-position-when-speed-is-not-multiple-of-10","text":"# Objects desync position when speed is not multiple of 10\n\nWhile finishing up game loop and rendering integration, i made some blocks move back and forth in the x axis to test things out. Their initial positions are as follows:\n\nThe first block starts at 0,0(Top-left) and subsequent ones add 40 to both x and y.\n\nMove code\n\ns16 speed = 10;\n\nif(block->sprite->position.x >= viewportWidth - block->sprite->width)\n{\nblock->sprite->position.x = viewportWidth - block->sprite->width;\nblock->dir = -1;\n}\nelse if(block->sprite->position.x <= 0)\n{\nblock->sprite->position.x = 0;\nblock->dir = 1;\n}\n\nblock->sprite->position.x += speed * block->dir;\n\n\nHowever, if i change the speed to a number not multiple of 10(or using deltaTime to influence the speed), the blocks will de-sync when changin directions, as shown bellow:\n\nResult of using roundf(1000 * deltaTime) as speed\n\nResult of using 13 speed and removing the clamping of position\n\nI thought it was related to me using ints instead of floats for positions, tried to change but no luck. Looking at the game loop update, no block is being left out of any update phase either.\n\nWhat bit of math an i missing? What causes this?\n\nThanks.\n\nYour value of movespeed=10 is a special case where each block collides with the wall at the same distance into the wall.\n\nIn order to keep your spacing the same, the boxes must each travel a distance of movespeed every frame.\n\nThink of a 1D problem with a single point on a line that goes from 0 to 100. 
Using your algorithm lets start at P1 = 97 and go through the steps.\n\nP1 = 97 + 10 = 107\n\nNext frame\n\nP1 = 100 - 10 = 90\n\nSince the point was brought back to 100, then you subtract 10, the point traveled 17 in one frame, which makes it desync from other boxes that align more accurately with the wall.\n\nIn order to fix the problem, you need to make sure the point always travels the same distance i.e.\n\nIf P1 = 97 and you add 10, then P1 = 100 + (100-(97+10)) = 93\n\nMeaning that P1 traveled a distance of 3 towards the wall and then 7 away from the wall for a total of 10.\n\nA way to implements this would be to change the previous lines to the following...\n\nblock->sprite->position.x = 2*(viewportWidth - block->sprite->width) - (block->sprite->position.x+movespeed);\n\n\nand\n\nblock->sprite->position.x = -(block->sprite->position.x-movespeed);\n\n\nand include the normal movement code in an else bracket\n\nelse\n{\nblock->sprite->position.x += speed * block->dir;\n}","date":"2019-08-24 04:28:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.45398932695388794, \"perplexity\": 1302.8698727198578}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, 
\"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027319724.97\/warc\/CC-MAIN-20190824041053-20190824063053-00228.warc.gz\"}"} | null | null |
Born in Ipswich in 1990, youngest of the three children and only daughter of Sinclair W. J. Pooley and his wife Lyn K. née Jarrold, who married in the Samford district of Suffolk in 1970. Rebekah was educated at the Northgate School, Ipswich where in 2008 she won first prize in the landscape category for her stunning image of Table Mountain in South Africa. She studied fine art at the University of Suffolk and is a painter in oil and watercolour and a member of Artists Access to Art Colleges network. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,108 |
Miami Herald reimposes paywall on most coronavirus coverage
By Justine Coleman - 04/06/20 07:01 PM EDT
The Miami Herald reimposed its paywall on most of its coronavirus coverage Monday after previously providing the content for free.
The newspaper announced that it will reinstitute its paywall after local business partners "can no longer afford" to spend on advertisements, due to the economic impacts of the pandemic.
"We provided all of this coronavirus coverage for free," Aminda Marqués González, the executive editor and publisher of the Herald, said in a statement. "Now, we simply can't afford to do that anymore."
The Herald will still allow free access to stories that "address critical health and safety information," in addition to its coronavirus blog, she said. The email newsletter covering the coronavirus and its effects on South Florida will also remain free.
The newspaper encouraged readers to purchase subscriptions that amount to about the same as a monthly subscription to Netflix. Digital readership has increased by more than 100 percent during the past month.
"Our readers are relying on us for credible, fact-based news and information," González said. "Now we must rely on reader support to continue our unrelenting commitment to public service journalism."
The Herald is also requesting donations to support two new staff members that were hired because of a matching grant from Report for America.
Local news outlets have struggled in recent decades, but the coronavirus has taken a new toll on the industry, with journalists at a number of media outlets being laid off or furloughed during the pandemic.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,384 |
package info.smartkit.eip.cassandra;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;
import com.datastax.driver.core.utils.UUIDs;
import com.google.common.collect.ImmutableSet;
import info.smartkit.eip.cassandra.domain.Event;
import org.hamcrest.core.IsEqual;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.core.CassandraOperations;
import java.util.List;
import static org.hamcrest.CoreMatchers.hasItem;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
public class CassandraTemplateIntegrationTest extends BaseIntegrationTest {

    public static final String TIME_BUCKET = "2014-01-01";

    @Autowired
    private CassandraOperations cassandraTemplate;

    @Test
    public void supportsPojoToCqlMappings() {
        // Insert a mapped POJO; Spring Data Cassandra writes it to the "event" table.
        Event event = new Event(UUIDs.timeBased(), "type1", TIME_BUCKET, ImmutableSet.of("tag1", "tag3"));
        cassandraTemplate.insert(event);

        // Build a CQL SELECT with QueryBuilder and map a single row back to an Event.
        Select select = QueryBuilder.select().from("event").where(QueryBuilder.eq("type", "type1")).and(QueryBuilder.eq("bucket", TIME_BUCKET)).limit(10);
        Event retrievedEvent = cassandraTemplate.selectOne(select, Event.class);
        assertThat(retrievedEvent, IsEqual.equalTo(event));

        // The same query can also be mapped to a list of matching entities.
        List<Event> retrievedEvents = cassandraTemplate.select(select, Event.class);
        assertThat(retrievedEvents.size(), is(1));
        assertThat(retrievedEvents, hasItem(event));
    }
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,944 |
Features, Issue 5, September 2017
CD Round-up
by Andy Hamilton
Catalogue No: 910 239-2 CD
Composer James Weeks comments that 'I don't want to write what I already know … It's a question of unlearning, of being ready always to let go of what you think you know. Technique has to be re-made, or at least re-purposed, at every moment, unless you want to re-write the same piece over and over again'. A consequence is that he avoids what art historians call a 'signature style' – an identifying approach that allows listeners to say 'That's Weeks!' after a few bars. Rejection of a signature style may be what qualifies the Romantic model of genius. But at a deep level there's a rich artistic personality in Weeks's work, one that takes many distinct forms.
Weeks's PhD was supervised by Michael Finnissy, and in the last decade, he has focused on solo and small ensemble works, exploring elemental materials and processes, either left bare or built up into dense polyphonic structures. With soprano Juliet Fraser, he founded new music vocal ensemble EXAUDI in 2002. Mala Punica/Walled Garden (Winter & Winter) for eight voices and ensemble, and commissioned by EXAUDI, consists of eight settings based on the Song of Songs. The title comes from the Latin text's references to pomegranates ('punica granatum'). The vocal sections were written in 2008–9. In 2015, Weeks framed and divided them with instrumental movements 'Walled Garden I', 'II' and 'III' for string and flute trios, played by the Hortus Ensemble, named after the central movement, 'Hortus Conclusus' ('Enclosed Garden'). The results are utterly beguiling – subtle, canonic pieces, haunting in their tremulous delicacy.
Catalogue No: MSV 28559
divineartrecords.com
Signs of Occupation (Métier) is, I guess, at the looser, more experimental end of the composer's output. It documents Weeks's chamber music from 2010–14, focusing on looping materials, and the clash of musical sounds with field recordings of the city. The cacophony of Looping Busker Music (2013) makes for a total contrast with the delicate, intricate vocal Mala Punica. It's a musical equivalent of 'My seven-year-old could have done that' – a racket that apparently might have been created by a group of primary school kids, or even the Thai Elephant Orchestra, but here performed by Vicky Wright (clarinet), Aisha Orazbayeva (violin), Tom Pauwels (guitar) and Mark Knoop (accordion). Equally contrasting are four rather subdued, minimal-sounding tracks for piano trio, three of them comprising a set; reduced fragments, looping a few or single notes, set against field recordings of sounds heard by the composer while composing. The performers are Marcus Barcham Stevens (violin), Oliver Coates (cello) and Mark Knoop (piano). Subtle clashes in tuning help generate an ethos of eerie strangeness that demands attentive listening, gradually pulling the listener in.
The fourth piano trio, common ground (2014) is performed by Daniel Pioro (violin), Maxine Moore (viola) and Oliver Coates (cello). The title-track is named after a line from the poem 'Approaching Cleavel Point' by James Wilkes, which the poet reads, partly antiphonally with Andrew Sparling's clarinet. Again, the compositional approach presents brief figures and reiterated notes that obliquely parallel the poem's language. The solo Digger (2010), for guitar and spoken voice (Alastair Putt), forms a tribute to the heroic band of Diggers led by Gerard Winstanley, whose text this is. They occupied St George's Hill, Weybridge in 1649, asserting the rights of ordinary people to the 'common wealth' – democratic ideas which, after their brief appearance in the Civil War period, were submerged again for three centuries. The political theme of the album is affirmed by Christopher Fox's sleevenote: 'defiance … in the face of the insidious, yet relentless, occupation of our lives by big business'.
Catalogue No: NMC D234
nmcrec.co.uk
The title-piece of Howard Skempton's The Rime of the Ancient Mariner (NMC), for baritone and small ensemble, is the composer's longest work to date. It features narrator-vocalist Roderick Williams and the Birmingham Contemporary Music Group, conducted by Martyn Brabbins. The ensemble has seven instruments, often only one or two accompanying the vocalist at a time. The result is an immensely enjoyable way of experiencing this hypnotic work of Romantic Gothic; with his economical instrumentation, the composer offers a pellucid yet characterful setting – with a few cuts, towards the end. Williams's delivery is clear and direct, at the singing end of Sprechstimme. Instrumentally, the killing of the albatross is simply stated by piano, followed immediately by strings and piano, persuasively characterising the rising sun at the start of Part II. Also featured is the purely instrumental Only the Sound Remains, inspired by Edward Thomas's poem 'The Mill Water' – an eloquent and, for Skempton, richly-textured composition.
Unlike Weeks, I'd say, Skempton does have a signature style, one which he has developed. He's the leading living representative of the Cornelius Cardew experimental school – a student of Cardew, though the politics behind his art is muted in comparison. Apparently simple materials are developed – or accumulated – to create a powerful impact. His recent, longer works for larger forces synthesise the stylistic areas of his earlier output, small in scale and almost entirely solo and duo.
diatribe.ie
Darragh Morgan's For Violin and Electronics (Diatribe) features six pieces by living composers, all composed for the violinist, who just handles the live violin part, sometimes using 'extended' techniques. The compositions form a kind of dialectic around the polarities of fixed and interactive electronics. With fixed electronics, Jonty Harrison's Some of Its Parts and Ricardo Climent's Koorean Air require the violinist to sync perfectly with the tape part. In Harrison's excitingly brutal-sounding composition, composer and soloist seamlessly integrate the violin into an industrial soundscape – a brilliant resolution of electro-acoustic composition's inherent conflicts.
Scott Wilson's luminous, incandescent Flame – sorry! but the description is accurate – lies between fixed and interactive approaches; electronics features both live/real-time processing, and pre-recorded material reacting to the violin. With Paul Wilson's rather etiolated Trapped in Ice and Simon Emmerson's richly expressive Stringscape, with its keening violin part, Morgan explains, 'the electronics part in some ways follows the live violin line and reacts to it with a range of effects like granular processing'. Jonathan Nangle's haunting Where Distant City Lights Flicker on Half-Frozen Ponds seems to evoke a motionless winter landscape, employing what Morgan calls 'resonators … reacting to the natural harmonics of violin sonorities, often manipulated by harmonics and open strings'. It's an affecting conclusion to an excellent album.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,785 |
\section{Introduction}
\IEEEPARstart{R}adar automatic target recognition (RATR) is to identify the
unknown target from its radar echoes, which plays an important role in many applications,
such as surveillance, homeland security, and military
tasks~\cite{zyweck1996radar,bhanu1997guest,chiang2000model-based,sun2007adaptive,chen2009large,zhang2012multi-view,srinivas2014sar,ding2016convolutional,zhang2001a,du2005radar,YINGYANG2011,molchanov2014classification,Du2006A,Du2008Radar,feng2017radar,du2011bayesian,Pan2012Multi,Liao}.
Generally, the research on RATR in high-resolution wideband radar can be divided into two categories: RATR based on the high-resolution range profile (HRRP) and RATR based on synthetic aperture radar (SAR) or inverse SAR (ISAR).
Compared with SAR and ISAR, HRRP has the distinct advantage that it can be processed directly without first forming an image~\cite{du2011bayesian}.
Specifically, HRRP is composed of the amplitude of the coherent summations of the complex returns from target scatterers in each range cell, which represents the projection of the complex echoes from the target scattering center onto the radar line-of-sight (LOS).
Since HRRP contains abundant discriminative information, such as the target size and scatterer distribution, HRRP based RATR has received significant attention~\cite{zhang2001a,du2005radar,YINGYANG2011,molchanov2014classification,Du2006A,Du2008Radar,feng2017radar,du2011bayesian,Pan2012Multi,Liao}.
As the key step of HRRP based RATR, feature extraction has been widely studied and can be approached from different perspectives according to the specific requirements.
For example, based on the integration of the bispectra of range profiles, Liao~et~al.~\cite{Liao} investigate bispectral features from real HRRP data.
Zhang~et~al.~\cite{zhang2001a} and Du~et~al.~\cite{du2005radar} further develop HRRP target recognition methods using high-order spectra features for their shift-invariance properties.
Molchanov~et~al.~\cite{molchanov2014classification} study the possibility of classifying aerial targets using the micro-Doppler signatures, where the novel features are computed in the form of cepstral coefficients and bicoherence estimates. Despite their usefulness for target recognition, those engineered features are hand-crafted and heavily rely on personal experience, limiting their use in practice~\cite{feng2017radar}.
To learn data-driven features, Du~et~al.~\cite{Du2007Radar} introduce principal component analysis (PCA) to extract the complex HRRPs' feature subspace, by minimizing the reconstruction error for the HRRP data. Du~et~al.~\cite{Du2012Radar} further propose a factor analysis (FA) model based on multitask learning to describe the Fast Fourier Transform (FFT)-magnitude feature of complex HRRP.
Some researchers~\cite{Feng2012Radar} adopt the K-Singular Value Decomposition (K-SVD) dictionary learning method to extract the desired sparse over-complete features of HRRP data. Those methods have proven their effectiveness in practice, but they are all shallow models that are only good at extracting linear features~\cite{du2019factorized}.
Inspired by the ability of deep learning methods in extracting multilayer non-linear features~\cite{Lecun2015Deep}, Feng~et~al.~\cite{feng2017radar} adopt stacked corrective auto-encoders (SCAE) to extract robust hierarchical features for HRRP, employing the average profile of each HRRP frame as the correction term.
Pan~et~al.~\cite{pan2017radar} propose a discriminant deep belief network (discriminant DBN) to recognize non-cooperative targets with imbalanced HRRP training data.
Nevertheless, all these models only depict the global structure of the target in a single HRRP sample, ignoring the sequential relationships across range cells within a single HRRP sample.
Several approaches have been proposed to exploit the temporal dependence in HRRP.
Du~et~al.~\cite{du2011bayesian} propose a Bayesian dynamic model for the HRRP sequence, where the spatial structure across range cells in HRRP is depicted by a hidden Markov model (HMM) and the temporal dependence between HRRP samples is depicted by the time evolution of the transition probabilities.
Pan~et~al.~\cite{Pan2011Multi,Pan2012Multi} characterize the spectrogram feature from a single HRRP sample via an HMM, which is a two-dimensional time-frequency representation providing the variation of the target in both the frequency and time domains.
Besides, Wang~et~al.~\cite{WangLDS} characterize the frequency spectrum amplitude of HRRP based on linear dynamical systems (LDS) to relax the requirement of a large training set.
Despite the tremendous success of HMM and LDS in many areas, efficiently capturing complex dependencies between range cells in HRRP remains a challenging problem due to their limited expressiveness. With the development of temporal one-dimensional convolution neural network (CNN) in raw audio generation tasks \cite{denoord2016wavenet}, Wan~et~al.~\cite{wan2019convolutional} use one-dimensional CNN to handle HRRP in the time domain and construct a two-dimensional CNN model for the corresponding spectrogram representation. Besides, recurrent neural networks (RNNs)~\cite{elman1990finding,graves2013speech,XuBin} have been developed to capture complex temporal behavior in high-dimensional sequences. While in principle RNN is a powerful model, it does not consider the kind of variability observed in highly structured data~\cite{PascanuICML2013,chung2015a} and ignores the weight uncertainty when updating its parameters with stochastic optimization such as stochastic gradient descent (SGD)~\cite{gan2017scalable}.
Moreover, learning both hierarchical and temporal representations has been a long-standing challenge for RNNs, in spite of the fact that hierarchical structures naturally exist in many sequential data and the learned latent hierarchical structures can provide useful information to other downstream tasks such as sequence generation and classification~\cite{chung2017hierarchical,koutnik2014a}.
In addition, the neural network representations are generally inaccessible and uninterpretable to humans~\cite{Zaheer2017a}.
More recently, several deep probabilistic dynamical models have been proposed to capture the relationships between latent variables across
multiple stochastic layers on video and music sequences, text streams, and motion capture data~\cite{gan2015deep,gong2017deep,guo2018deep}.
Deep Poisson gamma dynamical system (DPGDS) is a deep Bayesian top-down directed generative model (decoder)~\cite{guo2018deep}, which takes advantage of the hierarchical structure to efficiently incorporate both between-layer and temporal dependencies. Different from classical Kalman filters, such as LDS, where the use of linear transition and emission distributions limits the capacity to model complex phenomena \cite{krishnan2015deep}, DPGDS directly chains the observed data in a state space model (a deep gamma Markov chain) that evolves with gamma noise. To take advantage of the temporal dependence within each HRRP sequence, we can construct a sequential HRRP RATR model with DPGDS, where each HRRP is divided into multiple overlapping sequential HRRP segments as input. Although DPGDS can infer the multilayer contextual representation of observed HRRP sequences with scalable inference, it relies on a potentially large number of Markov chain Monte Carlo (MCMC) iterations to extract the latent representation of a new sample at the testing stage, which may be unattractive when real-time processing is desired. Thus, the key challenge in applying DPGDS (an unconventional deep Markov chain) to RATR is to infer the gamma distributed latent states more efficiently. In this paper, we first generalize DPGDS to the recurrent gamma belief network (rGBN) for real-time processing at the test stage, by equipping it with a novel inference network (encoder).
Specifically, to provide scalable training and fast out-of-sample prediction, the potential solution is to build an inference model with a variational auto-encoder (VAE)~\cite{kingma2014autoencoding}. But existing VAE variants are mostly restricted to non-sequential observed data and Gaussian distributed latent variables.
To address these constraints, motivated by related constructions in Zhang et al.~\cite{Zhang2018WHAI}, we construct a novel recurrent variational inference network (encoder) to map the observed HRRP samples directly to their latent temporal representations.
Then we provide the hybrid of stochastic-gradient-MCMC (SG-MCMC) and recurrent variational inference to infer both the posterior distribution (rather than a point estimate) of the global parameters of generative model and latent temporal representations.
To the best of our knowledge, the proposed rGBN, characterized by a top-down generative structure with temporal feedforward structure on each layer and a novel inference model, is the first deep probabilistic dynamical model for the HRRP RATR task. Although the features unsupervisedly learned by {rGBN} can be fed into a downstream classifier to make predictions, it is often beneficial to incorporate the target label information into the model ~\cite{li2015maxmargin}. To explore this potential, we further develop an end-to-end supervised rGBN (s-rGBN), whose extracted features are good for both classification and data representation.
The remainder of the paper is organized as follows. In Section~II, we present a brief description of HRRP, HMM, LDS, and RNN. We introduce our generative and inference networks in Section~III.
By jointly modeling HRRP samples and their labels, we further propose the supervised model, $i.e.$, {s-rGBN}, in Section~IV.
The detailed experimental results based on synthetic data and measured
HRRP data are reported in Section~V.
Section VI~concludes the paper.
\section{Preliminaries}
\subsection{Review of HRRP}
\begin{figure}[!htbt]
\vspace{-0.1cm}
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-2.cm}
\begin{center}
\centering
\includegraphics[height=3.5cm,width=4.2cm]{HRRP_target.jpg}
\caption{Radar returns from the scatterers on the target are projected onto the
line-of-sight (LOS), resulting in an HRRP. This figure is quoted from Zwart et al. \cite{2003Zwart}.}
\label{fig:HRRP sample}
\end{center}
\end{figure}
For a high resolution radar (HRR), the wavelength of the radar signal is far smaller than the target size.
Intuitively, HRRP represents the time domain response of the target to an HRR pulse as a one-dimensional signature, which expresses the distribution of radar scattering centers along the radar LOS~\cite{du2011bayesian,Du2012Radar,du2019factorized}, as shown in Fig.~\ref{fig:HRRP sample}. Suppose the transmitted signal of an HRR is $s(t)e^{j2 \pi f_c t}$, where $s(t)$ denotes the complex envelope and $f_c$ is the radar carrier frequency. Following the literature~\cite{Du2006A,Du2007Radar}, the $n$th complex echo from the $d$th range cell ($n=1,2,...,N,d=1,2,...,D$) in the baseband is defined as follows
\begin{gather}\label{hrrp}
x_d(t,n) = \sum_i^{P_d} \sigma_{di} s\left(t-\textstyle \frac{2R_{di}(n)}{c}\right)e^{-j [\frac{4\pi}{\lambda}R_{di}(n)-\theta_{di0}] }
\end{gather}
where $P_d$ represents the number of target scatterers in the $d$th range cell, $\lambda$ is the wavelength of HRR, $\sigma_{di}$ and $\theta_{di0}$ represent the intensity and initial phase of the $i$th scatterer in the $d$th range cell, respectively.
$R_{di}(n)$ can be regarded as the radial distance between the $i$th scatterer in the $d$th range cell of the $n$th returned echo and the radar.
We set $R(n)$ as the radial distance between the target reference center in the $n$th returned echo and the radar, and set $L_{x}$ as the radial length of the target, usually $R(n)\gg L_{x}$.
Due to the target rotation, there are radial displacements for the scatterers.
Then $R_{d i}(n)\approx R(n)+\Delta r_{d i}(n)$, where $\Delta r_{d i}(n)$ represents the radial displacement of the $i$th scatterer in the $d$th range cell in the $n$th returned echo. When $s(\cdot)$ is a rectangular pulse signal with unit intensity, it is usually omitted. Equation \eqref{hrrp} can be approximated as $x_{d}(t, n) \approx x_{d}(n)= \sum_{i}^{P_{d}} \sigma_{d i} e^{-j \frac{4 \pi}{\lambda} R(n)} e^{j \phi_{d i}(n)}$, where $\phi_{d i}(n)=\theta_{d i 0}-\frac{4 \pi}{\lambda} \Delta r_{d i}(n)$ denotes the remained echo phase of the $i$th scatterer in the $d$th range cell of the $n$th returned echo, and $e^{-j \frac{4 \pi}{\lambda} R(n)}$ represents the initial phase of the $n$th returned echo related to the target distance and radar wavelength. Since $e^{-j \frac{4 \pi}{\lambda} R(n)}$ does not contain the target discriminative information, we can eliminate it and define the $n$th real HRRP sample $\xv(n)$ as
\begin{equation}\label{real_HRRP}
\small{
\begin{aligned}
&\xv(n) \triangleq \left[|x_{1}(n)|, |x_{2}(n)|, \ldots, |x_{D}(n)|\right] \\
&=\small \left[\left|\sum_{i}^{P_{1}} \sigma_{1 i} e^{j \phi_{1 i}(n)}\right|,\left|\sum_{i}^{P_{2}} \sigma_{2 i} e^{j \phi_{2 i}(n)}\right|, \ldots\left|\sum_{i}^{P_{D}} \sigma_{D i} e^{j \phi_{D i}(n)}\right|\right],
\end{aligned}}
\end{equation}
where $|\cdot|$ means taking absolute value, and $D$ is the dimensionality of $\xv(n)$.
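As a quick numerical illustration of the real HRRP defined above, the following Python sketch draws random point scatterers in each range cell and takes the magnitude of their coherent sum; the number of scatterers per cell, their intensities, and their residual phases are arbitrary placeholders rather than measured values.

```python
import numpy as np

def simulate_hrrp(D=256, max_scatterers=3, seed=0):
    """Toy real HRRP: the d-th entry is |sum_i sigma_di * exp(j*phi_di)|,
    with all scatterer parameters drawn at random (placeholders only)."""
    rng = np.random.default_rng(seed)
    hrrp = np.zeros(D)
    for d in range(D):
        P_d = rng.integers(0, max_scatterers + 1)      # scatterers in cell d
        sigma = rng.rayleigh(1.0, size=P_d)            # intensities sigma_di
        phi = rng.uniform(0.0, 2.0 * np.pi, size=P_d)  # residual phases phi_di
        hrrp[d] = np.abs(np.sum(sigma * np.exp(1j * phi)))
    return hrrp

x = simulate_hrrp()
```

Cells that happen to contain no scatterer simply return zero amplitude, mimicking range cells with no target support.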
{\subsection{Related dynamical models}}
Denoting a sample as $T$ sequentially observed $V$-dimensional vectors, we present a short review of the existing dynamical models used to model the temporal dependence across the range cells in HRRP \cite{du2011bayesian,Pan2012Multi,Pan2011Multi,WangLDS,XuBin}.
\textbf{Hidden Markov Model:} The joint likelihood of the observation and underlying state sequence can be expressed as
\begin{gather}
P(\xv_{n},\sv_n\,|\,\omegav_0,\Pimat,\phiv)=P(\sv_{1,n}\,|\,\omegav_0)\prod_{t=2}^{T}P(\sv_{t,n}\,|\,\sv_{t-1,n},\Pimat) \notag \\
\times \prod_{t=1}^{T}P(\xv_{t,n}\,|\,\sv_{t,n},\phiv),
\end{gather}
where $\sv_{n}=\{\sv_{1,n},.., \sv_{t,n},..,\sv_{T,n}\}$ denotes the latent states of the $n$th sample, $\omegav_0$ is the initial state probability, $\Pimat$ is the state transition distribution and $\phiv$ are a set of parameters governing the emission probability.
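For concreteness, the joint likelihood above can be evaluated for a given discrete state path in a few lines of Python; the initial distribution, transition matrix, and emission terms below are arbitrary illustrative values.

```python
import numpy as np

def hmm_log_joint(states, omega0, Pi, log_emission):
    """Log of the HMM joint above for one discrete state path:
    log P(s_1 | omega0) + sum_t log P(s_t | s_{t-1}, Pi) + emission terms."""
    lp = np.log(omega0[states[0]])
    for t in range(1, len(states)):
        lp += np.log(Pi[states[t - 1], states[t]])  # transition t-1 -> t
    return lp + np.sum(log_emission)                # observation terms

omega0 = np.array([0.5, 0.5])                       # initial state probability
Pi = np.array([[0.9, 0.1],                          # row-stochastic transitions
               [0.2, 0.8]])
lp = hmm_log_joint([0, 0, 1], omega0, Pi, np.zeros(3))
assert np.isclose(lp, np.log(0.5) + np.log(0.9) + np.log(0.1))
```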
\textbf{Linear Dynamical System:} It consists of the following observation and state equations
\begin{gather}
\xv_{t,n}\sim N(\Phimat\sv_{t,n},\Sigma),~\sv_{t,n}\sim N(\Pimat\sv_{t-1,n}, \Delta ),
\end{gather}
where $\Pimat$ is the transition matrix, $\Sigma$ and $\Delta$ are covariance matrices, and $\sv_{t,n}$ denotes the latent state that is projected to the observed space via the factor loading matrix $\Phimat$.
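The state and observation equations above can be simulated directly by ancestral sampling; in the Python sketch below the transition matrix, loading matrix, and the (diagonal) noise scales are toy placeholder values.

```python
import numpy as np

def simulate_lds(T=20, V=4, K=3, seed=0):
    """Ancestral sampling of the LDS above: s_t ~ N(Pi s_{t-1}, Delta),
    x_t ~ N(Phi s_t, Sigma), with diagonal toy noise covariances."""
    rng = np.random.default_rng(seed)
    Pi = 0.9 * np.eye(K)                 # stable transition matrix
    Phi = rng.normal(size=(V, K))        # factor loading matrix
    s = np.zeros(K)
    xs = []
    for _ in range(T):
        s = Pi @ s + 0.1 * rng.normal(size=K)            # state equation
        xs.append(Phi @ s + 0.05 * rng.normal(size=V))   # observation equation
    return np.stack(xs)                  # (T, V) observed sequence

X = simulate_lds()
assert X.shape == (20, 4)
```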
\textbf{Recurrent Neural Network:} At timestep $t$, RNN reads the symbol $\xv_{t,n}$ and updates its hidden state $\hv_{t,n}^{(l)}$ at layer $l$
\begin{equation}\label{RNN}
\small
\hv_{t,n}^{(l)}=\left\{\begin{array}{ll}{f \left(\Wmat_{hh}^{(l)}\hv_{t-1,n}^{(l)} + \Wmat_{xh}^{(l)}\xv_{t,n}\right)} ,& {\text { if } l=1 } ,\\
f\left(\Wmat_{hh}^{(l)}\hv_{t-1,n}^{(l)}+\Wmat_{xh}^{(l)}\hv_{t,n}^{(l-1)} \right) , & {\text { if } L\geq l>1}, \\
\end{array}\right.
\end{equation}
where $f$ is a deterministic non-linear function, $\Wmat_{hh}^{(l)}$ is the transition matrix, $\Wmat_{xh}^{(l)}$ is the weight matrix at layer~$l$, and the bias vectors are omitted for conciseness.
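A single deterministic update of the first-layer recursion above, with $f$ instantiated as $\tanh$ (a common choice; the recursion itself leaves $f$ unspecified), can be sketched as follows; the weight matrices are random placeholders.

```python
import numpy as np

def rnn_step(h_prev, inp, W_hh, W_xh):
    """One update h_t = f(W_hh h_{t-1} + W_xh x_t), here with f = tanh."""
    return np.tanh(W_hh @ h_prev + W_xh @ inp)

rng = np.random.default_rng(0)
V, K, T = 8, 5, 10
W_hh = 0.1 * rng.normal(size=(K, K))     # transition weights
W_xh = 0.1 * rng.normal(size=(K, V))     # input weights
h = np.zeros(K)
for t in range(T):                       # layer-1 recursion over a toy sequence
    h = rnn_step(h, rng.normal(size=V), W_hh, W_xh)
assert h.shape == (K,) and np.all(np.abs(h) <= 1.0)
```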
Although HMM and LDS have been widely studied, their representation power may be limited when modeling the HRRP sequential samples. Compared with LDS and HMM, RNN typically owns extra expressive power due to the existence of deep hidden states and flexible non-linear transition function. However, the internal structure of RNN is in general entirely deterministic, with the only source of variability provided by its conditional output probability model, which may be inappropriate to model the kind of variability observed in the HRRP data~\cite{chung2015a}.
\vspace{3mm}
\section{Variational Temporal Deep Generative Model}\label{my_model}
\subsection{Input representation}
According to~Xu et al. \cite{XuBin}, to discover the temporal dependence between the range cells within the single HRRP, we divide the $n$th $D$-dimensional real HRRP $\xv(n)$ in \eqref{real_HRRP} into $\xv_n\in \mathbb{R}_{+}^{V \times T}$, which consists of $T$ sequentially observed $V$-dimensional vector. Shown in Fig. \ref{fig:sequential feature of HRRP}, we denote the HRRP sequence as $\xv_n=[\xv_{1,n},..,\xv_{t,n},..\xv_{T,n}]$, where $\xv_{t,n}\in \mathbb{R}_{+}^{V \times 1}$ is the time sequential feature at timestep $t$ of sample $n$ and can be defined as
\begin{align}\label{temporal_HRRP}
{\xv_{t,n} = \xv(n)\left( a_t+1 : a_t+V \right),}
\end{align}
where $V$ is the length of the window, which intercepts $V$ range cells from the real HRRP sample, $a_t=(V \!-\! O)\ast(t \!-\! 1)$ is the starting index at timestep $t$, and $O$ denotes the overlap length between adjacent windows, determining the degree of correlation between adjacent timesteps. Thus $T = \left\lfloor {(D-O)/(V-O)} \right\rfloor$. We denote $\Xmat=\{\xv_n\}_{n=1}^{N}$ as the HRRP dataset, which consists of $N$ independent and identically distributed (IID) HRRP sequences, and $\Ymat=\{y_n\}_{n=1}^{N}$ as the corresponding labels, where $y_n$ is the label of $\xv_n$. Note that the sequential inputs $\xv_{1:T,n}$ from one HRRP sample $\xv_n$ share the same label, so the dataset $\{\Xmat,\Ymat\}$ can be fed into a dynamic model as $N$ IID samples, each of which contains $T$ sequentially observed $V$-dimensional vectors with an identical label. For simplicity, we only exhibit the modeling process for the $n$th HRRP sequence.
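The segmentation above amounts to a strided sliding window over the range cells; a minimal Python sketch, with illustrative window length $V$ and overlap $O$, is:

```python
import numpy as np

def segment_hrrp(x, V=64, O=32):
    """Split a D-dimensional HRRP into T overlapping V-dimensional segments:
    segment t (0-based) starts at (V - O) * t, and T = floor((D - O) / (V - O))."""
    D = len(x)
    T = (D - O) // (V - O)
    return np.stack([x[(V - O) * t:(V - O) * t + V] for t in range(T)], axis=1)

x = np.arange(256.0)           # dummy 256-cell HRRP
seg = segment_hrrp(x)          # columns are the sequential inputs
assert seg.shape == (64, 7)    # T = (256 - 32) / (64 - 32) = 7
assert seg[0, 1] == 32.0       # adjacent windows are shifted by V - O = 32 cells
```

Each column of the returned matrix plays the role of one timestep input $\xv_{t,n}$.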
\vspace{-0.1cm}
\begin{figure}[!htbt]
\setlength{\abovecaptionskip}{-0.1cm}
\setlength{\belowcaptionskip}{-0.2cm}
\begin{center}
\centering
\includegraphics[height=3.2cm,width=8.1cm]{sliding_window_HRRP.pdf}
\caption{ Visualization of the $n$th real HRRP sample $\xv(n) \in \mathbb{R}_{+}^{D} $ (left) and its corresponding time sequential features $\xv_n \in \mathbb{R}_{+}^{V \times T}$ (right), where $V$ represents the length of window, $O$ denotes the length of overlap of the window, $\xv_{t,n}$ denotes the input of $n$th HRRP sequence at timestep $t$.}
\label{fig:sequential feature of HRRP}
\end{center}
\end{figure}
\vspace{-0.2cm}
\subsection{Generative Model}
To characterize the sequential feature within a single HRRP sample, we generalize the deep Poisson gamma dynamical system (DPGDS) \cite{guo2018deep} to rGBN, whose generative model is sketched in Fig.~\ref{fig:deep_dynamical_model}~(a). Specifically, we consider the deep architecture with $L$ layers, and denote $ \sv_{t,n}^{(l)}\in\mathbb{R}_+^{K_l}$ as the latent state of $\xv_{t,n}$ in \eqref{temporal_HRRP} at layer $l$, time step $t$, where $K_l$ is the number of states at layer $l$.
Different from HMM and LDS whose single-layer latent state at time step $t$ only relies on the state at previous time step $t-1$,
the latent state $ \sv_{t,n}^{(l)}$ of {rGBN}, from the top layer to bottom, is formulated as
\begin{align}\label{DPGDS_hidden}
\small
\sv_{t,n}^{(l)} \sim \mbox{Gam}\left(b^{(l)}\left( \Pimat^{(l)} \sv_{t-1,n}^{(l)} + \Phimat^{(l+1)} \sv_{t,n}^{(l+1)}\right) , 1/b^{(l)} \right),
\end{align}
where $x\sim \mbox{Gam}(a,1/b)$ denotes the gamma distribution with shape $a$, scale $1/b$, and mean $a/b$; $\Pimat^{(l)}\in \mathbb{R}_{+}^{K_{l} \times K_{l}}$ the transition matrix of layer $l$, $\Phimat^{(l)}\in \mathbb{R}_{+}^{K_{l-1} \times K_{l}}$ the weight matrix connecting layers $l-1$ and $l$, {$K_{0}=V$}, and $1/b^{(l)}$ the gamma scale parameter at layer $l$. When $t=1$, $\sv_{1,n}^{(l)} \sim \mbox{Gam}\left(b^{(l)}\Phimat^{(l+1)} \sv_{1,n}^{(l+1)} , 1/b^{(l)} \right)$ for $1\leq l< L$ and
$\sv_{1,n}^{(L)} \sim \mbox{Gam}\left(b^{(L)}\textrm{1}_{K_L} , 1/b^{(L)} \right)$. When $t>1$, $\sv_{t,n}^{(L)} \sim \mbox{Gam}\left(b^{(L)}\Pimat^{(L)} \sv_{t-1,n}^{(L)} , 1/b^{(L)} \right)$.
The gamma shape parameter of $\sv_{t,n}^{(l)}$ can be divided into two parts. One is the product of the transition matrix $\Pimat^{(l)}$ and latent state $\sv_{t-1,n}^{(l)}$, capturing the temporal dependence at the current layer, while the other is the product of the connection weight matrix $\Phimat^{(l+1)}$ and latent state $\sv_{t,n}^{(l+1)}$, capturing the hierarchical dependence at the current time. Moving beyond RNN using deterministic non-linear transition functions, we construct a dynamic probabilistic model using gamma distributed non-negative hidden units. Therefore, the proposed model is characterized by its expressive structure, which not only captures the correlations between the latent states across all layers and time steps, but also models the variability of the latent states, improving its ability to model sequential HRRP.
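To make the generative process \eqref{DPGDS_hidden} concrete, the following NumPy sketch performs ancestral sampling of the latent states from the top layer down; the layer sizes, matrices, and random seed are illustrative assumptions, not learned parameters.

```python
import numpy as np

def sample_latent_states(Pi, Phi_up, b, T, rng):
    """Toy ancestral sampler for the rGBN latent states.

    Pi[l] is the K_l x K_l transition matrix of layer l+1 (0-indexed),
    Phi_up[l] is the K_l x K_{l+1} weight matrix mapping layer l+2 states
    down to layer l+1, and b[l] is the gamma scale parameter of that layer.
    Returns a list s with s[l] of shape (K_l, T).
    """
    L = len(Pi)
    K = [P.shape[0] for P in Pi]
    s = [np.zeros((K[l], T)) for l in range(L)]
    for t in range(T):
        for l in reversed(range(L)):                 # sample top layer first
            shape = np.zeros(K[l])
            if t > 0:
                shape += Pi[l] @ s[l][:, t - 1]      # temporal term
            if l < L - 1:
                shape += Phi_up[l] @ s[l + 1][:, t]  # hierarchical term
            if t == 0 and l == L - 1:
                shape += np.ones(K[l])               # top-layer prior at t = 1
            s[l][:, t] = rng.gamma(b[l] * shape, 1.0 / b[l])
    return s

rng = np.random.default_rng(0)
Pi = [np.array([[0.6, 0.2], [0.4, 0.8]]), np.array([[0.7, 0.3], [0.3, 0.7]])]
Phi_up = [np.array([[0.5, 0.5], [0.5, 0.5]])]
states = sample_latent_states(Pi, Phi_up, b=[1.0, 1.0], T=5, rng=rng)
```

Note how each gamma shape combines the temporal term $\Pimat^{(l)} \sv_{t-1,n}^{(l)}$ and the hierarchical term $\Phimat^{(l+1)} \sv_{t,n}^{(l+1)}$, exactly as in \eqref{DPGDS_hidden}.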
As a generative model with the deep temporal structure, the observed HRRP sequence at time step $t$ can be drawn from $p(\xv_{t,n}\,|\,\Phimat^{(1)},\sv_{t,n}^{(1)})$.
To consider the non-negative nature of the HRRP data and facilitate inference, we introduce the Poisson randomized gamma (PRG) distribution defined as $\mbox{PRG}\left(\xv_{t,n}\,|\,\Phimat^{(1)} \sv_{t,n}^{(1)},c\right)$ in Zhou et al.~\cite{Zhou2016JMLR}, which has a point mass at $\xv_{t,n}=0$ and is continuous for $\xv_{t,n}>0$. Since the PRG distribution is generated as a Poisson mixed gamma distribution, the data likelihood can be expressed as
\begin{gather}\label{DPGDS_sample_PRG}
\xv_{t,n} \sim \mbox{Gam}\left(\rv_{t,n},1/c\right), \rv_{t,n}\sim \mbox{Pois}\left(\Phimat^{(1)} \sv_{t,n}^{(1)} \right),
\end{gather}
where $c>0$, $ x\sim \mbox{Pois}(\lambda)$ represents the Poisson distribution with mean $\lambda$ and variance $\lambda$, and $\Phimat^{(1)}\in \mathbb{R}_{+}^{V \times K_{1}}$ is the weight matrix of layer $1$.
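The Poisson--gamma mixture construction of the PRG likelihood in \eqref{DPGDS_sample_PRG} can be simulated directly; the toy $\Phimat^{(1)}$ and $\sv^{(1)}$ below are illustrative. Since $\mbox{Gam}(0,\cdot)$ is a point mass at zero, zero counts yield exact zeros, reproducing the point mass of the PRG distribution at $\xv_{t,n}=0$.

```python
import numpy as np

def sample_prg(Phi1, s1, c, rng):
    """Draw x ~ PRG(Phi1 @ s1, c) via its mixture representation:
    r ~ Pois(Phi1 @ s1), then x ~ Gam(r, 1/c), treating Gam(0, .) as a
    point mass at zero (guarded explicitly below)."""
    r = rng.poisson(Phi1 @ s1)
    x = np.where(r > 0, rng.gamma(np.maximum(r, 1), 1.0 / c), 0.0)
    return x, r

rng = np.random.default_rng(1)
Phi1 = np.array([[0.5, 0.2], [0.3, 0.3], [0.2, 0.5]])  # columns on the simplex
s1 = np.array([3.0, 2.0])
x, r = sample_prg(Phi1, s1, c=1.0, rng=rng)
```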
For scale identifiability and ease of inference and interpretation, we place the Dirichlet distribution prior on each column of $\Phimat^{(l)}$ and $\Pimat^{(l)}$ , $i.e.$, $\phiv_k^{(l)}$ and $\piv_k^{(l)}$, by letting
\begin{gather}\label{Prior}
\phiv_k^{(l)}\sim \textrm{Dir}(\etav^{(l)},...,\etav^{(l)}) ,\\
\piv_k^{(l)}\sim \textrm{Dir}(\nuv^{(l)},...,\nuv^{(l)}) ,
\end{gather}
for $l \in \{1,...,L\}$, which makes the elements of each column non-negative and sum to one. Note that $\piv_k^{(l)} =(\piv_{1,k}^{(l)},..,\piv_{k_1,k}^{(l)},..,\piv_{K_l,k}^{(l)}) $, where $\piv_{k_1,k}^{(l)}$ describes how much the weight of state $k$ at the previous time step of layer $l$ is transferred to influence state $k_1$ at the current time step of the same layer. Under the hierarchical dynamical model defined by \eqref{DPGDS_hidden} and \eqref{DPGDS_sample_PRG}, the joint likelihood of the observed HRRP and the temporal latent states can be constructed as
\begin{align}\label{probability}
&P\left(\xv_{t,n},\{\sv_{t,n}^{(l)}\}_{l=1}^{L} \,|\, \{\Phimat^{(l)},\Pimat^{(l)},b^{(l)}\}_{l=1}^{L}, c \right) \notag\\
&=\left[\prod\limits_{l = 1}^{L} {p\left(\sv_{t,n}^{(l)}\,|\,\Phimat^{(l + 1)}\sv_{t,n}^{(l+ 1)},\Pimat^{(l)}\sv_{t-1,n}^{(l)}, b^{(l)} \right)} \right]\\
& \times p\left(\xv_{t,n}\,|\,\Phimat^{(1)}\sv_{t,n}^{(1)},c \right) . \notag
\end{align}
The parameters of the generative model comprise the transition and weight matrices, which we write as {$\{\Pimat^{(l)},\Phimat^{(l)}\}_{l=1}^{L}$}.
Fig. \ref{fig:deep_dynamical_model} (a) shows the graphical representation of the proposed generative model and Fig. \ref{fig:deep_dynamical_model}(b) is the unfolded representation of the model structure.
\begin{figure*}[!htbt]
\begin{center}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\includegraphics[height=4.8cm,width=18cm]{generative_model_all.pdf}
\caption{(a) The generative model of recurrent gamma belief network (rGBN); (b) The unfolded generative model of rGBN for the $n$th HRRP sample; (c) The recurrent variational inference model of rGBN (ignoring all bias terms); (d) The generative model of supervised recurrent gamma belief network (s-rGBN). }
\label{fig:deep_dynamical_model}
\end{center}
\end{figure*}
\textbf{Structure Analysis:}
As discussed above, if $x\sim \mbox{Gam}(a,1/b)$, the mean of $x$ is $a/b$; while if $x \sim \mbox{Pois}(\lambda)$, the mean of $x$ is $\lambda$.
Therefore, the expected value of $\xv_{t,n}$ in \eqref{DPGDS_sample_PRG} and $\sv_{t,n}^{(l)}$ in \eqref{DPGDS_hidden} can be expressed as
\begin{gather}\label{expected_observed}
\small \mathbb{E}\left[\xv_{t,n}\,|\, \rv_{t,n},c\right] = \mathbb{E}\left[\rv_{t,n}\,|\,\Phimat^{(1)} \sv_{t,n}^{(1)}\right]/c= \Phimat^{(1)}\sv_{t,n}^{(1)}/c ,
\end{gather}
\begin{align}\label{expected_latent1}
\small \mathbb{E}\left[\sv_{t,n}^{(l)} \,|\, \sv_{t,n}^{(l+1)},\sv_{t-1,n}^{(l)} ,\Phimat^{(l+1)} , \Pimat^{(l)} \right]
= \Phimat^{(l+1)}\sv_{t,n}^{(l+1)}+ \Pimat^{(l)}\sv_{t-1,n}^{(l)} .
\end{align}
Based on \eqref{expected_observed} and \eqref{expected_latent1}, for a three-hidden layer
rGBN shown in Fig. \ref{fig:deep_dynamical_model}(a), we have
\begin{align}\label{deep_temporal}
&\mathbb{E} [\xv_{t,n}\,\,|\,\, \sv_{t-1,n}^{(1)}, \sv_{t-2,n}^{(2)}, \sv_{t-3,n}^{(3)}]/c \notag\\
&= \Phimat^{(1)} \Pimat^{(1)} \sv_{t-1,n}^{(1)}
+ \Phimat^{(1)} \Phimat^{(2)} [\Pimat^{(2)}]^{2} \sv_{t-2,n}^{(2)}\\
&~~~~+ \Phimat^{(1)} \Phimat^{(2)}\left( \Pimat^{(2)}\Phimat^{(3)}+\Phimat^{(3)}\Pimat^{(3)}\right) [\Pimat^{(3)}]^{2} \sv_{t-3,n}^{(3)} , \notag
\end{align}
where the expected value of $\xv_{t,n}$ depends on the latent states at the previous time step in each layer, indicating our proposed model captures and transmits long-range temporal information through its higher hidden layers. In addition, the proposed model can be viewed as the generalization of LDS and HMM with
deep gamma distributed latent representations, and also can be considered as a probabilistic
construction of the traditionally deterministic RNN by adding uncertainty into the latent space via a deep generative model.
Ignoring the temporal structure of \eqref{deep_temporal}, we notice that the information of the whole HRRP dataset can be compressed into the inferred network $\{\Phimat^{(1)},\Phimat^{(2)}, \Phimat^{(3)}\}$, which depicts the global structure of the target in a single HRRP.
To be more specific, we can visualize the $\phiv_k^{(l)}$ at layer $l$ as $\left[\prod_{p=1}^{l-1}\Phimat^{(p)}\right]\phiv_k^{(l)}$,
which are often quite specific at the bottom layer and become increasingly more general when moving upwards. We will examine the weights of different layers to understand the general and specific aspects of the HRRP data, and illustrate how the weights of different layers are related to each other.
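The visualization described above reduces to a chain of matrix products; the small matrices below are illustrative stand-ins for learned weights. Since every column lies on the simplex, the projected column also sums to one.

```python
import numpy as np

def project_to_data_space(Phis, l, k):
    """Project the k-th column of Phi^{(l)} down to the observation space as
    (prod_{p=1}^{l-1} Phi^{(p)}) phi_k^{(l)}.
    Phis is a list [Phi1, ..., PhiL]; layers are 1-indexed as in the text.
    """
    proj = Phis[l - 1][:, k]
    for p in reversed(range(l - 1)):   # multiply by Phi^{(l-1)}, ..., Phi^{(1)}
        proj = Phis[p] @ proj
    return proj

# Illustrative simplex-column matrices for a two-layer network.
Phi1 = np.array([[0.6, 0.1], [0.3, 0.2], [0.1, 0.7]])
Phi2 = np.array([[0.8, 0.4], [0.2, 0.6]])
v = project_to_data_space([Phi1, Phi2], l=2, k=0)
```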
\textbf{Quantizing the HRRP data:}
While the non-negative real HRRP dataset can be modeled by the PRG distribution, the latent count $\rv_{t,n}$ in \eqref{DPGDS_sample_PRG} needs to be sampled in each iteration of the training stage. Specifically, the conditional posterior of $\rv_{t,n}$ given $\xv_{t,n}$, $\Phimat^{(1)} \sv_{t,n}^{(1)}$, and $c$ can be expressed as
\begin{align}
\resizebox{.88\hsize}{!} {$ p\left(\rv_{t,n}\,|\, \xv_{t,n},\Phimat^{(1)} \sv_{t,n}^{(1)},c \right) \!= \! \frac{\mbox{Gam}\left(\rv_{t,n},1/c\right) \mbox{Pois}\left(\Phimat^{(1)} \sv_{t,n}^{(1)} \right)}{\sum \limits_{\rv_{t,n} \!= \! 0}^{\infty} \mbox{Gam}\left(\rv_{t,n},1/c\right) \mbox{Pois}\left(\Phimat^{(1)} \sv_{t,n}^{(1)} \right)} $} .
\end{align}
More details about the PRG distribution can be found in Zhou et al.~\cite{Zhou2016JMLR} and are omitted here for brevity.
Despite its desirable ability to model non-negative continuous data, the PRG distribution requires an iterative procedure to infer the latent counts, which may be time-consuming in real-world applications. Considering the limited computational power available at the training stage, the HRRP data can instead be modeled with the Poisson distribution, by directly discretizing
the HRRP data to counts ($i.e.$, non-negative integers) before training, so that the sampling step for the counts in each iteration can be avoided.
Therefore, instead of modeling the observed data with \eqref{DPGDS_sample_PRG}, we assume that
\begin{gather}\label{Poisson_observed}
\lfloor \mu \xv_{t,n}\rfloor \sim \mbox{Pois}\left(\Phimat^{(1)} \sv_{t,n}^{(1)} \right),
\end{gather}
where the scaling factor $\mu$ controls the fineness of this discretization.
There is thus a trade-off between the modeling benefit of the PRG distribution and the restriction of computational resources.
Note that modeling the observed HRRP with the Poisson distribution can still obtain similar temporal dependencies in \eqref{deep_temporal} and hierarchical structure $\prod_{p=1}^{l}\Phimat^{(p)}$, omitted here for brevity.
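The discretization in \eqref{Poisson_observed} is a one-liner; `quantize_hrrp` is a hypothetical helper name.

```python
import numpy as np

def quantize_hrrp(x, mu):
    """Discretize non-negative HRRP amplitudes to counts via floor(mu * x),
    as in Eq. (Poisson_observed); a larger mu gives a finer discretization."""
    return np.floor(mu * np.asarray(x, dtype=float)).astype(np.int64)

counts = quantize_hrrp([0.12, 1.97, 0.0], mu=10)  # counts == [1, 19, 0]
```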
\vspace{-0.2cm}
\subsection{Hybrid Inference Model} \label{my_inference}
In this section, we first infer the generative model global parameters $\{\Pimat^{(l)} \}_{l=1}^{L}$ and $\{ \Phimat^{(l)} \}_{l=1}^{L}$ with MCMC, then introduce a recurrent inference model to infer the latent states $\{ \sv_{t,n}^{(l)} \}_{t=1,l=1,n=1}^{T,L,N}$.
Finally, we provide a hybrid SG-MCMC and recurrent variational inference, which is scalable at the training stage and fast at the testing stage.
\textbf{MCMC inference for the generative network:}
Given the HRRP sequences, the inference task here is to find the weight matrices $\{ \Phimat^{(l)} \}_{l=1}^{L}$, transition matrices $\{\Pimat^{(l)} \}_{l=1}^{L}$, and latent states $\{ \sv_{t,n}^{(l)} \}_{t=1,l=1,n=1}^{T,L,N}$.
While inference for the introduced model is difficult due to the coupling of $\{ \sv_{t,n}^{(l)} \}_{t=1,l=1,n=1}^{T,L,N}$ with $\{ \Phimat^{(l)} \}_{l=1}^{L}$ and $\{\Pimat^{(l)} \}_{l=1}^{L}$, the latent variables of the proposed model can be inferred with a backward-upward--forward-downward (BUFD) Gibbs sampler described in Guo et al.~\cite{guo2018deep}, based on a variety of variable augmentation techniques.
However, the Gibbs sampler needs to process all HRRP data samples in each iteration and hence has limited scalability. For scalable inference, we adopt the topic-layer-adaptive stochastic gradient Riemannian (TLASGR) MCMC algorithm \cite{cong2017deep,Zhang2018WHAI}, which is proposed to update simplex-constrained global parameters~\cite{cong2017fast} in a mini-batch learning setting. Relying on the Fisher information matrix (FIM)~\cite{girolami2011riemann} to automatically adjust relative
learning rates for different parameters across all layers, TLASGR-MCMC has demonstrated improved
sampling efficiency. Given the latent states, we first sample the augmented latent counts, and then use TLASGR-MCMC to sample the generative model parameters $\{ \Phimat^{(l)} \}_{l=1}^{L}$ and $\{\Pimat^{(l)} \}_{l=1}^{L}$.
To be more specific, we can sample $\piv_k^{(l)}$, the $k$th column of the transition matrix $\Pimat^{(l)}$, as
\begin{align}\label{TLASGR update_Pi}
\resizebox{.13\hsize}{!} {$ \left( {\piv_k^{(l)}} \right)_{i + 1}$} \! &= \resizebox{.8\hsize}{!} {$ \! \bigg[ \! \left( {\piv_k^{(l)}} \! \right)_i \! + \! \frac{\varepsilon _i}{M_k^{(l)}} \! \left[ \left(\rho \tilde \zv_{:k\cdotv}^{(l)} \! + \! \nuv^{(l)}\right) \! - \! \left(\rho \tilde z_{\cdotv k\cdotv}^{(l)} \! + \! \nu_{\cdotv}^{(l)} \! \right) \! \left( {\piv_k^{(l)}} \! \right)_i \right] $}\nonumber \\
\! &+ \! \resizebox{.7\hsize}{!} {$ \mathcal{N} \!\left( 0, \!\frac{2 \varepsilon_i}{M_k^{(l)}} \!\left[ \mbox{diag}(\piv_k^{(l)})_i \!-\! (\piv_k^{(l)})_i (\piv_k^{(l)})_i^T \!\right] \!\right) \! \bigg]_\angle $},
\end{align}
where {${\left[ . \right]_\angle }$ denotes the simplex constraint that $\piv_{k_1,k}^{(l)}\geqslant0$ and $ \sum_{k_1 = 1}^{K_l} \piv_{k_1,k}^{(l)}=1$}, $M_k^{(l)}$ is calculated using the estimated FIM, $\varepsilon_i$ denotes the learning rate at the $i$th iteration, both ${\tilde \zv_{:k\cdotv}^{\left( l \right)}}$ and ${\tilde z_{\cdotv k\cdotv}^{\left( l \right)}}$ come from the augmented latent counts $\Zmat^{(l)}$, and ${\nuv^{\left( l \right)}}$ denotes the prior of ${\piv_k^{\left( l \right)}}$.
More details of TLASGR-MCMC can be found in Cong et al.~\cite{cong2017deep} and Guo et al.~\cite{guo2018deep}.
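A simplified version of the update \eqref{TLASGR update_Pi} is sketched below; we fold the annealing factor $\rho$ into the count statistics, take a scalar Dirichlet prior $\nu$, and use a crude clip-and-renormalize operator for ${\left[ \cdot \right]_\angle }$, so this is a toy stand-in rather than the exact TLASGR-MCMC step of Cong et al.~\cite{cong2017deep}.

```python
import numpy as np

def simplex_project(v, eps=1e-10):
    """A crude [.]_angle operator: clip to positive values, renormalize to 1."""
    v = np.maximum(v, eps)
    return v / v.sum()

def tlasgr_step(pi_k, z_col, z_dot, nu, eps_i, M_k, rng):
    """One toy TLASGR-MCMC step for a column pi_k of a transition matrix.

    z_col plays the role of the per-state augmented counts, z_dot their
    total, nu the scalar Dirichlet prior, eps_i the step size, and M_k the
    FIM-based preconditioner -- all illustrative stand-ins."""
    K = len(pi_k)
    drift = (eps_i / M_k) * ((z_col + nu) - (z_dot + nu * K) * pi_k)
    cov = (2.0 * eps_i / M_k) * (np.diag(pi_k) - np.outer(pi_k, pi_k))
    noise = rng.multivariate_normal(np.zeros(K), cov)
    return simplex_project(pi_k + drift + noise)

rng = np.random.default_rng(0)
pi_new = tlasgr_step(np.array([0.3, 0.7]), z_col=np.array([5.0, 3.0]),
                     z_dot=8.0, nu=0.1, eps_i=0.01, M_k=1.0, rng=rng)
```

The drift pulls $\piv_k$ toward the count-plus-prior proportions, while the preconditioned noise keeps the update a valid MCMC step on the simplex.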
Despite the attractive properties, both the proposed Gibbs sampler and TLASGR-MCMC usually rely on an iterative procedure to learn
the temporal latent states of a new HRRP sample at the testing stage, which hinders real-time processing of the HRRP based RATR.
To allow fast out-of-sample prediction, we further build an inference network to learn the latent states, as described below.
\textbf{Recurrent variational inference model:}
Our inference model is motivated by variational auto-encoders (VAEs)~\cite{kingma2014autoencoding}. As an example of a directed graphical model, the joint distribution over the observed variables $\xv$ and latent variables $\zv$ can be defined as
$p(\xv,\zv) = p(\xv \,|\,\zv)p(\zv)$, where $p(\zv)$ is the prior placed on the latent variables. To admit efficient inference, VAEs approximate $p(\zv \,|\, \xv)$ with a variational family of distributions $q(\zv \,|\, \xv)$,
which adopts an inference network to map the observations directly to their latent space by optimizing the evidence lower bound (ELBO)~\cite{chung2015a,Jordan1999An,roberts18a}, expressed as
\begin{equation}\label{ELBO_ori}
L = \mathbb{E}_{q(\zv\,|\, \xv)} [\ln p(\xv \,|\, \zv)] - \mathbb{E}_{q(\zv \,|\, \xv)} [ \ln [q(\zv \,|\, \xv)/p(\zv)]].
\end{equation}
The approximate posterior $q(\zv\,|\,\xv)$ is often taken to be a Gaussian distribution $N(\uv,\mbox{diag}(\sigmav))$, where the mean $\uv$ and standard deviation $\sigmav$ are the outputs of highly non-linear functions of the input $\xv$.
Given the Gaussian variational posterior, the second term of the ELBO in \eqref{ELBO_ori} is analytic.
Using the reparameterization trick~\cite{kingma2014autoencoding}, VAEs sample $\zv$ with $\zv=\uv + \sigmav \odot \epsilonv$, where $\epsilonv$ is a vector of standard Gaussian variables. Therefore, the gradient of the first term of the ELBO with respect to the parameters of the inference network can be constructed as
\begin{gather}\label{derivative}
\small
\nabla E_{q(\zv\,|\,\xv)} [\ln p(\xv\,|\,\zv) ] = E_{q(\epsilonv)} [ \nabla \ln p(\xv\,|\, \zv=\uv + \sigmav \odot \epsilonv ) ],
\end{gather}
whose estimation via Monte Carlo integration has low variance. The parameters of the inference model can be typically optimized via SGD.
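As a minimal illustration of the reparameterization trick underlying \eqref{derivative}, the following NumPy sketch draws $\zv=\uv + \sigmav \odot \epsilonv$; the dimensions and seed are arbitrary assumptions.

```python
import numpy as np

def reparameterized_sample(mu, sigma, rng):
    """Gaussian reparameterization: z = mu + sigma * eps with eps ~ N(0, I),
    so Monte Carlo gradients of E_q[ln p(x|z)] can flow through mu, sigma."""
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.asarray(sigma) * eps

rng = np.random.default_rng(2)
zs = np.stack([reparameterized_sample(np.full(4, 1.5), np.full(4, 0.5), rng)
               for _ in range(20000)])
```

Averaging many such draws recovers the mean $\uv$ and standard deviation $\sigmav$, confirming the samples indeed follow the intended Gaussian.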
VAEs provide an effective modeling paradigm for complex data distributions.
However, their success so far is mostly restricted to non-sequential data with Gaussian distributed latent variables and does not generalize well to model non-negative and sequential HRRP representations.
In this section, we propose a recurrent variational inference method to efficiently produce the multilayer temporal HRRP representations with \eqref{DPGDS_hidden} and \eqref{DPGDS_sample_PRG} or \eqref{Poisson_observed} as the generative model.
Given the global parameters $\{ \Pimat^{(l)}, \Phimat^{(l)}\}_{l=1}^{L}$, the task here is to infer the
latent states $\{ \sv_{t,n}^{(l)} \}_{t=1,l=1,n=1}^{T,L,N}$ via an inference network.
We first introduce a fully factorized distribution as
\begin{align}\label{posterior}
q\left(\left\{ \sv_{t,n}^{(l)} \right\}_{t=1,l=1,n=1}^{T,L,N}\right)= \prod\limits_{n = 1}^N {\prod\limits_{l = 1}^L {\prod\limits_{t = 1}^T {q\left( \sv_{t,n}^{(l)} \right) } } } .
\end{align}
With \eqref{probability} and \eqref{posterior}, the objective function becomes
\begin{small}
\begin{align}\label{elbo}
L\left( \left\{ q\left( {\sv_{t,n}^{(l)}}\right) \right \} _{l=1,t=1,n=1}^{L,T,N} ; \xv_{1:N}, \left\{ \Pimat ^{(l)}, \Phimat ^{(l)}\right\} _{l=1}^{L}\right) \notag \\
= \sum\limits_{n = 1}^N {\sum\limits_{t = 1}^T {{\mathbb{E}_{q\left( {\sv_{t,n}^{(1)}} \right)}}\left[ {\ln p\left( {\xv_{t,n}\,|\,{\Phimat ^{(1)}},\sv_{t,n}^{(1)}} \right)} \right]} } \\
\!- \!\sum\limits_{n = 1}^N \! {\sum\limits_{l = 1}^L \! {\sum\limits_{t = 1}^T {{\mathbb{E}_{q\left( {\sv_{t,n}^{(l)}} \right)}} \! \left[
{ \ln \frac{\!{q \! \left( {\sv_{t,n}^{(l)}} \! \right)\!} }{ p\left( {\sv_{t,n}^{(l)}\,|\, \av_{t,n}^{\left( l \right)},1/{b^{\left( l \right)}}} \right)}}\right]} } }\notag ,
\end{align}
\end{small}
where $\av_{t,n}^{\left( l \right)}:=b^{(l)}\left({{\Phimat ^{(l + 1)}}\sv_{t,n}^{(l + 1)} + {\Pimat ^{(l)}}\sv_{t{\rm{ - }}1,n}^{(l)}}\right)$ denotes the shape parameter of $\sv_{t,n}^{(l)}$.
Note under the BUFD Gibbs sampler~\cite{guo2018deep}, the conditional posterior of $\sv_{t,n}^{(l)}$ given augmented latent counts is gamma distributed, and hence
it might be more appropriate to use the gamma rather than Gaussian based distributions to construct
the variational distribution $q\left( {\sv_{t,n}^{(l)}} \right)$.
However, the gamma random variable is not reparameterizable with respect to its shape parameter.
This motivates us to choose a surrogate distribution, which can be easily reparameterized, to approximate the gamma distribution.
The Weibull distribution is a desirable choice for this purpose, as its probability density function resembles that of the gamma distribution, and the second term in the ELBO shown in \eqref{elbo} becomes analytic if it is used to construct $q\left( {\sv_{t,n}^{(l)}} \right)$ \cite{Zhang2018WHAI}.
We may directly follow Zhang~et~al.~\cite{Zhang2018WHAI} to construct a Weibull distribution based inference network as
\begin{gather}\label{weibu}
q\left( {\sv_{t,n}^{(l)}} \right) \sim \mbox{Weibull}\left( {{\kv_{t,n}^{(l)}},{\lambdav _{t,n}^{(l)}}} \right),
\end{gather}
whose parameters $\kv_{t,n}^{(l)}$ and $\lambdav _{t,n}^{(l)}$ are both deterministically transformed from the hidden unit $\hv _{t,n}$ and specified as
\begin{align}
&\kv_{t,n}^{(l)} = \ln[1+\exp(\Wmat_{hk}^{(l)}\hv_{t,n}^{(l)}+ \bv_1^{(l)})] , \label{MLP1} \\
&\lambdav_{t,n}^{(l)} = \ln[1+\exp(\Wmat_{h \lambda}^{(l)}\hv_{t,n}^{(l)}+ \bv_2^{(l)})] , \label{MLP2}
\end{align}
where $\hv_{t,n}^{(l)}$ denotes the output of a highly non-linear function of the observed $\xv_{t,n}$. However, this construction does not take into account the temporal information transmitted from the previous time step.
To exploit the temporal information, we propose a recurrent inference network which induces temporal
dependencies across time steps, as illustrated in Fig.~\ref{fig:deep_dynamical_model}~(c).
Therefore, similar to the RNN in \eqref{RNN}, we define $\hv_{t,n}^{(l)}$ as
\begin{equation}\label{RNN_update_h}
\small \hv_{t,n}^{(l)}=\left\{\begin{array}{ll}{\!\!\textrm{tanh}\left(\Wmat_{xh}^{(l)}\xv_{t,n}+\Wmat_{hh}^{(l)}\hv_{t-1,n}^{(l)}+ \bv_3^{(l)}\right)} ,& {\!\!\!\!\!\text { if } l=1 } ,\\
\!\!\textrm{tanh}\left(\Wmat_{xh}^{(l)}\hv_{t,n}^{(l-1)}+\Wmat_{hh}^{(l)}\hv_{t-1,n}^{(l)}+ \bv_3^{(l)}\right), & {\!\!\!\!\!\text { if } 1 < l\leq L} ,\\
\end{array}\right.
\end{equation}
where at layer $l$, $\Wmat_{xh}^{(l)}\in \mathbb{R}^{K_{l} \times K_{l-1}}$ denotes the upward weight matrix, $\Wmat_{hh}^{(l)} \in \mathbb{R}^{K_{l} \times K_{l}}$ the forward weight matrix connecting the hidden states, and $\bv_3^{(l)}\in \mathbb{R}^{K_{l} }$ the bias vector.
Therefore, the variational parameters of $\sv_{t,n}^{(l)}$ are both non-linearly transformed with neural networks from the hidden state $\hv_{t,n}^{(l-1)}$ of layer $l-1$ at time $t$ and the hidden state $\hv_{t-1,n}^{(l)}$ of layer $l$ at time $t-1$, which are helpful for the inference network to take into account the temporal structure of the observed data.
While the sequential latent states of the inference model are similar to that of an RNN, the internal transition structure of an RNN is in general entirely deterministic. By contrast, the proposed model introduces uncertainty into the latent space to help better model the variability observed in highly structured HRRP data.
One can sample the Weibull distributed latent state in \eqref{weibu} using the reparameterization trick as
\begin{align}\label{sample_theta}
\small
\sv_{t,n}^{(l)} & \small = {\lambdav _{t,n}^{(l)}} \left(-\ln(1-{\epsilonv _{t,n}^{(l)}})\right) ^ {1/{\kv_{t,n}^{(l)}}},~
{\epsilonv_{t,n}^{(l)}}\sim \mbox{Uniform}(0,1).
\end{align}
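Putting \eqref{RNN_update_h}, \eqref{MLP1}--\eqref{MLP2}, and \eqref{sample_theta} together, a single-layer forward pass of the recurrent Weibull inference network can be sketched as follows; bias terms are omitted as in Fig.~\ref{fig:deep_dynamical_model}~(c), and the random weights and shapes are purely illustrative.

```python
import numpy as np

def softplus(a):
    return np.log1p(np.exp(a))   # ln(1 + exp(a)), as in Eqs. (MLP1)-(MLP2)

def infer_states(x_seq, W_xh, W_hh, W_hk, W_hl, rng):
    """Single-layer sketch of the recurrent variational inference network.

    x_seq has shape (V, T); returns sampled Weibull states of shape (K, T).
    At each step the hidden state h carries temporal information forward
    and parameterizes the Weibull shape k_t and scale lam_t."""
    K, T = W_hh.shape[0], x_seq.shape[1]
    h = np.zeros(K)
    s = np.zeros((K, T))
    for t in range(T):
        h = np.tanh(W_xh @ x_seq[:, t] + W_hh @ h)   # Eq. (RNN_update_h)
        k_t = softplus(W_hk @ h)                     # Weibull shape
        lam_t = softplus(W_hl @ h)                   # Weibull scale
        eps = rng.uniform(size=K)                    # Eq. (sample_theta)
        s[:, t] = lam_t * (-np.log(1.0 - eps)) ** (1.0 / k_t)
    return s

rng = np.random.default_rng(3)
V, K, T = 3, 2, 4
x_seq = rng.uniform(size=(V, T))
s = infer_states(x_seq, 0.1 * rng.normal(size=(K, V)),
                 0.1 * rng.normal(size=(K, K)),
                 rng.normal(size=(K, K)), rng.normal(size=(K, K)), rng)
```

The final line inside the loop is exactly the Weibull reparameterization of \eqref{sample_theta}, which keeps the sampled states non-negative while allowing gradients to flow through the network weights.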
For a standard VAE, the generative model parameters and the corresponding inference network parameters are typically jointly
optimized via SGD, seeking to maximize the ELBO in \eqref{ELBO_ori} with the standard backpropagation technique. Instead of finding a point estimate of the global parameters of the generative model as in VAEs, we adopt a hybrid MCMC/VAE inference algorithm that combines TLASGR-MCMC and the proposed recurrent variational inference network.
Specifically, the generative model parameters $\{ \Pimat ^{(l)}, \Phimat ^{(l)}\} _{1,L}$ can be sampled with TLASGR-MCMC in \eqref{TLASGR update_Pi} and the neural network parameters $\Omegamat=\{\Wmat_{xh}^{(l)}, \Wmat_{hh}^{(l)},\Wmat_{hk}^{(l)}, \Wmat_{h \lambda}^{(l)},\bv_1^{(l)}, \bv_2^{(l)}, \bv_3^{(l)}\}_{1,L}$ can be updated via SGD by maximizing the ELBO in \eqref{elbo}. Applying the reparameterization trick of the Weibull distribution in \eqref{sample_theta}, the gradient of the ELBO with respect to $\Omegamat$ can be evaluated with low variance.
In practice, a single Monte Carlo sample from $q\left( \sv_{t,n}^{(l)} \right) $ is enough to obtain satisfactory performance.
The proposed inference network is depicted in Fig.~\ref{fig:deep_dynamical_model}~{(c)} and the training strategy is outlined in Algorithm \ref{Algorithm}.
For a new HRRP sample at the testing stage, given the generative model parameters and inferred network parameters $\Omegamat$, we can directly obtain the conditional posteriors of latent states using the inference network, without performing iterations.
\section{Supervised Variational Temporal Deep Generative Model}
The {rGBN} discussed above is an unsupervised model, which can infer the hierarchical latent states from HRRP samples under the condition of no class information.
Although the latent states can be used together with a downstream classifier to make label predictions, it is often beneficial to learn a joint model that considers both the HRRP samples and corresponding labels to discover more discriminative representations.
Therefore, we further develop a supervised rGBN (s-rGBN), providing multilayer latent representations that are good for both HRRP generation and classification.
Denote the $n$th sequential HRRP sample as a pair $\{\xv_n,y_n\}$, where $y_n \in \{1,2,...,C\}$ is the ground-truth label of input $\xv_n$ and $C$ is the total number of target classes. Assume that the HRRP label is generated from a categorical distribution, {$y_{n}\,|\, \xv_n, \Omegamat\sim \mbox{Categorical}(p_{1n},...,p_{Cn})$}, written as
\begin{gather}\label{label}
p(y_n\,|\, \xv_n, \Omegamat) = \prod \limits_{c = 1}^C {p_{cn}} ^{\textrm{I}\{y_n=c\}},
\end{gather}
where $p_{cn}$ is the probability of the current input $\xv_n$ classified to label $c$, $\sum_{c = 1}^C p_{cn}=1$, and $\textrm{I}\{y_n=c\}$ is an indicator function which is equal to one if $y_n=c$ and zero otherwise.
Since the introduced model is able to mine the deep hierarchical structure from the HRRP data, where the weight matrices at different layers reveal different levels of abstraction,
we combine the latent states across all hidden layers to define $p_{cn}$.
Note that the sequential inputs from one HRRP sample share the same label, which makes its modeling different from a conventional sequence-to-sequence task addressed by RNNs.
We concatenate the latent state $\sv_{t,n}^{(l)}$ in \eqref{sample_theta} across all hidden layers and time steps to construct a latent feature vector of dimension $T\sum_{l = 1}^L {{K_l}} $, denoted as
\begin{gather}\label{feature}
\Smat_n = \left[(\sv_{1,n}^{(1)},...,\sv_{T,n}^{(1)}),...,(\sv_{1,n}^{(L)},...,\sv_{T,n}^{(L)})\right].
\end{gather}
Given $\Smat_n$, the label probability vector $\pv_{n}=(p_{1n},...,p_{Cn})$ is calculated with the softmax function as
\begin{gather}\label{softmax}
\small \pv_{n} =\left[ \frac{e^{\wv_1\Smat_n}}{\sum_{i=1}^{C} e^{\wv_i\Smat_n}},...,\frac{e^{\wv_c\Smat_n}}{\sum_{i=1}^{C} e^{\wv_i\Smat_n}},...,\frac{e^{\wv_C\Smat_n}}{\sum_{i=1}^{C} e^{\wv_i\Smat_n}}\right],
\end{gather}
where $\wv_c$ denotes the $c$th row of the learnable weight matrix $\Wmat_{sy}$. Since ${\Smat_n}$ is the concatenation of the latent states projected from $\xv_n$ using \eqref{weibu}, the label likelihood \eqref{label} can be rewritten as $\ p(y_n\,|\,\Wmat_{sy},\Smat_n)$. The generative model for both the observed HRRP samples and labels can be displayed in Fig. \ref{fig:deep_dynamical_model}{(d)}.
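The label probabilities in \eqref{softmax} reduce to a standard softmax over the class scores $\wv_c\Smat_n$; a numerically stable sketch, with an illustrative identity weight matrix, is:

```python
import numpy as np

def classify(S_n, W_sy):
    """Label probabilities for a concatenated state vector S_n, with one
    row w_c of W_sy per class; the max is subtracted for stability."""
    logits = W_sy @ S_n
    logits = logits - logits.max()   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

p = classify(np.array([1.0, 2.0, 3.0]), np.eye(3))
```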
Given the generative process, our proposed {s-rGBN} can be trained by maximizing the ELBO of the joint likelihood of the HRRP samples and labels, expressed as
\begin{align}\label{loss_function}
\small
&\resizebox{0.7\hsize}{!}{$
L =\sum\limits_{n = 1}^N {\sum\limits_{t = 1}^T {{E_{q\left( {\sv_{t,n}^{(1)}} \right)}}\left[ {\ln p\left( {\xv_{t,n}\,|\,{\Phimat ^{(1)}},\sv_{t,n}^{(1)}} \right)} \right]} }$}\notag\\
&~~~~\resizebox{0.95\hsize}{!}{$ -\sum\limits_{n = 1}^N \! {\sum\limits_{l = 1}^L \! {\sum\limits_{t = 1}^T {{E_{q\left( {\sv_{t,n}^{(l)}} \right)}} \! \left[
{ \ln \frac{\!{q \! \left( {\sv_{t,n}^{(l)}} \! \right)\!} }{ p\left( {\sv_{t,n}^{(l)}\,|\, \av_{t,n}^{\left( l \right)},1/{b^{\left( l \right)}}} \right)} + \ln p(y_n\,|\,\Wmat_{sy},\Smat_n)}\right]} } } $},
\end{align}
where the inference network for the latent states is the same as that of rGBN shown in Fig.~\ref{fig:deep_dynamical_model}~{(c)}. We also sample $\{ \Phimat ^{(l)},\Pimat ^{(l)} \}_{1,L}$ with TLASGR-MCMC, and update the neural network parameters $\Omegamat$ and classifier weight matrix $\Wmat_{sy}$ via SGD by maximizing the ELBO in \eqref{loss_function}, using the Adam optimizer~\cite{kingma2015adam}.
Moreover, based on the concatenation in \eqref{feature}, we notice that the supervised network enforces direct supervision for the latent states across all time steps and layers, improving the discriminativeness and robustness of the extracted latent states~\cite{lee2015deeply-supervised}.
At the testing stage, given the learned inference network and classifier, s-rGBN can be very efficient at predicting the label of an HRRP sample $\xv_m$, expressed as
\begin{gather}\label{predict}
\setlength{\abovedisplayskip}{6pt}
\hat{y}_m = \mathop {\arg \max } \limits_{c \in C} p\left(y_m=c\,|\,\wv_c,\Smat_{m}\right),
\end{gather}
where $\Smat_{m}$ is the concatenation of $\{\sv_{t,m}^{(l)}\}$, which can be sampled with the reparameterization trick shown in \eqref{sample_theta}.
\begin{algorithm}[!t]
\scriptsize
\caption{Hybrid stochastic-gradient MCMC and recurrent variational inference for {rGBN}.}
\begin{algorithmic}
\STATE Set the mini-batch size $M$, the number of layers $L$, the layer widths $\{K_l\}_{l=1}^{L}$, and the hyperparameters.
\STATE Initialize inference model parameters $\Omegamat$, generative model parameters $ \{ \Pimat^{(l)}, \Phimat^{(l)}\}_{1,L}$.
\FOR{$iter = 1,2, \cdots$ }
\STATE
Randomly select a mini-batch of $M$ HRRP samples to form
a subset $\Xmat = \{ \xv_m \}_{m=1}^{M}$;\\
Draw random noise $\{ {{\epsilonv _{t,m}^{(l)}}} \}_{t=1,m = 1,l=1}^{T,M,L}$ from the uniform distribution in \eqref{sample_theta};\\
Sample latent states $\{\sv _{t,m}^{\left( l \right)}\}_{t=1,m = 1,l=1}^{T,M,L}$ from \eqref{sample_theta};\\
Compute subgradient $ g = {\nabla _{\Omegamat}} L $ according to \eqref{elbo}, and update $\Omegamat$ using subgradient $g$;\\
\FOR{$l = 1,2, \cdots,L$ and $ k =1, 2, \cdots, K_l $ }
\STATE Update $M_{k}^{(l)}$ according to~\cite{cong2017deep}; then $\piv_{k}^{( l)}$ with \eqref{TLASGR update_Pi};\\
Update $\phiv_{k}^{( l)}$ similarly to \eqref{TLASGR update_Pi};\\
\ENDFOR\\
\ENDFOR\\
Return global parameters $\{\Omegamat, \{ \Pimat^{(l)},\Phimat^{(l)} \}_{l=1}^{L}\}$.
\end{algorithmic}\label{Algorithm}
\end{algorithm}
\section{Experimental Results}
\subsection{Results with Synthetic data}
To illustrate the proposed model and compare it to existing methods, we consider several three-dimensional synthetic datasets. Each toy dataset discussed below can be denoted as $\xv \in \mathbb{R}^{V \times T}$ with $V=3, T=100$.
$\bullet$ \textbf{Toy data 1:} To verify whether our proposed model can learn the transition matrix accurately, we generate the data with {rGBN-PRG}, where we
set the transition matrix to $\Pimat = \begin{bmatrix} 0.65&0.20\\0.35&0.80\\ \end{bmatrix}$, the initial latent state to $ \sv_{1}\! =\! \begin{bmatrix} 100 &0\\ \end{bmatrix} ^ \mathrm{T}$, and draw the weight matrix $\Phimat\in \mathbb{R}^{3 \times 2}$ from a Dirichlet distribution.
The latent states can be generated with $\sv_{t} \sim \mbox{Gam}\left( \Pimat \sv_{t-1}, 1\right)$, where $\sv_{t}\in \mathbb{R}^{2 \times 1}, T=100$. The non-negative observed data can be sampled from $\rv_{t}\sim \mbox{Pois}\left(\Phimat \sv_{t} \right),\xv_{t} \sim \mbox{Gam}\left(\rv_{t},1\right) $, where $\xv_{t}\in \mathbb{R}^{3 \times 1}$.
$\bullet$ \textbf{Toy data 2:}
{$x_{1,t}=t$, $x_{2,t}=2\exp(-t/15)+\exp(-((t-25)/10)^2)$, and $x_{3,t}= 5\sin(t^2) +6 $ for $t=1,\ldots,100$.}
$\bullet$ \textbf{Toy data 3:}
$x_{1,t} = t$, $x_{2,t} = 2\mbox{mod}(t,3)$, $x_{3,t}= 20\exp(-t/3)$ for $t=1,\ldots,50$, and
$x_{1,t} = 2t+30$, $x_{2,t} = 3\mbox{mod}(t,2)+5$, and $x_{3,t}= 30t\exp(-t)+10$ for $t=51,\ldots,100$, where $ \mbox{mod}(t,k)$ denotes the modulo operation that returns the remainder after division of $t$ by $k$.
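For reproducibility, Toy data 2 can be generated directly from its defining expressions; the helper name `make_toy2` is our own.

```python
import numpy as np

def make_toy2(T=100):
    """Generate Toy data 2 exactly as defined above; returns shape (3, T)."""
    t = np.arange(1, T + 1, dtype=float)
    x1 = t
    x2 = 2 * np.exp(-t / 15) + np.exp(-((t - 25) / 10) ** 2)
    x3 = 5 * np.sin(t ** 2) + 6
    return np.stack([x1, x2, x3])

X = make_toy2()
```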
Following previous work \cite{acharya2015nonparametric,gong2017deep,Adams2009}, we choose the particular expressions in Toy 2 and Toy 3 to check if our proposed model can capture such a complex temporal structure. We set the number of latent states as $K = 2$, and compare the proposed single layer {rGBN-PRG}, LDS, and HMM with both mean squared error (MSE) and prediction MSE (PMSE). MSE is measured between the estimated value and the ground truth for all the models at $t=1:T-1$, which is observed at the training stage; PMSE is measured between the predicted value and ground truth at the last time step, which is unobserved at the training stage.
Table~\ref{Tab:Results on Synthetic Data} illustrates the performance of different methods evaluated based on the simulated data.
It is clear that the proposed model provides satisfactory performance in both fitting and prediction for all datasets, which shows the benefits of capturing complex temporal structure in the latent space. For Toy 1, both {rGBN} and HMM are capable of discovering a transition matrix, $e.g.$, $\hat \Pimat=\begin{bmatrix} 0.712&0.164\\0.288&0.836\\ \end{bmatrix}$ in {rGBN} and
$\hat \Pimat=\begin{bmatrix} 0.924&0.018\\0.076&0.982\\ \end{bmatrix}$ in HMM, so the estimated transition matrix of the proposed model is closer to the ground truth than that of HMM.
For Toy 2, while LDS obtains lower MSE than the proposed model does, its PMSE is much worse, suggesting that LDS is prone to overfitting the training data. For Toy 3, the features of dimensions 2 and 3 have very complex temporal relationships. HMM performs worst on Toy 3, possibly because this complicated structure is difficult for HMM to model.
\linespread{1.15}
\begin{table}[!htbp]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-2.cm}
\centering
\caption{Results on Synthetic Data.}
\scalebox{0.85}{\begin{tabular}{ccccc}
\toprule[0.8pt]
Data& Measure & Our model & HMM & LDS \\
\hline
\multirow{2}*{\textbf{Toy1}} & MSE &\textbf{15.06} &20.08 &21.48 \\
& PMSE& \textbf{4.47} &9.83 &17.65\\
\hline
\multirow{2}*{\textbf{Toy2}} & MSE &2.11 & 27.59 & \textbf{1.21} \\
& PMSE& \textbf{2.47} &85.72 & 7.08\\
\hline
\multirow{2}*{\textbf{Toy3}} & MSE &\textbf{2.11} &53.98 &2.28 \\
& PMSE& \textbf{2.51} &250.69 &3.92\\
\bottomrule[0.8pt]
\end{tabular}\label{Tab:Results on Synthetic Data}}
\end{table}
\vspace{-0.5cm}
\subsection{{Measured HRRP data}}
A widely used HRRP dataset \cite{feng2017radar,du2011bayesian,Du2012Radar,XuBin}, consisting of the measurements from three real airplanes, is adopted to evaluate the performance of the proposed method. Yak-42 is a large and medium-sized jet aircraft, Cessna Citation is a small-sized jet aircraft, and An-26 is a medium-sized propeller aircraft.
The detailed parameters of the targets and measurement radar are presented in Table \ref{Tab:Parameters of radar and planes}.
We display the projections of the target trajectories onto the ground plane in Fig. \ref{fig:Projections}, where the measured data can be segmented into several parts. Following the experiment settings in previous studies \cite{feng2017radar,Du2012Radar,XuBin,duAGC}, the measured HRRP data used in our experiments has been divided into training and testing data. There are two preconditions for selecting the training and test data. First, the training data should contain almost all of the target-aspect angles of the test data. Second, the elevation angles of the training data should differ from those of the test data, so as to verify the generalization ability of the proposed model. Therefore, we take the second and fifth segments of Yak-42, the sixth and seventh segments of Cessna Citation, and the fifth and sixth segments of An-26 for training, and take the other segments for testing. There are in total 7800 HRRP samples for training, corresponding to 2600 profiles for each of the three classes, and 5200 HRRP samples for testing. The number of target classes is $C=3$.
\vspace{-0.1cm}
\subsection{Set up}
For the networks used in this paper, the elements of all weight matrices are initialized with Gaussian distributions whose standard deviations are set to $0.1$, and all bias terms are initialized to $0$.
For the proposed models, we set the mini-batch size $M$ as $100$.
The Adam optimizer \cite{kingma2015adam} with learning rate ${10}^{-4}$ is utilized for optimization. Here we only choose a linear SVM (LSVM)~\cite{libsvm} for classifying the representations extracted with unsupervised models, in order to highlight the discriminability of the learned features. Real 256-dimensional HRRP vectors in \eqref{real_HRRP} are utilized to train the non-dynamical models. We set $[K_{1},K_{2},K_{3}] = [30, 20, 10]$ for the proposed methods.
All experiments are implemented in {non-optimized software written in Python, on a Pentium PC with 3.7-GHz CPU and 64 GB RAM.}
Unless specified otherwise, this setting will be adopted across all experiments.
For the Bayesian generative model, to ensure that the corresponding random variables are drawn from non-informative priors and that all the posteriors are learned from data, we directly set $\{\eta^{(l)}=0.1,\nu^{(l)}=0.1,b^{(l)}=1,c=200\}$ without exhaustive tuning. Besides, we set the scaling parameter $\mu=100$, and segment the training and test data with a window function of window size $V=30$ and overlap $O=15$, leaving $T=16$ time steps for each HRRP sequence, since reasonable ranges for these parameters are available as a priori knowledge. Below, instead of exhaustively optimizing these parameters, we demonstrate the influence of each parameter on model performance.
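The windowing step can be sketched as follows; with $V=30$ and $O=15$ (stride $V-O=15$), a 256-dimensional HRRP vector yields $T=16$ windows. Dropping any window that would run past the end of the vector is our assumption about boundary handling.

```python
import numpy as np

def windowize(x, V=30, O=15):
    """Split a 1-D HRRP vector into overlapping windows.

    x: (D,) array; V: window size; O: overlap length. Returns a (T, V)
    array built with stride V - O; windows that would run past the end
    of x are dropped, so a few trailing cells may be unused."""
    step = V - O
    starts = range(0, len(x) - V + 1, step)
    return np.stack([x[s:s + V] for s in starts])

x = np.arange(256.0)      # one 256-dimensional HRRP sample
seq = windowize(x)        # -> 16 windows of length 30
```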
\linespread{1.3}
\begin{table}
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-2.cm}
\centering
\caption{Parameters of radar and planes in the ISAR experiment.}
\scalebox{0.8}{\begin{tabular}{cccc}
\toprule[0.8pt]
\multirow{2}*{\textbf{Radar parameters}} & \textbf{Center Frequency} & \multicolumn{2}{c}{5520 MHz}\\
&\textbf{Bandwidth} & \multicolumn{2}{c}{400 MHz}\\
\midrule[0.8pt]
\textbf{Planes}&\textbf{Length (m)}&\textbf{Width (m)}&\textbf{Height (m)}\\
\hline
{Yak-42} & 36.38 & 34.88 & 9.83\\
\hline
Cessna Citation &14.40 &15.90 &4.57\\
\hline
An-26& 23.80 &29.20& 8.58\\
\bottomrule[0.8pt]
\end{tabular}\label{Tab:Parameters of radar and planes}}
\end{table}
\vspace{-0.2cm}
\begin{figure}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-1cm}
\begin{center}
\centering
\includegraphics[height=3cm,width=8.8cm]{Projection.pdf}
\caption{Projections of the target trajectories onto the ground plane for (a) {Yak-42}, (b) Cessna Citation, and (c) An-26.}
\label{fig:Projections}
\end{center}
\end{figure}
\subsection{Influence of Model Parameters}
As shown in previous studies~\cite{feng2017radar,XuBin}, it is important to analyze how the recognition performance is influenced by the model parameters, which for the proposed models include the scaling parameter $\mu$, window size $V$, and overlap $O$.
\textbf{Scaling parameter $\mu$:}
As discussed in Section \ref{my_model}, a sequential HRRP sample can be modeled with the PRG distribution in \eqref{DPGDS_sample_PRG}; we refer to the corresponding models as rGBN-PRG and s-rGBN-PRG. Since the PRG link may be time-consuming due to its iterative procedure for inferring latent counts, we also discretize the HRRP sequences to produce counts (i.e., non-negative integers) with the scaling parameter $\mu$. The input counts can then be modeled with the Poisson distribution in \eqref{Poisson_observed}; we refer to the corresponding models as rGBN-Poisson and s-rGBN-Poisson.
Fig. \ref{fig:different_scalar} shows how the recognition performance of our proposed models varies with $\mu$. Clearly, a large value of $\mu$ leads to overfitting, while a small one discards too much detailed information of the HRRP data, resulting in underfitting. For our proposed models, although the PRG link generally performs better than the Poisson link, the Poisson link achieves a good compromise between performance and computation.
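A minimal sketch of the discretization feeding the Poisson link; the rounding convention is our assumption.

```python
import numpy as np

def to_counts(x, mu=100):
    """Scale a non-negative HRRP window by mu and round to integer counts,
    so the result can be modeled with a Poisson likelihood. A larger mu
    keeps more detail but risks overfitting; a smaller mu loses detail."""
    return np.rint(mu * np.asarray(x)).astype(np.int64)

window = np.array([0.0, 0.013, 0.25, 1.2])   # illustrative window values
counts = to_counts(window)
```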
\begin{figure}[!htbt]
\vspace{-0.3cm}
\begin{center}
\centering
\includegraphics[height=3.8cm,width=9cm]{scaling-rGBN.pdf}
\caption{Shown in (a)-(c) are the recognition performances of the single-layer, two-layer, and three-layer models, respectively, as a function of the scaling parameter $\mu$, where the horizontal lines show the recognition performance of the PRG link, and the curves show that of the Poisson link for various values of $\mu$.}
\label{fig:different_scalar}
\end{center}
\end{figure}
\textbf{Window size $V$:}
Fig. \ref{fig:different_v} shows the variation of the classification accuracies of dynamical models with the value of window size $V\in [10,50]$, where the overlap length is fixed as $O = V /2$.
As $V$ decreases, the resulting sequences become longer, imposing a higher computational and memory burden on all temporal models, thus increasing the inference difficulty and degrading their performance.
On the contrary, a large $V$ allows more information about the target to be contained in each subsequence $\xv_{t,n}$, but an overly large one may result in the loss of sequential information.
Fig. \ref{fig:different_v} shows that compared with RNN, our proposed models, which provide a stochastic generalization of the deterministic RNN by adding uncertainty into the latent space via a deep generative model, are more robust to the window length.
\begin{figure}[!htbt]
\begin{center}
\centering
\includegraphics[height=3.7cm,width=9cm]{windowsize-rGBN.pdf}
\caption{Shown in (a)-(c) are the recognition performances of the single-layer, two-layer, and three-layer models, respectively, as a function of the window size $V$ for the dynamical models.}
\label{fig:different_v}
\end{center}
\end{figure}
\textbf{Overlap $O$:}
After fixing the window size as $V=30$, we compare the performance of different dynamical methods as a function of the overlap length $O$, which varies from 5 to 25 range cells.
Because the redundancy of the segments is determined by the overlap length across the windows, a lower correlation between adjacent inputs is encouraged as the parameter $O$ decreases, which weakens the sequential relationship between time steps. Conversely, an improperly large overlap rapidly increases the sequence length, imposing a higher computational and memory burden and reducing performance. Besides, the proposed models are more robust to the overlap length than RNN.
\begin{figure}[!htbt]
\begin{center}
\centering
\includegraphics[height=3.7cm,width=9cm]{overlap-rGBN.pdf}
\caption{{Shown in (a)-(c) are the recognition performances of the single-layer, two-layer, and three-layer models}, respectively, as a function of the overlap $O$ for various dynamical models.}
\label{fig:different_overlap}
\end{center}
\end{figure}
\vspace{-0.6cm}
\subsection{Recognition Performance}
\vspace{-0.1cm}
\begin{table}[htbp]
\setlength{\abovecaptionskip}{-3pt}
\setlength{\belowcaptionskip}{-3pt}
\centering
\caption{Comparison of Recognition Accuracy}
\label{tab:Margin_settings}
\scalebox{0.8}{\begin{tabular}{ccc}
\toprule
\textbf{Non-dynamical Models} &\textbf{Size} &\textbf{Accuracy} \\
\hline
MCC \cite{du2005radar} &- & 59.00\\
AGC \cite{duAGC} &- & 85.20\\
LSVM \cite{libsvm} &- & 87.14\\
LDA \cite{yu2001a} &2 & 82.16\\
PCA \cite{Du2007Radar} &200 & 83.81\\
VAE \cite{kingma2014autoencoding} &200 & 87.84\\
\hline
\multirow{3}*{DBN \cite{hinton2006a}} &200 & 88.28\\
&200-100 & 88.51\\
&200-100-50 & 89.16\\
\toprule
\textbf{Dynamical Models} &\textbf{Size} &\textbf{Accuracy} \\
\hline
LDS \cite{WangLDS} &30 &87.65 \\
HMM \cite{Pan2012Multi} &30 & 87.24\\
{TARAN \cite{XuBin}} &30 &90.10\\
\hline
{TCNN \cite{wan2019convolutional}} & 1 $\times$ 9 (32-32-64) &92.57\\
\hline
\multirow{3}*{RNN \cite{graves2013speech}} &30 & 88.99\\
&30-20 & 89.77\\
&30-20-10 & 91.64\\
\hline
\multirow{3}*{rGBN-Poisson} &30 & 88.36\\
&30-20 & 89.22\\
&30-20-10 &90.23\\
\hline
\multirow{3}*{rGBN-PRG} &30 &88.66 \\
&30-20 &89.67 \\
&30-20-10 &90.58\\
\hline
\multirow{3}*{s-rGBN-Poisson} &30 &90.47 \\
&30-20 &91.87 \\
&30-20-10 &92.91\\
\hline
\multirow{3}*{s-rGBN-PRG} &30 &91.02 \\
&30-20 &92.52 \\
&30-20-10 &\textbf{93.54} \\
\bottomrule
\end{tabular}\label{Tab:results}}
\end{table}
Similar to Feng~et~al.~\cite{feng2017radar}, we evaluate the proposed models (rGBN and s-rGBN) against several commonly used recognition methods for HRRP, such as maximum correlation coefficient (MCC)~\cite{du2005radar}, adaptive Gaussian classifier (AGC)~\cite{duAGC}, and LSVM using the original HRRP dataset as input~\cite{libsvm}.
A variety of feature extraction methods, including linear discriminant analysis (LDA)~\cite{yu2001a}, PCA~\cite{Du2007Radar}, VAE~\cite{kingma2014autoencoding}, LDS~\cite{WangLDS}, and HMM~\cite{Pan2012Multi}, are also included for comparison.
To demonstrate the advantages of constructing deep dynamical systems, we further consider RNN~\cite{graves2013speech} and deep belief network (DBN)~\cite{hinton2006a}, where DBN can be viewed as a stack of restricted Boltzmann machines (RBMs) for modeling the binary hidden units in the lower layers. In addition to the basic RNN, we also compare the results of the proposed method with target-aware recurrent attentional network (TARAN) \cite{XuBin}. TARAN first utilizes RNN to explore the sequential relationship between the range cells within an HRRP sample, then employs an attention mechanism to weight up each hidden state and discover the target area. We further compare our
model with temporal one-dimensional convolution neural network (TCNN) \cite{wan2019convolutional}, where the convolution operation only takes place along the range dimension. Note TARAN \cite{XuBin} presents the recognition result of the time sequential HRRP only with one hidden layer (hidden dimension is $30$), achieving the performance of 90.10\%, and TCNN \cite{wan2019convolutional} achieves a recognition accuracy of 92.57\% with the structure of 32-32-64 (kernel size is $1 \times 9$ in each layer).
For dynamical models, we set the number of hidden states of LDS and HMM as $30$, and set that of the RNN the same as that of the proposed models.
The feature dimension of both PCA and VAE is set as 200, while that of DBN as $[K_{1},K_{2},K_{3}] = [200, 100, 50]$, which all belong to non-dynamical feature extraction models.
According to the literature~\cite{feng2017radar}, the hidden dimension of LDA can be set as $C-1=2$.
The extracted features from the training set with different unsupervised models are utilized to train the LSVM classifier, where the regularization parameter is five-fold cross-validated on the training set.
The softmax function shown in \eqref{softmax} is adopted to predict the labels of the testing samples for supervised models, including s-rGBN and RNN.
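The five-fold cross-validation of the regularization parameter can be sketched as follows. To keep the snippet self-contained, we substitute a one-vs-all ridge-regression classifier for the LSVM, so the classifier (and the synthetic features) are stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def five_fold_cv(X, y, cands, fit, score):
    """Return the candidate regularization value with the best mean
    5-fold validation accuracy."""
    folds = np.array_split(np.arange(len(y)), 5)
    best, best_acc = cands[0], -1.0
    for c in cands:
        accs = []
        for k in range(5):
            te = folds[k]
            tr = np.concatenate([folds[j] for j in range(5) if j != k])
            accs.append(score(fit(X[tr], y[tr], c), X[te], y[te]))
        if np.mean(accs) > best_acc:
            best, best_acc = c, float(np.mean(accs))
    return best

def fit(X, y, lam):
    # ridge regression onto one-hot labels (a stand-in for the LSVM)
    Y = np.eye(int(y.max()) + 1)[y]
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def score(W, X, y):
    return np.mean(np.argmax(X @ W, axis=1) == y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                       # illustrative features
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
best_lam = five_fold_cv(X, y, [0.01, 0.1, 1.0, 10.0], fit, score)
train_acc = score(fit(X, y, best_lam), X, y)
```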
Summarized in Table \ref{Tab:results} are the recognition results of various methods on the HRRP dataset.
The shallow dynamical models including LDS, HMM, and TARAN, which take into account the temporal information in HRRP, already clearly outperform other traditional models for HRRP RATR including AGC, MCC, LSVM, LDA, and PCA. Besides, the deep models tend to have better performance on classifying HRRP samples than shallow architectures do, for providing more discriminative hierarchical non-linear features.
As for DBN, although it builds a deep generative model, its performance is inferior to that of our proposed deep dynamical models, which is not surprising, as DBN ignores the temporal dependence in HRRP samples. Compared with TCNN, RNN, and TARAN, which build deterministic mappings and find point estimates of the global parameters, s-rGBN-PRG achieves better accuracy, demonstrating the effectiveness of the probabilistic framework and the hybrid Bayesian inference algorithm.
Note that our proposed s-rGBN-PRG and s-rGBN-Poisson perform better than the unsupervised rGBN-PRG and rGBN-Poisson, which verifies that introducing the label information into proposed models certainly benefits the recognition performance.
Though our proposed unsupervised rGBN schemes underperform supervised models, such as RNN at layer 3 and TCNN, they still can learn hierarchical latent states from HRRP data and obtain acceptable recognition rates in the absence of label information.
\setlength{\abovetopsep}{0.5ex}
\setlength{\belowrulesep}{0pt}
\setlength{\aboverulesep}{0pt}
\linespread{1.3}
\begin{table*}[!htbp]
\centering
\caption{The confusion matrices of the proposed s-rGBN at different layers, with the network architecture set as 30-20-10.}
\scalebox{0.8}{\begin{tabular}{c|ccc|ccc|ccc}
\toprule[0.8pt]
\textbf{Methods} & \multicolumn{3}{c|}{\textbf{ s-rGBN-layer1}} & \multicolumn{3}{c|}{\textbf{ s-rGBN-layer2}} &\multicolumn{3}{c}{\textbf{ s-rGBN-layer3}}\\
\cline{2-4}
\cline{5-7}
\cline{8-10}
& An-26 & Cessna Citation &{Yak-42} & An-26& Cessna Citation & {Yak-42} & An-26 & Cessna Citation & {Yak-42}\\
\midrule[0.8pt]
An-26& 86.17 &8.63 & 5.20& 87.87 &7.08 &5.05 & 89.39 &6.30 &4.31 \\
\hline
Cessna Citation &5.30 &93.45 &1.25 &4.30 &94.40 & 1.30 & 4.05 &95.30 &0.65 \\
\hline
{Yak-42} &3.08 &2.17 & 94.75 & 1.84 &1.33 &96.83 & 1.57 & 1.18 &97.25 \\
\hline
Average Recognition Rates& \multicolumn{3}{c|}{91.02} &\multicolumn{3}{c|}{92.52} & \multicolumn{3}{c}{\textbf{93.54}} \\
\bottomrule[0.8pt]
\end{tabular}\label{Tab:confusion matrices}}
\end{table*}
To investigate the performance of the methods on different targets, we list the confusion matrices and average recognition rates in Table \ref{Tab:confusion matrices}.
Each row of the confusion matrix corresponds to an actual class and each column to a predicted class. For ease of comparison, we report each entry as a ratio rather than a raw count. Specifically, the confusion matrix of each method is $P \in \mathbb{R}_{+}^{C\times C}$ with $P_{c_1,c_2} = {N_{c_1,c_2}}/{N_{c_1}}$, where $N_{c_1}$ denotes the number of HRRP samples in category $c_1$, $N_{c_1,c_2}$ denotes the number of samples whose actual class is $c_1$ but which are classified into class $c_2$, and $\sum_{c_2=1}^C P_{c_1,c_2} = 1$. There is a clear trend that the classification performance of s-rGBN increases with more layers, where the accuracy gains mainly come from the improvement at layer 2 for Yak-42 and Cessna Citation and at layer 3 for An-26.
The An-26 is a propeller aircraft whose waveform fluctuates more strongly, and hence its features may be much more overdispersed than those of the Cessna Citation and Yak-42. s-rGBN, whose learned neurons become more general as the number of layers increases, may be more robust to such fluctuation in higher layers and hence learn discriminative features that improve the accuracy.
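The row normalization $P_{c_1,c_2} = N_{c_1,c_2}/N_{c_1}$ can be sketched as follows; the count matrix here is illustrative, not the measured data.

```python
import numpy as np

def normalize_confusion(counts):
    """Row-normalize raw confusion counts N[c1, c2] so that each row
    sums to one: P[c1, c2] = N[c1, c2] / N[c1]."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

# illustrative counts for (An-26, Cessna Citation, Yak-42)
N = np.array([[894,  63,  43],
              [ 40, 953,   7],
              [ 16,  12, 972]])
P = normalize_confusion(N)
avg_rate = np.mean(np.diag(P)) * 100   # average recognition rate in percent
```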
\subsection{Qualitative Analysis}
As discussed in Section III, in comparison to RNN that builds temporal non-linearity via ``black-box'' neural networks and other dynamical methods that only model single-layer temporal latent states, our proposed models can provide the desired interpretation, for being a deep Bayesian dynamical generative model.
In particular, in addition to quantitative evaluations, we further visualize the learned neurons, reconstruction examples, latent space interpolations on the HRRP test set, and learned latent states, at each layer.
\textbf{The structured neurons:}
To visualize the multilayer s-rGBN, it is straightforward to project the neuron $\phiv_{k}^{(l)}$ of layer $l$ to the bottom data layer.
Specifically, we first choose a node at the top layer, with a large coefficient, then grow the tree downward to include any lower-layer neurons that are connected to the node with non-negligible weights. For $\phiv_{k}^{(l)}$, we plot all its terms whose values are larger than 1\% of the largest element of the corresponding $\phiv_{k}^{(l)}$.
As shown in Fig.~\ref{fig:dictionary}, we find that the neurons at layer 1 are fundamental, similar to the original echo per range cell in HRRP.
As layer $l$ increases, the neurons become increasingly general or more sparsely utilized; they consist of several echoes from different range cells of the HRRP, covering longer-range temporal dependencies. To sum up, our proposed model can not only extract meaningful neurons at each layer but also capture the relationships between the neurons of different layers.
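Projecting a higher-layer neuron to the data layer amounts to multiplying down through the loading matrices and applying the 1% threshold mentioned above; a sketch with random loadings (the Dirichlet initialization and matrix shapes are our assumptions for illustration):

```python
import numpy as np

def project_neuron(Phis, l, k, thresh=0.01):
    """Project neuron k of layer l down to the data layer.

    Phis: list [Phi1, ..., PhiL] where Phi^(l) maps layer-l states down
    one layer. Entries below thresh * max are zeroed, following the
    visualization rule described in the text."""
    v = np.linalg.multi_dot(Phis[:l])[:, k] if l > 1 else Phis[0][:, k]
    v = v.copy()
    v[v < thresh * v.max()] = 0.0
    return v

rng = np.random.default_rng(0)
Phi1 = rng.dirichlet(np.ones(256), size=30).T    # 256 x 30
Phi2 = rng.dirichlet(np.ones(30), size=20).T     # 30 x 20
Phi3 = rng.dirichlet(np.ones(20), size=10).T     # 20 x 10
top = project_neuron([Phi1, Phi2, Phi3], l=3, k=0)
```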
\begin{figure}[!htbt]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-1cm}
\begin{center}
\centering
\includegraphics[height=3cm,width=6.8cm]{Visualization_Neurons.pdf}
\caption{Visualization and hierarchy of example neurons in different layers. Neurons at layers 3, 2, and 1 are shown in blue, green, and red boxes, respectively.}
\label{fig:dictionary}
\end{center}
\end{figure}
\textbf{Reconstruction:}
In Fig. \ref{fig:reconstruction_all}, we show the reconstruction examples for the three airplane targets at layers 1, 2, and 3.
To reconstruct the HRRP samples, {given the learned global parameters $\{\Omegamat, \{ \Phimat^{(l)},\Pimat^{(l)} \}_{l=1}^{L}\}$}, we need to find the conditional posterior of the latent state $\sv_{t,n}^{(l)}$, whose variational parameters can be directly transformed from the observed HRRP examples using the neural networks in \eqref{MLP1}, \eqref{MLP2}, and \eqref{RNN_update_h}.
Then we sample the latent states using the reparameterization trick in \eqref{sample_theta}, where a single Monte Carlo sample from $q\left( \sv_{t,n}^{(l)} \right) $ is enough to obtain satisfactory performance.
We find that our model can not only retain the main structural information of the original test HRRP samples but also reconstruct the details of the targets in each sequential HRRP.
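As a sketch of the reparameterized sampling step, we assume a Weibull variational posterior, a common choice for inference networks over gamma-distributed latent states in related work; the exact form used by the model is the one given in \eqref{sample_theta}, so the distribution choice here is an assumption.

```python
import numpy as np

def sample_weibull(k, lam, rng):
    """Reparameterized Weibull draw: s = lam * (-ln(1 - u))**(1/k),
    u ~ Uniform(0, 1). The sample is differentiable with respect to the
    variational parameters (k, lam), which is what makes a single Monte
    Carlo sample per latent state usable for gradient-based training."""
    u = rng.uniform(size=np.shape(k))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
k = np.full(20, 2.0)        # shape parameters from the inference network
lam = np.full(20, 1.5)      # scale parameters from the inference network
s = sample_weibull(k, lam, rng)   # one Monte Carlo sample of a layer state
```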
\begin{figure}[!htbt]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-10cm}
\begin{center}
\centering
\includegraphics[height=6.9cm,width=8.3cm]{reconstruction.pdf}
\caption{The reconstruction performance of s-rGBN for three testing samples at different layers.
\textbf{\textrm{Columns 1-3}}: The reconstructed samples of An-26, Cessna Citation, and Yak-42, respectively;
\textbf{\textrm{Rows 1-3}}: The reconstructed samples based on s-rGBN layer-1, layer-2, and layer-3, respectively.
}
\label{fig:reconstruction_all}
\end{center}
\end{figure}
\textbf{Latent space interpolations:}
One might want to investigate whether s-rGBN has indeed extracted abstract representations of the HRRP data.
Following previous ideas \cite{Zhang2018WHAI,dumoulin2017adversarially}, we present the latent space interpolations on the HRRP test set examples.
Given two HRRP sequences $\xv_{1:T,1}$ and $\xv_{1:T,2}$, we project them into $\sv_{1:T,1}^{(1:3)}$ and $\sv_{1:T,2}^{(1:3)}$, with the 3-layer model learned before.
Using linear interpolation between $\sv_{1:T,1}^{(l)}$ and $\sv_{1:T,2}^{(l)}$ at layer $l$, we can produce new sequences by passing the intermediate points through the
dynamical generative model.
In Fig. \ref{fig:latent}, we display the reconstruction results of $\sv_{1:T,1}^{(3)}$ and $\sv_{1:T,2}^{(3)}$ in (a) and (f), respectively, and
the generated sequences from the linearly interpolated $\sv$-values in (b)-(e).
The proposed model can generate realistic and interpretable HRRP sequences
for all interpolated $\sv$-values. In other words, the inferred latent space of the model is on a manifold, indicating our proposed model has learned a generalizable latent state representation instead of concentrating its probability mass around the HRRP training samples.
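The interpolation procedure itself is simple; a sketch whose shapes follow the $K_3=10$, $T=16$ setting (the gamma draws stand in for inferred latent states):

```python
import numpy as np

def interpolate_states(s_a, s_b, n_steps=4):
    """Linear interpolation between two latent-state sequences.

    s_a, s_b: (K, T) arrays. Returns a list of n_steps + 2 sequences
    running from s_a to s_b (endpoints included); each intermediate
    sequence is then passed through the dynamical generative model."""
    alphas = np.linspace(0.0, 1.0, n_steps + 2)
    return [(1.0 - a) * s_a + a * s_b for a in alphas]

rng = np.random.default_rng(0)
s_a = rng.gamma(1.0, 1.0, size=(10, 16))   # layer-3 states of sequence 1
s_b = rng.gamma(1.0, 1.0, size=(10, 16))   # layer-3 states of sequence 2
path = interpolate_states(s_a, s_b)
```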
\begin{figure}[!htbt]
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-0.3cm}
\begin{center}
\centering
\includegraphics[height=4.3cm,width=7.7cm]{latent_states.pdf}
\caption{Latent space interpolations on HRRP test set. (a) and (f) are the samples generated from $\sv_{1:T,1}^{(3)}$ and $\sv_{1:T,2}^{(3)}$, respectively, and the others are generated from the latent states interpolated linearly from $\sv_{1:T,1}^{(3)}$to $\sv_{1:T,2}^{(3)}$.}
\label{fig:latent}
\end{center}
\end{figure}
\textbf{Latent states:}
To compare the latent states learned from different methods, we visualize high-dimensional features by mapping them to the two-dimensional subspace with $t$-distributed stochastic neighbor embedding ($t$-SNE). It is a non-linear dimensionality reduction technique well-suited for embedding high-dimensional data into a low-dimensional space~\cite{dermaaten2008TSNE}. As shown in Fig. \ref{fig:tsne}, each dot represents an HRRP sample and each color-shape pair denotes a category.
We visually illustrate that the features learned by the proposed supervised method are more discriminative than those learned by the unsupervised method and RNN. This demonstrates the benefit of learning a supervised hierarchical probabilistic model that jointly considers target and label generation.
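A sketch of the $t$-SNE projection, assuming scikit-learn's `TSNE`; the features here are synthetic stand-ins for the learned representations, and the perplexity value is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

# illustrative features: three classes in a 30-dim space (not the real data)
rng = np.random.default_rng(0)
centers = rng.normal(scale=4.0, size=(3, 30))
feats = np.vstack([c + rng.normal(size=(20, 30)) for c in centers])
labels = np.repeat([0, 1, 2], 20)

# map the high-dimensional features to a 2-D embedding for plotting
emb = TSNE(n_components=2, perplexity=10, init="pca",
           random_state=0).fit_transform(feats)
```

Each row of `emb` is then scattered with a color-shape pair per class, as in the figure below this passage in the paper.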
\begin{figure}[!htbt]
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-0.3cm}
\begin{center}
\centering
\includegraphics[height=5.5cm,width=7.7cm]{tsne.pdf}
\caption{Two-dimensional $t$-SNE projection of the test HRRP samples and their corresponding features at different layers.
Shown in (a) are the original test HRRP samples, and shown in (b)-(d) are learned features in rGBN, RNN, and s-rGBN, respectively.}
\label{fig:tsne}
\end{center}
\end{figure}
\vspace{-0.3cm}
\subsection{{Robustness to Training Data Size}}
\begin{figure}[!htbt]
\vspace{-0.2cm}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-7cm}
\begin{center}
\centering
\includegraphics[height=2.7cm,width=5.8cm]{training_size.pdf}
\caption{Comparison of the recognition performance of various methods with different HRRP training set sizes.}
\label{fig:trainingsize}
\end{center}
\end{figure}
\vspace{-0.1cm}
It is worth pointing out that a practical RATR system should provide an acceptable recognition rate even with a few training samples~\cite{feng2017radar,WangLDS}, such as in non-cooperative circumstances. In Fig. \ref{fig:trainingsize}, we depict how the recognition performance of each method varies with the size of the training dataset.
With a relatively small training dataset, the accuracy of the deterministic RNN drops sharply; a possible explanation for this phenomenon is
that a point estimate obtained by SGD ignores model uncertainty, and the kind of variability observed in highly structured data is poorly modeled by the output probability model alone. The parameters in the generative model of VAE (the weights) are likewise updated by SGD towards a point estimate, leading to an obvious decrease in test accuracy.
DBN, whose weights are updated with Gibbs sampling, is robust to small training sets, but it does not efficiently incorporate both the between-layer and temporal dependencies of the observed HRRP samples, leading to lower accuracy than that of the proposed methods.
By contrast, by exploiting temporal correlations, the proposed models achieve higher accuracy and maintain acceptable performance even when the training data size becomes much smaller.
We attribute that to the following three advantages: 1) building the novel dynamical probabilistic model to describe the complicated sequential HRRP samples;
2) developing the hybrid stochastic gradient MCMC and recurrent variational inference to update model parameters and provide model uncertainty;
3) fusing multi-stochastic-layer features to enhance robustness.
\subsection{Computational Complexity}
\begin{figure}[!htbt]
\vspace{-0.2cm}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-0.5cm}
\begin{center}
\centering
\includegraphics[height=3.5cm,width=4.2cm]{training_time.pdf}
\caption{Comparison of the test recognition performance as a function of training time between various methods.
}
\label{fig:training_time}
\end{center}
\end{figure}
\vspace{-0.3cm}
\begin{table}[!htbp]
\setlength{\abovecaptionskip}{-4pt}
\setlength{\belowcaptionskip}{-4pt}
\centering
\caption{Comparison of computational cost at the testing stage.}
\scalebox{0.8}{\begin{tabular}{cc}
\toprule
\textbf{Models} &\textbf{Test time per sample (s)} \\
\hline
LDS & 0.6035 \\
HMM & 0.4412\\
VAE & 0.0006\\
RNN layer-3 & 0.0008\\
s-rGBN-Poisson layer-3 & 0.0010\\
s-rGBN-PRG layer-3 &0.0010 \\
\bottomrule
\end{tabular}\label{Tab:Time}}
\end{table}
In this section, we evaluate the computational complexity of various methods.
In Fig. \ref{fig:training_time}, we compare various methods in terms of how their test recognition rates vary as training time increases.
At each training iteration, all training HRRP samples are processed for both LDS and HMM, while for RNN, VAE, s-rGBN-PRG, and s-rGBN-Poisson, a mini-batch is randomly selected for training. As shown in Fig. \ref{fig:training_time}, both s-rGBN-PRG and s-rGBN-Poisson outperform RNN and VAE as training progresses. Note that RNN is only a deterministic recurrent network, providing neither a probabilistic generative model in the latent space to model uncertainty nor a way to discover latent hierarchical structure, and VAE is restricted to modeling non-sequential data. Our mini-batch based inference algorithm converges faster than batch-based ones (including LDS and HMM), which further demonstrates the advantage of our proposed hybrid MCMC/VAE inference algorithm.
In Table \ref{Tab:Time}, we compare the computational complexity of various algorithms at the testing stage. Both HMM and LDS have high computational cost, due to the need to perform a number of iterations to infer the latent representation of each test sample. By contrast, by directly mapping a test sample into its latent representation via non-linear transformation, RNN, VAE, and the proposed models equipped with the feed-forward inference network are able to process the test data in real time.
\section{Conclusion}
For radar high-resolution range profile (HRRP) target recognition, we introduce recurrent gamma belief network (rGBN), a temporal deep generative model, to
efficiently capture the global structure of the targets and temporal dependence between the range cells in a single HRRP.
Scalable inference for rGBN is developed by integrating stochastic gradient MCMC
and recurrent variational inference into a hybrid inference scheme.
We further propose supervised rGBN to increase the discriminative power of the latent states by
jointly modeling the HRRP samples and their labels.
Experimental results on synthetic and measured HRRP data demonstrate that in comparison to existing models, the proposed ones not only exhibit
superior recognition performance and enhanced robustness to the variation of the training set size, but also provide highly interpretable latent structure.
\normalem
\section{Introduction}
Quantum computation is an attractive field concerned with the precise control of quantum systems. However, quantum systems are inevitably influenced by the decoherence induced by their environment, which stands in the way of the physical implementation of quantum computation. To overcome this difficulty, many strategies have been developed to correct or avoid errors during the implementation of gate operations. The geometric phases \cite{Berry1984, Aharonov1987} and their non-Abelian extensions, quantum holonomies \cite{Wilczek1984, Anandan1988}, which accompany the evolution of quantum systems, unveil important geometric structures in the description of the dynamics of quantum states. Geometric phases depend only on the global geometric properties of the evolution paths, so they are largely insensitive to many types of local noise. Based on this distinct merit, holonomic quantum computation (HQC), first proposed by Zanardi and Rasetti \cite{Zanardi1999}, has emerged as a promising strategy for implementing universal quantum computation in a robust way \cite{Duan2001a, Zhu2003a, Zhang2006, Thomas2011}.
It is well known that geometric phases, either Abelian or non-Abelian, can arise in both adiabatic and non-adiabatic evolutions. Adiabatic evolution requires that the quantum evolution fulfill the famous adiabatic condition, and thus quantum gates constructed in this way are generally very slow. This is unacceptable, as the time needed for an adiabatic quantum gate may be on the order of the coherence time of the quantum two-level systems used in typical setups \cite{Wang2001,Zhu2002}. Therefore, one needs to resort to quantum gates based on non-adiabatic evolution \cite{Wang2001,Zhu2002}, which does not require the adiabatic condition, so that the gate speed depends only on the merits of the employed quantum systems. In this case, the built-in fault tolerance of non-adiabatic geometric phases provides a more practical way of implementing quantum computation. Accordingly, much renewed effort has recently been devoted to this direction, both theoretically and experimentally \cite{Sjoqvist2012, Abdumalikov2013, Feng2013}.
Meanwhile, as another promising way to avoid the effect of decoherence, a decoherence-free subspace (DFS) can suppress the collective dephasing noise caused by the interaction between quantum systems and their environment \cite{Duan1997a, Zanardi1997, Lidar1998, Kielpinski2001, Ollerenshaw2003, Bourennane2004}. Therefore, much effort has been devoted to combining HQC with DFS encoding \cite{Wu2005, Xu2012, Liang2014a, Zhang2014d, xu2014}. In this way, one can consolidate the best of the two quantum computation strategies, i.e.,
the resilience of the DFS approach against environment-induced collective decoherence and the operational robustness of HQC against local noises.
Here, we propose a non-adiabatic HQC (NHQC) scheme with DFS encoding based on nitrogen-vacancy (NV) centers commonly coupled to the whispering-gallery mode (WGM) of a fused-silica microsphere cavity. The NV center system is considered a promising candidate for the physical implementation of quantum computation, due to its sufficiently long electronic spin lifetime as well as the possibility of coherent manipulation even at room temperature \cite{Stoneham2009, shi2010}. Since the electronic spins of the NV centers can be well initialized and manipulated optically, quantum gates acting on the single-spin state can be obtained with very high efficiency \cite{Jelezko2004, Jiang2009}. Based on the symmetry structure of the system-environment interaction, we propose an encoding of the logical qubits in the DFS, where any logical qubit state that undergoes a cyclic evolution ends up in a new state in the same subspace, without ever leaving the subspace. Moreover, based on numerical simulations under realistic conditions, we show that our NHQC scheme in the DFS can realize a universal set of quantum gates with high fidelity.
\begin{figure}[tbp]
\centering
\includegraphics[width= 9cm]{fig1}
\caption{(a) Schematic setup of the fused-silica microsphere cavity, where
$N$ identical NV centers (red dots) in diamond nanocrystals are equidistantly attached around the equator of the cavity. (b) Level diagram of an NV center, where $g$ ($\Omega _{L}$) is the coupling strength between the NV center and the WGM (laser pulse). We encode a physical qubit in the subspace spanned by the states $m_{s}=0$ and $m_{s}=-1$.} \label{setup}
\end{figure}
\section{Setup and effective Hamiltonian}
The setup we consider and the energy level configuration of the NV centers are schematically shown in Fig. \ref{setup}. In the cavity, the lowest-order WGM, corresponding to light traveling around the equator of the microsphere, offers exceptional mode properties for reaching strong light-matter coupling. As shown in Fig. \ref{setup}(a), $N$ NV centers, from separate diamond nano-crystals, are strongly coupled to the WGM of a microsphere cavity \cite{Park2006, Barclay2009}. The NV centers we consider can be modeled as three-level systems, as shown in Fig. \ref{setup}(b), where the states $\left\vert ^{3}A,m_{s}=0\right\rangle $ and $\left\vert ^{3}A,m_{s}=-1\right\rangle $ are encoded as our qubit states $\left\vert 0\right\rangle $ and $\left\vert 1\right\rangle $, respectively. The state $\left\vert ^{3}E,m_{s}=0\right\rangle $ is labeled $\left\vert e\right\rangle $, and we leave aside the metastable $^{1}A$ state, which has not yet been fully understood \cite{Manson2006}. In our implementation, the optical transitions $\left\vert 0\right\rangle \rightarrow \left\vert e\right\rangle$ and $\left\vert 1\right\rangle \rightarrow\left\vert e\right\rangle $ (with transition frequencies $\omega_{e0}$ and $\omega_{e1}$) are coupled by the WGM with frequency
$\omega _{c}$ and by a classical laser field with frequency $\omega _{L}$ \cite{Santori2006}, respectively. Both couplings are far off resonance from their respective transition frequencies, so that the $|e\rangle$ state can be adiabatically eliminated. Note that although the state $\left\vert ^{3}A,m_{s}=1\right\rangle$ is degenerate with the qubit state $\left\vert ^{3}A,m_{s}=-1\right\rangle$, these two degenerate states can be selectively addressed by the polarization of the optical field \cite{Santori2006}. The NV centers are fixed, and the distance between two NV centers is much larger than the wavelength of the WGM, so that each driving laser field, with frequency $\omega_L$ and initial phase $\varphi_j$, interacts individually with a single NV center, and the direct coupling among the NV centers can be neglected. The Hamiltonian of the whole quantum system, in units where $\hbar =1$, can then be written as
\begin{eqnarray}\label{hs}
H_0&=&\omega_c a^{\dag} a + \sum_i \omega_i |i\rangle\langle i|,\notag\\
H_{S}&=&H_0 +\sum _{j=1}^{N}\left[\eta_{j} a|e\rangle_j\langle 0|+
\Omega_{L,j}e^{-i(\omega_{_L} t+\varphi_j)}|e\rangle_j\langle 1| + \text{H.c.}\right],
\end{eqnarray}
where $a^{\dag}$ ($a$) is the creation (annihilation) operator of the WGM of the cavity, $\omega_i$ is the frequency of the $i$th energy level of the identical NV centers with $i\in\{0, 1, e\}$, and $\eta_j$ ($\Omega_{L,j}$) is the coupling strength between the $j$th NV center and the cavity (laser). In the interaction picture with respect to the free Hamiltonian $H_0$, under the rotating-wave approximation (RWA), the interaction can be written as
\begin{equation} \label{int}
H_{I}=\sum _{j=1}^{N} g_{j}\left(a\sigma _{j}^{+}e^{-i (\delta_{\pm,j} t -\varphi_{j})}+\text{H.c}.\right),
\end{equation}
where we have assumed $\eta_j=G$ for simplicity, the effective cavity-assisted interaction strengths are $g _{j}=G \Omega _{L,j}(\frac{1}{\Delta+\delta _{\pm,j}}+\frac{1}{\Delta})$ with $\Delta=\omega _{e0}-\omega
_{c}$, $\sigma _{j}^{+}=\left\vert 1\right\rangle_{j} \left\langle 0 \right\vert $,
$\sigma _{j}^{-}=\left\vert 0\right\rangle_{j} \left\langle 1\right\vert $, and $\delta_{\pm,j}=\pm(\omega _{c}-\omega _{10}- \omega_{L,j})$.
When two laser fields are applied to a pair of NV centers (say, the $m$th and the $n$th), one can obtain the effective Hamiltonian, under the condition $\delta_{\pm}\gg g_{mn}$, as
\begin{eqnarray}\label{eff1}
H_{m,n}=\sum \limits_{j=m,n} \frac{g^{2}_{mn}}{\delta_{\pm}} (-a^{\dag}a|0\rangle_{j}\langle 0|+aa^{\dag}|1\rangle_{j}\langle 1|)
+\frac{g^{2}_{mn}}{\delta_{\pm}}(e^{i\varphi_{mn}}\sigma_{m}\sigma^{+}_{n}+\mathrm{H.c.}),
\end{eqnarray}
where $\varphi_{mn}=\varphi_{m}-\varphi_{n}$. Neglecting the level shift terms, which can be compensated by using additional lasers \cite{Tamarat2006, Acosta2012}, the effective Hamiltonian between the two NV centers reduces to
\begin{eqnarray}\label{H}
H_{m,n}=\frac{g^{2}_{mn}}{\delta_{\pm}}(e^{i\varphi_{mn}}\sigma_{m}\sigma^{+}_{n}+\mathrm{H.c.}).
\end{eqnarray}
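The structure of Eq. (\ref{eff1}) can be understood from a standard second-order (adiabatic-elimination) argument; the following annotated sketch is our own reading of that derivation, written in the notation above, and is not spelled out in this form in the text. With the cavity only virtually excited, each off-resonant term of Eq. (\ref{int}) contributes an ac-Stark shift, while the cross terms between centers $m$ and $n$ produce a cavity-mediated flip-flop coupling:

```latex
% Second-order time-averaging of Eq. (2) for two driven centers m, n with
% a common detuning magnitude |\delta_{\pm}| >> g_{mn} (sketch):
\begin{eqnarray*}
H_{\mathrm{eff}} &\approx&
\underbrace{\sum_{j=m,n}\frac{g_{mn}^{2}}{\delta_{\pm}}
\left(a a^{\dagger}\,|1\rangle_{j}\langle 1|
      - a^{\dagger}a\,|0\rangle_{j}\langle 0|\right)}_{\text{ac-Stark shifts}}
\;+\;
\underbrace{\frac{g_{mn}^{2}}{\delta_{\pm}}
\left(e^{i\varphi_{mn}}\sigma_{m}^{-}\sigma^{+}_{n}
      +\mathrm{H.c.}\right)}_{\text{cavity-mediated flip-flop}}.
\end{eqnarray*}
% With the cavity kept in vacuum, the shift terms act as constants on the
% qubit subspace and, once compensated by additional lasers, leave Eq. (4).
```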
With this Hamiltonian, we next show how a universal set of non-adiabatic holonomic
gates can be implemented.
\section{Non-adiabatic holonomic One-qubit logical gates}
Now we turn to constructing universal single-logical-qubit operations for the NHQC in the DFS. It is noted that a logical qubit consisting of two physical qubits is not sufficient for a dephasing environment \cite{Xu2012}. We therefore utilize three physical qubits to encode a logical qubit. The interaction between the three physical qubits (NV centers) and the dephasing environment can be described by the interaction Hamiltonian $H_{I}=S^{z}\otimes{E}$, where
$S^{z}=\sum^{3}_{i=1}\sigma_{i}^{z}$ is the collective dephasing operator and $E$ is an arbitrary environment operator. In this case, there exists a three-dimensional DFS $$S_1=\{|100\rangle,|001\rangle,|010\rangle\},$$
where the computational basis, i.e., the logical qubit states, are
encoded as $|0\rangle_{L}=|100\rangle$, $|1\rangle_{L}=|001\rangle$, and $|a_{1}\rangle=|010\rangle$ is used as a third ancillary state.
In order to implement single-logical-qubit operations, we apply the operation $U_{12}(\tau_1)=\exp[-i \int_0^{\tau_1} H_{12}\, dt]$, which can be obtained from Eq. (\ref{H}) for an operation time $\tau_{1}$, to NV centers $1$ and $2$ by tuning laser beams with the same detuning $\delta_+$ from their respective transitions and phase difference $\varphi=\varphi_{1}-\varphi_2$. Meanwhile, we also apply $U_{23}(\tau_1)=\exp[-i \int_0^{\tau_1} H_{23}\, dt]$ to NV centers $2$ and $3$ by tuning laser beams with the same detuning $\delta_-$ and identical phases.
Then, in the DFS $S_1$ the effective Hamiltonian reads
\begin{eqnarray}\label{h1}
H_{1}=\lambda_{1}\left(\sin\frac{\theta}{2}e^{i\varphi}|a_{1}\rangle_{L}\langle0|
-\cos\frac{\theta}{2}|a_{1}\rangle_{L}\langle1|\right) +\mathrm{H.c.},
\end{eqnarray}
where the effective Rabi frequency
$\lambda_{1}=\sqrt{|g_{12}|^{4}+|g_{23}|^{4}}/\delta$ with $\delta=|\delta_{\pm}|$
and the mixing angle $\theta=2 \arctan\left(|g_{12}|^{2}/|g_{23}|^{2}\right)$ can be tuned by the amplitudes of the incident lasers. This Hamiltonian, involving the WGM and three physical qubits (three NV centers), is of the $\Lambda$ type, with the ancillary logical state $|a_{1}\rangle_{L}$ at the top and the logical qubit states $|0\rangle_{L}$ and $|1\rangle_{L}$ at the bottom. In the dressed-state representation, we introduce the dark and bright states
\begin{eqnarray}
|d\rangle_{L}&=&\cos\frac{\theta}{2}|0\rangle_{L}+\sin\frac{\theta}{2}e^{i\varphi}|1\rangle_{L}, \notag\\
|b\rangle_{L}&=&\sin\frac{\theta}{2}e^{-i\varphi}|0\rangle_{L}-\cos\frac{\theta}{2}|1\rangle_{L}.
\end{eqnarray}
It is obvious that the dark state $|d\rangle_{L}$ decouples from both the bright state $|b\rangle_{L}$ and the excited state $|a_{1}\rangle_{L}$, while the bright state $|b\rangle_{L}$ couples to the excited state $|a_{1}\rangle_{L}$ with the effective Rabi frequency $\lambda_{1}$.
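These decoupling properties can be checked directly. The following numpy sketch (with illustrative values $\lambda_1=1$, $\theta=0.7$, $\varphi=1.3$, which are our choices rather than parameters from the text) builds $H_1$ of Eq. (\ref{h1}) in the ordered basis $(|0\rangle_L, |1\rangle_L, |a_1\rangle_L)$ and verifies that $H_1|d\rangle_L=0$ while $|b\rangle_L$ couples to $|a_1\rangle_L$ with rate $\lambda_1$:

```python
import numpy as np

lam, theta, phi = 1.0, 0.7, 1.3               # illustrative values
s, c = np.sin(theta / 2), np.cos(theta / 2)

# H_1 of Eq. (5) in the ordered basis (|0>_L, |1>_L, |a_1>_L)
H1 = np.zeros((3, 3), dtype=complex)
H1[2, 0] = lam * s * np.exp(1j * phi)         # sin(theta/2) e^{i phi} |a_1><0|
H1[2, 1] = -lam * c                           # -cos(theta/2) |a_1><1|
H1 = H1 + H1.conj().T                         # + H.c.

d = np.array([c, s * np.exp(1j * phi), 0.0])    # dark state |d>_L
b = np.array([s * np.exp(-1j * phi), -c, 0.0])  # bright state |b>_L

assert np.linalg.norm(H1 @ d) < 1e-12         # |d>_L is annihilated by H_1
assert np.isclose(abs((H1 @ b)[2]), lam)      # |b>_L <-> |a_1>_L at Rabi rate lam
```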
As a result, the evolution operator $U_1(\tau_1)=\exp(-i\int_0^{\tau_1} H_1\, dt)$ realizes a holonomic gate under the following two conditions. First, the operation time is chosen such that $\lambda_1 \tau_1=\pi$; this ensures that states evolving in the subspace, $|\psi_i(t)\rangle_L=U_1(t)|i\rangle_L$ $(i=d,b)$, undergo a cyclic evolution, $|\psi_i(\tau_1)\rangle_L=|\psi_i(0)\rangle_L$. Second, the relation $\langle\psi_i(t)|H_1|\psi_j(t)\rangle=0$ $(i,j=d,b)$ always holds, which means that the evolution is purely geometric and the parallel-transport condition is naturally satisfied. Therefore, we obtain the non-adiabatic holonomic single-qubit gates in the space spanned by \{$|0\rangle_{L}$, $|1\rangle_{L}$\} as
\begin{equation}\label{u1}
U_{1}(\theta,\varphi)=\left(\begin{array}{ccc}
\cos{\theta}&\sin{\theta}e^{-i\varphi}\\
\sin{\theta}e^{i\varphi}&-\cos{\theta}
\end{array}\right),
\end{equation}
where $\theta$ and $\varphi$ can be chosen by tuning the amplitudes and relative phase of the laser beams, and thus a set of universal single-qubit gates can be realized non-adiabatically. For example, we can implement a Hadamard gate as $U_1(\pi/4,0)$ and a phase gate as $U_1(\pi/2, \pi/4)U_1(\pi/2,0)$.
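The gate construction can also be verified numerically. The sketch below (numpy only; the $3\times 3$ matrix and the test angles are our own illustrative choices) exponentiates $H_1$ for a duration satisfying $\lambda_1\tau_1=\pi$, checks that the logical block reproduces $U_1(\theta,\varphi)$ of Eq. (\ref{u1}), and confirms the Hadamard and phase-gate examples, the latter up to a global phase:

```python
import numpy as np

def U1(theta, phi):
    """Holonomic single-qubit gate of Eq. (7)."""
    return np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                     [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])

def holonomic_gate(theta, phi, lam=1.0):
    """exp(-i H_1 tau) with lam * tau = pi, projected onto span{|0>_L, |1>_L}."""
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    H1 = np.zeros((3, 3), dtype=complex)      # basis (|0>_L, |1>_L, |a_1>_L)
    H1[2, 0] = lam * s * np.exp(1j * phi)
    H1[2, 1] = -lam * c
    H1 = H1 + H1.conj().T
    w, V = np.linalg.eigh(H1)                 # H_1 is Hermitian
    U = V @ np.diag(np.exp(-1j * np.pi * w / lam)) @ V.conj().T
    return U[:2, :2]                          # block on the logical subspace

theta, phi = 0.7, 1.3                         # arbitrary illustrative angles
assert np.allclose(holonomic_gate(theta, phi), U1(theta, phi))

# Hadamard gate: U_1(pi/4, 0)
assert np.allclose(U1(np.pi / 4, 0), np.array([[1, 1], [1, -1]]) / np.sqrt(2))

# Phase gate up to a global phase: U_1(pi/2, pi/4) U_1(pi/2, 0)
S = U1(np.pi / 2, np.pi / 4) @ U1(np.pi / 2, 0)
assert np.allclose(S * np.exp(1j * np.pi / 4), np.diag([1, 1j]))
```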
\begin{figure}[tbp]
\centering
\includegraphics[width=13cm]{fig2}
\caption{Qubit state populations and fidelity under the single-qubit operation $U_1(\theta, \varphi)$ with initial state $|0\rangle_L$; the red (dotted) line represents the fidelity. (a) $\theta=\pi/2$ and $\varphi=0$. (b) $\theta=\pi/4$ and $\varphi=0$.}\label{single}
\end{figure}
Inevitably, the implementation process suffers from decoherence. Considering the main decoherence channels, namely the collective relaxation rate $\gamma$ and dephasing rate $\gamma_{\phi}$ of the NV centers and the decay rate $\kappa$ of the cavity, we simulate the performance of our scheme under realistic conditions with the Lindblad master equation \cite{Lindblad1976}
\begin{eqnarray}\label{master1}
\dot\rho=-i[H_I^{(3)}, \rho]+\frac 1 2 [\kappa\mathscr{L}(a)+\gamma\mathscr{L}(S^-)+\gamma_{\phi}\mathscr{L}(S^z)],
\end{eqnarray}
where $H_I^{(3)}$ is the Hamiltonian in the form of Eq. (\ref{int}) for the case of $N=3$ with the Stark shifts compensated, $\rho$ is the density operator, $S^-=\sum_{i=1}^3 \sigma^-_i$, and $\mathscr{L}(\mathcal{A})=2\mathcal{A}\rho \mathcal{A}^\dagger-\mathcal{A}^\dagger \mathcal{A} \rho -\rho \mathcal{A}^\dagger \mathcal{A}$ is the Lindblad superoperator. In our simulation, we have used the following conservative set of experimental parameters. The NV centers are located near the microcavity surface, and the maximum coupling between an NV center and the cavity can reach $G=2\pi\times 1$ GHz for a mode volume of $V_m=100\ \mu \mathrm{m}^3$ \cite{Neumann2009}. For $\delta \ll \Delta$, $g\simeq 2G \Omega_L/\Delta=2\pi \times 50$ MHz with $\Omega_L=2\pi \times 500$ MHz, $\Delta=2\pi \times 8$ GHz, and $\delta=2\pi \times 1$ GHz, fulfilling the condition $\delta \gg g$. The qubit relaxation and dephasing rates are estimated to be $\gamma=\gamma_\phi \approx 2\pi \times 4$ kHz \cite{Clark2003}. The cavity decay rate is $\kappa=\omega_c/Q\simeq 2\pi \times 0.5$ MHz with $Q=10^9$ \cite{Vernooy1998}. Assuming that the logical qubit is initially prepared in the $|0\rangle_L$ state while the cavity is in the vacuum state, the time-dependent state populations and fidelities under the $X$ and Hadamard gates are depicted in Figs. \ref{single}(a) and \ref{single}(b), with fidelities of $99.5\%$ and $99.6\%$, respectively. Note that, in the simulation, we use the Hamiltonian in Eq. (\ref{int}), and thus the obtained high fidelities also verify the validity of the effective Hamiltonians in Eqs. (\ref{eff1}) and (\ref{H}). In addition, we have assumed collective dephasing of all the qubits; violating this assumption introduces only a very small infidelity to the gate operations, as shown in Figs. \ref{max1}(a) and \ref{max1}(b) for the two exemplified gates $U_1(\pi/2,0)$ and $U_1(\pi/4,0)$, respectively, where we investigate different cases by choosing different $\Theta$ in the initial state $|\psi\rangle=\cos\Theta|0\rangle_L+\sin\Theta|1\rangle_L$. In this simulation, we have chosen different decoherence rates for the three physical qubits, namely $\gamma_1=\gamma_{\phi1}=0.8\gamma$, $\gamma_2=\gamma_{\phi2}=\gamma$, and $\gamma_3=\gamma_{\phi3}=1.2\gamma$.
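For readers who wish to reproduce simulations of this kind, the following self-contained numpy sketch integrates a Lindblad equation of the same form as Eq. (\ref{master1}) with a fourth-order Runge-Kutta step. To keep it minimal it treats a single decaying qubit rather than the full three-NV-plus-cavity model, and all numerical values are illustrative:

```python
import numpy as np

def lindblad_rhs(rho, H, c_ops):
    """d(rho)/dt = -i[H, rho] + (1/2) sum_A (2 A rho A+ - A+A rho - rho A+A)."""
    out = -1j * (H @ rho - rho @ H)
    for A in c_ops:
        AdA = A.conj().T @ A
        out += 0.5 * (2 * A @ rho @ A.conj().T - AdA @ rho - rho @ AdA)
    return out

def evolve(rho, H, c_ops, t, steps=2000):
    """Fourth-order Runge-Kutta integration of the master equation."""
    dt = t / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, c_ops)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, c_ops)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, c_ops)
        k4 = lindblad_rhs(rho + dt * k3, H, c_ops)
        rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

gamma = 0.2                                        # illustrative decay rate
sm = np.array([[0, 0], [1, 0]], dtype=complex)     # sigma^- in basis (|e>, |g>)
rho0 = np.diag([1.0, 0.0]).astype(complex)         # start in the excited state
rho_t = evolve(rho0, np.zeros((2, 2)), [np.sqrt(gamma) * sm], t=5.0)

print(rho_t[0, 0].real)      # excited population ~ exp(-gamma * t)
print(np.trace(rho_t).real)  # trace is preserved
```

The same integrator extends to the full model by replacing `H` with $H_I^{(3)}$ and supplying the three collapse operators of Eq. (\ref{master1}).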
\begin{figure}[tbp]
\centering
\includegraphics[width=12cm]{fig3}
\caption{Maximum fidelity as a function of $\Theta$ (in units of $\pi$), where lines (circles) correspond to identical (different) decoherence rates of the qubits, under the gate operations (a) $U_1(\pi/2, 0)$ and (b) $U_1(\pi/4, 0)$.}\label{max1}
\end{figure}
\section{Non-adiabatic holonomic two-qubit logical gates}
The implementation of holonomic one-qubit logical gates can be extended straightforwardly to the two-qubit scenario. For two logical qubits interacting collectively with the dephasing environment, there exists a six-dimensional DFS $$S_2= \{|100100\rangle,|100001\rangle,|001100\rangle,
|001001\rangle,|101000\rangle,|000101\rangle\},$$
where the former and latter three physical qubits encode the first and second logical qubits, respectively. We then encode the logical qubit states in the same way as in the single-qubit case, \emph{i.e.},
$|00\rangle_{L}=|100100\rangle$, $|01\rangle_{L}=|100001\rangle$, $|10\rangle_{L}=|001100\rangle$, and $|11\rangle_{L}=|001001\rangle$. Meanwhile, $|a_{2}\rangle=|101000\rangle$ and $|a_{3}\rangle=|000101\rangle$ are used as ancillary states.
\begin{figure}[tbp]
\centering
\includegraphics[width=13cm]{fig4}
\caption{Qubit state populations and fidelity under the two-qubit operation $U_2(\pi/4, 0)$; the red line represents the fidelity. (a) Initial logical state $|00\rangle_L$. (b) Initial logical state $|01\rangle_L$.}\label{two}
\end{figure}
In order to implement the two-logical-qubit operations, we apply the interaction in Eq. (\ref{H}) to NV centers $3$ and $4$, by tuning laser beams with detuning $\delta_+$ and phase difference $\phi=\varphi_{3}-\varphi_4$, for an operation time $\tau_{2}$, i.e., $U_{34}(\tau_{2})=\exp[-i \int^{\tau_{2}}_{0} H_{34}\, dt]$. Meanwhile, we also apply $U_{36}(\tau_{2})=\exp[-i \int^{\tau_{2}}_{0} H_{36}\, dt]$ to NV centers $3$ and $6$ by tuning laser beams with detuning $\delta_-$ and the same phase. The total effective Hamiltonian then reads
\begin{eqnarray}
H_{2}=\lambda_{2}\left[\sin\frac{\vartheta}{2}e^{i\phi}(|a_{2}\rangle_{L}\langle00|
+|a_{3}\rangle_{L}\langle11|)
-\cos\frac{\vartheta}{2}(|a_{2}\rangle_{L}\langle01|
+|a_{3}\rangle_{L}\langle10|)\right] +\mathrm{H.c.},
\end{eqnarray}
where the effective Rabi frequency $\lambda_{2}=\sqrt{|g_{34}|^{4}+|g_{36}|^{4}}/\delta$ with $\delta=|\delta_{\pm}|$
and the mixing angle $\vartheta=2 \arctan\left(|g_{34}|^{2}/|g_{36}|^{2}\right)$ can be tuned by the amplitudes of the incident lasers. Obviously, the effective Hamiltonian can be decomposed into two commuting parts as $H_{2}=\lambda_{2}(H_a+H_b)$ with
\begin{eqnarray}
H_a=\sin\frac{\vartheta}{2}e^{i\phi}|a_{2}\rangle_{L}\langle00|
-\cos\frac{\vartheta}{2}|a_{2}\rangle_{L}\langle01|+\mathrm{H.c.}, \nonumber \\
H_b=\sin\frac{\vartheta}{2}e^{i\phi}|a_{3}\rangle_{L}\langle11|
-\cos\frac{\vartheta}{2}|a_{3}\rangle_{L}\langle10|+\mathrm{H.c.}.
\end{eqnarray}
Therefore, $\exp\left[-i\int^{\tau_{2}}_{0}H_{2}dt\right]
=\exp\left[-i\pi H_a\right]\exp\left[-i\pi H_b\right]$
under the $\pi$-pulse criterion $\lambda_{2}\tau_{2}=\pi$, which acts nontrivially on the computational subspaces $\{|00\rangle_{L},|01\rangle_{L}\}$ and $\{|10\rangle_{L},|11\rangle_{L}\}$, respectively. That is, $H_{2}$, involving the WGM and six NV centers, effectively reduces to two $\Lambda$-like Hamiltonians, with $H_a$ ($H_b$) acting on the top state $|a_{2}\rangle_{L}$ ($|a_{3}\rangle_{L}$) and the bottom states $|00\rangle_{L}$ and $|01\rangle_{L}$ ($|10\rangle_{L}$ and $|11\rangle_{L}$). Analogous to the single-qubit case, the holonomic two-qubit logical gate in the subspace $\{|00\rangle_{L},|01\rangle_{L},|10\rangle_{L},|11\rangle_{L}\}$ can be written as
\begin{equation}\label{u2}
U_{2}(\vartheta,\phi)=\left(\begin{array}{cccc}
\cos{\vartheta}&\sin{\vartheta}e^{-i\phi}&0&0\\
\sin{\vartheta}e^{i\phi}&-\cos{\vartheta}&0&0\\
0&0&-\cos{\vartheta}&\sin{\vartheta}e^{-i\phi}\\
0&0&\sin{\vartheta}e^{i\phi}&\cos{\vartheta}
\end{array}\right).
\end{equation}
In general, we can realize a nontrivial two-qubit holonomic logical gate in the DFS by controlling $\vartheta$ and $\phi$ separately, that is, by adjusting the amplitudes and phases of the two laser beams, respectively. For example, acting with $U_{2}(\vartheta,\phi)$ on logical qubits 1 and 2 and then with $U_{1}(\theta,\varphi)$ on logical qubit $2$, with $\vartheta=\theta=\pi/4$ and $\phi=\varphi=\pi/2$, we can realize a CNOT gate in view of Eqs. (\ref{u1}) and (\ref{u2}). In addition, choosing experimentally achievable parameters similar to those in the single-qubit case, numerical simulations of the populations and the fidelity of the $U_2(\pi/4, 0)$ operation are shown in Figs. \ref{two}(a) and \ref{two}(b) for the different initial states $|00\rangle_L$ and $|01\rangle_L$, respectively. The fidelities reach about $99.5\%$ and $98.7\%$.
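This CNOT construction can be checked numerically. In the sketch below (our own check, not taken from the text), composing $U_2(\pi/4,\pi/2)$ with $U_1(\pi/4,\pi/2)$ on the second logical qubit yields a gate with exactly the CNOT matrix pattern; in our convention the target-flip block carries a residual phase factor, which can be absorbed into a subsequent single-qubit phase gate:

```python
import numpy as np

def U1(t, p):
    """Single-qubit holonomic gate of Eq. (7)."""
    return np.array([[np.cos(t), np.sin(t) * np.exp(-1j * p)],
                     [np.sin(t) * np.exp(1j * p), -np.cos(t)]])

def U2(t, p):
    """Two-qubit holonomic gate of Eq. (13), block-diagonal in the logical basis."""
    U = np.zeros((4, 4), dtype=complex)
    U[:2, :2] = [[np.cos(t), np.sin(t) * np.exp(-1j * p)],
                 [np.sin(t) * np.exp(1j * p), -np.cos(t)]]
    U[2:, 2:] = [[-np.cos(t), np.sin(t) * np.exp(-1j * p)],
                 [np.sin(t) * np.exp(1j * p), np.cos(t)]]
    return U

# U_2 on logical qubits 1,2 followed by U_1 on logical qubit 2,
# with vartheta = theta = pi/4 and phi = varphi = pi/2:
total = np.kron(np.eye(2), U1(np.pi / 4, np.pi / 2)) @ U2(np.pi / 4, np.pi / 2)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
assert np.allclose(np.abs(total), CNOT)   # CNOT pattern, up to phases on the flip block
```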
\section{Conclusion}
In summary, we have put forward a universal set of non-adiabatic holonomic gates in a DFS by using NV centers coupled to the WGM of a microsphere cavity. By controlling the amplitudes and relative phases of the driving lasers, we can realize arbitrary single-qubit and two-qubit operations. Numerical simulations show that our scheme is robust against deviations of the relevant experimental parameters and insensitive to decoherence, including both collective and local noises. The ultrahigh quality factor of the cavity and the exceptional spin properties of the NV centers make our scheme a promising candidate for the experimental implementation of NHQC in a DFS with high gate fidelity and short operation time.
\bigskip
\section*{Acknowledgments}
This work was supported by the NFRPC (No. 2013CB921804), the PCSIRT (No. IRT1243), and the project of Anhui Xinhua University (No. 2014zr010).
\end{document}
\section{Introduction}\label{sec:Intro}
Random graph inference is an active, interdisciplinary area of current research, bridging combinatorics, probability, statistical theory, and machine learning, as well as a wide spectrum of application domains from neuroscience to sociology. Statistical inference on random graphs and networks, in particular, has witnessed extraordinary growth over the last decade: for example, \cite{goldenberg2010survey,kolaczyk:_statistical} discuss the considerable applications in recent network science of several canonical random graph models.
Of course, combinatorial graph theory itself is centuries old---indeed, in his resolution to the problem of the bridges of K\"onigsberg, Leonhard Euler first formalized graphs as mathematical objects consisting of vertices and edges. The notion of a random graph, however, and the modern theory of inference on such graphs, is comparatively new, and owes much to the pioneering work of \Erdos, \Renyi, and others in the late 1950s. E.N. Gilbert's short 1959 paper \cite{gilbert_1959} considered a random graph for which edges between vertices exist as independent Bernoulli random variables with common probability $p$; roughly concurrently, \Erdos ~and \Renyi ~provided the first detailed analysis of the probabilities of the emergence of certain types of subgraphs within such graphs \cite{Erdos_Renyi_1960_original}, and today, graphs in which the edges arise independently and with common probability $p$ are known as {\em \ErdosRenyi} (or ER) graphs.
The \ErdosRenyi (ER) model is one of the simplest generative models for random graphs, but this simplicity belies astonishingly rich behavior (\cite{alon_spencer_prob_method}, \cite{bollobas07}). Nevertheless, in many applications, the requirement of a common connection probability is too stringent: graph vertices often represent heterogeneous entities, such as different people in a social network or cities in a transportation graph, and the connection probability $p_{ij}$ between vertex $i$ and $j$ may well change with $i$ and $j$ or depend on underlying attributes of the vertices. Moreover, these heterogeneous vertex attributes may not be observable; for example, given the adjacency matrix of a Facebook community, the specific interests of the individuals may remain hidden. To more effectively model such real-world networks, we consider {\em latent position} random graphs \cite{hoff_raftery_handcock}. In a latent position graph, to each vertex $i$ in the graph there is associated an element $x_i$ of the so-called {\em latent space} $\mathcal{X}$, and the probability of connection $p_{ij}$ between any two edges $i$ and $j$ is given by a {\em link} or {\em kernel} function $\kappa: \mathcal{X} \times \mathcal{X} \rightarrow [0,1]$. That is, the edges are generated independently (so the graph is an {\em independent-edge} graph) and $p_{ij}=\kappa(x_i, x_j)$.
The {\em random dot product graph} (RDPG) of Young and Scheinerman \cite{young2007random} is an especially tractable latent position graph; here, the latent space is an appropriately constrained subspace of Euclidean space $\mathbb{R}^d$, and the link function is simply the dot or inner product of the pair of $d$-dimensional latent positions. Thus, in a $d$-dimensional random dot product graph with $n$ vertices, the latent positions associated to the vertices can be represented by an $n \times d$ matrix $\bX$ whose rows are the latent positions, and the matrix of connection probabilities $\bP=(\bP_{ij})$ is given by $\bP=\bX\bX^{\top}$. Conditional on this matrix $\bP$, the RDPG has an adjacency matrix $\bA=(\bA_{ij})$ whose entries are Bernoulli random variables with probability $\bP_{ij}$. For simplicity, we will typically consider symmetric, {\em hollow} RDPG graphs; that is, undirected, unweighted graphs in which $\bA_{ii}=0$, so there are no self-edges. In our real data analysis of a neural connectome in Section \ref{subsec:MBStructure}, however, we describe how to adapt our results to weighted and directed graphs.
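As a concrete illustration (our own sketch, not drawn from the papers cited above), an RDPG can be sampled in a few lines: draw latent positions whose pairwise inner products lie in $[0,1]$, form $\bP = \bX\bX^{\top}$, and flip one Bernoulli coin per vertex pair:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rdpg(X, rng):
    """Sample a symmetric, hollow adjacency matrix A with E[A] = X X^T off-diagonal."""
    P = X @ X.T
    n = P.shape[0]
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)          # keep one coin per pair; diagonal stays zero
    return A + A.T

# Latent positions: rows of a Dirichlet sample have nonnegative entries summing
# to at most 1, so all inner products x_i . x_j lie in [0, 1].
n, d = 200, 2
X = rng.dirichlet([5.0, 5.0, 5.0], size=n)[:, :d]
A = sample_rdpg(X, rng)
```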
In any latent position graph, the latent positions associated to graph vertices can themselves be random; for instance, the latent positions may be independent, identically distributed random variables with some distribution $F$ on $\mathbb{R}^d$. The well-known {\em stochastic blockmodel} (SBM), in which each vertex belongs to one of $K$ subsets known as {\em blocks}, with connection probabilities determined solely by block membership \cite{Holland1983}, can be represented as a random dot product graph in which all the vertices in a given block have the same latent positions (or, in the case of random latent positions, an RDPG for which the distribution $F$ is supported on a finite set). Despite their structural simplicity, stochastic block models are the building blocks for all independent-edge random graphs; \cite{wolfe13:_nonpar} demonstrates that any independent-edge random graph can be well-approximated by a stochastic block model with a sufficiently large number of blocks. Since stochastic block models can themselves be viewed as random dot product graphs, we see that suitably high-dimensional random dot product graphs can provide accurate approximations of latent position graphs \cite{tang2012universally}, and, in turn, independent-edge graphs. Thus, the architectural simplicity of the random dot product graph makes it particularly amenable to analysis, and its near-universality in graph approximation renders it expansively applicable. In addition, the cornerstone of our analysis of random dot product graphs is a set of classical probabilistic and linear algebraic techniques that are useful in much broader settings, such as random matrix theory. As such, the random dot product graph is both a rich and interesting object of study in its own right and a natural point of departure for wider graph inference.
A classical inference task for Euclidean data is to estimate, from sample data, certain underlying distributional parameters. Similarly, for a latent position graph, a classical graph inference task is to infer the graph parameters from an observation of the adjacency matrix $\bA$. Indeed, our overall paradigm for random graph inference is inspired by the fundamental tenets of classical statistical inference for Euclidean data. Namely, our goal is to construct methods and estimators of graph parameters or graph distributions; and, for these estimators, to analyze their (1) consistency; (2) asymptotic distributions; (3) asymptotic relative efficiency; (4) robustness to model misspecification; and (5) implications for subsequent inference including one- and multi-sample hypothesis testing. In this paper, we summarize and synthesize a considerable body of work on spectral methods for inference in random dot product graphs, all of which not only advance fundamental tenets of this paradigm, but do so within a unified and parsimonious framework.
The random graph estimators and test statistics we discuss all exploit the {\em adjacency spectral embedding} (ASE) or the {\em Laplacian spectral embedding} (LSE), which are eigendecompositions of the adjacency matrix $\bA$ and {\em normalized} Laplacian matrix $\bL=\bD^{-1/2} \bA \bD^{-1/2}$, where $\bD$ is the diagonal degree matrix $\bD_{ii}=\sum_{j\neq i} \bA_{ij}$.
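In the noiseless setting these embeddings are easy to write down explicitly. The sketch below (our illustration) computes the ASE of the rank-$d$ matrix $\bP$ of a two-block stochastic blockmodel; applied to $\bP$ itself, the embedding recovers the latent positions exactly, up to an orthogonal transformation, and vertices in the same block receive identical rows:

```python
import numpy as np

def ase(M, d):
    """Adjacency spectral embedding: eigenvectors of the d largest-magnitude
    eigenvalues of M, scaled by the square roots of those magnitudes."""
    w, V = np.linalg.eigh(M)
    idx = np.argsort(np.abs(w))[::-1][:d]
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

# Two-block SBM: block connection probabilities B, 10 vertices per block.
B = np.array([[0.5, 0.2], [0.2, 0.4]])
z = np.repeat([0, 1], 10)
P = B[np.ix_(z, z)]             # rank-2 edge-probability matrix

Xhat = ase(P, 2)
print(np.allclose(Xhat @ Xhat.T, P))   # P is recovered exactly
print(np.allclose(Xhat[0], Xhat[9]))   # same block, same embedded row
```

With the observed adjacency matrix $\bA$ in place of $\bP$, the rows of the embedding concentrate around these noiseless values, which is the starting point for the consistency results discussed below.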
The ambition and scope of our approach to graph inference means that mere upper bounds on discrepancies between parameters and their estimates will not suffice. Such bounds are legion. In our proofs of consistency, we improve several bounds of this type, and in some cases improve them so drastically that concentration inequalities and asymptotic limit distributions emerge in their wake. We stress that aside from specific cases (see \cite{furedi1981eigenvalues}, \cite{tao2012random}, \cite{lei2014}), limiting distributions for eigenvalues and eigenvectors of random graphs are notably elusive. For the adjacency and Laplacian spectral embedding, we discuss not only consistency, but also asymptotic normality, robustness, and the use of the adjacency spectral embedding in the nascent field of multi-graph hypothesis testing. We illustrate how our techniques can be meaningfully applied to thorny and very sizable real data, improving on previously state-of-the-art methods for inference tasks such as community detection and classification in networks. What is more, as we now show, spectral graph embeddings are relevant to many complex and seemingly disparate aspects of graph inference.
A bird's-eye view of our methodology might well start with the stochastic blockmodel, where, for an SBM with a finite number of blocks of stochastically equivalent vertices,
\cite{STFP-2011} and \cite{fishkind2013consistent} show that $k$-means clustering of the rows of the adjacency spectral embedding accurately partitions the vertices into the correct blocks, even when the embedding dimension is misspecified or the number of blocks is unknown. Furthermore, \cite{lyzinski13:_perfec} and \cite{lyzinski15_HSBM}
give a significant improvement in the misclassification rate, by exhibiting an almost-surely perfect clustering in which, in the limit, no vertices whatsoever are misclassified. For random
dot product graphs more generally, \cite{sussman12:_univer} shows that the latent positions are consistently
estimated by the embedding, which then allows for accurate learning in a supervised vertex classification framework. In \cite{tang2012universally} these results are extended to more general latent position models, establishing a powerful universal consistency result for vertex classification in general latent position graphs, and also exhibiting an efficient embedding of vertices which were not observed in the original graph. In \cite{athreya2013limit} and \cite{tang_lse}, the authors supply distributional results, akin to a central limit theorem, for both the adjacency and Laplacian spectral embedding, respectively; the former leads to a nontrivially superior algorithm for the estimation of block memberships in a stochastic block model (\cite{suwan14:_empbayes}), and the latter resolves, through an elegant comparison of Chernoff information, a long-standing open question of the relative merits of the adjacency and Laplacian graph representations.
Moreover, graph embedding plays a central role in the foundational work
of Tang et al. \cite{tang14:_semipar} and \cite{tang14:_nonpar} on two-sample graph comparison: these papers provide theoretically justified, valid and consistent hypothesis tests for the semiparametric problem of determining whether two random dot product graphs have the same latent positions and the nonparametric problem of determining whether two random dot product graphs have the
same underlying distributions. This, then, yields a systematic framework for determining statistical similarity across graphs, which in turn underpins yet another provably consistent algorithm for the decomposition of random graphs with a hierarchical structure \cite{lyzinski15_HSBM}. In \cite{levin_omni_2017}, distributional results are given for an omnibus embedding of multiple random dot product graphs on the same vertex set, and this embedding performs well both for latent position estimation and for multi-sample graph testing. For the critical inference task of vertex nomination, in which the inference goal is to produce an ordering of vertices of interest (see, for instance, \cite{Coppersmith2014}), Fishkind and coauthors introduce in \cite{FisLyzPaoChePri2015} an array of principled vertex nomination algorithms---the canonical, maximum likelihood, and spectral vertex nomination schemes---and demonstrate the algorithms' effectiveness on both synthetic and real data. In \cite{LyzLevFisPri2016} the consistency of the maximum likelihood vertex nomination scheme is established, a scalable restricted version of the algorithm is introduced, and the algorithms are adapted to incorporate general vertex features.
Overall, we stress that these principled techniques for random dot product graphs exploit the Euclidean nature of graph embeddings but are general enough to yield meaningful results for a wide variety of random graphs. Because our focus is, in part, on spectral methods, and because the adjacency matrix $\bA$ of an independent-edge graph can be regarded as a noisy version of the matrix of probabilities $\bP$ \cite{oliveira2009concentration}, we rely on several classical results on matrix perturbations, most prominently the Davis-Kahan Theorem (see \cite{Bhatia1997} for the theorem itself, \cite{rohe2011spectral} for an illustration of its role in graph inference, and \cite{DK_usefulvariant} for a very useful variant). We also depend on the aforementioned spectral bounds of Oliveira in \cite{oliveira2009concentration} and a more recent sharpening due to Lu and Peng \cite{lu13:_spect}. We leverage probabilistic concentration inequalities, such as those of Hoeffding and Bernstein \cite{Tropp2015}. Finally, several of our results do require suitable eigengaps for $\bP$ and lower bounds on graph density, as measured by the maximum degree and the size of the smallest eigenvalue of $\bP$. It is important to point out that in our analysis, we assume that the embedding dimension $d$ of our graphs is known and fixed. In real data applications, such an embedding dimension is not known, and in Section \ref{subsec:MBStructure}, we discuss approaches (see \cite{chatterjee2015} and \cite{zhu06:_autom}) to estimating the embedding dimension. Robustness of our procedures to errors in embedding dimension is a problem of current investigation.
In the study of stochastic blockmodels, there has been a recent push to understand the fundamental information-theoretic limits for community detection and graph partitioning \cite{Abbe2015,Mossel2014,Abbe2016,Mossel2013}.
These bounds are typically algorithm-free and focus on stochastic blockmodels with constant or logarithmic average degree, in which differences between vertices in different blocks are assumed to be at the boundary of detectability.
Our efforts have a somewhat different flavor, in that we seek to understand the precise behavior of a widely applicable procedure in a more general model.
Additionally, we treat sparsity as a secondary concern, and typically do not broach the question of the exact limits of our procedures.
Our spectral methods may not be optimal for stochastic models \cite{Krzakala2013,Kawamoto2015} but they are very useful, in that they rely on well-optimized computational methods, can be implemented quickly in many standard languages, extend readily to other models, and serve as a foundation for more complex analyses.
Finally, we would be remiss not to point out that while spectral decompositions and clusterings of the adjacency matrix are appropriate for graph inference, they are also of considerable import in combinatorial graph theory: readers may recall, for instance, the combinatorial {\em ratio-cut} problem, whose objective is to partition the vertex set of a graph into two disjoint sets in a way that minimizes the number of edges between vertices in the two sets. The minimizer of a relaxation to the ratio-cut problem \cite{fiedler1973} is the eigenvector associated to the second smallest eigenvalue of the graph Laplacian $\bL$.
While we do not pursue more specific combinatorial applications of spectral methods here, we note that \cite{chung1997spectral} provides a comprehensive overview, and \cite{von2007tutorial} gives an accessible tutorial on spectral methods.
We organize the paper as follows. In Section \ref{sec:Def_Note_Background}, we define random dot product graphs and the adjacency spectral embedding, and we recall important linear algebraic background. In Section \ref{sec:ASE_Inference_RDPG}, we discuss consistency, asymptotic normality, and hypothesis testing, as well as inference for hierarchical models. In Section \ref{sec:Applications}, we discuss applications of these results to real data. Finally, in Section \ref{sec:Complexities} we discuss current theoretical and computational difficulties and open questions, including issues of optimal embedding dimension, model limitations, robustness to errorful observations, and joint graph inference.
\section{Definitions, notation, and background}\label{sec:Def_Note_Background}
\subsection{Preliminaries and notation}
We begin by establishing notation.
For a positive integer $n$, we let $[n]=\{1, 2, \cdots, n\}$. For a vector $\bv \in \mathbb{R}^n$, we let $\| \bv \|$ denote the Euclidean norm of $\bv$. We denote the identity matrix, zero matrix, and the square matrix of all ones by
$\bI$, $\zeromx$, and $\bJ$, respectively.
We use $\otimes$ to denote the Kronecker product.
For an $n_1 \times n_2$ matrix $\bH$, we let $\bH_{ij}$ denote its $i,j$th entry;
we denote by $\bH_{\cdot j}$
the column vector formed by the $j$-th column of $\bH$;
and we denote by $\bH_{i \cdot}$ the row vector
formed by the $i$-th row of $\bH$.
For a slight abuse of notation, we also let $\bH_i \in \R^{n_2}$
denote the \emph{column} vector formed by transposing the $i$-th row
of $\bH$. That is, $\bH_i = (\bH_{i \cdot})^{\top}$.
Given any suitably specified ordering on eigenvalues of a square matrix $\bH$, we let $\lambda_i(\bH)$ denote
the $i$-th eigenvalue (under such an ordering) of $\bH$ and $\sigma_i(\bH) = \sqrt{\lambda_i(\bH^{\top}\bH)}$ the $i$-th singular value of $\bH$.
We let $\| \bH \|$ denote the spectral norm of $\bH$
and $\| \bH \|_F$ denote the Frobenius norm of $\bH$.
We let $\|\bH\|_{\tti}$ denote the maximum of the Euclidean norms of the rows of $\bH$, i.e.
$\|\bH\|_{\tti}=\max_{i} \| \bH_i \|$.
We denote the trace of a matrix $\bH$ by $\tr(\bH)$.
For an $n \times n$ symmetric matrix $\bH$ whose entries are all non-negative, we will frequently have to account for terms related to matrix sparsity, and we define $\delta(\bH)$ and $\gamma(\bH)$ as follows:
\begin{equation}\label{eq:max_degree}
\delta(\bH) = \max\limits_{1 \leq i \leq n} \sum_{j=1}^n
\bH_{ij} \qquad
\gamma(\bH) =
\frac{\sigma_{d}(\mathbf{\bH}) - \sigma_{d+1}(\bH)}{\delta(\bH)} \leq 1
\end{equation}
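As a concrete numerical illustration of these sparsity quantities (a Python/NumPy sketch of our own; the function names \texttt{delta} and \texttt{gamma} simply mirror the notation above), one can compute $\delta(\bH)$ and $\gamma(\bH)$ directly:

```python
import numpy as np

def delta(H):
    """Maximum row sum of the nonnegative symmetric matrix H."""
    return H.sum(axis=1).max()

def gamma(H, d):
    """(sigma_d(H) - sigma_{d+1}(H)) / delta(H), with sigma_{n+1} taken as 0."""
    s = np.append(np.linalg.svd(H, compute_uv=False), 0.0)  # nonincreasing, padded
    return (s[d - 1] - s[d]) / delta(H)

# Rank-one example: P = x x^T with x = (1/2, 1/2, 1/2, 1/2).
x = 0.5 * np.ones(4)
P = np.outer(x, x)   # delta(P) = 1 and gamma(P, 1) = 1
```

For this rank-one $\bP$ the single nonzero singular value equals $\delta(\bP)$, so $\gamma(\bP)$ attains its maximal value of one.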
In a number of cases, we need to consider a sequence of matrices. We will denote such a sequence by $\bH_n$, where $n$ is typically used to denote the index of the sequence. The distinction between a particular element $\bH_n$ in a sequence of matrices and a particular row $\bH_i$ of a matrix will be clear from context, and our convention is typically to use $n$ to denote the index of a sequence and $i$ or $h$ to denote a particular row of a matrix. In the case where we need to consider the $i$th row of a matrix that is itself the $n$th element of a sequence, we will use the notation
$(\bH_n)_i$.
We define a {\em graph} $G$ to be an ordered pair $(V,E)$ where $V$ is the so-called {\em vertex} or {\em node} set, and $E$, the set of {\em edges}, is a subset of the Cartesian product $V \times V$. In a graph whose vertex set has cardinality $n$, we will usually represent $V$ as $V=\{1, 2, \cdots, n\}$, and we say there {\em is an edge between} $i$ and $j$ if $(i,j)\in E$. The {\em adjacency} matrix $\bA$ provides a compact representation of such a graph:
$$\bA_{ij}=1 \textrm{ if }(i,j) \in E, \textrm{ and }\bA_{ij}=0 \textrm{ otherwise. }$$
Where there is no danger of confusion, we will often refer to a graph $G$ and its adjacency matrix $\bA$
interchangeably.
Our focus is random graphs, and thus we will let $\Omega$ denote our sample space, $\mathcal{F}$ the $\sigma$-algebra of subsets of $\Omega$, and $\mathbb{P}$ our probability measure $\mathbb{P}: \mathcal{F} \rightarrow [0,1]$. We will denote the expectation of a (potentially multi-dimensional) random variable $X$ with respect to this measure by $\mathbb{E}$. Given an event $F \in \mathcal{F}$, we denote its complement by $F^c$, and we let $\Pr(F)$ denote the probability of $F$. As we will see, in many cases we can choose $\Omega$ to be a subset of Euclidean space. Because we are interested in large-graph inference, we will frequently need to demonstrate that probabilities of certain events decay at specified rates. This motivates the following definition.
\begin{definition} [Convergence asymptotically almost surely and convergence with high probability]
\label{def:whp}
Given a sequence of events $\{ F_n \} \in \mathcal{F}$, where $n=1, 2, \cdots$, we say that $F_n$ occurs {\em asymptotically almost surely} if $\Pr(F_n) \rightarrow 1$ as $n \rightarrow \infty$.
We say that $F_n$ {\em occurs with high probability},
and write $F_n \text{ w.h.p. }$,
if for any $c_0 > 1$, there exists a finite positive constant $C_0$ depending on $c_0$ such that $\Pr[ F_n^c ] \le C_0n^{-c_0}$ for all $n$.
We note that $F_n$ occurring w.h.p. is stronger than $F_n$ occurring asymptotically almost surely. Moreover, $F_n$ occurring with high probability implies, by the Borel-Cantelli Lemma \cite{chung1974course},
that with probability $1$ there exists an $n_0$ such that
$F_n$ holds for all $n \ge n_0$.
\end{definition}
Moreover, since our goal is often to understand large-graph inference, we need to consider asymptotics as a function of graph size $n$. As such, we recall familiar asymptotic notation:
\begin{definition} [Asymptotic notation] If $w(n)$ is a quantity depending on $n$, we will say that {\em $w$ is of order $\alpha(n)$} and use the notation $w(n) \sim \Theta(\alpha(n))$ to denote that there exist positive constants $c, C$ such that for $n$ sufficiently large,
$$c\alpha(n) \leq w(n) \leq C \alpha(n).$$
When the quantity $w(n)$ is clear and $w(n)\sim \Theta(\alpha(n))$, we sometimes simply write ``$w$ is of order $\alpha(n)$".
We write $w(n) \sim O(n)$ if there exists a constant $C$ such that for $n$ sufficiently large, $w(n) \leq Cn$. We write $w(n) \sim o(n)$ if $w(n)/n \rightarrow 0$ as $n \rightarrow \infty$, and $w(n)\sim o(1)$ if $w(n) \rightarrow 0$ as $n \rightarrow \infty$. We write $w(n) \sim \Omega(n)$ if there exists a constant $C$ such that for all $n$ sufficiently large, $w(n) \geq Cn$.
\end{definition}
Throughout, we will use $C > 0$ to denote a constant, not depending on $n$,
which may vary from one line to another.
\subsection{Models}
Since our focus is on $d$-dimensional random dot product graphs, we first define an {\em inner product distribution} as a probability distribution over a suitable subset of $\R^d$, as follows:
\begin{definition}
[ $d$-dimensional inner product distribution]\label{def:innerprod}
Let $F$ be a probability distribution whose support is given by $\supp F={\bf \mathcal{X}}_d \subset \R^d$.
We say that $F$ is a
\emph{$d$-dimensional inner product distribution}
on $\R^d$ if for all $\bx,\by \in \mathcal{X}_d=\supp F$, we have $\bx^{\top} \by \in [0,1]$.
\end{definition}
Next, we define a random dot product graph as an independent-edge random graph
for which the edge probabilities are given by the dot products of the latent
positions associated to the vertices.
We restrict our attention here to graphs that are undirected and
in which no vertex has an edge to itself.
\begin{definition} [Random dot product graph with distribution $F$] \label{def:RDPG}
Let $F$ be a $d$-dimensional inner product distribution
with $\bX_1,\bX_2,\dots,\bX_n \iid F$, collected in the rows of the matrix
$\bX=[\bX_1, \bX_2, \dots, \bX_n]^{\top} \in \R^{n \times d}$.
Suppose $\bA$ is a random adjacency matrix given by
\begin{equation} \label{eq:rdpg}
\Pr[\bA|\bX]=
\prod_{i<j}(\bX_i^{\top}\bX_j)^{\bA_{ij}}(1-\bX_i^{\top}\bX_j)^{1-\bA_{ij}}
\end{equation}
We then write $(\bA,\bX) \sim \RDPG(F,n)$ and say that $\bA$ is the adjacency
matrix of a {\em random dot product graph of dimension or rank at most} $d$ and with {\em latent positions} given by the rows of $\bX$. If $\bX \bX^{\top}$ is, in fact, a rank $d$ matrix, we say $\bA$ is the adjacency matrix of a rank $d$ random dot product graph.
\end{definition}
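The sampling mechanism in Definition~\ref{def:RDPG} is straightforward to simulate. The following Python/NumPy sketch (our own illustration; for concreteness, $F$ is taken to be uniform on an interval chosen so that all dot products land in $[0,1]$) draws latent positions i.i.d.\ from $F$ and then samples a hollow, symmetric adjacency matrix according to Eq.~\eqref{eq:rdpg}:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rdpg(X, rng):
    """Sample a hollow, symmetric adjacency matrix A with E[A_ij] = X_i . X_j."""
    P = X @ X.T
    n = P.shape[0]
    upper = np.triu((rng.random((n, n)) < P).astype(float), 1)  # independent edges
    return upper + upper.T                                      # undirected, no loops

# F uniform on [0.2, 0.8] in d = 1, so all dot products lie in [0.04, 0.64].
X = rng.uniform(0.2, 0.8, size=(200, 1))
A = sample_rdpg(X, rng)
```

Only the strict upper triangle is sampled; symmetrizing and leaving the diagonal at zero enforces the undirected, loop-free convention adopted above.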
While our notation for a random dot product graph with distribution $F$ is $(\mathbf{A}, \mathbf{X}) \sim \mathrm{RDPG}(F,n)$, we emphasize that in this paper the latent positions $\mathbf{X}$ are always assumed to be unobserved. An almost identical definition holds for random dot product graphs with fixed but unobserved latent positions:
\begin{definition}[RDPG with fixed latent positions]\label{rem:RDPG_fixed_latentpos} In Definition~\ref{def:RDPG}
above, the latent positions are themselves random. If, instead, the latent positions are given by a fixed matrix $\bX$ and, given this matrix, the graph is generated according to Eq.~\eqref{eq:rdpg}, we say that $\bA$ is a realization of a random dot product graph with latent positions $\bX$, and we write $\bA \sim \mathrm{RDPG}(\bX)$.
\end{definition}
\begin{remark} [Nonidentifiability]\label{rem:nonid}
Given a graph distributed as an RDPG,
the natural task is to recover the latent positions $\bX$ that gave
rise to the observed graph.
However, the RDPG model has an inherent nonidentifiability:
let $\bX \in \R^{n \times d}$ be a matrix of latent positions
and let $\bW \in \R^{d \times d}$ be an orthogonal matrix.
Since $\bX \bX^{\top} = (\bX \bW) (\bX \bW)^{\top}$, it is clear that the latent positions
$\bX$ and $\bX\bW$ give rise to the same distribution over graphs in
Equation~\eqref{eq:rdpg}.
Note that most latent position models, as defined below, also suffer from similar types of non-identifiability as edge-probabilities may be invariant to various transformations.
\end{remark}
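The nonidentifiability in Remark~\ref{rem:nonid} is easy to verify numerically. In this sketch (ours, not from the text), an orthogonal matrix $\bW$ obtained from a QR factorization leaves $\bP = \bX\bX^{\top}$ unchanged up to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 0.6, size=(50, 2))            # latent positions in R^2

# Any orthogonal W (here from a QR factorization) leaves P = X X^T unchanged.
W, _ = np.linalg.qr(rng.standard_normal((2, 2)))
gap = np.abs(X @ X.T - (X @ W) @ (X @ W).T).max()  # zero up to rounding
```

Since $\bW\bW^{\top} = \bI$, the two edge-probability matrices agree exactly; the computed gap reflects only floating-point rounding.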
As we mentioned, the random dot product graph is a specific instance of the more general {\em latent position random graph} with {\em link} or {\em kernel} function $\kappa$.
Indeed, the latent positions themselves need not belong to Euclidean space per se, and the link function need not be an inner product.
\begin{definition}[Latent position random graph with kernel $\kappa$]\label{def:latentpos_graph}
Let $\mathcal{X}$ be a set and $\kappa: \mathcal{X} \times \mathcal{X} \rightarrow [0,1]$ a symmetric function. Suppose to each $i \in [n]$ there is associated a point $\bX_i \in \mathcal{X}$. Given $\bX=\{\bX_1, \cdots, \bX_n\}$ consider the graph with adjacency matrix $\bA$ defined by
\begin{equation} \label{eq:lpg}
\Pr[\bA|\bX]=
\prod_{i<j}\kappa(\bX_i,\bX_j)^{\bA_{ij}}(1-\kappa(\bX_i,\bX_j))^{1-\bA_{ij}}
\end{equation}
Then $\bA$ is the adjacency matrix of a latent position random graph with latent position $\bX$ and link function $\kappa$.
\end{definition}
Similarly, we can define independent edge graphs for which latent positions need not play a role.
\begin{definition}[Independent-edge graphs]
For a symmetric matrix $\bP$ of probabilities, we say that $\bA$ is distributed as an independent-edge graph with probabilities $\bP$ if
\begin{equation} \label{eq:ind_edge}
\Pr[\bA]=
\prod_{i<j}\bP_{ij}^{\bA_{ij}}(1-\bP_{ij})^{1-\bA_{ij}}
\end{equation}
\end{definition}
By their very structure, latent position random graphs, for fixed latent positions, are independent-edge random graphs.
In general, for any latent position graph the matrix of edge probabilities $\bP$ is given by $\bP_{ij}=\kappa(\bX_i,\bX_j)$.
Of course, in the case of a random dot product graph with latent position matrix $\bX$, the probability $\bP_{ij}$ of observing an edge between vertex $i$ and vertex $j$ is simply $\bX_i^{\top}\bX_j$.
Thus, for an RDPG with latent positions $\bX$, the matrix of edge probabilities is given by $\bP=\bX\bX^{\top}$.
In order to more carefully relate latent position models and RDPGs, we can consider the set of positive semidefinite latent position graphs.
Namely, we will say that a latent position random graph is positive semidefinite if the matrix $\bP$ is positive semidefinite.
In this case, we note that an RDPG can be used to approximate the latent position random graph distribution.
The best rank-$d$ approximation of $\bP$, in terms of the Frobenius norm \citep{Eckart1936}, will correspond to an RDPG with $d$-dimensional latent positions.
In this sense, by allowing $d$ to be as large as necessary, any positive semidefinite latent position random graph distribution can be approximated by an RDPG distribution to arbitrary precision \citep{tangs.:_univer}.
While latent position models generalize the random dot product graph, RDPGs can be easily related to the more limited {\em stochastic blockmodel} graph \cite{Holland1983}. The stochastic block model is also an independent-edge random graph whose vertex set is partitioned into $K$ groups, called {\em blocks}, and the stochastic blockmodel is typically parameterized by (1) a
$K\times K$ matrix of probabilities $\bB$ of adjacencies between vertices in
each of the blocks, and (2) a {\em block-assignment vector} $\tau:[n] \rightarrow [K]$ which assigns each vertex to its block. That is, for any two vertices $i,j$, the probability of their connection is
$$\bP_{ij}=\bB_{\tau(i), \tau(j)},$$
and we typically write $\bA \sim \mathrm{SBM}(\bB, \tau)$.
Here we present an alternative definition in terms of the RDPG model.
\begin{definition}[Positive semidefinite $k$-block stochastic block model]\label{def:PS_SBM} We say an RDPG with latent positions $\bX$ is an SBM with $K$ blocks if the
number of distinct rows in $\bX$ is $K$, denoted $\bX_{(1)}, \cdots, \bX_{(K)}$. In this case, we define the
block membership function $\tau:[n]\mapsto [K]$ to be a function
such that $\tau(i)=\tau(j)$ if and only if $\bX_i=\bX_j$.
We then write $$\bA \sim \mathrm{SBM}(\tau, \{\bX_{(i)}\}_{i=1}^{K}).$$
In addition, we also consider the case of a stochastic block model in which the block membership of each vertex is randomly assigned. More precisely, let $\pi \in (0,1)^{K}$ with $\sum_{k=1}^{K} \pi_k=1$ and suppose that $\tau(1), \tau(2), \dots, \tau(n)$ are now i.i.d. random variables with distribution $\mathrm{Categorical}(\pi)$, i.e., $\mathrm{Pr}(\tau(i) = k) = \pi_k$ for all $k$. Then we say $\bA$ is an {\em SBM with i.i.d. block memberships}, and we write $$\bA \sim \mathrm{SBM}(\pi, \{\bX_{(i)}\}_{i=1}^{K}).$$
\end{definition}
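To see the correspondence between the classical parameterization $(\bB,\tau)$ and the RDPG formulation concretely, one can factor a positive semidefinite block matrix $\bB$ into distinct latent positions. The sketch below (our own illustration; the specific $\bB$ and block proportions are arbitrary choices) recovers $\bP_{ij} = \bB_{\tau(i),\tau(j)}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)

# A positive definite block probability matrix B for K = 2 blocks.
B = np.array([[0.5, 0.2],
              [0.2, 0.4]])

# Factor B = N N^T so that the rows nu_k of N satisfy nu_i . nu_j = B_ij.
evals, evecs = np.linalg.eigh(B)
N = evecs * np.sqrt(evals)                  # scale each eigenvector column

tau = rng.choice(2, size=100, p=[0.6, 0.4]) # i.i.d. block memberships
X = N[tau]                                  # latent positions: X_i = nu_{tau(i)}
P = X @ X.T                                 # equals B[tau(i), tau(j)] entrywise
```

This construction requires $\bB$ to be positive semidefinite, matching the restriction to positive semidefinite SBMs in the definition above.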
We also consider the {\em degree-corrected} stochastic block model:
\begin{definition}[Degree Corrected Stochastic Blockmodel (DCSBM) \citep{karrer2011stochastic}]\label{def:DCSBM} We
say an RDPG is a DCSBM with $K$ blocks if there exist $K$ unit vectors
$y_1,\dotsc,y_K\in\Re^{d}$ such that for each $i\in[n]$, there exists
$k\in[K]$ and $c_i\in(0,1)$ such that $X_i=c_i y_k$.
\end{definition}
\begin{remark} The degree-corrected stochastic blockmodel is inherently more flexible than the standard SBM because it allows for vertices within each block/community to have different expected degrees. This flexibility has made it a popular choice for modeling network data \cite{karrer2011stochastic}.
\end{remark}
\begin{definition}[Mixed Membership Stochastic Blockmodel (MMSBM) \citep{Airoldi2008}]
We say an RDPG is an MMSBM with $K$ blocks if there exist $K$ unit vectors $y_1,\dotsc, y_K\in \Re^d$ such that for each $i\in [n]$, there exist $\alpha_1,\dotsc,\alpha_K>0$ such that $\sum_{k=1}^K \alpha_k=1$ and $X_i=\sum_{k=1}^{K} \alpha_k y_k$.
\end{definition}
\begin{remark}
The mixed membership SBM is again more general than the SBM, as it allows each vertex to belong to a mixture of different blocks. Additionally, note that every RDPG is an MMSBM for some choice of $K$.
\end{remark}
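The latent position constructions in the two preceding definitions can be made concrete as follows (a sketch of our own; the unit vectors, degree-correction factors, and Dirichlet membership weights are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 60, 2

# Two unit vectors y_1, y_2 in R^2 with nonnegative inner products.
Y = np.array([[1.0, 0.0],
              [np.sqrt(0.5), np.sqrt(0.5)]])

# DCSBM: X_i = c_i * y_{k(i)}, with degree-correction factors c_i in (0, 1).
k = rng.choice(K, size=n)
c = rng.uniform(0.2, 0.9, size=n)
X_dcsbm = c[:, None] * Y[k]

# MMSBM: X_i = sum_k alpha_{ik} y_k, with membership weights on the simplex.
alpha = rng.dirichlet(np.ones(K), size=n)
X_mmsbm = alpha @ Y

P_dcsbm = X_dcsbm @ X_dcsbm.T   # all entries lie in [0, 1]
P_mmsbm = X_mmsbm @ X_mmsbm.T
```

In both cases, the resulting edge-probability matrices remain valid because the unit vectors have pairwise inner products in $[0,1]$ and the scalings are convex or contractive.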
Our next theorem summarizes the relationship between these models.
\begin{theorem}
Considered as statistical models for graphs, i.e. sets of probability distributions on graphs, the positive-semidefinite $K$-block SBM is a subset of the $K$-block DCSBM and the $K$-block MMSBM.
Both the positive semidefinite $K$-block DCSBM and $K$-block MMSBM are subsets of the RDPG model with $K$-dimensional latent positions.
Finally, the union of all possible RDPG models, without restriction of latent position dimension, is dense in the set of positive semidefinite latent position models.
\end{theorem}
\subsection{Embeddings}
Since we rely on spectral decompositions, we begin by fixing notation for the
spectral decomposition of the rank-$d$ positive semidefinite matrix $\bP=\bX\bX^{\top}$.
\begin{definition} [Spectral Decomposition of $\bP$]\label{def:spec_decomp_P}
Since $\bP$ is symmetric and positive semidefinite, let
$\bP= \UP \SP \UP^{\top}$ denote its spectral decomposition,
with $\UP \in \R^{n \times d}$ having orthonormal columns
and $\SP \in \R^{d \times d}$ a diagonal matrix
with nonincreasing entries
$(\SP)_{1,1}\ge (\SP)_{2,2} \ge \cdots \ge (\SP)_{d,d} > 0$.
\end{definition}
As with the spectral decomposition of the matrix $\bP$, given an adjacency matrix $\bA$, we define its adjacency spectral embedding as follows:
\begin{definition} [Adjacency spectral embedding (ASE)]\label{def:ASE}
Given a positive integer $d \geq 1$, the {\em adjacency spectral embedding} (ASE) of $\mathbf{A}$ into
$\mathbb{R}^{d}$ is given by $\hat{{\bf X}}={\bf U}_{\mathbf{A}}
{\bf S}_{\mathbf{A}}^{1/2}$ where
$$|{\bf A}|=[{\bf U}_{\mathbf{A}}|{\bf U}^{\perp}_{\mathbf{A}}][{\bf
S}_{\mathbf{A}} \oplus {\bf S}^{\perp}_{\mathbf{A}}][{\bf
U}_{\mathbf{A}}|{\bf U}^{\perp}_{\mathbf{A}}]^{\top}$$ is the spectral
decomposition of $|\mathbf{A}| = (\mathbf{A}^{\top} \mathbf{A})^{1/2}$,
$\mathbf{S}_{\mathbf{A}}$ is the diagonal matrix of the $d$ largest eigenvalues
of $|\mathbf{A}|$, and $\mathbf{U}_{\mathbf{A}}$ is the $n \times d$ matrix whose
columns are the corresponding eigenvectors.
\end{definition}
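As a sanity check on Definition~\ref{def:ASE} (a sketch of our own, using NumPy's symmetric eigendecomposition and ordering eigenvalues by magnitude, which matches the singular values of $|\bA|$), the ASE of the noiseless matrix $\bP = \bX\bX^{\top}$ recovers $\bX$ exactly up to an orthogonal transformation, which we undo with the orthogonal Procrustes solution:

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenvectors of A, scaled by
    the square roots of the magnitudes of their eigenvalues."""
    evals, evecs = np.linalg.eigh(A)
    order = np.argsort(np.abs(evals))[::-1][:d]        # d largest in magnitude
    return evecs[:, order] * np.sqrt(np.abs(evals[order]))

# Noiseless sanity check: the ASE of P = X X^T is X up to an orthogonal matrix.
rng = np.random.default_rng(4)
X = rng.uniform(0.2, 0.7, size=(100, 2))
Xhat = ase(X @ X.T, 2)

# Undo the rotation with the orthogonal Procrustes solution W = U V^T.
U, _, Vt = np.linalg.svd(Xhat.T @ X)
err = np.linalg.norm(Xhat @ (U @ Vt) - X)              # essentially zero
```

The residual is at machine precision precisely because of the nonidentifiability of Remark~\ref{rem:nonid}: the ASE can only recover $\bX$ up to an orthogonal transformation.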
\begin{remark}
The intuition behind the notion of adjacency spectral embedding is
as follows.
Given the goal of estimating $\bX$, had we observed $\bP$, then the spectral embedding of $\bP$, given by $\UP \SP^{1/2}$, would be an orthogonal transformation of $\bX$.
Of course, $\bP$ is not observed; instead we observe $\bA$, a noisy version of $\bP$.
The ASE will be a good estimate of $\bX$ provided that the noise does not greatly impact the embedding.
As we will see shortly,
one can show that
$\|\mathbf{A} - \mathbf{X} \mathbf{X}^{\top} \| = O(\|\mathbf{X}\|)
= o(\|\mathbf{X} \mathbf{X}^{\top}\|)$
with high probability \cite{oliveira2009concentration,lu13:_spect,Tropp2015,rinaldo_2013}.
That is to say, $\mathbf{A}$ can be viewed as a ``small''
perturbation of $\mathbf{X} \mathbf{X}^{\top}$.
Weyl's inequality or the Kato-Temple inequality \cite{cape_16_conc,kato-temple} then yield that the eigenvalues of $\mathbf{A}$ are ``close'' to the eigenvalues of $\mathbf{X} \mathbf{X}^{\top}$.
In addition, by the
Davis-Kahan theorem \cite{davis70}, the subspace
spanned by the top $d$ eigenvectors of $\mathbf{X}
\mathbf{X}^{\top}$ is well-approximated by the subspace spanned by
the top $d$ eigenvectors of $\mathbf{A}$.
\end{remark}
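The perturbation heuristic in the remark above can be checked empirically. In this sketch (ours; the rank-one latent position distribution is an arbitrary choice), we sample $\bA$ from a rank-one $\bP$ and verify Weyl's inequality, $|\lambda_i(\bA)-\lambda_i(\bP)| \le \|\bA-\bP\|$ for all $i$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
X = rng.uniform(0.3, 0.7, size=(n, 1))             # rank-one latent positions
P = X @ X.T
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T                                        # hollow, symmetric sample

evalsA = np.sort(np.linalg.eigvalsh(A))[::-1]
evalsP = np.sort(np.linalg.eigvalsh(P))[::-1]
spec_gap = np.linalg.norm(A - P, 2)                # ||A - P|| in spectral norm

# Weyl: |lambda_i(A) - lambda_i(P)| <= ||A - P|| for every i.
max_dev = np.abs(evalsA - evalsP).max()
```

The largest eigenvalue of $\bA$ tracks the single large eigenvalue of $\bP$, while the remaining eigenvalues of $\bA$ stay within $\|\bA - \bP\|$ of zero, exactly as the remark describes.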
We also define the analogous Laplacian spectral embedding which uses the spectral decomposition of the normalized Laplacian matrix.
\begin{definition}
\label{def:LSE}
Let $\mathcal{L}(\mathbf{A}) = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}$ denote the normalized Laplacian of $\mathbf{A}$ where $\mathbf{D}$ is the diagonal matrix whose diagonal entries $\mathbf{D}_{ii} = \sum_{j \not = i} \mathbf{A}_{ij}$. Given a positive integer $d \geq 1$, the {\em Laplacian spectral embedding} (LSE) of $\mathbf{A}$ into
$\mathbb{R}^{d}$ is given by $\breve{{\bf X}}={\bf U}_{\mathcal{L}(\mathbf{A})}
{\bf S}_{\mathcal{L}(\mathbf{A})}^{1/2}$
where $$|\mathcal{L}({\bf A})|=\Bigl[{\bf U}_{\mathcal{L}(\mathbf{A})}|{\bf U}^{\perp}_{\mathcal{L}(\mathbf{A})}\Bigr]\Bigl[{\bf
S}_{\mathcal{L}(\mathbf{A})} \oplus {\bf S}^{\perp}_{\mathcal{L}(\mathbf{A})}\Bigr]\Bigl[{\bf
U}_{\mathcal{L}(\mathbf{A})}|{\bf U}^{\perp}_{\mathcal{L}(\mathbf{A})}\Bigr]^{\top}$$ is the spectral
decomposition of $|\mathcal{L}(\mathbf{A})| = (\mathcal{L}(\mathbf{A})^{\top} \mathcal{L}(\mathbf{A}))^{1/2}$,
$\mathbf{S}_{\mathcal{L}(\mathbf{A})}$ is the diagonal matrix containing the $d$ largest eigenvalues
of $|\mathcal{L}(\mathbf{A})|$ on the diagonal, and $\mathbf{U}_{\mathcal{L}(\mathbf{A})}$ is the $n \times d$ matrix whose
columns are the corresponding eigenvectors.
\end{definition}
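A minimal sketch of Definition~\ref{def:LSE} (our own; as with the ASE sketch, we use the symmetric eigendecomposition and order eigenvalues by magnitude): for the complete graph on four vertices, $\mathcal{L}(\bA) = \bA/3$, and the one-dimensional LSE places every vertex at $\pm 1/2$ with a common sign:

```python
import numpy as np

def lse(A, d):
    """Laplacian spectral embedding of A into R^d (eigenvalues ordered by magnitude)."""
    deg = A.sum(axis=1)
    L = A / np.sqrt(np.outer(deg, deg))           # D^{-1/2} A D^{-1/2}
    evals, evecs = np.linalg.eigh(L)
    order = np.argsort(np.abs(evals))[::-1][:d]   # d largest in magnitude
    return evecs[:, order] * np.sqrt(np.abs(evals[order]))

# Complete graph on 4 vertices: every degree is 3, so L(A) = A / 3.
A = np.ones((4, 4)) - np.eye(4)
Xbreve = lse(A, 1)                                # entries all +1/2 or all -1/2
```

The identical embedded positions reflect the fact that all four vertices are exchangeable in the complete graph.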
Finally, there are a variety of other matrices to which spectral decompositions may be applied to yield an embedding of the graph \cite{Le2017}.
These are often dubbed regularized embeddings, and they seek to improve the stability of spectral methods in order to accommodate sparser graphs.
While we do not analyze these embeddings directly, many of our approaches can be adapted to them.
\section{Core proof techniques: probabilistic and linear algebraic bounds}\label{sec:core_techniques}
In this section, we give an overview of the core background results used in our proofs.
Several of our key results on the consistency and normality of the adjacency spectral embedding depend on a triumvirate of tools: matrix concentration inequalities, the Davis-Kahan Theorem, and detailed bounds obtained via the power method.
\subsection{Concentration inequalities}
Concentration inequalities for real- and matrix-valued data are a critical component to our proofs of consistency for spectral estimates. We make use of classical inequalities, such as Hoeffding's inequality, for real-valued random variables, and we also exploit more recent work on the concentration of sums of random matrices and matrix martingales around their expectation. For a careful study of several important matrix concentration inequalities, see \cite{Tropp2015}.
We begin by recalling Hoeffding's inequality, which bounds the deviations between a sample mean of independent random variables and the expected value of that sample mean.
\begin{theorem}
\label{thm:Hoeffding}
Let $X_i$, $1 \leq i \leq n$, be independent, bounded random variables defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Suppose $a_i, b_i$ are real numbers such that $a_i \leq X_i \leq b_i$. Let $\bar{X}$ be their sample mean:
$$\bar{X}=\frac{1}{n} \sum_{i=1}^n X_i$$
Then
\begin{equation}\label{eq:Hoeffding_bound1}
\Pr\left(\bar{X}-\mathbb{E}(\bar{X}) \geq t\right) \leq \exp\left(\frac{-2n^2t^2}{\sum_{i=1}^n (b_i -a_i)^2}\right)
\end{equation}
and
\begin{equation}\label{eq:Hoeffding_bound2}
\Pr\left(|\bar{X}-\mathbb{E}(\bar{X})| \geq t\right) \leq 2\exp\left(\frac{-2n^2t^2}{\sum_{i=1}^n (b_i -a_i)^2}\right)
\end{equation}
\end{theorem}
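As a numerical illustration of Theorem~\ref{thm:Hoeffding} (a sketch of our own; the Bernoulli parameter, deviation $t$, and sample sizes are arbitrary), the empirical deviation probability of a Bernoulli sample mean sits well below the Hoeffding bound $2\exp(-2nt^2)$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, t, trials = 500, 0.05, 2000

# X_i ~ Bernoulli(0.3), so a_i = 0, b_i = 1 and the two-sided bound reads
# Pr(|Xbar - E Xbar| >= t) <= 2 exp(-2 n t^2).
samples = rng.random((trials, n)) < 0.3
xbar = samples.mean(axis=1)
empirical = np.mean(np.abs(xbar - 0.3) >= t)    # Monte Carlo deviation frequency
bound = 2 * np.exp(-2 * n * t**2)               # Hoeffding's bound, about 0.164
```

The bound is distribution-free, so it is typically conservative; the Monte Carlo frequency here is an order of magnitude smaller.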
For an undirected, hollow RDPG with probability matrix $\bP$, $\E(\bA_{ij})=\bP_{ij}$ for all $i \neq j$. As such, one can regard $\bA$ as a ``noisy'' version of $\bP$.
It is tempting to believe that $\bA$ and $\bP$ are close in terms of the Frobenius norm, but this is sadly not true; indeed, it is easy to see that $$\|\bA-\bP\|_F^2=\Theta(\|\bP\|_F^2)$$
To overcome this using only Hoeffding's inequality, we can instead consider the difference $(\bA^2-\bP^2)_{ij}$, which is a sum of independent random variables.
Hence, Hoeffding's inequality implies that
$$|(\bA^2-\bP^2)_{ij}|^2=o(|(\bP^2)_{ij}|^2)$$
Since the eigenvectors of $\bA$ and $\bA^2$ coincide, this is itself sufficient to show concentration of the adjacency spectral embedding \citep{sussman12,rohe2011spectral}.
However, somewhat stronger and more elegant results can be shown by considering the spectral norm instead. In particular, a nontrivial body of recent work on matrix concentration implies that, under certain assumptions on the sparsity of $\bP$, the spectral norm of $\bA-\bP$ can be well-controlled. We focus on the following important result of Oliveira \cite{oliveira2009concentration} and Tropp \cite{Tropp2015} and further improvements of Lu and Peng \cite{lu13:_spect} and Lei and Rinaldo \cite{rinaldo_2013}, all of which establish that $\bA$ and $\bP$ are close in spectral norm.
\begin{theorem} [Spectral norm control of $\bA-\bP$ from \cite{oliveira2009concentration, Tropp2015}]\label{thm:oliveira}
Let $\bA$ be the adjacency matrix of an independent-edge random graph on $[n]$ with matrix of edge probabilities $\bP$. For any constant $c$, there exists another constant $C$, independent of $n$ and $\bP$, such that if $\delta(\bP)>C \ln n$, then for any $n^{-c}<\eta<1/2$,
\begin{equation}\label{eq:Oliveira_orig}
\Pr \left(\|\bA-\bP\| \leq 4\sqrt{\delta(\bP) \ln (n/\eta)}\right) \geq 1-\eta.
\end{equation}
\end{theorem}
In \cite{lu13:_spect}, the authors give an improvement under slightly stronger density assumptions\footnote{A similar bound is provided in \cite{rinaldo_2013}, but with $\delta(\mathbf{P})$ defined as $\delta(\mathbf{P}) = n \max_{ij} \mathbf{P}_{ij}$ and a density assumption of the form $(n \max_{ij} \mathbf{P}_{ij}) > (\log n)^{1 + a}$.}:
\begin{theorem}[Spectral norm control of $\bA-\bP$ \cite{lu13:_spect}]\label{thm:lu_peng}
With notation as above, suppose there exists a constant $a > 0$ such that for $n$ sufficiently large, $\delta(\bP)>(\log n)^{4 + a}$.
Then for any $c > 0$ there exists a constant $C$ depending on $c$ such that
\begin{equation}\label{eq:lu_peng_spec_norm}
\mathbb{P}\left(\|\bA-\bP\|\leq 2 \sqrt{\delta(\bP)}+C\delta^{1/4}(\bP) \ln n\right) \geq 1 - n^{-c}.
\end{equation}
\end{theorem}
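These spectral norm bounds are easy to probe empirically. The sketch below (ours; the latent position distribution is an arbitrary choice) confirms that $\|\bA-\bP\|/\sqrt{\delta(\bP)}$ stays bounded by a small constant as $n$ grows, consistent with Theorems~\ref{thm:oliveira} and \ref{thm:lu_peng}:

```python
import numpy as np

rng = np.random.default_rng(7)

def spectral_ratio(n, rng):
    """Return ||A - P|| / sqrt(delta(P)) for one sampled RDPG of size n."""
    X = rng.uniform(0.3, 0.7, size=(n, 1))
    P = X @ X.T
    A = np.triu((rng.random((n, n)) < P).astype(float), 1)
    A = A + A.T
    delta = P.sum(axis=1).max()                 # maximum expected degree
    return np.linalg.norm(A - P, 2) / np.sqrt(delta)

ratios = [spectral_ratio(n, rng) for n in (100, 400)]
```

By contrast, $\|\bA-\bP\|_F$ grows like $\|\bP\|_F$ itself, which is exactly why the spectral norm, not the Frobenius norm, is the right yardstick here.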
\subsection{Matrix perturbations and spectral decompositions}
The above results formalize our intuition that $\bA$ provides a ``reasonable" estimate for $\bP$. Moreover, in the RDPG case, where $\bP$ is of low rank and is necessarily positive semidefinite, these results have implications about the nonnegativity of the eigenvalues of $\bA$. Specifically, we use Weyl's Theorem to infer bounds on the differences between the eigenvalues of $\bA$ and $\bP$ from the spectral norm of their difference, and the Gerschgorin Disks Theorem to infer lower bounds on the maximum row sums of $\bP$ from assumptions on the eigengap of $\bP$ (since both $\bP$ and $\bA$ are nonnegative matrices, one could also obtain the same lower bounds by invoking the Perron-Frobenius Theorem). For completeness, we recall the Gerschgorin Disks Theorem and Weyl's Theorem. The former relates the eigenvalues of a matrix to the sums of the absolute values of the entries in each row, and the latter establishes bounds on the differences in eigenvalues between a matrix and a perturbation.
\begin{theorem}[Gerschgorin Disks \cite{horn85:_matrix_analy}]
\label{thm:Gerschgorin}
Let $\bH$ be a complex $n\times n$ matrix with entries $\bH_{ij}$. For $i\in \{1,\cdots ,n\}$, let $R_{i}=\sum _{j\neq {i}}\left|\bH_{ij}\right|$, and let the $i$th \textrm{Gerschgorin disk} $D(\bH_{ii},R_{i})$ be the closed disk centered at $\bH_{ii}$ with radius $R_{i}$.
Then every eigenvalue of $\bH$ lies within at least one of the Gerschgorin disks $D(\bH_{ii},R_{i})$.
\end{theorem}
\begin{theorem} [Weyl \cite{horn85:_matrix_analy}] Let $\bM, \bH,$ and $\bR$ be $n\times n$ Hermitian matrices, and suppose $\bM=\bH+\bR$. Suppose $\bH$ and $\bR$ have eigenvalues $\nu_1 \geq \cdots \geq \nu_n$ and $r_1 \geq \cdots \geq r_n$, respectively. Suppose the eigenvalues of $\bM$ are given by $\mu_1 \geq \cdots \geq \mu_n$. Then
$$\nu_i + r_n \leq \mu_i \leq \nu_i + r_1$$
\end{theorem}
From our random graph model assumptions and our graph density assumptions, we can conclude that for sufficiently large $n$, the top $d$ eigenvalues of $\bA$ will be nonnegative:
\begin{remark}[Nonnegativity of the top $d$ eigenvalues of $\bA$]\label{rem:nonneg_evals_A} Suppose $\mathbf{A} \sim \mathrm{RDPG}(\mathbf{X})$. Since $\bP=\bX\bX^{\top}$, it is necessarily positive semidefinite, and thus has nonnegative eigenvalues. If we now assume that $\gamma(\mathbf{P}) > c_0$ for some constant $c_0$, then this assumption, together with the Gerschgorin Disks Theorem, guarantees that the top $d$ eigenvalues of $\bP$ are all of order $\delta(\bP)$, and our rank assumption on $\bP$ mandates that the remaining eigenvalues be zero. If $\delta(\bP)> \log^{4+a'}n$, the spectral norm bound in \eqref{eq:lu_peng_spec_norm} applies, ensuring that for $n$ sufficiently large, $\|\bA-\bP\| \sim O(\sqrt{\delta(\bP)})$ with high probability. Thus, by Weyl's inequality, we see that the top $d$ eigenvalues of $\bA$ are, with high probability, of order $\delta(\bP)$, and the remaining eigenvalues are, with high probability, within $O(\sqrt{\delta(\bP)})$ of zero.
\end{remark}
Since $\bP=\bX \bX^{\top}=\UP \SP^{1/2} (\UP \SP^{1/2})^{\top}$ and $\bA$ is close to $\bP$, it is intuitively appealing to conjecture that, in fact, $\hat{\bX}=\UA \SA^{1/2}$ should be close to some rotation of $\UP \SP^{1/2}$. That is, if $\bX$ is the matrix of true latent positions---so $\bX\bW=\UP \SP^{1/2}$ for some orthogonal matrix $\bW$---then it is plausible that $\|\Xhat-\bX \bW\|_F$ ought to be comparatively small. To make this precise, however, we need to understand how both eigenvalues and eigenvectors of a matrix behave when the matrix is perturbed. Weyl's inequality \cite{horn85:_matrix_analy} addresses the former. The impact of matrix perturbations on associated eigenspaces is significantly more complicated, and the Davis-Kahan Theorem \cite{davis70, Bhatia1997} provides one approach to the latter. The Davis-Kahan Theorem has a significant role in several approaches to spectral estimation for graphs: for example, Rohe, Chatterjee, and Yu leverage it in \cite{rohe2011spectral} to prove the accuracy of spectral estimates in high-dimensional stochastic blockmodels. The version we give below is from \cite{DK_usefulvariant}, which is a user-friendly guide to the Davis-Kahan Theorem and its statistical implications.
The Davis-Kahan Theorem is often stated as a result on canonical angles between subspaces. To that end, we recall that if $\bU$ and $\bV$ are two $n \times d$ matrices with orthonormal columns, then we define the vector of $d$ {\em canonical} or {\em principal} angles between their column spaces to be the vector $\Theta$ such that
$$\Theta=(\theta_1=\cos^{-1}\sigma_1, \cdots, \theta_d=\cos^{-1} \sigma_d )^{\top}$$
where $\sigma_1, \cdots, \sigma_d$ are the singular values of $\bU^{\top} \bV$. We define the matrix $\sin(\Theta)$ to be the $d \times d$ diagonal matrix for which $\sin(\theta)_{ii}=\sin \theta_i$.
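For readers who wish to compute canonical angles numerically, the definition above translates directly into a few lines of code. The following sketch (in Python with \texttt{numpy}; the two subspaces are arbitrary examples of our own choosing, not drawn from the text) recovers the angles from the singular values of $\bU^{\top} \bV$:

```python
import numpy as np

def canonical_angles(U, V):
    """Principal angles (radians) between the column spaces of U and V.

    U and V are n x d matrices with orthonormal columns; the angles are
    theta_i = arccos(sigma_i), where sigma_i are the singular values of
    U^T V, exactly as in the definition above."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    # Clip guards against tiny floating-point overshoot beyond [0, 1].
    return np.arccos(np.clip(s, 0.0, 1.0))

# Example: two 2-dimensional subspaces of R^3 sharing one direction.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2}
V = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e3}
theta = canonical_angles(U, V)
# The shared direction contributes angle 0; the remaining pair of
# directions is orthogonal, contributing angle pi/2.
```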
\begin{theorem}[A variant of Davis-Kahan \cite{DK_usefulvariant}]
\label{thm:davis-kahan}
Suppose $\bH$ and $\bH'$ are two symmetric $n \times n$ matrices with real entries with spectrum given by
$\lambda_1 \geq \lambda_2 \cdots \geq \lambda_n$
and $\lambda_1' \geq \lambda_2' \cdots \geq \lambda_n'$, respectively; and let $\bv_1, \cdots, \bv_n$ and $\bv_1', \cdots, \bv_n'$ denote their corresponding orthonormal eigenvectors. Let $0 < d \leq n$ be fixed, let $\bV$ be the matrix whose columns are the eigenvectors $\bv_1, \cdots, \bv_d$, and similarly let $\bV'$ be the matrix whose columns are the eigenvectors $\bv_1', \cdots, \bv_d'$. Then
$$\|\sin(\Theta)\|_F \leq \frac{2 \sqrt{d}\,\|\bH-\bH'\|}{\lambda_d(\bH)-\lambda_{d+1}(\bH)},$$
where $\Theta$ is the vector of canonical angles between the column spaces of $\bV$ and $\bV'$.
\end{theorem}
Observe that if we assume that $\bP$ is of rank $d$ and has a sufficient eigengap, the Davis-Kahan Theorem gives us an immediate bound on the spectral norm of the difference between $\UA \UA^{\top}$ and $\UP \UP^{\top}$ in terms of this eigengap and the spectral norm of $\bA-\bP$, namely:
\begin{equation*}
\| \UA \UA^{\top} - \UP\UP^{\top} \|=\max_{i} |\sin(\theta_i)|
\le \frac{ C \| \bA - \bP \| }{ \lambda_d(\bP) }.
\end{equation*}
Recall that the Frobenius norm of a matrix $\bH$ satisfies
$$(\|\bH\|_F)^2=\sum_{i,j} \bH^2_{ij}=\tr (\bH^{\top} \bH) \geq \|\bH\|^2$$
and further that if $\bH$ is of rank $d$, then
$$(\|\bH\|_F)^2 \leq d\|\bH\|^2$$
and hence for rank $d$ matrices, spectral norm bounds are easily translated into bounds on the Frobenius norm. It is worth noting that \cite{rohe2011spectral} guarantees that a difference in projections can be transformed into a difference between eigenvectors themselves: namely,
given the above bound for $\|\UA \UA^{\top} - \UP \UP^{\top}\|_F$, there exist a constant $C$ and an orthogonal matrix $\bW \in \R^{d \times d}$ such that
\begin{equation}
\label{eq:Davis_Kahan_variant1}
\|\UP \bW-\UA\|_F \leq C\sqrt{d} \frac{\|\bA-\bP\|}{\lambda_d(\bP)}.
\end{equation}
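Since Theorem~\ref{thm:davis-kahan} is deterministic, it can be checked directly on small examples. The following sketch (in Python with \texttt{numpy}; the rank-$d$ matrix and the symmetric perturbation are arbitrary choices of ours, not from the text) compares $\|\sin(\Theta)\|_F$ with the stated bound:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 3

# A rank-d symmetric "signal" matrix H and a symmetric perturbation H'.
X = rng.normal(size=(n, d))
H = X @ X.T
E = rng.normal(scale=0.1, size=(n, n))
Hp = H + (E + E.T) / 2.0

# Top-d eigenvectors of each matrix (eigh sorts eigenvalues ascending).
evals, evecs = np.linalg.eigh(H)
V = evecs[:, -d:]
Vp = np.linalg.eigh(Hp)[1][:, -d:]

# ||sin Theta||_F from the singular values of V^T V'.
s = np.clip(np.linalg.svd(V.T @ Vp, compute_uv=False), 0.0, 1.0)
sin_theta_F = np.linalg.norm(np.sqrt(1.0 - s**2))

# The bound of the theorem: here lambda_{d+1}(H) = 0 since H has rank d.
eigengap = evals[-d] - evals[-d - 1]
bound = 2.0 * np.sqrt(d) * np.linalg.norm(Hp - H, ord=2) / eigengap
```

With a perturbation that is small relative to the eigengap, the subspace angles are small and the inequality holds with room to spare.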
While these important results provide the backbone for much of our theory, the detailed bounds and distributional results described in the next section rely on a decomposition of $\hat{\bX}$ in terms of $(\bA - \bP) \UP \SP^{-1/2}$ and a remainder.
This first term can be viewed as an application of the power method for finding eigenvectors.
Additionally, standard univariate and multivariate concentration inequalities and distributional results can be readily applied to this term.
On the other hand, the remainder term can be shown to be of smaller order than the first, and much of the technical challenges of this work rely on carefully bounding the remainder term.
\section{Spectral embeddings and estimation for RDPGs}\label{sec:ASE_Inference_RDPG}
There is a wealth of literature on spectral methods for estimating model parameters in random graphs, dating back more than half a century to estimation in simple \ErdosRenyi models. More specifically, for \ErdosRenyi graphs, we would be remiss not to point to \Furedi and \Komlos's classic work \cite{furedi1981eigenvalues} on the eigenvalues and eigenvectors of the adjacency matrix of an \ErdosRenyi graph. Again, despite their model simplicity, \ErdosRenyi graphs veritably teem with open questions; to cite but one example, in a very recent manuscript, Arias-Castro and Verzelen \cite{Arias-Castro_ER} address, in the framework of hypothesis testing, the question of subgraph detection within an \ErdosRenyi graph.
Moving up to the slightly more heterogeneous stochastic block model, we again find a rich literature on consistent block estimation in stochastic block models. Fortunato \cite{fortunato} provides an overview of partitioning techniques for graphs in general, and
consistent partitioning of stochastic block models for two blocks was accomplished by Snijders
and Nowicki \cite{snijders_nowicki} and for equal-sized blocks by Condon and Karp in 2001.
For the more general case, Bickel and Chen \cite{bickel_chen_2009} demonstrate a stronger version of
consistency via maximizing Newman-Girvan modularity \cite{newman2006modularity} and other modularities. For
a growing number of blocks, Choi et al. \cite{bickel2013asymptotic} prove consistency of likelihood based
methods, and Bickel et al. \cite{bickel2011method} provide a method to consistently estimate the stochastic
block model parameters using subgraph counts and degree distributions. This work and the
work of Bickel and Chen \cite{Bickel2009} both consider the case of very sparse graphs. In \cite{Airoldi2008}, Airoldi et al.\ define the important generalization of a {\em mixed-membership} stochastic blockmodel, in which block membership may change depending on vertex-to-vertex interactions, and the authors demonstrate methods of inference for the mixed membership and block probabilities.
Rohe, Chatterjee and Yu show in \cite{rohe2011spectral} that spectral embeddings of the Laplacian give consistent estimates of block memberships in a stochastic block model, and one of the earliest corresponding results on the consistency of the adjacency spectral embedding is given by Sussman, Tang, Fishkind, and Priebe in \cite{STFP-2011}. In \cite{STFP-2011}, it is proved that for a stochastic block model with $K$ blocks and a rank $d$ block probability matrix $B$, clustering the rows of the adjacency spectral embedding via $k$-means clustering (see \cite{pollard81:_stron_k}) results in at most $\log n$ vertices being misclassified. An improvement to this can be found in \cite{fishkind2012consistent}, where it is shown that consistent recovery is possible even when the embedding dimension is unknown.
In \cite{lyzinski13:_perfec}, under the assumption of distinct eigenvalues for the second moment matrix $\Delta$ of a random dot product graph, it is shown that clustering the rows of the adjacency spectral embedding results in asymptotically almost surely perfect recovery of the block memberships in a stochastic blockmodel---i.e., for sufficiently large $n$, the probability of all vertices being correctly assigned is close to 1. An especially strong recovery is exhibited here: it is shown that in the $2 \rightarrow \infty$ norm, $\Xhat$ is sufficiently close to a rotation of the true latent positions. In fact, each {\em row} in $\Xhat$ is within $C\log n/\sqrt{n}$ of the corresponding row in $\bX$. Unlike
a Frobenius norm bound, in which it is possible that some rows of $\Xhat$ may be close to the true positions but others may be significantly farther away, this $2 \rightarrow \infty$ bound implies that the adjacency spectral embedding has a {\em uniform} consistency.
Furthermore, \cite{tang14:_semipar} gives a nontrivial tightening of the Frobenius norm bound for the difference between the (rotated) true and estimated latent positions: in fact the Frobenius norm is not merely bounded from above by a term of order $\log n$, but rather concentrates around a {\em constant}. This constant-order Frobenius bound forms the basis of a principled two-sample hypothesis test for determining whether two random dot product graphs have the same generating latent positions (see Section \ref{subsec:Testing}).
In \cite{lyzinski15_HSBM}, the $2 \to \infty$-norm bound is extended even to the case when the second moment matrix $\Delta$ does not have distinct eigenvalues. This turns out to be critical in guaranteeing that the adjacency spectral embedding can be effectively deployed for community detection in hierarchical block models. We present this bound for the $2 \to \infty$ norm in some detail here; it illustrates the confluence of our key techniques and provides an effective roadmap for several subsequent results on asymptotic normality and two-sample testing.
\subsection{Consistency of latent position estimates} \label{subsec:Estimation}
We state here one of our lynchpin results on consistency, in the $2 \rightarrow \infty$ norm, of the adjacency spectral embedding for latent position estimation of a random dot product graph. We give an outline of the proof here, and refer the reader to Appendix~\ref{sec:Appendix} for the details, which essentially follow the proof given in \cite{lyzinski15_HSBM}. We emphasize that our setting is a sequence of random dot product graphs $\mathbf{A} \sim \mathrm{RDPG}(\mathbf{X}_n)$ for increasing $n$, and thus we make the following density assumption on $\mathbf{P}_n$ as $n$ increases:
\begin{assumption}\label{ass:max_degree_assump}
Let $\mathbf{A}_n \sim \mathrm{RDPG}(\mathbf{X}_n)$ for $n \geq 1$ be a sequence of random dot product graphs with $\mathbf{A}_n$ being a $n \times n$ adjacency matrix. Suppose that $\mathbf{X}_n$ is of rank $d$ for all $n$ sufficiently large. Suppose also that there exists constants $a > 0$ and $c_0 > 0$ such that for all $n$ sufficiently large,
$$\delta(\bP_n)= \max_{i} \sum_{j=1}^n (\mathbf{P}_n)_{ij} \geq \log^{4+a}(n); \quad \gamma(\mathbf{P}_n) = \frac{\lambda_d(\mathbf{P}_n)}{\delta(\bP_n)} \geq c_0.$$
\end{assumption}
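For a concrete feel for these quantities, the sketch below (in Python with \texttt{numpy}; the two-block probability matrix is an arbitrary example of ours, not from the text) computes $\delta(\bP)$ and $\gamma(\bP)$ for a balanced two-block stochastic blockmodel, for which both quantities can also be read off in closed form:

```python
import numpy as np

def delta_gamma(P, d):
    """Maximum expected degree delta(P), and the ratio gamma(P) of the
    d-th largest eigenvalue of P to delta(P), per the assumption above."""
    delta = P.sum(axis=1).max()
    lam = np.sort(np.linalg.eigvalsh(P))[::-1]  # descending order
    return delta, lam[d - 1] / delta

# Example: a balanced two-block SBM edge-probability matrix P built from
# an (arbitrary) block matrix B; z holds the block memberships.
n = 400
B = np.array([[0.5, 0.2], [0.2, 0.4]])
z = np.repeat([0, 1], n // 2)
P = B[np.ix_(z, z)]
delta, gamma = delta_gamma(P, d=2)
# Here delta(P) = 200 * (0.5 + 0.2) = 140, and gamma(P) equals
# 200 * lambda_2(B) / 140, which is bounded away from zero.
```

For block-constant $\bP$ of this form, $\gamma(\bP)$ stays fixed as $n$ grows while $\delta(\bP)$ grows linearly, so the assumption is satisfied once $n$ is large enough that $\delta(\bP_n) \geq \log^{4+a}(n)$.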
Our consistency result for the $2 \rightarrow \infty$ norm is Theorem~\ref{thm:minh_sparsity} below. In the proof of this particular result, we consider a particular random dot product graph with non-random (i.e. fixed) latent positions, but our results apply also to the case of random latent positions. In Section \ref{subsec:Distributional}, where we provide a central limit theorem, we focus on the case in which the latent positions are themselves random. Similarly, in Section \ref{subsec:Testing}, in our analysis of the semiparametric two-sample hypothesis test for the equality of latent positions in a pair of random dot product graphs, we return to the setting in which the latent positions are fixed, and in the nonparametric hypothesis test of equality of distributions, we analyze again the case when the latent positions are random. It is convenient to be able to move fluidly between the two versions of a random dot product graph, adapting our results as appropriate in each case.
In the Appendix (\ref{sec:Appendix}), we give a detailed proof of Theorem~\ref{thm:minh_sparsity}, and we point out that the argument used therein also sets the stage for the central limit theorem for the rows of the adjacency spectral embedding given in Subsection~\ref{subsec:Distributional}.
\begin{theorem}\label{thm:minh_sparsity}
Let $\bA_n \sim \mathrm{RDPG}(\bX_n)$ for $n \geq 1$ be a sequence of random dot product graphs where the $\bX_n$ is assumed to be of rank $d$ for all $n$ sufficiently large. Denote by $\hat{\bX}_n$ the adjacency spectral embedding of $\bA_n$ and let $(\hat{\bX}_{n})_{i}$ and $(\bX_n)_{i}$ be the $i$-th row of $\hat{\bX}_n$ and $\bX_n$, respectively. Let $E_n$ be the event that there
exists an orthogonal transformation $\bW_n \in \mathbb{R}^{d \times d}$ such that
\begin{equation*}
\max_{i} \| (\hat{\bX}_n)_{i} - \bW_n (\bX_n)_{i} \| \leq
\frac{C d^{1/2} \log^2{n}}{\delta^{1/2}(\mathbf{P}_n)}
\end{equation*}
where $C > 0$ is some fixed constant and $\mathbf{P}_n = \mathbf{X}_n \mathbf{X}_n^{\top}$. Then $E_n$ occurs asymptotically almost surely; that is, $\Pr(E_n) \rightarrow 1$ as $n \rightarrow \infty$.
\end{theorem}
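The $2 \to \infty$ consistency of Theorem~\ref{thm:minh_sparsity} is easy to observe in simulation. The sketch below (in Python with \texttt{numpy}; the blockmodel parameters and the Procrustes alignment step are our own illustrative choices) samples a two-block stochastic blockmodel as a random dot product graph, forms the adjacency spectral embedding, aligns it to the true latent positions with the orthogonal Procrustes solution, and inspects the row-wise errors:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 2

# A two-block SBM written as an RDPG: each row of X is one of two latent
# positions obtained from a spectral factorization of the block matrix B.
B = np.array([[0.5, 0.2], [0.2, 0.4]])
lam, V = np.linalg.eigh(B)
nu = V * np.sqrt(lam)                  # rows: the two latent positions
z = (rng.random(n) < 0.5).astype(int)  # block memberships
X = nu[z]
P = X @ X.T

# Sample a symmetric, hollow adjacency matrix with E[A_ij] = P_ij, i < j.
A = np.triu(rng.random((n, n)) < P, 1)
A = (A + A.T).astype(float)

# Adjacency spectral embedding from the top-d eigenpairs of A
# (eigh sorts eigenvalues in ascending order).
evals, evecs = np.linalg.eigh(A)
Xhat = evecs[:, -d:] * np.sqrt(evals[-d:])

# Align Xhat to X with the orthogonal Procrustes solution W, then
# inspect the row-wise (2 -> infinity) errors.
U_, _, Vt = np.linalg.svd(Xhat.T @ X)
W = U_ @ Vt
row_err = np.linalg.norm(Xhat @ W - X, axis=1)
```

Even at moderate $n$, the maximum row error is a small fraction of the norms of the latent positions themselves, in keeping with the $C d^{1/2}\log^2 n/\delta^{1/2}(\mathbf{P}_n)$ rate of the theorem.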
Under the stochastic blockmodel, previous bounds on $\|\bX-\hat{\bX}\|_F$ implied that $k$-means applied to the rows of $\hat{\bX}$ would approximately correctly partition the vertices into their true blocks with up to $O(\log n)$ errors.
However, this Frobenius norm bound does not imply that there are no large outliers in the rows of $\bX-\hat{\bX}$, thereby precluding any guarantee of zero errors.
The improvements provided by Theorem~\ref{thm:minh_sparsity} overcome this hurdle and, as shown in \cite{lyzinski13:_perfec}, under suitable sparsity and eigengap assumptions, $k$-means applied to $\hat{\bX}$ will {\em exactly correctly} partition the vertices into their true blocks.
This implication demonstrates the importance of improving the overall bounds and in focusing on the correct metrics for a given task---in this case, for instance, block identification.
For a brief outline of the proof of this result, we note several key ingredients.
First is a lemma guaranteeing the existence of an orthogonal matrix $\bW^*$ such that
$$\|\bW^* \SA - \SP \bW^{*}\|_F=O\bigl((\log n)\,\delta^{1/2}(\bP)\bigr).$$
That is, there is an approximate commutativity between right and left multiplication of the corresponding matrices of eigenvalues by this orthogonal transformation. The second essential component is, at heart, a bound inspired by the power method. Specifically, we show that there exists an orthogonal matrix $\bW$ such that
$$\|\Xhat -\bX\bW\|_F=\|(\bA -\bP) \UP \SP^{-1/2}\|_F + O\bigl((\log n)\, \delta^{-1/2}(\bP)\bigr).$$
Finally, from this point, establishing the bound on the $2 \to \infty$ norm is a consequence of Hoeffding's inequality applied to sums of the form
$$ \sum_{k} (\bA_{ik}-\bP_{ik}) \bU_{kj}.$$
The $2 \rightarrow \infty$ bound in Theorem \ref{thm:minh_sparsity} has several important implications. As we mentioned, \cite{lyzinski13:_perfec} establishes an earlier form of this result, with more restrictive assumptions on the second moment matrix, and shows how this can be used to cluster vertices in an SBM perfectly, i.e. with no vertices misclassified. In addition, \cite{lyzinski13:_perfec} shows how clustering the rows of the ASE can be useful for inference in a degree-corrected stochastic block model as well. In Section \ref{sec:Applications}, we see that because of Theorem \ref{thm:minh_sparsity}, the adjacency spectral embedding and a novel angle-based clustering procedure can be used for accurately identifying subcommunities in an affinity-structured, hierarchical stochastic blockmodel \cite{lyzinski15_HSBM}. In the next section, we see how our proof technique here can be used to obtain distributional results for the rows of the adjacency spectral embedding.
\subsection{Distributional results for the ASE}\label{subsec:Distributional}
In the classical statistical task of parametric estimation, one observes a collection of i.i.d observations $X_1, \cdots, X_n$ from some family of distributions $\{F_{\theta}: \theta \in \Theta\}$, where $\Theta$ is some subset of Euclidean space, and one seeks to find a consistent estimator $T(X_1, \cdots, X_n)$ for $\theta$. As we mentioned in Section~\ref{sec:Intro}, often a next task is to determine the asymptotic distribution, as $n \rightarrow \infty$, of a suitable normalization of this estimator $T$. Such distributional results, in turn, can be useful for generating confidence intervals and testing hypotheses.
We adopt a similar framework for random graph inference. In the previous section, we demonstrate the consistency of the adjacency spectral embedding for the true latent position of a random dot product graph. In this section, we establish the asymptotic normality of the rows of this embedding and the Laplacian spectral embedding.
In the subsequent section, we examine how our methodology can be deployed for multisample graph hypothesis testing.
We emphasize that distributional results for spectral decompositions of random graphs are comparatively few. The classic results of
\Furedi and \Komlos \cite{furedi1981eigenvalues} describe the eigenvalues of the \ErdosRenyi random graph and the work of Tao and Vu \cite{tao2012random} is focused on distributions of eigenvectors of more general random matrices under moment restrictions,
but \cite{athreya2013limit} and \cite{tang_lse} are among the only references for central limit theorems for spectral decompositions of adjacency and Laplacian matrices for a wider class of independent-edge random graphs than merely the \ErdosRenyi model. Apart from their inherent interest, these limit theorems point us to current open questions on efficient estimation and the relative merits of different estimators and embeddings, in part by rendering possible a comparison of asymptotic variances and allowing us to quantify relative efficiency (see \cite{runze_law_large_graphs}) and to precisely conjecture a decomposition of the sources of variance in different spectral embeddings for multiple graphs (see \cite{levin_omni_2017}).
Specifically, we show that for a $d$-dimensional random dot product graph with i.i.d latent positions, there exists a sequence of $d \times d$ orthogonal matrices $\bW_n$ such that for any row index $i$, $\sqrt{n}(\bW_n (\Xhat_n)_{i} - (\bX_n)_i)$ converges as $n \rightarrow \infty$ to a mixture of multivariate normals.
\begin{theorem}[Central Limit Theorem for the rows of the ASE]\label{thm:clt_orig_but_better}
Let $(\bA_n, \bX_n) \sim \mathrm{RDPG}(F)$ be a sequence of adjacency matrices and associated latent positions of a $d$-dimensional random dot product graph according to an inner product distribution $F$. Let $\Phi(\bx,\bSigma)$ denote the cdf of a (multivariate)
Gaussian with mean zero and covariance matrix $\bSigma$,
evaluated at $\bx \in \R^d$. Then
there exists a sequence of orthogonal $d$-by-$d$ matrices
$( \Wn )_{n=1}^\infty$ such that for all $\bm{z} \in \R^d$ and for any fixed index $i$,
$$ \lim_{n \rightarrow \infty}
\Pr\left[ n^{1/2} \left( \Xhat_n \Wn - \bX_n \right)_i
\le \bm{z} \right]
= \int_{\supp F} \Phi\left(\bm{z}, \bSigma(\bx) \right) dF(\bx), $$
where
\begin{equation}
\label{def:sigma}\bSigma(\bx)
= \Delta^{-1} \E\left[ (\bx^{\top} \bX_1 - ( \bx^{\top} \bX_1)^2 ) \bX_1 \bX_1^{\top} \right] \Delta^{-1}; \quad \text{and} \,\, \Delta = \mathbb{E}[\bX_1 \bX_1^{\top}].
\end{equation}
\end{theorem}
We also note the following important corollary of Theorem~\ref{thm:clt_orig_but_better} for the case when $F$ is a mixture of $K$ point masses, i.e., when $(\mathbf{X}, \mathbf{A}) \sim \mathrm{RDPG}(F)$ is a $K$-block stochastic blockmodel graph. Then for any fixed index $i$, the event that $\bX_i$ is assigned to block $k \in \{1,2,\dots,K\}$ has non-zero probability, and hence one can condition on the block assignment of $\bX_i$ to show that the conditional distribution of $\sqrt{n}(\mathbf{W}_n (\hat{\bX}_n)_{i} - (\bX_n)_i)$ converges to a multivariate normal. This is in contrast to the unconditional distribution being a mixture of multivariate normals as given in Theorem~\ref{thm:clt_orig_but_better}.
\begin{corollary}[SBM]
\label{cor:ase_normality_sbm}
Assume the setting and notations of Theorem~\ref{thm:clt_orig_but_better} and let
$$F = \sum_{k=1}^{K} \pi_{k} \delta_{\nu_k}, \quad \pi_1, \cdots, \pi_K > 0, \sum_{k} \pi_k = 1$$
be a mixture of $K$ point masses in $\mathbb{R}^{d}$ where $\delta_{\nu_k}$ is the Dirac delta measure at $\nu_k$.
Then there exists a sequence of orthogonal matrices $\mathbf{W}_n$ such that for all $\bm{z} \in \mathbb{R}^{d}$ and for any fixed index $i$,
\begin{equation}
\mathbb{P}\Bigl\{\sqrt{n}(\hat{\bX}_n \mathbf{W}_n - \mathbf{X}_n)_{i} \leq \bm{z} \mid \bX_i = \nu_k \Bigr\} \longrightarrow \Phi(\bm{z}, \Sigma_k)
\end{equation}
where $\Sigma_k = \Sigma(\nu_k)$ is as defined in Eq.~\eqref{def:sigma}.
\end{corollary}
We relegate the full details of the proof of this central limit theorem to the Appendix, in Section \ref{subsec:CLT_proofdetails}, but a few points bear noting here. First, both Theorem~\ref{thm:clt_orig_but_better} and Corollary~\ref{cor:ase_normality_sbm} are very similar to results proved in \cite{athreya2013limit}, but with the crucial difference being that we no longer require that the second moment matrix $\Delta = \mathbb{E}[\bX_1 \bX_1^{\top}]$ of $\bm{X}_1 \sim F$ have distinct eigenvalues (for more details, see \cite{tang_lse}). As in \cite{athreya2013limit}, our proof here depends on writing the difference between a row of the adjacency spectral embedding and its corresponding latent position as a pair of summands: the first, to which a classical Central Limit Theorem can be applied, and the second, essentially a combination of residual terms, which we show, using techniques similar to those in the proof of Theorem \ref{thm:minh_sparsity}, converges to zero. The weakening of the assumption of distinct eigenvalues necessitates significant changes from \cite{athreya2013limit} in how to bound the residual terms, because \cite{athreya2013limit} adapts a result of \cite{bickel_sarkar_2013}---the latter of which depends on the assumption of distinct eigenvalues---to control these terms. Here, we resort to somewhat different methodology: we prove instead that analogous bounds to those in \cite{lyzinski15_HSBM,tang_lse} hold for the estimated latent positions and this enables us to establish that here, too, the rows of the adjacency spectral embedding are also approximately normally distributed.
We stress that this central limit theorem depends on a delicate bounding of a sequence of so-called residual terms, but its essence is straightforward. Specifically, there exists an orthogonal transformation $\bW^*$ such that we can write the $i$th row of the matrix
$$\sqrt{n}(\UA \SA^{1/2}-\UP \SP^{1/2}\bW^{*})$$ as
\begin{equation}\label{eq:CLT_basic_explanation}
\sqrt{n}(\UA \SA^{1/2}-\UP \SP^{1/2}\bW^{*})_i=\sqrt{n}\bigl((\bA -\bP) \UP \SP^{-1/2}\bW^{*}\bigr)_i + \textrm{Residual terms},
\end{equation}
where the residual terms are all of order $O(n^{-1/2} \log n)$ in probability. Now, to handle the first term in Eq.~\eqref{eq:CLT_basic_explanation}, we can condition on a fixed latent position $\bX_i={\bf x}_i$, and when this is fixed, the classical Lindeberg-Feller Central Limit Theorem establishes the asymptotic normality of this term. The order of the residual terms then guarantees, by Slutsky's Theorem, the desired asymptotic normality of the gap between estimated and true latent positions, and finally we need only integrate over the possible latent positions to obtain a mixture of normals.
\subsection{An example under the stochastic block model}
To illustrate Theorem~\ref{thm:clt_orig_but_better}, we
consider random graphs generated according to a
stochastic block model with parameters
\begin{equation}
\label{eq:1}
B = \begin{bmatrix} 0.42 & 0.42 \\ 0.42 & 0.5 \end{bmatrix}
\quad \text{and} \quad \pi = (0.6,0.4).
\end{equation}
In this model, each node is either in block 1 (with probability 0.6)
or block 2 (with probability 0.4). Adjacency probabilities are
determined by the entries in $B$ based on the block memberships of the
incident vertices. The above stochastic blockmodel corresponds to a random dot product
graph model in $\mathbb{R}^{2}$ where the distribution $F$ of the latent
positions is a mixture of point masses located at $x_1\approx (0.63,
-0.14)$ (with prior probability $0.6$) and $x_2\approx (0.69, 0.13)$
(with prior probability $0.4$).
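The latent positions $x_1$ and $x_2$ quoted above can be recovered directly from a spectral factorization of $B$ itself, since $B$ has full rank $2$. A short sketch (in Python with \texttt{numpy}):

```python
import numpy as np

# Spectral factorization B = nu nu^T recovering the two latent positions
# (up to a common orthogonal transformation; here, up to column signs).
B = np.array([[0.42, 0.42], [0.42, 0.50]])
lam, V = np.linalg.eigh(B)
lam, V = lam[::-1], V[:, ::-1]   # reorder to descending eigenvalues
nu = V * np.sqrt(lam)            # rows of nu are the latent positions
# nu reproduces B exactly, and its rows agree with (0.63, -0.14) and
# (0.69, 0.13) up to the sign of each column.
```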
We sample an adjacency matrix $\bA$ for graphs on $n$ vertices from the
above model for various choices of $n$. For each graph $G$, let
$\Xhat\in\R^{n\times 2}$ denote the adjacency spectral embedding of $\bA$ and let
$\Xhat_{i\cdot}$ denote the $i$th row of $\Xhat$. In
Figure~\ref{fig:clusplot}, we plot the $n$ rows of $\Xhat$ for the
various choices of $n$. The points are denoted by symbols according to the block
membership of the corresponding vertex in the stochastic blockmodel. The ellipses show
the 95\% level curves for the distribution of $\hat{\bX}_i$ for each
block as specified by the limiting distribution.
\begin{figure*}[!htbp]
\centering
\subfloat[$n = 1000$]{
\includegraphics[width=5.5cm]{clusplot1000.pdf}
}
\hfil
\subfloat[$n = 2000$]{
\includegraphics[width=5.5cm]{clusplot2000.pdf}
}
\hfil
\subfloat[$n = 4000$]{
\includegraphics[width=5.5cm]{clusplot4000.pdf}
\label{fig5:subfig_scan}
}
\hfil
\subfloat[$n = 8000$]{
\includegraphics[width=5.5cm]{clusplot8000.pdf}
}
\caption{Plot of the estimated latent positions in a two-block
stochastic blockmodel graph on $n$ vertices. The
points are denoted by symbols according to the block membership of the
corresponding vertices. Dashed ellipses give the 95\% level curves
for the distributions as specified in
Theorem~\ref{thm:clt_orig_but_better}.}
\label{fig:clusplot}
\end{figure*}
We then estimate the covariance matrices for the residuals. The
theoretical covariance matrices are given in the last column of
Table~\ref{tab:cov}, where $\Sigma_{1}$ and $\Sigma_{2}$ are the
covariance matrices for the residual $\sqrt{n}(\hat{\bX}_i - \bX_i)$ when
$\bX_i$ is from the first block and second block, respectively. The
empirical covariance matrices, denoted $\hat{\Sigma}_1$ and
$\hat{\Sigma}_2$, are computed by evaluating the sample covariance of
the residuals $\sqrt{n}(\hat{\bX}_i - \bX_i)$ for vertices in blocks 1
and 2, respectively. The estimates of the covariance matrices are
given in Table~\ref{tab:cov}. We see that as $n$ increases, the
sample covariances tend toward the specified limiting covariance
matrix given in the last column.
\begin{table}[htbp]
\footnotesize
\begin{center}
\begin{tabular}{cccccc}
$n$ & 2000 & 4000 & 8000 & 16000 & $\infty$ \\
$\hat{\Sigma}_1$ &%
$\begin{bmatrix} .58 & .54 \\ .54 & 16.56 \end{bmatrix}$ &%
$\begin{bmatrix} .58 & .63 \\ .63 & 14.87 \end{bmatrix}$ &%
$\begin{bmatrix} .60 & .61 \\ .61 & 14.20 \end{bmatrix}$ &%
$\begin{bmatrix} .59 & .58 \\ .58 & 13.96 \end{bmatrix}$ &%
$\begin{bmatrix} .59 & .55 \\ .55 & 13.07 \end{bmatrix}$ \\ \\
$\hat{\Sigma}_2$ &%
$\begin{bmatrix} .58 & .75 \\ .75 & 16.28 \end{bmatrix}$ &%
$\begin{bmatrix} .59 & .71 \\ .71 & 15.79 \end{bmatrix}$ &%
$\begin{bmatrix} .58 & .54 \\ .54 & 14.23 \end{bmatrix}$ &%
$\begin{bmatrix} .61 & .69 \\ .69 & 13.92 \end{bmatrix}$ &%
$\begin{bmatrix} .60 & .59 \\ .59 & 13.26 \end{bmatrix}$ \\
\end{tabular}
\end{center}
\caption{The sample covariance matrices for $\sqrt{n}(\hat{\bX}_i-\bX_i)$
for each block in a stochastic blockmodel with two blocks. Here
$n \in \{2000,4000,8000,16000\}$. In the
last column are the theoretical covariance matrices for the limiting
distribution.}
\label{tab:cov}
\end{table}
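The theoretical covariances in the last column of Table~\ref{tab:cov} can be reproduced directly from Eq.~\eqref{def:sigma}, since for a mixture of point masses the expectation reduces to a finite sum over the blocks. A short sketch (in Python with \texttt{numpy}; the sign normalization simply fixes the orientation of the latent positions to match the text):

```python
import numpy as np

# Evaluate Sigma(x) of the CLT at the two latent positions of the
# two-block model above; the expectation is a weighted two-term sum.
B = np.array([[0.42, 0.42], [0.42, 0.50]])
pi = np.array([0.6, 0.4])

lam, V = np.linalg.eigh(B)
lam, V = lam[::-1], V[:, ::-1]          # descending eigenvalue order
nu = V * np.sqrt(lam)                   # rows: latent positions
if nu[0, 0] < 0:                        # fix column signs so that
    nu[:, 0] *= -1                      # nu_1 ~ (0.63, -0.14) and
if nu[0, 1] > 0:                        # nu_2 ~ (0.69, 0.13)
    nu[:, 1] *= -1

Delta = nu.T @ np.diag(pi) @ nu         # second moment matrix E[X_1 X_1^T]
Dinv = np.linalg.inv(Delta)

def Sigma(x):
    """Delta^{-1} E[(x'X_1 - (x'X_1)^2) X_1 X_1'] Delta^{-1}."""
    inner = sum(p * (x @ v - (x @ v) ** 2) * np.outer(v, v)
                for p, v in zip(pi, nu))
    return Dinv @ inner @ Dinv

Sigma1, Sigma2 = Sigma(nu[0]), Sigma(nu[1])
# Sigma1 and Sigma2 match the last column of the table to two decimals.
```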
We also investigate the effects of the multivariate normal distribution
as specified in Theorem~\ref{thm:clt_orig_but_better} on inference procedures. It is shown in
\cite{STFP-2011,sussman2012universally} that the approach of
embedding a graph into some Euclidean space, followed by inference
(for example, clustering or classification) in that space can be
consistent. However, these consistency results are, in a sense, only
first-order results. In particular, they demonstrate only that the
error of the inference procedure converges to $0$ as the number of
vertices in the graph increases. We now illustrate how
Theorem~\ref{thm:clt_orig_but_better} may lead to a more refined error analysis.
We construct a sequence of random graphs on $n$ vertices, where $n$
ranges from $1000$ through $4000$ in increments of $250$, following
the stochastic blockmodel with parameters as given above in
Eq.~\eqref{eq:1}. For each graph $G_n$ on $n$ vertices, we embed $G_n$
and cluster the embedded vertices of $G_n$ via Gaussian mixture
model and K-means clustering. Gaussian mixture model-based clustering
was done using the MCLUST
implementation of \cite{fraley99:_mclus}.
We then measure the classification error of the
clustering solution. We repeat this procedure 100 times to obtain an
estimate of the misclassification rate. The results are plotted in
Figure~\ref{fig:gmm_kmeans_bayes}. For comparison, we plot the
Bayes optimal classification error rate under the assumption that the
embedded points do indeed follow a multivariate normal mixture with
covariance matrices $\Sigma_1$ and $\Sigma_2$ as given in the
last column of Table~\ref{tab:cov}. We also plot the misclassification
rate of $(C \log{n})/n$ as given in \cite{STFP-2011}
where the constant $C$ was chosen to match the misclassification rate
of $K$-means clustering for $n = 1000$. For the number of
vertices considered here, the upper bound for
the constant $C$ from \cite{STFP-2011} will give a vacuous upper
bound of the order of $10^6$ for the misclassification rate in this
example.
Finally, we recall that the $2\to \infty$ norm bound of Theorem~\ref{thm:minh_sparsity} implies that, for large enough $n$, even the $k$-means algorithm will exactly recover the true block memberships with high probability \cite{lyzinski15:_relax}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{gmm_kmeans_bayes.pdf}
\caption{Comparison of classification error for Gaussian mixture
model (red curve), K-Means (green curve), and Bayes classifier
(cyan curve). The classification errors for each $n \in
\{1000,1250,1500, \dots, 4000\}$ were obtained by averaging 100
Monte Carlo iterations and are plotted on a $\log_{10}$ scale. The
plot indicates that the assumption of a mixture of multivariate
normals can yield non-negligible improvement in the accuracy of the inference
procedure. The log-bound curve (purple) shows an upper bound on
the error rate as derived in \cite{STFP-2011}. Figure duplicated from \cite{athreya2013limit}.}
\label{fig:gmm_kmeans_bayes}
\end{figure}
For yet another application of the central limit theorem, we refer the reader to \cite{suwan14:_empbayes}, where the authors discuss the assumption of multivariate normality for estimated latent positions and how this can lead to a significantly improved empirical-Bayes framework for the estimation of block memberships in a stochastic blockmodel.
\subsection{Distributional results for Laplacian spectral embedding}\label{subsec:lse}
We now present, for the normalized {\em Laplacian spectral embedding} (recall Definition~\ref{def:LSE}), the analogues of the central limit theorems of Section \ref{subsec:Distributional}.
\begin{theorem}[Central Limit Theorem for the rows of the LSE]
\label{THM:LSE}
Let $(\mathbf{A}_n, \mathbf{X}_n) \sim \mathrm{RDPG}(F)$ for $n \geq 1$ be a sequence of $d$-dimensional random dot product graphs distributed according to some inner product distribution $F$. Let $\bm{\mu}$ and $\tilde{\Delta}$ denote the quantities
\begin{gather}
\label{eq:mu_defn1}
\bm{\mu} = \mathbb{E}[\bX_1] \in \mathbb{R}^{d}; \quad \tilde{\Delta} =
\mathbb{E}\Bigl[ \frac{ \bX_1 \bX_1^{\top}}{\bX_1^{\top} \bm{\mu}} \Bigr] \in \mathbb{R}^{d \times d}.
\end{gather}
Also denote by $\tilde{\Sigma}(\bx)$ the $d \times d$ matrix
\begin{equation}
\label{eq:lse-sigma}
\mathbb{E}\Bigl[ \Bigl(\frac{\tilde{\Delta}^{-1} \bX_1}{\bX_1^{\top} \bm{\mu}} - \frac{\bx}{2 \bx^{\top} \bm{\mu}}\Bigr) \Bigl(\frac{\bX_1^{\top} \tilde{\Delta}^{-1}}{\bX_1^{\top} \bm{\mu}} - \frac{\bx^{\top}}{2 \bx^{\top} \bm{\mu}}\Bigr) \frac{(\bx^{\top} \bX_1 - \bx^{\top} \bX_1 \bX_1^{\top} \bx)}{\bx^{\top} \bm{\mu}} \Bigr].
\end{equation}
Then there exists a sequence of
$d \times d$ orthogonal matrices
$(\mathbf{W}_n)$ such that for each fixed index $i$ and any $\bm{x} \in \mathbb{R}^{d}$,
\begin{equation}
\label{eq:Xtilde_clt}
\Pr\Bigl\{n\bigl( \mathbf{W}_n (\breve{\bX}_n)_{i} - \tfrac{(\bX_n)_{i}}{\sqrt{\sum_{j} (\bX_n)_i^{\top} (\bX_n)_j}} \bigr) \leq \bm{x} \Bigr\}
\longrightarrow \int \Phi(\bm{x}, \tilde{\Sigma}(\bm{y})) dF(\bm{y})
\end{equation}
\end{theorem}
When $F$ is a mixture of point masses---specifically, when $\mathbf{A} \sim \mathrm{RDPG}(F)$ is a stochastic blockmodel graph---we also have the following limiting conditional distribution for $n (\mathbf{W}_n (\breve{\bX}_n)_i - \tfrac{(\bX_n)_i}{\sqrt{\sum_{j} (\bX_n)_i^{\top} (\bX_n)_j}})$.
\begin{theorem}
\label{thm:clt-lse-sbm}
Assume the setting and notations of Theorem~\ref{THM:LSE} and let $$F = \sum_{k=1}^{K} \pi_{k} \delta_{\nu_k}, \quad \pi_1, \cdots, \pi_K > 0, \sum_{k} \pi_k = 1$$
be a mixture of $K$ point masses in $\mathbb{R}^{d}$.
Then there exists a sequence of $d \times d$ orthogonal matrices $\mathbf{W}_n$ such that for any fixed index $i$,
\begin{equation}
\label{eq:lse-clt-sbm}
\mathbb{P}\Bigl\{n \bigl(\mathbf{W}_n (\breve{\bX}_n)_i - \tfrac{\nu_k}{\sqrt{\sum_{l} n_l \nu_k^{\top} \nu_l}} \bigr) \leq \bm{z} \mid (\bX_n)_i = \nu_k \Bigr\}
\longrightarrow \bm{\Phi}(\bm{z}, \tilde{\Sigma}_k)
\end{equation}
where $\tilde{\Sigma}_k = \tilde{\Sigma}(\nu_k)$ is as defined in Eq.~\eqref{eq:lse-sigma} and $n_k$ for $k \in \{1,2,\dots,K\}$ denote the number of vertices in $\mathbf{A}$ that are assigned to block $k$.
\end{theorem}
\begin{remark}
As a special case of Theorem~\ref{thm:clt-lse-sbm}, we note that if $\mathbf{A}$ is an Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with edge probability $p^2$ -- which corresponds to a random dot product graph where the latent positions are identically $p$ -- then for each fixed index $i$, the normalized Laplacian embedding satisfies
$$ n\bigl(\breve{\bX}_i - \tfrac{1}{\sqrt{n}}\bigr) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\bigl(0, \tfrac{1 - p^2}{4p^2}\bigr).$$
Recall that $\breve{\bX}_i$ is proportional to $1/\sqrt{d_i}$ where $d_i$ is the degree of the $i$-th vertex.
On the other hand, the adjacency spectral embedding satisfies $$\sqrt{n}(\hat{\bX}_i - p) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}(0, 1 - p^2).$$
As another example, let $\mathbf{A}$ be sampled from a stochastic blockmodel with block probability matrix $\mathbf{B} = \bigl[\begin{smallmatrix} p^2 & pq \\ pq & q^2 \end{smallmatrix}\bigr]$ and block assignment probabilities $(\pi, 1- \pi)$. Since $\mathbf{B}$ has rank $1$, this model corresponds to a random dot product graph where the latent positions are either $p$ with probability $\pi$ or $q$ with probability $1 - \pi$. Then for each fixed index $i$, the normalized Laplacian embedding satisfies
\begin{gather}
\label{eq:er-p-q-lse1}
n \bigl(\breve{\bX}_i - \tfrac{p}{\sqrt{n_1 p^2 + n_2 pq}}\bigr) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \tfrac{\pi p (1 - p^2) + (1 - \pi) q(1 - pq)}{4 (\pi p + (1 - \pi)q)^3}\Bigr) \,\, \text{if $\bX_i = p$}, \\
\label{eq:er-p-q-lse2}
n\bigl(\breve{\bX}_i - \tfrac{q}{\sqrt{n_1 pq + n_2 q^2}}\bigr) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \tfrac{\pi p (1 - pq) + (1 - \pi) q(1 - q^2)}{4 (\pi p + (1 - \pi)q)^3}\Bigr) \,\, \text{if $\bX_i = q$}.
\end{gather}
where $n_1$ and $n_2 = n - n_1$ are the number of vertices of $\mathbf{A}$ with latent positions $p$ and $q$. The adjacency spectral embedding, meanwhile, satisfies
\begin{gather}
\label{eq:er-p-q-ase1}
\sqrt{n}(\hat{\bX}_i - p) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \tfrac{\pi p^4(1 - p^2) + (1 - \pi) pq^3(1 - pq)}{(\pi p^2 + (1 - \pi)q^2)^2}\Bigr) \,\, \text{if $\bX_i = p$},\\
\label{eq:er-p-q-ase2}
\sqrt{n}(\hat{\bX}_i - q) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \tfrac{\pi p^3q(1 - pq) + (1 - \pi) q^4(1 - q^2)}{(\pi p^2 + (1 - \pi)q^2)^2}\Bigr) \,\, \text{if $\bX_i = q$}.
\end{gather}
\end{remark}
We present a sketch of the proof of Theorem~\ref{THM:LSE} in the Appendix, Section \ref{sec:Appendix}. Due to the intricacy of the proof, however, even in the Appendix we do not provide full details; we instead refer the reader to \cite{tang_lse} for the complete proof. Moving forward, we focus on the important implications of these distributional results for subsequent inference, including a mechanism by which to assess the relative desirability of ASE and LSE, which {\em varies depending on the inference task}.
\section{Implications for subsequent inference}
The previous sections are devoted to establishing the consistency and asymptotic normality of the adjacency and Laplacian spectral embeddings for the estimation of latent positions in an RDPG.
In this section, we describe several subsequent graph inference tasks, all of which depend on this consistency: specifically, nonparametric clustering, semiparametric and nonparametric two-sample graph hypothesis testing, and multi-sample graph inference.
\subsection{Nonparametric clustering: a comparison of ASE and LSE via Chernoff information}
\label{subsec:chernoff}
We now discuss how the limit results of Section~\ref{subsec:Distributional} and Section~\ref{subsec:lse} provide insight into subsequent inference.
In a recent pioneering work, the authors of \cite{bickel_sarkar_2013} analyze, in the context of stochastic blockmodel graphs, how the choice of spectral embedding by either the adjacency matrix or the normalized Laplacian matrix impacts subsequent recovery of the block assignments. In particular, they show that a metric constructed from the average distance between the vertices of a block and its cluster centroid for the spectral embedding can be used as a surrogate measure for the performance of the subsequent inference task, i.e., the metric is a surrogate measure for the error rate in recovering the vertex-to-block assignments using the spectral embedding. It is shown in \cite{bickel_sarkar_2013} that for two-block stochastic blockmodels, for a large regime of parameters the normalized Laplacian spectral embedding reduces the within-block variance (occasionally by a factor of four) while preserving the between-block variance, as compared to that of the adjacency spectral embedding.
This suggests that for a large region of the parameter space for two-block stochastic blockmodels, the spectral embedding of the Laplacian is preferable to the spectral embedding of the adjacency matrix for subsequent inference. However, we observe that the metric in \cite{bickel_sarkar_2013} is intrinsically tied to the use of $K$-means as the clustering procedure: specifically, a smaller value of the metric for the Laplacian spectral embedding as compared to that for the adjacency spectral embedding only implies that clustering the Laplacian spectral embedding using $K$-means is possibly better than clustering the adjacency spectral embedding using $K$-means.
Motivated by the above observation, in \cite{tang_lse}, we propose a metric that is {\em independent} of any specific clustering procedure, a metric that characterizes the minimum error achievable by {\em any} clustering procedure that uses only the spectral embedding for the recovery of block assignments in stochastic blockmodel graphs. For a given embedding method, the metric used in \cite{tang_lse} is based on the notion of statistical information between the limiting distributions of the blocks. Roughly speaking, smaller statistical information implies less information to discriminate between the blocks of the stochastic blockmodel. More specifically, the limit results in Section~\ref{subsec:Distributional} and Section~\ref{subsec:lse} state that, for stochastic blockmodel graphs, conditional on the block assignments the entries of the scaled eigenvectors corresponding to the few largest eigenvalues of the adjacency matrix and the normalized Laplacian matrix converge to a multivariate normal as the number of vertices increases. Furthermore, the associated covariance matrices are not spherical, so $K$-means clustering of the adjacency spectral embedding or Laplacian spectral embedding does not yield the minimum error for recovering the block assignments. Nevertheless, these limiting results also facilitate comparison between the two embedding methods via the classical notion of Chernoff information \cite{chernoff_1952}.
We now recall the notion of the Chernoff $\alpha$-divergences (for $\alpha \in (0,1))$ and Chernoff information. Let $F_0$ and $F_1$ be two absolutely continuous multivariate distributions in $\Omega = \mathbb{R}^{d}$ with density functions $f_0$ and $f_1$, respectively. Suppose that $Y_1, Y_2, \dots, Y_m$ are independent and identically distributed random variables, with $Y_i$ distributed either $F_0$ or $F_1$. We are interested in testing the simple null hypothesis $\mathbb{H}_0 \colon F = F_0$ against the simple alternative hypothesis $\mathbb{H}_1 \colon F = F_1$. A test $T$ can be viewed as a sequence of mappings $T_m \colon \Omega^{m} \mapsto \{0,1\}$ such that given $Y_1 = y_1, Y_2 = y_2, \dots, Y_m = y_m$, the test rejects $\mathbb{H}_0$ in favor of $\mathbb{H}_1$ if $T_m(y_1, y_2, \dots, y_m) = 1$; similarly, the test favors $\mathbb{H}_0$ if $T_m(y_1, y_2, \dots, y_m) = 0$.
The Neyman-Pearson lemma states that, given $Y_1 = y_1, Y_2 = y_2, \dots, Y_m = y_m$ and a threshold $\eta_m \in \mathbb{R}$, the likelihood ratio test which rejects $\mathbb{H}_0$ in favor of $\mathbb{H}_1$ whenever
$$ \Bigl(\sum_{i=1}^{m} \log{f_0(y_i)} - \sum_{i=1}^{m} \log{f_1(y_i)} \Bigr) \leq \eta_m $$
is the most powerful test at significance level $\alpha_m = \alpha(\eta_m)$, so that the likelihood ratio test minimizes the Type II error $\beta_m$ subject to the constraint that the Type I error is at most $\alpha_m$. Assume that $\pi \in (0,1)$ is the prior probability that $\mathbb{H}_0$ is true. Then, for a given $\alpha_m^{*} \in (0,1)$, let $\beta_m^{*} = \beta_m^{*}(\alpha_m^{*})$ be the Type II error associated with the likelihood ratio test when the Type I error is at most $\alpha_m^{*}$. The quantity $\inf_{\alpha_m^{*} \in (0,1)} \pi \alpha_m^{*} + (1 - \pi) \beta_m^{*}$ is then the Bayes risk in deciding between $\mathbb{H}_0$ and $\mathbb{H}_1$ given the $m$ independent random variables $Y_1, Y_2, \dots, Y_m$. A classical result of Chernoff \cite{chernoff_1952,chernoff_1956} states that the Bayes risk is intrinsically linked to a quantity known as the {\em Chernoff information}. More specifically, let $C(F_0, F_1)$ be the quantity
\begin{equation}
\label{eq:chernoff-defn}
\begin{split} C(F_0, F_1) & = - \log \, \Bigl[\, \inf_{t \in (0,1)} \int_{\mathbb{R}^{d}} f_0^{t}(\bm{x}) f_1^{1-t}(\bm{x}) \mathrm{d}\bm{x} \Bigr] \\
&= \sup_{t \in (0,1)} \Bigl[ - \log \int_{\mathbb{R}^{d}} f_0^{t}(\bm{x}) f_1^{1-t}(\bm{x}) \mathrm{d}\bm{x} \Bigr].
\end{split}
\end{equation}
Then we have
\begin{equation}
\label{eq:chernoff-binary}
\begin{split}
\lim_{m \rightarrow \infty} \frac{1}{m} \inf_{\alpha_m^{*} \in (0,1)} \log( \pi \alpha_m^{*} + (1 - \pi) \beta_m^{*}) & = - \, C(F_0, F_1).
\end{split}
\end{equation}
Thus $C(F_0, F_1)$, the Chernoff information between $F_0$ and $F_1$, is the {\em exponential} rate at which the Bayes error $\inf_{\alpha_m^{*} \in (0,1)} \pi \alpha_m^{*} + (1 - \pi) \beta_m^{*}$ decreases as $m \rightarrow \infty$; we note that the Chernoff information is independent of $\pi$. We also define, for a given $t \in (0,1)$, the Chernoff divergence $C_t(F_0, F_1)$ between $F_0$ and $F_1$ by
$$ C_{t}(F_0,F_1) = - \log \int_{\mathbb{R}^{d}} f_0^{t}(\bm{x}) f_1^{1-t}(\bm{x}) \mathrm{d}\bm{x}. $$
The Chernoff divergence is an example of an $f$-divergence as defined in \cite{Csizar,Ali-Shelvey}. When $t = 1/2$, $C_t(F_0,F_1)$ is the Bhattacharyya distance between $F_0$ and $F_1$. Recall that any $f$-divergence satisfies the Information Processing Lemma and is invariant with respect to invertible transformations \cite{Liese_Vadja}. Therefore, any $f$-divergence, such as the Kullback-Leibler divergence, can also be used to compare the two embedding methods; the Chernoff information is particularly attractive because of its explicit relationship with the Bayes risk.
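As a concrete illustration, for multivariate Gaussians the $t = 1/2$ Chernoff divergence (the Bhattacharyya distance) has the well-known closed form $\tfrac{1}{8}(\mu_1 - \mu_0)^{\top}\bar{\Sigma}^{-1}(\mu_1 - \mu_0) + \tfrac{1}{2}\log\bigl(|\bar{\Sigma}|/\sqrt{|\Sigma_0||\Sigma_1|}\bigr)$ with $\bar{\Sigma} = (\Sigma_0 + \Sigma_1)/2$, and the invariance under invertible transformations can be checked directly. The following sketch (ours; function names and parameter values are arbitrary) does so with numpy.

```python
import numpy as np

def bhattacharyya_gaussian(mu0, Sigma0, mu1, Sigma1):
    """Chernoff divergence at t = 1/2 between N(mu0, Sigma0) and N(mu1, Sigma1)."""
    Sbar = 0.5 * (Sigma0 + Sigma1)
    diff = mu1 - mu0
    quad = 0.125 * diff @ np.linalg.solve(Sbar, diff)
    logdet = 0.5 * np.log(np.linalg.det(Sbar)
                          / np.sqrt(np.linalg.det(Sigma0) * np.linalg.det(Sigma1)))
    return quad + logdet

mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S0 = np.eye(2)
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])

d = bhattacharyya_gaussian(mu0, S0, mu1, S1)

# invariance under an invertible affine map x -> M x + b:
# means map to M mu + b, covariances to M Sigma M^T
M = np.array([[1.0, 2.0], [0.0, 3.0]])
b = np.array([5.0, -1.0])
d_transformed = bhattacharyya_gaussian(M @ mu0 + b, M @ S0 @ M.T,
                                       M @ mu1 + b, M @ S1 @ M.T)
print(abs(d - d_transformed))  # ~ 0
```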
The characterization of Chernoff information in Eq.~\eqref{eq:chernoff-binary} can be extended to $K + 1 \geq 2$ hypotheses. Let $F_0, F_1, \dots, F_{K}$ be distributions on $\mathbb{R}^{d}$ and suppose that $Y_1, Y_2, \dots, Y_m$ are independent and identically distributed random variables with $Y_i$ distributed $F \in \{F_0, F_1, \dots, F_K\}$. We are thus interested in determining the distribution of the $Y_i$ among the $K+1$ hypotheses $\mathbb{H}_0 \colon F = F_0, \dots, \mathbb{H}_{K} \colon F = F_K$. Suppose also that hypothesis $\mathbb{H}_k$ has {\em a priori} probability $\pi_k$. Then for any decision rule $\delta$, the risk of $\delta$ is $r(\delta) = \sum_{k} \pi_k \sum_{l \not = k} \alpha_{lk}(\delta) $ where $\alpha_{lk}(\delta)$ is the probability of accepting hypothesis $\mathbb{H}_l$ when hypothesis $\mathbb{H}_k$ is true. Then we have \cite{leang-johnson}
\begin{equation}
\label{eq:chernoff-multiple}
\inf_{\delta} \lim_{m \rightarrow \infty} \frac{\log r(\delta)}{m} = - \min_{k \not = l} C(F_k, F_l).
\end{equation}
where the infimum is over all decision rules $\delta$. That is, for any $\delta$, $r(\delta)$ decreases to $0$ as $m \rightarrow \infty$ at a rate no faster than $\exp(- m \min_{k \not = l} C(F_k, F_l))$. It was also shown in \cite{leang-johnson} that the {\em Maximum A Posteriori} decision rule achieves this rate.
Finally, if $F_0 = \mathcal{N}(\mu_0, \Sigma_0)$ and $F_1 = \mathcal{N}(\mu_1, \Sigma_1)$, then denoting by $\Sigma_t = t \Sigma_0 + (1 - t) \Sigma_1$, the Chernoff information $C(F_0, F_1)$ between $F_0$ and $F_1$ is given by
\begin{equation*}
C(F_0, F_1) = \sup_{t \in (0,1)} \Bigl(\frac{t(1 - t)}{2} (\mu_0 - \mu_1)^{\top}\Sigma_t^{-1}(\mu_0 - \mu_1) + \frac{1}{2} \log \frac{|\Sigma_t|}{|\Sigma_0|^{t} |\Sigma_1|^{1 - t}} \Bigr).
\end{equation*}
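The supremum over $t$ in the display above is one-dimensional and easily evaluated numerically. The sketch below (ours; a simple grid search rather than a root-finder, and an arbitrary grid size) recovers the known equal-covariance value $\tfrac{1}{8}(\mu_0 - \mu_1)^{\top}\Sigma^{-1}(\mu_0 - \mu_1)$, attained at $t = 1/2$.

```python
import numpy as np

def chernoff_information_gaussian(mu0, Sigma0, mu1, Sigma1, grid=2000):
    """Chernoff information between N(mu0, Sigma0) and N(mu1, Sigma1),
    obtained by maximizing the closed-form Gaussian expression over t on a grid."""
    diff = mu0 - mu1
    best = -np.inf
    for t in np.linspace(1e-6, 1 - 1e-6, grid):
        St = t * Sigma0 + (1 - t) * Sigma1
        val = (0.5 * t * (1 - t) * diff @ np.linalg.solve(St, diff)
               + 0.5 * np.log(np.linalg.det(St)
                              / (np.linalg.det(Sigma0) ** t
                                 * np.linalg.det(Sigma1) ** (1 - t))))
        best = max(best, val)
    return best

# equal covariances: the supremum is attained at t = 1/2 and equals
# (1/8) (mu0 - mu1)^T Sigma^{-1} (mu0 - mu1)
mu0, mu1, S = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.eye(2)
print(chernoff_information_gaussian(mu0, S, mu1, S))  # ~ 0.5
```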
Comparison of the performance of the Laplacian spectral embedding and the adjacency spectral embedding for recovering the block assignments now proceeds as follows. Let $\mathbf{B} \in [0,1]^{K \times K}$ and $\bm{\pi} \in \mathbb{R}^{K}$ be the matrix of block probabilities and the vector of block assignment probabilities for a $K$-block stochastic blockmodel. We shall assume that $\mathbf{B}$ is positive semidefinite. Then given an $n$ vertex instantiation of the SBM graph with parameters $(\bm{\pi}, \mathbf{B})$, for sufficiently large $n$, the large-sample optimal error rate for recovering the block assignments when adjacency spectral embedding is used as the initial embedding step can be characterized by the quantity $\rho_{\mathrm{A}} = \rho_{\mathrm{A}}(n)$ defined by
\begin{equation}
\label{eq:rho_ASE}
\rho_{\mathrm{A}} = \min_{k \not = l} \! \sup_{t \in (0,1)} \frac{1}{2} \log \frac{|\Sigma_{kl}(t)|}{|\Sigma_{k}|^{t} |\Sigma_{l}|^{1-t}} + \frac{nt(1 - t)}{2}(\nu_k - \nu_l)^{\top} \Sigma_{kl}^{-1}(t) (\nu_k - \nu_l)
\end{equation}
where $\Sigma_{kl}(t) = t \Sigma_{k} + (1 - t) \Sigma_l$; $\Sigma_k = \Sigma(\nu_k)$ and $\Sigma_l = \Sigma(\nu_l)$ are as defined in Theorem~\ref{thm:clt_orig_but_better}. We recall Eq.~\eqref{eq:chernoff-multiple}, in particular the fact that as $\rho_{\mathrm{A}}$ increases, the large-sample optimal error rate decreases.
Similarly, the large-sample optimal error rate when Laplacian spectral embedding is used as the pre-processing step can be characterized by the quantity
$\rho_{\mathrm{L}} = \rho_{\mathrm{L}}(n)$ defined by
\begin{equation}
\label{eq:rho_LSE}
\rho_{\mathrm{L}} = \min_{k \not = l} \sup_{t \in (0,1)} \frac{1}{2} \log \frac{|\tilde{\Sigma}_{kl}(t)|}{|\tilde{\Sigma}_{k}|^{t} |\tilde{\Sigma}_{l}|^{1-t}} + \frac{n t(1 - t)}{2} (\tilde{\nu}_k - \tilde{\nu}_l)^{\top} \tilde{\Sigma}_{kl}^{-1}(t) (\tilde{\nu}_k - \tilde{\nu}_l)
\end{equation}
where $\tilde{\Sigma}_{kl}(t) = t \tilde{\Sigma}_{k} + (1 - t) \tilde{\Sigma}_l$ with $\tilde{\Sigma}_k = \tilde{\Sigma}(\nu_k)$ and $\tilde{\Sigma}_l = \tilde{\Sigma}(\nu_l)$ as defined in Theorem~\ref{thm:clt-lse-sbm}, and
$\tilde{\nu}_k = \nu_k/(\sum_{k'} \pi_{k'} \nu_k^{\top} \nu_{k'})^{1/2}$. We emphasize that we have made the simplifying assumption that $n_k = n \pi_k$ in our expression for $\tilde{\nu}_k$ in Eq.~\eqref{eq:rho_LSE}. This is for ease of comparison between $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{L}}$ in our subsequent discussion.
The ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$ is a surrogate measure of the relative large-sample performance of the adjacency spectral embedding as compared to the Laplacian spectral embedding for subsequent inference, at least in the context of stochastic blockmodel graphs. That is to say, for given parameters $\bm{\pi}$ and $\mathbf{B}$, if $\rho_{\mathrm{A}}/\rho_{\mathrm{L}} > 1$, then adjacency spectral embedding is to be preferred over Laplacian spectral embedding when $n$, the number of vertices in the graph, is sufficiently large; similarly, if $\rho_{\mathrm{A}}/\rho_{\mathrm{L}} < 1$, then Laplacian spectral embedding is to be preferred over adjacency spectral embedding.
\begin{figure}[tp!]
\center
\includegraphics[width=0.7\textwidth]{contour_plot_2blocks_revised.pdf}
\caption{The ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$ displayed for various values of $p \in [0.2, 0.8]$ and $r = q - p \in [-0.15, 0.15]$. The labeled lines are the contour lines for $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$. Figure duplicated from \cite{tang_lse}.
}
\label{fig:ratio-plot}
\end{figure}
As an illustration of the ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$, we first consider the collection of 2-block stochastic blockmodels where $\mathbf{B} = \Bigl[ \begin{smallmatrix} p^2 & pq \\ pq & q^2 \end{smallmatrix} \Bigr]$ for $p, q \in (0,1)$ and $\bm{\pi} = (\pi_1, \pi_2 )$ with $\pi_1 + \pi_2 = 1$. We note that these $\mathbf{B}$ also have rank $1$ and thus the Chernoff information can be computed explicitly.
Then for sufficiently large $n$, $\rho_{\mathrm{A}}$ is approximately
$$ \rho_{\mathrm{A}} \approx \sup_{t \in (0,1)} \frac{nt(1 - t)}{2} (p - q)^{2} (t \sigma_1^{2} + (1 - t) \sigma_2^{2})^{-1}$$
where $\sigma_1$ and $\sigma_2$ are as specified in Eq.~\eqref{eq:er-p-q-ase1} and Eq.~\eqref{eq:er-p-q-ase2}, respectively. Simple calculations yield
$$ \rho_{\mathrm{A}} \approx \frac{n(p - q)^2 (\pi_1 p^2 + \pi_2 q^2)^2}{2\bigl(\sqrt{\pi_1 p^4 (1 - p^2) + \pi_2 p q^3(1 - pq) } + \sqrt{\pi_1 p^3 q(1 - pq) + \pi_2 q^4 (1 - q^2)}\bigr)^2}$$ for sufficiently large $n$. Similarly, denoting by $\tilde{\sigma}_1^{2}$ and $\tilde{\sigma}_2^2$ the variances specified in Eq.~\eqref{eq:er-p-q-lse1} and Eq.~\eqref{eq:er-p-q-lse2}, we have
\begin{equation*}
\begin{split}
\rho_{\mathrm{L}} & \approx \sup_{t \in (0,1)} \frac{nt(1 - t)}{2} \Bigl(\frac{p}{\sqrt{\pi_1 p^2 + \pi_2 pq}} - \frac{q}{\sqrt{\pi_1 p q + \pi_2 q^2}}\Bigr)^{2} (t \tilde{\sigma}_1^{2} + (1 - t) \tilde{\sigma}_2^2)^{-1} \\
& \approx \frac{2n(\sqrt{p} - \sqrt{q})^2 (\pi_1 p + \pi_2 q)^2}{\bigl(\sqrt{\pi_1 p (1 - p^2) + \pi_2 q (1 - pq)} + \sqrt{\pi_1 p (1 - pq) + \pi_2 q (1 - q^2)}\bigr)^2} \\ &
\approx \frac{2n(p - q)^2 (\pi_1 p + \pi_2 q)^2}{(\sqrt{p} + \sqrt{q})^2 \bigl(\sqrt{\pi_1 p (1 - p^2) + \pi_2 q (1 - pq)} + \sqrt{\pi_1 p (1 - pq) + \pi_2 q (1 - q^2)}\bigr)^2}
\end{split}
\end{equation*}
for sufficiently large $n$.
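The two large-$n$ approximations above are straightforward to transcribe; note that $n$ cancels in the ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$, so the comparison depends only on $(p, q, \pi_1)$. A minimal sketch (ours, with arbitrary illustrative parameter values):

```python
import numpy as np

def rho_ase(n, p, q, pi1):
    """Large-n approximation of rho_A for the rank-1 two-block SBM."""
    pi2 = 1 - pi1
    num = n * (p - q) ** 2 * (pi1 * p**2 + pi2 * q**2) ** 2
    den = 2 * (np.sqrt(pi1 * p**4 * (1 - p**2) + pi2 * p * q**3 * (1 - p * q))
               + np.sqrt(pi1 * p**3 * q * (1 - p * q)
                         + pi2 * q**4 * (1 - q**2))) ** 2
    return num / den

def rho_lse(n, p, q, pi1):
    """Large-n approximation of rho_L for the rank-1 two-block SBM."""
    pi2 = 1 - pi1
    num = 2 * n * (p - q) ** 2 * (pi1 * p + pi2 * q) ** 2
    den = ((np.sqrt(p) + np.sqrt(q)) ** 2
           * (np.sqrt(pi1 * p * (1 - p**2) + pi2 * q * (1 - p * q))
              + np.sqrt(pi1 * p * (1 - p * q) + pi2 * q * (1 - q**2))) ** 2)
    return num / den

# n cancels in the ratio, so the comparison depends only on (p, q, pi1)
for p, q in [(0.2, 0.25), (0.8, 0.65)]:
    print(p, q, rho_ase(1000, p, q, 0.6) / rho_lse(1000, p, q, 0.6))
```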
Fixing $\bm{\pi} = (0.6, 0.4)$,
we compute the ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$ for a range of $p$ and $q$ values, with $p \in [0.2, 0.8]$ and $q = p + r$ where $r \in [-0.15, 0.15]$. The results are plotted in Figure~\ref{fig:ratio-plot}; the $y$-axis denotes the values of $p$ and the $x$-axis the values of $r$. We see from the figure that, in general, neither method, namely adjacency spectral embedding or normalized Laplacian spectral embedding followed by clustering via Gaussian mixture models, dominates over the whole $(p,r)$ parameter space. However, one can show that LSE is preferable to ASE whenever the block probability matrix $\mathbf{B}$ is sufficiently sparse.
Determination of similarly intuitive conditions for which ASE dominates over LSE is considerably more subtle and is the topic of current research. But in general, we observe that ASE dominates over LSE whenever the entries of $\mathbf{B}$ are relatively large.
Finally we consider the collection of stochastic blockmodels with parameters $\bm{\pi}$ and $\mathbf{B}$ where
\begin{equation}
\label{eq:3block-example}
\mathbf{B} = \begin{bmatrix} p & q & q \\ q & p & q \\ q & q & p \\ \end{bmatrix}, \quad p, q \in (0,1), \,\, \text{and} \,\, \bm{\pi} = (0.8, 0.1, 0.1).
\end{equation}
First we compute the ratio $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$ for $p \in [0.3, 0.9]$ and $r = q - p$ with $r \in [- 0.2, -0.01]$. The results are plotted in Figure~\ref{fig:ratio_3blocks}, with the $y$-axis of Figure~\ref{fig:ratio_3blocks} being the values of $p$ and the $x$-axis being the values of $r$. Once again we see that, for the purpose of subsequent inference, neither embedding method dominates over the whole parameter space: LSE remains preferable to ASE for smaller values of $p$ and $q$, and ASE is preferable to LSE for larger values of $p$ and $q$.
\begin{figure}[htbp]
\center
\includegraphics[width=0.8\textwidth]{contour_plot_3blocks.pdf}
\caption{The ratio $\rho_{A}/\rho_{L}$ displayed for various values of $p \in [0.2, 0.8]$ and $r = q - p \in [-0.2, -0.01]$ for the 3-block stochastic blockmodel of Eq.~\eqref{eq:3block-example}. The labeled lines are the contour lines for $\rho_{\mathrm{A}}/\rho_{\mathrm{L}}$. Figure duplicated from \cite{tang_lse}.
}
\label{fig:ratio_3blocks}
\end{figure}
\subsection{Hypothesis testing}\label{subsec:Testing}
The field of multi-sample graph inference is comparatively new, and the development of a comprehensive machinery for two-sample hypothesis
testing for random graphs is of both theoretical and practical
importance. The test procedures in \cite{tang14:_semipar} and \cite{tang14:_nonpar}, both of which leverage the adjacency spectral embedding to test hypotheses of equality or equality in distribution for random dot product graphs, are among the only principled methodologies currently available. In both cases, the accuracy of the adjacency spectral embedding as an estimate of the latent positions is the key to constructing a test statistic. Specifically, \cite{tang14:_semipar} gives a new and improved bound, Theorem~\ref{thm:conc_Xhat_X} below, for the Frobenius norm of the difference between the
original latent positions and the estimated latent positions obtained from the embedding. This bound is then used to establish a valid and consistent test for the semiparametric hypothesis test of equality of latent positions in a pair of vertex-aligned random dot product graphs. In the nonparametric case, \cite{tang14:_nonpar} demonstrates how the adjacency spectral embedding can be integrated with a kernel density estimator to accurately estimate the underlying distribution $F$ in a random dot product graph with i.i.d.\ latent positions.
To begin, we consider the problem of developing a test for the hypothesis that two random dot product
graphs on the same vertex set, with known vertex correspondence, have
the same generating latent position or have generating latent
positions that are scaled or diagonal transformations of one another.
This framework includes, as a special case, a test for whether two
stochastic blockmodels have the same or related block probability
matrices. In this two-sample testing problem, though,
the parameter dimension grows as the sample size grows. Therefore, the
problem is not precisely analogous to classical two-sample tests for,
say, the difference of two parameters belonging to some fixed
Euclidean space, in which an increase in data has no effect on the
dimension of the parameter. The problem is also not
nonparametric, since we view our latent positions as fixed
and impose specific distributional requirements on the
data---that is, on the adjacency matrices. Indeed, we regard the
problem as semiparametric, and \cite{tang14:_semipar} adapts the traditional definition of
consistency to this setting. In particular, for the test procedure we describe, power will increase
to one for alternatives in which the difference between the two latent
positions grows with the sample size.
Our test procedure is, at first glance, deceptively simple: given a pair of adjacency matrices $\bA$ and $\bB$ for two $d$-dimensional random dot product graphs, we generate their adjacency spectral embeddings, denoted $\Xhat$ and $\Yhat$, respectively, and compute an appropriately normalized version of the so-called {\em Procrustes fit} or {\em Procrustes distance} between the two embeddings:
$$\min_{\bW \in \mathcal{O}^{d \times d}}\|\Xhat-\Yhat \bW\|_F$$
(Recall that such a fit is necessary because of the inherent nonidentifiability of the random dot product model.)
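The minimization over $\bW$ is the classical orthogonal Procrustes problem and has a closed-form solution via the SVD: if $\Yhat^{\top}\Xhat = \mathbf{U}\mathbf{S}\mathbf{V}^{\top}$, then $\bW = \mathbf{U}\mathbf{V}^{\top}$ attains the minimum. A minimal numpy sketch (ours; the data are synthetic):

```python
import numpy as np

def procrustes_distance(Xhat, Yhat):
    """min over orthogonal W of ||Xhat - Yhat W||_F, solved in closed form
    via the SVD of Yhat^T Xhat (orthogonal Procrustes problem)."""
    U, _, Vt = np.linalg.svd(Yhat.T @ Xhat)
    W = U @ Vt
    return np.linalg.norm(Xhat - Yhat @ W)

rng = np.random.default_rng(1)
X = rng.random((500, 2))

# an orthogonal transformation of X is a perfect Procrustes fit ...
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(procrustes_distance(X, X @ R))          # ~ 0

# ... while a pure rescaling is not: min_W ||X - 2 X W||_F = ||X||_F
print(procrustes_distance(X, 2 * X))
```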
Understanding the limiting distribution of this test statistic is more complicated, however, and appropriately framing the set of null and alternative hypotheses for which the test is valid and consistent (i.e., a level-$\alpha$ test with power converging to 1 as $n \rightarrow \infty$) is delicate. To that end, we first state the key concentration inequality for $\min_{\bW \in \mathcal{O}^{d \times d}}\|\Xhat-\Yhat \bW\|_F$.
\begin{theorem}\label{thm:conc_Xhat_X}
Suppose $\bP=\bX \bX^{\top}$ is an $n \times n$ probability
matrix of rank $d$. Suppose also
that there exists $\epsilon>0$ such that
$\delta(\bP)>(\log{n})^{2 + \epsilon}$. Let $c>0$ be
arbitrary but fixed. Then there exist an $n_0(c)$ and a universal
constant $C \geq 0$ such that if $n \geq n_0$ and $n^{-c} <
\eta<1/2$, then there exists a deterministic $\bW \in
\mathcal{O}(d)$ such that, with probability at least $1 - 3\eta$,
\begin{equation}\label{conc_x-xhat}
\Bigl| \|\Xhat-\bX\bW\|_F-C(\bX) \Bigr| \leq \frac{C d
\log{(n/\eta)}}{C(\bX) \sqrt{ \gamma^{5}(\bP) \delta(\bP)}}
\end{equation}
where $C(\bX)$ is a function of $\bX$ given by
\begin{equation}\label{eq:def_of_C_semipar}
C(\bX) = \sqrt{\mathrm{tr} \,\,
\SP^{-1/2} \UP^{\top} \E[(\bA -
\bP)^{2}] \UP
\SP^{-1/2}}
\end{equation}
where $\mathbb{E}[(\bA - \bP)^2]$ is taken with respect to $\bA$ and conditional on $\bX$.
\end{theorem}
We note that the proof of this theorem consists of two pieces: it is straightforward to show that the Frobenius norm bound of Lemma \ref{thm:minh_frob} implies that
$$\|\Xhat - \bX \bW \|_{F} = \|(\bA - \bP) \UP \SP^{-1/2} \|_{F}
+ O(d \log (n) \delta^{-1/2}(\bP) \gamma^{-5/2}(\bP)).$$
To complete the theorem, then, \cite{tang14:_semipar} demonstrates a concentration inequality for
$\|(\mathbf{A} - \mathbf{P}) \mathbf{U}_{\mathbf{P}}
\mathbf{S}_{\mathbf{P}}^{-1/2} \|^{2}_{F}$, showing that
\begin{equation}
\label{eq:log_sobolev_conc_inequality}
\bigl|\|(\mathbf{A} - \mathbf{P}) \mathbf{U}_{\mathbf{P}}
\mathbf{S}_{\mathbf{P}}^{-1/2} \|^{2}_{F} - C^{2}(\mathbf{X}) \bigr| \leq
\frac{14 \sqrt{2d} \log{(n/\eta)}}{\gamma(\mathbf{P}) \sqrt{\delta(\mathbf{P})}}.
\end{equation}
where $C(\bX)$ is as defined in \eqref{eq:def_of_C_semipar}. We do not go into the details of this concentration inequality here, but rather point the reader to \cite{tang14:_semipar}. We observe, however, that this inequality has immediate consequences for two-sample testing for random dot product graphs. For two random dot product graphs with probability matrices $\bP=\bX \bX^{\top}$ and $\bQ=\bY \bY^{\top}$, consider the null hypothesis $\bX=\bY\bW$ for some orthogonal $\bW$. It can be shown that $\min_{\bW \in \mathcal{O}^{d \times d}}\|\Xhat-\Yhat\bW\|_F$ is the basis for a valid and consistent test. We emphasize, though, that as the graph size $n$ increases, the $n \times d$ matrix of latent positions also increases in size. As a consequence of this, we consider the following notion of consistency in this semiparametric setting. As an aside on the notation, in this section, we consider a sequence of graphs with latent positions, all indexed by $n$; thus, as we noted in our preliminary remarks on notation, $\bX_n$ and $\Xhat_n$ refer to the {\em matrices} of true and estimated latent positions in this sequence.
\begin{definition}\label{consistency}
Let $(\mathbf{X}_n, \mathbf{Y}_n)_{n \in \mathbb{N}}$, be a given sequence of latent positions, where $\mathbf{X}_n$ and $\mathbf{Y}_n$ are both in $\mathbb{R}^{n \times d}$. A test statistic $T_n$ and associated
rejection region $R_n$ to test the null hypothesis
\begin{align*}
H^n_0: \, {\bf X}_n =_{W} {\bf Y}_n \quad
\textrm{ against } \quad H^n_a: \, {\bf X}_n \not =_{W} {\bf Y}_n
\end{align*}
is a {\em consistent, asymptotically level $\alpha$ test}
if for any $\eta>0$, there exists $n_0 = n_0(\eta)$ such that
\begin{enumerate}[(i)]
\item If $n>n_0$ and $H_a^n$ is true, then $P(T_n \in R_n)>1-\eta$
\item If $n > n_0$ and $H_0^n$ is true, then $P(T_n \in R_n) \leq \alpha + \eta$
\end{enumerate}
\end{definition}
With this definition of consistency, we obtain the following theorem on two-sample testing for random dot product graphs on the same vertex set and with known vertex correspondence.
\begin{theorem}
\label{thm:identity}
For each fixed $n$, consider the
hypothesis test
\begin{equation*}
H^{n}_0: {\bf X}_n =_{W} {\bf Y}_n \quad \textrm{ versus } \quad
H^{n}_a: {\bf X}_n \not =_{W} {\bf Y}_n
\end{equation*}
where ${\bf X}_n$ and ${\bf Y}_n$ $\in \mathbb{R}^{n \times d}$ are
matrices of latent positions for two random dot product graphs. Let
$\hat{{\bf X}}_n$ and $\hat{{\bf Y}}_n$ be the adjacency spectral
embeddings of ${\bf A}_n\sim \mathrm{Bernoulli}({\bf X}_n{\bf
X}_n^{\top})$ and ${\bf B}_n \sim \mathrm{Bernoulli}({\bf Y}_n{\bf
Y}_n^{\top})$, respectively. Define the test statistic $T_n$ as follows:
\begin{equation}
\label{eq:semipar_TS_def}
T_n=\frac{\min\limits_{{\bf W} \in \mathcal{O}(d)}
\|\hat{{\bf X}}_n{\bf W}-\hat{{\bf Y}}_n\|_F}
{\sqrt{d\gamma^{-1}(\mathbf{A}_n)}+ \sqrt{d \gamma^{-1}(\mathbf{B}_n)}}.
\end{equation}
Let $\alpha \in (0,1)$ be given. Then for all $C > 1$, if the
rejection region is
$R:=\left\{t \in \mathbb{R}: t\geq C \right\}$,
then there exists an
$n_1 = n_1(\alpha, C) \in \mathbb{N}$ such that for all $n \geq n_1$, the
test procedure with $T_n$ and rejection region $R$ is an at most level
$\alpha$ test, i.e., for all $n \geq n_1$, if $\mathbf{X}_n
=_{W} \mathbf{Y}_n$, then
$ \mathbb{P}(T_n \in R) \leq \alpha.$
Furthermore,
consider the sequence of latent positions $\{{\bf X}_n\}$ and
$\{{\bf Y}_n\}$, $n \in \mathbb{N}$,
satisfying the eigengap assumptions in Assumption~\ref{ass:max_degree_assump} and denote by $d_n$ the
quantity $
d_n := \min\limits_{{\bf W} \in \mathcal{O}(d)} \| {\bf X}_n{\bf W}-{\bf
Y}_n \|$.
Suppose $d_n \neq 0$ for infinitely many $n$. Let $t_1=\min\{k>0:
d_k>0\}$ and sequentially define $t_n=\min\{k>t_{n-1}: d_k>0\}$.
Let $b_n=d_{t_n}$. If $\liminf b_n = \infty$, then this test
procedure is consistent in the sense of Definition~\ref{consistency}
over this sequence of latent positions.
\end{theorem}
\begin{remark} This result does not require that ${\bf A}_n$ and ${\bf B}_n$
be independent for any fixed $n$, nor that the sequence of pairs
$({\bf A}_n, {\bf B}_n)$, $n \in \mathbb{N}$, be independent.
We note that Theorem \ref{thm:identity} is written to emphasize consistency in the sense of Definition \ref{consistency}, even in a case when, for example, the latent position sequence is such that $\mathbf{X}_n=_W\mathbf{Y}_n$ for all even $n$, but $\mathbf{X}_n$ and $\mathbf{Y}_n$ are sufficiently far apart for odd $n$.
In
addition, the requirement that $\liminf b_k=\infty$ can be weakened
somewhat. Specifically, consistency is achieved as long
as $$\liminf_{n \rightarrow \infty} \Bigl(\| \mathbf{X}_n\mathbf{W}
-\mathbf{Y}_n \|_{F} - C(\mathbf{X}_n) - C(\mathbf{Y}_n)\Bigr) > 0.$$
\end{remark}
It is also possible to construct analogous tests for latent positions related by scaling factors, or, in the case of the degree-corrected stochastic block model, by projection. We summarize these below, beginning with the case of scaling.
For the scaling case, let $\mathcal{C}=\mathcal{C}(\mathbf{Y}_n)$
denote the class of all positive constants $c$ for which all the
entries of $c^2 \mathbf{Y}_n \mathbf{Y}_n^{\top}$ belong to the unit
interval. We wish to test the null hypothesis
$H_0 \colon \mathbf{X}_n =_{W} c_n \mathbf{Y}_n$ for some
$c_n\in \mathcal{C}$ against the alternative
$H_a \colon \mathbf{X}_n \not =_{W} c_n\mathbf{Y}_n$ for any
$c_n \in \mathcal{C}$. In what follows below, we will only write
$c_n>0$, but will always assume that $c_n \in \mathcal{C}$, since the
problem is ill-posed otherwise. The test statistic $T_n$ is now a
simple modification of the one used in Theorem~\ref{thm:identity}: for
this test, we compute a Procrustes distance between scaled adjacency
spectral embeddings for the two graphs.
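Under this scaling null hypothesis the Frobenius-normalized embeddings agree up to an orthogonal transformation, so the Procrustes distance between them vanishes. A minimal sketch (ours; helper names and synthetic data are arbitrary) of the numerator of this scaled statistic:

```python
import numpy as np

def procrustes_distance(Xhat, Yhat):
    """min over orthogonal W of ||Xhat - Yhat W||_F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(Yhat.T @ Xhat)
    return np.linalg.norm(Xhat - Yhat @ (U @ Vt))

def scaled_procrustes_numerator(Xhat, Yhat):
    """Numerator of the scaling test statistic: Procrustes distance between
    the Frobenius-normalized embeddings."""
    return procrustes_distance(Xhat / np.linalg.norm(Xhat),
                               Yhat / np.linalg.norm(Yhat))

rng = np.random.default_rng(2)
X = rng.random((300, 2))
theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# under the scaling null, Y = c X R for some c > 0: the numerator vanishes
print(scaled_procrustes_numerator(X, 0.3 * X @ R))  # ~ 0
```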
\begin{theorem}
\label{thm:2}
For each fixed $n$, consider the
hypothesis test
\begin{align*}
H^{n}_0 \colon {\bf X}_n =_{W} c_n{\bf Y}_n \quad \text{for some $c_n > 0$}
\textrm{ versus } \,\,
H^{n}_a \colon {\bf X}_n \not =_{W} c_n{\bf Y}_n \quad \text{for all $c_n > 0$}
\end{align*}
where ${\bf X}_n$ and ${\bf Y}_n$ $\in \mathbb{R}^{n \times d}$ are
latent positions for two random dot product graphs with adjacency
matrices $\mathbf{A}_n$ and $\mathbf{B}_n$, respectively.
Define the test statistic $T_n$ as follows:
\begin{equation}
\label{eq:8}
T_n=\frac{\min\limits_{{\bf W} \in \mathcal{O}(d)}
\|\hat{{\bf X}}_n{\bf W}/\|\Xhat_n\|_{F} - \hat{{\bf
Y}}_n/\|\hat{\mathbf{Y}}_n\|_{F} \|_{F}}
{2 \sqrt{d \gamma^{-1}(\mathbf{A}_n)}/\|\Xhat_n\|_{F}+ 2\sqrt{d
\gamma^{-1}(\mathbf{B}_n)}/\|\hat{\mathbf{Y}}_n \|_{F}}.
\end{equation}
Let $\alpha \in (0,1)$ be given. Then for all $C
> 1$, if the rejection region is $R:=\left\{t \in \mathbb{R}: t\geq C \right\}$,
then there exists an $n_1 = n_1(\alpha, C) \in \mathbb{N}$ such that
for all $n \geq n_1$, the test procedure with $T_n$ and rejection
region $R$ is an at most level $\alpha$ test.
Furthermore, consider the sequences of latent positions $\{{\bf X}_n\}$ and
$\{{\bf Y}_n\}$, $n \in \mathbb{N}$,
satisfying Assumption~\ref{ass:max_degree_assump} and denote by $d_n$ the quantity
\begin{equation}\label{eq:scaling_alternative}
d_n := \frac{ \min\limits_{{\bf W} \in \mathcal{O}(d)}\| {\bf X}_n{\bf W}/\|{\bf
X}_n\|_{F} - {\bf Y}_n/\|{\bf Y}_n\|_{F}
\|_{F}}{1/\|\mathbf{X}_n\|_{F} +
1/\|\mathbf{Y}_n\|_{F}} = \frac{ \min\limits_{{\bf W} \in \mathcal{O}(d)}
\| {\bf X}_n \|\mathbf{Y}_n \|_{F} {\bf W}
- {\bf Y}_n \|\mathbf{X}_n \|_{F}
\|_{F}}{\|\mathbf{X}_n\|_{F} +
\|\mathbf{Y}_n\|_{F}}
\end{equation}
Suppose $d_n \neq 0$ for infinitely many $n$. Let $t_1=\min\{k>0:
d_k>0\}$ and sequentially define $t_n=\min\{k>t_{n-1}: d_k>0\}$. Let
$b_n=d_{t_n}$. If $\liminf b_n = \infty$, then this test procedure is
consistent in the sense of Definition~\ref{consistency} over this
sequence of latent positions.
\end{theorem}
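As a sanity check on Equation \eqref{eq:scaling_alternative}, the quantity $d_n$ vanishes exactly when the latent positions agree up to a positive scaling and an orthogonal transformation. A small numerical sketch (our own; not part of the original analysis):

```python
import numpy as np

def scaled_procrustes_dn(X, Y):
    """d_n from the scaling test: Procrustes distance between the
    Frobenius-normalized latent position matrices, divided by
    1/||X||_F + 1/||Y||_F."""
    Xn = X / np.linalg.norm(X, "fro")
    Yn = Y / np.linalg.norm(Y, "fro")
    U, _, Vt = np.linalg.svd(Xn.T @ Yn)
    num = np.linalg.norm(Xn @ (U @ Vt) - Yn, "fro")
    den = 1.0 / np.linalg.norm(X, "fro") + 1.0 / np.linalg.norm(Y, "fro")
    return num / den

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 0.5, size=(40, 2))
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
print(scaled_procrustes_dn(X, 0.7 * X @ Q))  # ~0: a scaled rotation is "null"
```

Note that only the numerator is invariant to rescaling; the denominator reweights the discrepancy by the magnitudes of the latent position matrices.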
We next consider the case of testing whether the latent positions are
related by a diagonal transformation, i.e., whether $H_0
\colon \mathbf{X}_n =_{W} \mathbf{D}_n \mathbf{Y}_n$ for some
diagonal matrix $\mathbf{D}_n$. We proceed analogously to the
scaling case, above, by defining the class
$\mathcal{E}=\mathcal{E}(\mathbf{Y}_n)$ to be all positive diagonal
matrices $\mathbf{D}_n \in \mathbb{R}^{n \times n}$ such that
$\mathbf{D}_n \mathbf{Y}_n \mathbf{Y}_n^{\top} \mathbf{D}_n$ has all
entries in the unit interval.
As before, we will
always assume that $\mathbf{D}_n$ belongs to $\mathcal{E}$, even if
this assumption is not explicitly stated. The test statistic $T_n$ in
this case is again a simple modification of the one used in
Theorem~\ref{thm:identity}. However, for technical reasons, our proof
of consistency requires an additional condition on the minimum
Euclidean norm of each row of the matrices $\mathbf{X}_n$ and
$\mathbf{Y}_n$. To avoid certain technical issues, we impose a
slightly stronger density assumption on our graphs for this test.
These assumptions can be weakened, but at the cost of interpretability.
The assumptions we make on the latent positions, which we summarize
here, are moderate restrictions on the sparsity of the graphs.
\begin{assumption}
\label{eigengap_assump_diagonal}
We assume that there exists $d \in \mathbb{N}$ such that for all
$n$, $\mathbf{P}_n$ is of rank $d$. Further, we assume that there
exist constants $\epsilon_1>0$, $\epsilon_2 \in (0,1)$, $c_0>0$ and
$n_0(\epsilon_1, \epsilon_2, c_0) \in \mathbb{N}$ such that for all $n \geq n_0$:
\begin{align}
\gamma(\mathbf{P}_n) \geq c_0; \qquad
\delta(\mathbf{P}_n) \geq (\log{n})^{2 + \epsilon_1}; \qquad
\min_{i} \|X_i\| >
\left(\frac{\log{n}}{\sqrt{\delta(\mathbf{P}_n)}}\right)^{1 -
\epsilon_2}
\end{align}
\end{assumption}
We then have the following result.
\begin{theorem}
\label{thm:1}
For each fixed $n$, consider the
hypothesis test
\begin{align*}
H^{n}_0 \colon {\bf X}_n =_{W} \mathbf{D}_n {\bf Y}_n \quad
\text{for some $\mathbf{D}_n \in \mathcal{E}$}\,\,
\textrm{ versus }
\,\, H^{n}_a \colon {\bf X}_n \not =_{W} \mathbf{D}_n{\bf Y}_n \quad \text{for
any $\mathbf{D}_n \in \mathcal{E}$}
\end{align*}
where ${\bf X}_n$ and ${\bf Y}_n$ $\in \mathbb{R}^{n \times d}$ are
matrices of latent positions for two random dot product graphs.
For any matrix $\mathbf{Z} \in \mathbb{R}^{n \times d}$, let
$\mathcal{D}(\mathbf{Z})$ be the diagonal matrix whose diagonal
entries are the Euclidean norm of the rows of $\mathbf{Z}$ and let
$\mathcal{P}(\mathbf{Z})$ be the matrix whose rows are the projection of the rows of
$\mathbf{Z}$ onto the unit sphere.
We define the test statistic as follows:
\begin{equation}
\label{eq:15}
T_n=\frac{\min\limits_{{\bf W} \in \mathcal{O}(d)}
\|\mathcal{P}(\hat{\mathbf{X}}_n) {\bf W} -
\mathcal{P}(\hat{\mathbf{Y}}_n) \|_{F}}
{2 \sqrt{d \gamma^{-1}(\mathbf{A}_n)} \|\mathcal{D}^{-1}(\Xhat_n)\|+
2 \sqrt{d \gamma^{-1}(\mathbf{B}_n)}\|\mathcal{D}^{-1}(\hat{\mathbf{Y}}_n)\|}.
\end{equation}
where we write
$\mathcal{D}^{-1}(\mathbf{Z})$ for
$(\mathcal{D}(\mathbf{Z}))^{-1}$. Note that
$\|\mathcal{D}^{-1}(\mathbf{Z})\| = 1/(\min_{i} \|Z_i\|)$.
Let $\alpha \in (0,1)$ be given. Then for all $C
> 1$, if the rejection region is
$R:=\left\{t \in \mathbb{R}: t\geq C \right\},$
then there exists an $n_1 = n_1(\alpha, C) \in \mathbb{N}$ such that
for all $n \geq n_1$, the test procedure with $T_n$ and rejection
region $R$ is an at most level-$\alpha$ test.
Furthermore, consider the sequences of latent positions $\{{\bf X}_n\}$ and $\{{\bf Y}_n\}$, $n \in \mathbb{N}$,
satisfying
Assumption~\ref{eigengap_assump_diagonal} and denote by $d_n$ the quantity
\begin{equation}
\label{eq:12}
d_n :=
\frac{ \min\limits_{{\bf W} \in \mathcal{O}(d)}\| \mathcal{P}({\bf
X}_n) {\bf W} -
\mathcal{P}({\bf Y}_n)
\|_{F}}{\|\mathcal{D}^{-1}(\mathbf{X}_n)\| +
\|\mathcal{D}^{-1}(\mathbf{Y}_n)\|} = D_{\mathcal{P}}(\mathbf{X}_n, \mathbf{Y}_n)
\end{equation}
Suppose $d_n \neq 0$ for infinitely many $n$. Let $t_1=\min\{k>0:
d_k>0\}$ and sequentially define $t_n=\min\{k>t_{n-1}: d_k>0\}$. Let
$b_n=d_{t_n}$. If $\liminf b_n = \infty$, then this test procedure is
consistent in the sense of Definition~\ref{consistency} over this
sequence of latent positions.
\end{theorem}
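Analogously, the population quantity $d_n$ of Equation \eqref{eq:12} is zero precisely when one latent position matrix is a positive diagonal scaling of the other, up to orthogonal transformation, since row-normalization by $\mathcal{P}$ removes the diagonal factor. A minimal sketch (our own illustration):

```python
import numpy as np

def projection_dn(X, Y):
    """d_n for the diagonal-transformation test: Procrustes distance
    between row-normalized latent positions, divided by the sum of
    reciprocal minimum row norms."""
    PX = X / np.linalg.norm(X, axis=1, keepdims=True)
    PY = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(PX.T @ PY)
    num = np.linalg.norm(PX @ (U @ Vt) - PY, "fro")
    den = (1.0 / np.linalg.norm(X, axis=1).min()
           + 1.0 / np.linalg.norm(Y, axis=1).min())
    return num / den

rng = np.random.default_rng(2)
Y = rng.uniform(0.2, 0.6, size=(30, 2))
D = np.diag(rng.uniform(0.5, 1.5, size=30))   # positive diagonal scaling
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
print(projection_dn(D @ Y @ Q, Y))  # ~0: D Y (up to rotation) is "null"
```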
This collection of semiparametric tests has numerous applications in graph comparison; in Section \ref{sec:Applications}, we describe its use in connectomics and brain scan data. We stress, though, that the Procrustes transformations are rather cumbersome, and they limit our ability to generalize these procedures to graph comparisons involving more than two graphs. As a consequence, it can be useful to consider {\em joint} or {\em omnibus} embeddings, in which adjacency matrices for multiple graphs on the same vertex set are jointly embedded into a single (larger-dimensional) space, but {\em with a distinct representation for each graph}. For an illuminating joint graph inference study on the {\em C. elegans} connectome that addresses somewhat different questions from semiparametric testing, see \cite{chen_worm}.
Simultaneously embedding multiple graphs into a shared space
allows comparison of graphs without the need to
perform pairwise alignments of graph embeddings.
Further, a distinct representation of each graph
renders the omnibus embedding especially useful for
subsequent comparative graph inference.
\subsection{Omnibus embedding}\label{subsec:omnibus}
In \cite{levin_omni_2017}, we show that an omnibus embedding---that is, an embedding of multiple graphs into a single shared space---can yield consistent estimates of underlying latent positions. Moreover, like the adjacency spectral embedding for a single graph, the rows of this omnibus embedding, suitably-scaled, are asymptotically normally distributed. As might be anticipated, the use of multiple independent graphs generated from the same latent positions, as opposed to just a single graph, yields a reduction in variance for the estimated latent positions, and since the omnibus embedding provides a distinct representation for each graph, subsequently averaging these estimates reduces the variance further still. Finally, the omnibus embedding allows us to compare graphs without cumbersome Procrustes alignments.
To construct the omnibus embedding, we consider a collection of $m$ random dot product graphs,
all with the same generating latent positions.
This motivates the following definition:
\begin{definition}
[Joint Random Dot Product Graph]
\label{def:JRDPG}
Let $F$ be a $d$-dimensional inner product distribution on $\R^d$.
We say that random graphs $\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)}$
are distributed as a \emph{joint random dot product graph (JRDPG)}
and write $(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},\bX) \sim \JRDPG(F,n,m)$
if $\bX = [\bX_1, \bX_2,\dots,\bX_n]^{\top} \in \R^{n \times d}$ has its (transposed)
rows distributed i.i.d. as $\bX_i \sim F$, and we have
marginal distributions $(\bA^{(k)},\bX) \sim \RDPG(F,n)$
for each $k=1,2,\dots,m$.
That is, the $\bA^{(k)}$ are conditionally independent
given $\bX$, with edges independently distributed as
$\bA^{(k)}_{i,j} \sim \Bern( (\bX\bX^{\top})_{ij} )$ for all $1 \le i < j \le n$
and all $k \in [m]$.
\end{definition}
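For intuition, Definition \ref{def:JRDPG} can be simulated directly: draw $\bX$ once, then sample $m$ conditionally independent adjacency matrices from $\bP = \bX\bX^{\top}$. A short sketch (our own, with illustrative parameter choices):

```python
import numpy as np

def sample_jrdpg(X, m, rng):
    """Sample m conditionally independent RDPG adjacency matrices
    sharing the latent position matrix X (rows chosen so that
    P = X X^T has entries in [0, 1])."""
    P = X @ X.T
    n = X.shape[0]
    graphs = []
    for _ in range(m):
        upper = rng.random((n, n)) < P          # Bernoulli(P_ij) draws
        A = np.triu(upper, k=1).astype(int)     # keep i < j; hollow diagonal
        graphs.append(A + A.T)                  # symmetrize
    return graphs

rng = np.random.default_rng(3)
X = rng.dirichlet([1.0, 1.0, 1.0], size=100)    # latent positions on the simplex
A1, A2 = sample_jrdpg(X, m=2, rng=rng)
```

Rows drawn from a Dirichlet distribution lie on the simplex, so all entries of $\bX\bX^{\top}$ automatically lie in $[0,1]$.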
Given a set of $m$ adjacency matrices distributed as
$$(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},\bX) \sim \JRDPG(F,n,m)$$
for distribution $F$ on $\R^d$,
a natural inference task is to recover the $n$ latent positions
$\bX_1,\bX_2,\dots,\bX_n \in \R^d$ shared by the vertices of the $m$ graphs.
To estimate the underlying latent positions from these $m$ graphs, \cite{runze_law_large_graphs} provides justification for the estimate
$\Xbar = \ASE( \Abar, d )$, where $\Abar$ is the sample mean of the
adjacency matrices $\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)}$.
However, $\Xbar$ is ill-suited to any task that requires
comparing latent positions across the $m$ graphs,
since the $\Xbar$ estimate collapses the $m$ graphs into a single
set of $n$ latent positions.
This motivates the \emph{omnibus embedding},
which still yields a single spectral decomposition, but with a separate $d$-dimensional representation for each of the $m$ graphs.
This makes the omnibus embedding useful for {\em simultaneous}
inference across all $m$ observed graphs.
\begin{definition}[Omnibus embedding] \label{def:omni_def}
Let $\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)} \in \R^{n \times n}$
be (possibly weighted) adjacency matrices
of a collection of $m$ undirected graphs.
We define the $mn$-by-$mn$ omnibus matrix
of $\bA^{(1)}, \bA^{(2)}, \dots, \bA^{(m)}$ by
\begin{equation} \label{eq:omnidef}
\bM =
\begin{bmatrix}
\bA^{(1)} & \frac{1}{2}(\bA^{(1)} + \bA^{(2)}) & \dots & \frac{1}{2}(\bA^{(1)} + \bA^{(m)}) \\
\frac{1}{2}(\bA^{(2)} + \bA^{(1)}) & \bA^{(2)} & \dots & \frac{1}{2}(\bA^{(2)} + \bA^{(m)}) \\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{2}(\bA^{(m)} + \bA^{(1)}) & \frac{1}{2}(\bA^{(m)} + \bA^{(2)})
& \dots & \bA^{(m)} \end{bmatrix},
\end{equation}
and the $d$-dimensional \emph{omnibus embedding} of
$\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)}$
is the adjacency spectral embedding of $\bM$:
$$ \OMNI(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},d)= \ASE( \bM, d ). $$
\end{definition}
Here, $\ASE$ denotes the $d$-dimensional adjacency spectral embedding, as before.
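Concretely, the omnibus matrix of Equation \eqref{eq:omnidef} and one common dense realization of $\ASE$ (top-$d$ eigenpairs, scaled) can be assembled as follows. This is a sketch with our own naming; for large $mn$ one would instead use a truncated eigendecomposition:

```python
import numpy as np

def omnibus_matrix(graphs):
    """Build the mn x mn omnibus matrix with (i,j) block (A^(i) + A^(j)) / 2."""
    m = len(graphs)
    return np.block([[(graphs[i] + graphs[j]) / 2.0 for j in range(m)]
                     for i in range(m)])

def ase(M, d):
    """d-dimensional spectral embedding U_d |S_d|^{1/2}, using the
    d largest-magnitude eigenvalues of the symmetric matrix M."""
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

# block-structure check on two small symmetric "adjacency" matrices
A1 = np.array([[0, 1], [1, 0]], dtype=float)
A2 = np.array([[0, 0], [0, 0]], dtype=float)
M = omnibus_matrix([A1, A2])
Z = ase(M, d=1)  # rows 0-1 embed A1, rows 2-3 embed A2
```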
Under the JRDPG, the omnibus matrix has expected value
$$ \E \bM = \Ptilde = \bJ_m \otimes \bP = \UPt \SPt \UPt^{\top} $$
for $\UPt \in \R^{mn \times d}$ having $d$ orthonormal columns
and $\SPt \in \R^{d \times d}$ diagonal.
Since $\bM$ is a reasonable estimate for $\Ptilde = \E \bM$,
the matrix $\Zhat = \OMNI(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},d)$
is a natural estimate of the $mn$ latent positions
collected in the matrix
$\bZ = [\bX^{\top} \, \bX^{\top}\, \dots \,\bX^{\top}]^{\top} \in \R^{mn \times d}$.
Here again, as in Remark~\ref{rem:nonid}, $\Zhat$ only recovers
the true latent positions $\bZ$ up to an orthogonal rotation.
The matrix
\begin{equation} \label{eq:Zstruct}
\Zstar = \begin{bmatrix} \Xstar \\ \Xstar \\ \vdots \\ \Xstar \end{bmatrix}
= \UPt \SPt^{1/2} \in \R^{mn \times d},
\end{equation}
provides a reasonable canonical choice of latent positions,
so that $\bZ = \Zstar \bW$ for some suitably-chosen orthogonal matrix
$\bW \in \R^{d \times d}$; again, just as for a single random dot product graph, spectral embedding of the omnibus matrix is a consistent estimator for the latent positions (up to rotation).
Below, we state precise results on consistency and asymptotic normality of the embedding of the omnibus matrix $\bM$. The proofs are similar to, but somewhat more involved than, the aforementioned analogues for the adjacency spectral embedding for one graph. We also demonstrate from simulations that the omnibus embedding can be successfully leveraged for subsequent inference, specifically two-sample testing.
First, Lemma~\ref{lem:omni2toinf} shows that the omnibus embedding
provides uniformly consistent estimates of the true latent positions,
up to an orthogonal transformation,
roughly analogous to Lemma 5 in \cite{lyzinski13:_perfec}.
Lemma~\ref{lem:omni2toinf}
shows consistency of the omnibus embedding under the $\tti$ norm,
implying that all $mn$ of the estimated latent positions
are near (a rotation of) their corresponding true positions.
\begin{lemma}\label{lem:omni2toinf}
With $\Ptilde$, $\bM$, $\UM$, and $\UPt$ defined as above, there exists
an orthogonal matrix $\Wtilde \in \R^{d \times d}$ such that
with high probability,
\begin{equation}\label{eq:omni2toinf_actualbound}
\|\UM \SM^{1/2}-\UPt \SPt^{1/2} \Wtilde \|_{\tti}
\le \frac{Cm^{1/2} \log mn }{\sqrt{n}} .
\end{equation}
\end{lemma}
As with the adjacency spectral embedding, we once again can show the asymptotic normality of the individual rows of the omnibus embedding. Note that the covariance matrix does change with $m$, and for $m$ large, this results in a nontrivial variance reduction.
\begin{theorem} \label{thm:main}
Let $(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},\bX) \sim \JRDPG(F,n,m)$ for some
$d$-dimensional inner product distribution $F$ and let $\bM$ denote
the omnibus matrix as in \eqref{eq:omnidef}. Let
$\bZ = \Zstar \bW$ with $\Zstar$ as defined in Equation~\eqref{eq:Zstruct},
with estimate
$\Zhat = \OMNI(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},d)$.
Let $h = n(s-1) + i$ for $i \in [n], s \in [m]$, so that $\Zhat_h$
denotes the estimated latent position
of the $i$-th vertex in the $s$-th graph $\bA^{(s)}$.
That is, $\Zhat_h$ is the
column vector formed by transposing the $h$-th row of the matrix
$\Zhat = \UM \SM^{1/2} = \OMNI(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},d)$.
Let $\Phi(\bx,\bSigma)$ denote the cumulative distribution function of a (multivariate)
Gaussian with mean zero and covariance matrix $\bSigma$,
evaluated at $\bx \in \R^d$.
There exists a sequence of orthogonal $d$-by-$d$ matrices
$( \Wntilde )_{n=1}^\infty$ such that for all $\bx \in \R^d$,
$$ \lim_{n \rightarrow \infty}
\Pr\left[ n^{1/2} \left( \Zhat \Wntilde - \bZ \right)_h
\le \bx \right]
= \int_{\supp F} \Phi\left(\bx, \bSigma(\by) \right) dF(\by), $$
where
$\bSigma(\by) = (m+3)\bDelta^{-1} \Sigmatilde(\by) \bDelta^{-1}/(4m), $
$\bDelta = \mathbb{E}[\bX_1 \bX_1^{\top}]$ and
$$\Sigmatilde(\by)
= \E\left[ (\by^{\top} \bX_1 - ( \by^{\top} \bX_1)^2 ) \bX_1 \bX_1^{\top} \right].$$
\end{theorem}
Next, we summarize from \cite{levin_omni_2017} experiments on synthetic data
exploring the efficacy of the omnibus embedding described above.
If we merely wish to estimate the latent positions $\bX$
of a set of $m$ graphs
$(\bA^{(1)},\bA^{(2)},\dots,\bA^{(m)},\bX) \sim \JRDPG(F,n,m)$,
the estimate $\Xbar = \ASE( \sum_{i=1}^m \bA^{(i)}/m, d )$,
the embedding of the sample mean of the adjacency matrices
performs well asymptotically \cite{runze_law_large_graphs}.
Indeed, all else equal,
the embedding $\Xbar$ is preferable to the omnibus embedding
if only because it requires an eigendecomposition
of an $n$-by-$n$ matrix rather
than the much larger $mn$-by-$mn$ omnibus matrix.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\columnwidth]{expts/fullARE2v1/sqerr_by_vxs.pdf}
\caption{Mean squared error (MSE) in recovery of latent positions (up to rotation) in a 2-graph joint RDPG model as a function of the number of vertices. Performance of ASE applied to a single graph (red), ASE embedding of the mean graph (gold), the Procrustes-based pairwise embedding (blue), the omnibus embedding (green) and the mean omnibus embedding (purple). Each point is the mean of 50 trials; error bars indicate $\pm 2(SE)$. Mean omnibus embedding (OMNIbar) is competitive with $\ASE(\Abar,d)$; Procrustes alignment estimation is notably inferior to the other two-graph techniques for graphs of size between 80 and 200 vertices (note that the gap appears to persist at larger graph sizes, though it shrinks). Figure duplicated from \cite{levin_omni_2017}.}
\label{fig:compareMSE}
\end{figure}
Of course, the omnibus embedding can still be used
to estimate the latent positions, potentially at the cost of
increased variance.
Figure \ref{fig:compareMSE} compares the mean-squared error of various
techniques for estimating the latent positions for a random dot product graph.
The figure plots the (empirical) mean squared error in recovering the
latent positions of a $3$-dimensional
JRDPG as a function of the number of vertices $n$.
Each point in the plot is the empirical mean of 50 independent trials; in each trial, the latent positions are drawn i.i.d. from a
Dirichlet with parameter $[1,\,1,\,1]^{\top} \in \R^{3}$.
Once the latent positions are so obtained, we independently generate two random dot product graphs,
$\bA^{(1)},\bA^{(2)} \in \R^{n \times n}$ with these latent positions.
The figure is interpreted as follows:
\begin{enumerate}
\item {\bf ASE1 (red)}: we embed only one of the two observed graphs,
and use only the ASE of that graph to estimate the latent positions
in $\bX$, ignoring entirely the information present in
$\bA^{(2)}$. This condition serves as a baseline for how much
additional information is provided by the second graph $\bA^{(2)}$.
\item {\bf Abar (gold)}: we embed the average of the two graphs,
$\Abar = (\bA^{(1)} + \bA^{(2)})/2$ as $\Xhat = \ASE( \Abar, 3 )$.
\item {\bf OMNI (green)}: We apply the omnibus embedding to obtain
$\Zhat = \ASE(\bM,3)$,
where $\bM$ is as in Equation~\eqref{eq:omnidef}.
We then use only the first $n$ rows of
$\Zhat \in \R^{2n \times d}$ as our estimate of $\bX$.
This embedding incorporates information available
in both graphs $\bA^{(1)}$ and $\bA^{(2)}$, but does not
weight them equally, since the first rows of $\Zhat$ are based
primarily on the information contained in $\bA^{(1)}$.
\item {\bf OMNIbar (purple)}: We again apply the omnibus embedding to obtain
estimated latent positions
$\Zhat = \ASE(\bM,3)$, but this time we use all available
information by averaging the first $n$ rows and the second $n$ rows
of $\Zhat$.
\item {\bf PROCbar (blue)}: We separately embed the graphs
$\bA^{(1)}$ and $\bA^{(2)}$, perform a Procrustes alignment between the two embeddings, and average the aligned embeddings to obtain
our final estimate of the latent positions.
\end{enumerate}
First, let us note that ASE applied to a single graph (red)
lags all other methods, as expected, since this discards all information from the second graph.
For very small graphs, the dearth of signal is such that no method will
recover the latent positions accurately.
Crucially, however, we see that the OMNIbar estimate (purple) performs nearly
identically to the Abar estimate (gold), the natural choice among spectral methods for the estimation of latent positions.
The Procrustes estimate (in blue)
provides a two-graph analogue of ASE (red):
it combines two ASE estimates via Procrustes alignment,
but does not enforce an {\em a priori} alignment of the estimated latent positions.
As predicted by the results in \cite{lyzinski13:_perfec} and \cite{tang14:_semipar},
the Procrustes estimate is competitive with the Abar (gold)
estimate for suitably large graphs.
The OMNI estimate (in green) serves, in a sense, as an intermediate, since it uses information available from both graphs,
but in contrast to Procrustes (blue), OMNIbar (purple)
and Abar (gold), it does not make complete use of the information
available in the second graph.
For this reason, it is noteworthy that the OMNI estimate
outperforms the Procrustes estimate for graphs of 80-100 vertices.
That is, for certain graph sizes,
the omnibus estimate appears to more optimally leverage the information in both graphs
than the Procrustes estimate does,
despite the fact that the information in the second graph has comparatively little
influence on the OMNI embedding.
The omnibus embedding can also be applied to testing
the semiparametric hypothesis that two observed graphs are drawn from the
same underlying latent positions.
Consider a collection of latent positions
$\bX_1,\bX_2,\dots,\bX_n,\bY_1,\bY_2,\dots,\bY_n \in \R^d$.
Let the graph $G_1$ with adjacency matrix $\bA^{(1)}$ have edges distributed
independently as
$ \bA^{(1)}_{ij} \sim \Bern( \bX_i^{\top} \bX_j )$.
Similarly, let $G_2$ have adjacency matrix $\bA^{(2)}$ with edges
distributed independently as
$ \bA^{(2)}_{ij} \sim \Bern( \bY_i^{\top} \bY_j )$.
The omnibus embedding provides a natural test of
the null hypothesis \eqref{eq:H0}
by comparing the first $n$ and last $n$ embeddings of the omnibus matrix
$$ \bM = \begin{bmatrix} \bA^{(1)} & (\bA^{(1)} + \bA^{(2)})/2 \\
(\bA^{(1)} + \bA^{(2)})/2 & \bA^{(2)}
\end{bmatrix}. $$
Intuitively, when $H_0$ holds,
the distributional result in Theorem~\ref{thm:main} holds,
and the $i$-th and $(n+i)$-th rows of $\OMNI(\bA^{(1)},\bA^{(2)},d)$
are equidistributed (though they are not independent).
On the other hand, when $H_0$ fails to hold, there exists at least one
$i \in [n]$ for which the $i$-th and $(n+i)$-th rows of $\bM$ are \emph{not}
identically distributed, and thus the corresponding embeddings are
also distributionally distinct.
This suggests a test that compares the first $n$ rows of
$\OMNI(\bA^{(1)},\bA^{(2)},d)$
against the last $n$ rows (see below for details).
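Concretely, the statistic compares corresponding rows of the omnibus embedding without any Procrustes step; a sketch (our own naming, with $\Zhat$ assumed to come from $\OMNI(\bA^{(1)},\bA^{(2)},d)$):

```python
import numpy as np

def omnibus_test_statistic(Zhat, n):
    """T = sum_i || Zhat_i - Zhat_{n+i} ||^2 for a 2n x d omnibus
    embedding: the first n rows embed graph 1, the last n rows graph 2.
    No Procrustes alignment is needed, because the joint embedding
    already places both graphs in a common coordinate system."""
    diffs = Zhat[:n] - Zhat[n:]
    return float(np.sum(diffs ** 2))

# toy check: identical blocks give T = 0
Zhat = np.vstack([np.eye(3), np.eye(3)])
print(omnibus_test_statistic(Zhat, n=3))  # prints 0.0
```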
Here, we empirically explore the power of this test against its
Procrustes-based alternative from \cite{tang14:_semipar}.
We draw $\bX_1,\bX_2,\dots,\bX_n \in \R^3$ i.i.d. according to a
Dirichlet distribution $F$ with parameter
$\alphavec = [1, 1, 1]^{\top}$.
With $\bX$ defined as the matrix
$ \bX = [\bX_1 \bX_2 \dots \bX_n]^{\top} \in \R^{n \times 3}, $
let graph $G_1$ have adjacency matrix $\bA^{(1)}$, where
$ \bA^{(1)}_{ij} \sim \Bern( (\bX \bX^{\top})_{ij} )$.
We generate a second graph $G_2$ by first
drawing random points $\bZ_1,\bZ_2,\dots,\bZ_n \iid F$.
Selecting a set of indices $I \subset [n]$ of size $k < n$ uniformly at
random from among all such $\binom{n}{k}$ sets,
we let $G_2$ have latent positions
$$ \bY_i = \begin{cases} \bZ_i & \mbox{ if } i \in I \\
\bX_i & \mbox{ otherwise. } \end{cases} $$
With $\bY$ the matrix
$\bY = [\bY_1, \bY_2, \dots, \bY_n]^{\top} \in \R^{n \times 3}, $
we generate graph $G_2$ with adjacency matrix $\bA^{(2)}$, where
$ \bA^{(2)}_{ij} \sim \Bern( (\bY \bY^{\top})_{ij} ).$
We wish to test
\begin{equation} \label{eq:ptsnull}
H_0 : \bX = \bY.
\end{equation}
Consider two different tests, one based on
a Procrustes alignment of the adjacency spectral embeddings of $G_1$ and $G_2$
\cite{tang14:_semipar}
and the other based on the omnibus embedding.
Both approaches are based on estimates of the latent positions
of the two graphs.
In both cases we use a test statistic of the form
$ T = \sum_{i=1}^n \| \Xhat_i - \Yhat_i \|_F^2, $
and accept or reject based on a Monte Carlo estimate of the
critical value of $T$ under the null hypothesis,
in which $\bX_i = \bY_i$ for all $i \in [n]$.
In each trial, we use $500$ Monte Carlo iterates to estimate the
distribution of $T$.
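The calibration step can be sketched as follows (a simplified illustration of our own: null graph pairs are sampled from the known matrix $\bP$, each pair is embedded and Procrustes-aligned, and the empirical $1-\alpha$ quantile of $T$ serves as the critical value):

```python
import numpy as np

def ase(A, d):
    """Spectral embedding via the d largest-magnitude eigenpairs."""
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

def sample_graph(P, rng):
    """One symmetric, hollow Bernoulli(P) adjacency matrix."""
    n = P.shape[0]
    A = np.triu(rng.random((n, n)) < P, k=1).astype(float)
    return A + A.T

def procrustes_T(X1, X2):
    """T = min_W ||X1 W - X2||_F^2 over orthogonal W (via SVD)."""
    U, _, Vt = np.linalg.svd(X1.T @ X2)
    return float(np.sum((X1 @ (U @ Vt) - X2) ** 2))

def mc_critical_value(P, d, alpha=0.05, iters=200, seed=0):
    """Empirical (1 - alpha) quantile of T under the null,
    sampling independent graph pairs from the known matrix P."""
    rng = np.random.default_rng(seed)
    stats = [procrustes_T(ase(sample_graph(P, rng), d),
                          ase(sample_graph(P, rng), d))
             for _ in range(iters)]
    return float(np.quantile(stats, 1.0 - alpha))

rng = np.random.default_rng(4)
X = rng.dirichlet([1.0, 1.0, 1.0], size=60)
crit = mc_critical_value(X @ X.T, d=3, iters=50)
```

The omnibus variant is calibrated in the same way, with each null pair embedded jointly rather than separately.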
We note that in the experiments presented here,
we assume that the latent positions
$\bX_1,\bX_2,\dots,\bX_n$ of graph $G_1$ are known for sampling purposes,
so that the matrix $\bP = \E \bA^{(1)}$ is known exactly, rather than
estimated from the observed adjacency matrix $\bA^{(1)}$.
This allows us to sample from the true null distribution.
As proved in \cite{lyzinski13:_perfec},
the estimated latent positions $\Xhat_1 = \ASE(\bA^{(1)}, d)$
and $\Xhat_2 = \ASE( \bA^{(2)}, d )$ recover the true latent positions
$\bX_1$ and $\bX_2$ (up to rotation) to arbitrary accuracy
in the $(2,\infty)$-norm for suitably large $n$.
Without using this known matrix $\bP$, we would require that our matrices
have tens of thousands of vertices before the variance associated with
estimating the latent positions would no longer overwhelm the signal present
in the few altered latent positions.
Three major factors influence the complexity of testing
the null hypothesis in Equation \eqref{eq:ptsnull}:
the number of vertices $n$,
the number of changed latent positions $k = |I|$,
and the distances $\|\bX_i - \bY_i\|_F$ between the latent positions.
The three plots in Figure \ref{fig:trueP:power} illustrate
the first two of these three factors.
These three plots show the power of two different approaches to testing
the null hypothesis \eqref{eq:ptsnull} for different sized graphs
and for different values of $k$, the number of altered latent positions.
In all three conditions, both methods improve as the number of vertices
increases, as expected, especially since we do not require
estimation of the underlying expected matrix $\bP$ for Monte Carlo
estimation of the null distribution of the test statistic.
We see that when only one vertex is changed, neither method has power much above $0.25$.
However, in the case of $k = 5$ and $k = 10$, it is clear that the
omnibus-based test achieves higher power than the Procrustes-based
test, especially in the range of 30 to 250 vertices. A more detailed examination of the relative impact of these factors in testing is given in \cite{levin_omni_2017}.
\begin{figure}[t!]
\centering
\subfloat[]{ \includegraphics[width=0.325\columnwidth]{expts/truePonediff/onediff_power_by_vxs.pdf} }
\subfloat[]{ \includegraphics[width=0.325\columnwidth]{expts/truePfivediff/fivediff_power_by_vxs.pdf} }
\subfloat[]{ \includegraphics[width=0.325\columnwidth]{expts/truePtendiff/tendiff_power_by_vxs.pdf} }
\caption{Power of the ASE-based (blue) and omnibus-based (green)
tests to detect when the two graphs being tested differ in
(a) one, (b) five, and (c) ten of their latent positions.
Each point is the proportion of 1000 trials for which the given
technique correctly rejected the null hypothesis,
and error bars denote two standard errors of this empirical mean
in either direction. Figure duplicated from \cite{levin_omni_2017}. }
\label{fig:trueP:power}
\end{figure}
In sum, our omnibus embedding provides a natural mechanism for the
simultaneous embedding of multiple graphs into a single vector space.
This eliminates the need for multiple Procrustes alignments,
which were required in previously-explored
approaches to multiple-graph testing \cite{tang14:_semipar}.
Recall that in the two-graph hypothesis testing framework of \cite{tang14:_semipar},
each graph is embedded separately, yielding estimates $\Xhat_1$ and $\Xhat_2$; under the null hypothesis of equality of the latent positions $\bX$ up to some rotation,
a Procrustes alignment is required: the test statistic is
\begin{equation} \label{eq:procmin}
\min_{\bW \in \calO_d} \| \Xhat_1 - \Xhat_2 \bW \|_F,
\end{equation}
and under the null hypothesis, a suitable rescaling of this converges as $n \rightarrow \infty$.
The effect of this Procrustes alignment on subsequent inference
is ill-understood; it has the potential to
introduce variance, and our simulation results
suggest that it negatively impacts performance in both estimation
and testing settings.
Furthermore, when the matrix $\bP = \bX \bX^{\top}$
does not have distinct eigenvalues
(i.e., is not uniquely diagonalizable), this Procrustes step is unavoidable,
since the difference $\|\Xhat_1 - \Xhat_2\|_F$ need not converge at all.
In contrast, our omnibus embedding builds an alignment of the graphs
into its very structure. To see this, consider, for simplicity, the $m=2$ case.
Let $\bX \in \R^{n \times d}$ be the matrix whose rows are the latent positions
of both graphs $G_1$ and $G_2$, and let $\bM \in \R^{2n \times 2n}$ be their
omnibus matrix.
Then
\begin{equation*}
\E \bM = \Ptilde = \begin{bmatrix} \bP & \bP \\ \bP & \bP \end{bmatrix}
= \begin{bmatrix} \bX \\ \bX \end{bmatrix}
\begin{bmatrix} \bX \\ \bX \end{bmatrix}^{\top}.
\end{equation*}
Suppose now that we wish to factorize $\Ptilde$ as
$$ \Ptilde = \begin{bmatrix} \bX \\ \bX \bW^* \end{bmatrix}
\begin{bmatrix} \bX \\ \bX \bW^* \end{bmatrix}^{\top}
= \begin{bmatrix} \bP & \bX (\bW^*)^{\top} \bX^{\top} \\
\bX \bW^* \bX^{\top} & \bP \end{bmatrix}. $$
That is, we want to consider graphs $G_1$ and $G_2$ as being generated from
the same latent positions,
but in one case, say, under a \emph{different} rotation.
This possibility necessitates the Procrustes alignment in the case of
separately-embedded graphs.
In the case of the omnibus matrix,
the structure of the $\Ptilde$ matrix implies that $\bW^*=\bI_d$.
In contrast to the Procrustes alignment,
the omnibus matrix incorporates an alignment {\em a priori}.
Simulations show that the omnibus embedding
outperforms the Procrustes-based test for equality of latent positions,
especially in the case of moderately-sized graphs.
To further illustrate the utility of this omnibus embedding, consider the case of testing whether three different random dot product graphs have the same generating latent positions. The omnibus embedding gives us a {\em single} canonical representation of all three graphs: Let $\Xhat^O_1$, $\Xhat^O_2$, and $\Xhat^O_3$ be the estimates for the three latent position matrices generated from the omnibus embedding.
To test whether any two of these random graphs have the same generating latent positions, we merely have to compare the Frobenius norms of their differences, as opposed to computing three separate Procrustes alignments.
In the latter case, in effect, we do not have a canonical choice of coordinates in which to compare our graphs simultaneously.
\subsection{Nonparametric graph estimation and testing}
\label{subsec:nonpar}
The semiparametric and omnibus testing procedures we describe both focus on the estimation of the latent positions themselves. But a very natural concern, for a random dot product graph with distribution $F$, is the estimation of the {\em distribution} $F$. We next address how the adjacency spectral embedding, judiciously integrated with kernel density estimation, can be used for nonparametric estimation and testing in random graphs.
Throughout our discussion on nonparametric estimation, we shall always assume that the distributions of the latent positions
satisfy the following distinct eigenvalues assumption. The assumption
implies that the estimates of the latent position obtained by the
adjacency spectral embedding will,
in the limit, be uniquely determined.
\begin{assumption}\label{ass:rank_F}
The distribution $F$ for the latent positions
$X_1, X_2, \dots \sim F$ is such that the second moment matrix
$\mathbb{E}[X_1 X_1^{\top}]$ has $d$ distinct eigenvalues and $d$ is known.
\end{assumption}
We realize that Assumption \ref{ass:rank_F} is restrictive; in
particular, it is not satisfied by the stochastic block model with
$K>2$ blocks of equal size and edge probabilities $p$ within
communities and $q$ between communities. Nevertheless, it is a necessary technical condition for us to obtain
the limiting results of Theorem~\ref{eq:lpg}.
The motivation behind this assumption is as follows: the matrix
$\mathbb{E}[X_1 X_1^{\top}]$ is of rank $d$ with $d$ known so
that given a graph $\mathbf{A} \sim \mathrm{RDPG}(F)$, one can
construct the adjacency spectral embedding of $\mathbf{A}$ into the
``right'' Euclidean space. The requirement that
$\mathbb{E}[X_1 X_1^{\top}]$ has $d$ distinct eigenvalues is---once again---due to the
intrinsic property of non-identifiability of random dot
product graphs. As always, for any random dot product graph
$\mathbf{A}$, the latent position $\mathbf{X}$ associated with
$\mathbf{A}$ can only be estimated up to some true but unknown
orthogonal transformation. Because we are concerned with two-sample
hypothesis testing, we must guard against the scenario in which we
have two graphs $\mathbf{A}$ and $\mathbf{B}$ with latent positions
$\mathbf{X} = \{X_i\}_{i=1}^{n} \overset{\mathrm{i.i.d}}{\sim} F$ and
$\mathbf{Y} = \{Y_k\}_{k=1}^{m} \overset{\mathrm{i.i.d}}{\sim} F$ but
whose estimates $\hat{\mathbf{X}}$ and $\hat{\mathbf{Y}}$ lie in
different, incommensurate subspaces of $\mathbb{R}^{d}$. That is to
say, the estimates $\hat{\mathbf{X}}$ and $\hat{\mathbf{Y}}$ satisfy
$\hat{\mathbf{X}} \approx \mathbf{X} \mathbf{W}_1$ and
$\hat{\mathbf{Y}} \approx \mathbf{Y} \mathbf{W}_2$, but
$\|\mathbf{W}_1 - \mathbf{W}_2\|_{F}$ does not converge to $0$ as $n,m
\rightarrow \infty$. See also \cite{fishkind15:_incom_phenom} for
exposition of a related so-called ``incommensurability phenomenon."
Our main point of departure for this subsection compared to Section~\ref{subsec:Testing} is the assumption
that, given a sequence of pairs of random dot product graphs with adjacency matrices $\mathbf{A}_n$ and $\mathbf{B}_n$,
the rows of the latent positions $\mathbf{X}_n$ and
$\mathbf{Y}_n$ are independent samples from some fixed distributions $F$ and $G$,
respectively. The corresponding tests are therefore tests of equality between $F$ and
$G$. More formally, we consider the following two-sample nonparametric testing
problems for random dot product graphs. Let $F$ and $G$ be two inner product distributions.
Given $\mathbf{A} \sim \mathrm{RDPG}(F)$ and $\mathbf{B} \sim \mathrm{RDPG}(G)$, we
consider the tests:
\begin{enumerate}
\item{\em (Equality, up to orthogonal transformation)}
\begin{align*}
H_{0} \colon F \upVdash G
\quad \text{against} \quad H_{A} \colon F \nupVdash G,
\end{align*}
where $F \upVdash G$ denotes that there exists a unitary operator
$U$ on $\mathbb{R}^{d}$ such that $F = G \circ U$ and $F \nupVdash
G$ denotes that $F \not = G \circ U$ for any unitary operator $U$ on
$\mathbb{R}^{d}$.
\item {\em (Equality, up to scaling)}
\begin{align*}
H_{0} \colon F \upVdash G \circ c \quad \text{for
some $c > 0$} \quad
\text{against} \quad H_{A} \colon F \nupVdash G \circ c \quad
\text{for any $c > 0$},
\end{align*}
where $Y \sim F \circ c$ if $cY \sim F$.
\item {\em (Equality, up to projection)} \begin{align*}
\quad H_{0} \colon F \circ \pi^{-1} \upVdash G
\circ \pi^{-1} \quad
\text{against} \quad H_{A} \colon F \circ \pi^{-1} \nupVdash G \circ \pi^{-1} ,
\end{align*}
where $\pi$ is the
projection $x \mapsto x/\|x\|$; hence $Y \sim F \circ
\pi^{-1}$ if $\pi^{-1}(Y) \sim F$.
\end{enumerate}
We note that the above null hypotheses are nested; $F \upVdash G$
implies $F \upVdash G \circ c $ for $c = 1$ while $F \upVdash G
\circ c$ for some $c > 0$ implies $F \circ \pi^{-1} \upVdash G \circ
\pi^{-1}$.
We shall address the above hypothesis testing problem by combining the framework of adjacency spectral embedding and
the kernel-based hypothesis testing framework of \cite{gretton12:_kernel_two_sampl_test}.
The testing procedure in \cite{gretton12:_kernel_two_sampl_test} is based on the following notion of the maximum mean discrepancy between
distributions.
Let $\Omega$ be a
compact metric space and
$\kappa \, \colon\, \Omega \times \Omega \mapsto \mathbb{R}$ a
continuous, symmetric, and positive definite kernel on
$\Omega$. Denote by $\mathcal{H}$ the reproducing kernel Hilbert space
associated with $\kappa$. Now let $F$ be a probability distribution on
$\Omega$. Under mild conditions on $\kappa$, the map $\mu[F]$ defined
by
\begin{equation*}
\mu[F] := \int_{\Omega} \kappa(\omega, \cdot) \, \mathrm{d} F(\omega)
\end{equation*}
belongs to $\mathcal{H}$.
Now, for given probability distributions $F$ and $G$ on $\Omega$, the
{\em maximum mean discrepancy} between $F$ and $G$ with respect to
$\mathcal{H}$ is the measure
\begin{equation*}
\mathrm{MMD}(F, G; \mathcal{H}) := \|\mu[F] - \mu[G]
\|_{\mathcal{H}}.
\end{equation*}
We now summarize some important properties of the maximum mean
discrepancy from \citep{gretton12:_kernel_two_sampl_test}.
\begin{theorem}
\label{thm:mmd_unbiased_limiting}
Let $\kappa
\, \colon \,
\Omega \times \Omega \mapsto \mathbb{R}$ be a positive definite
kernel and denote by $\mathcal{H}$ the reproducing kernel Hilbert space
associated with $\kappa$. Let $F$ and $G$ be probability distributions
on $\Omega$, let $X$ and $X'$ be independent random variables with
distribution $F$, let $Y$ and $Y'$ be independent random variables with
distribution $G$, and suppose that $X$ is independent of $Y$. Then
\begin{equation}
\label{eq:5}
\begin{split}
\| \mu[F] - \mu[G] \|^{2}_{\mathcal{H}} &= \sup_{h \in \mathcal{H}
\colon \|h\|_{\mathcal{H}} \leq 1} |\mathbb{E}_{F}[h] -
\mathbb{E}_{G}[h]|^{2} \\ &= \mathbb{E}[\kappa(X,X')] - 2 \mathbb{E}[\kappa(X,Y)]
+ \mathbb{E}[\kappa(Y,Y')].
\end{split}
\end{equation}
Given $\mathbf{X} = \{X_i\}_{i=1}^{n}$ and $\mathbf{Y}
= \{Y_k\}_{k=1}^{m}$ with $\{X_i\}
\overset{\mathrm{i.i.d}}{\sim} F$ and $\{Y_i\}
\overset{\mathrm{i.i.d}}{\sim} G$,
the quantity $U_{n,m}({\bf X}, {\bf Y})$ defined by
\begin{equation}
\label{eq:10}
\begin{split}
U_{n,m}({\bf X}, {\bf Y})
&= \frac{1}{n(n-1)}
\sum_{j\not = i} \kappa(X_i,X_j)
- \frac{2}{mn} \sum_{i=1}^{n} \sum_{k=1}^{m} \kappa(X_i, Y_k) \\ &+
\frac{1}{m(m-1)} \sum_{l \not = k} \kappa(Y_k, Y_l)
\end{split}
\end{equation}
is an {unbiased consistent estimate} of $\|\mu[F] -
\mu[G]\|_{\mathcal{H}}^{2}$. Denote by $\tilde{\kappa}$ the kernel
\begin{equation*}
\begin{split}
\tilde{\kappa}(x,y) &= \kappa(x,y) - \mathbb{E}_{z}
\kappa(x, z) - \mathbb{E}_{z'} \kappa(z', y) +
\mathbb{E}_{z,z'} \kappa(z,z')
\end{split}
\end{equation*}
where the expectation is taken with respect to $z, z' \sim F$.
Suppose that $\tfrac{m}{m+n} \rightarrow
\rho \in (0,1)$ as $m, n \rightarrow \infty$. Then under the null
hypothesis of $F = G$,
\begin{equation}
\label{eq:mmd-X}
(m+n) U_{n,m}(\mathbf{X}, \mathbf{Y})
\overset{d}{\longrightarrow} \frac{1}{\rho(1 - \rho)} \sum_{l=1}^{\infty}
\lambda_{l} (\chi^{2}_{1l} - 1)
\end{equation}
where $\{\chi^{2}_{1l}\}_{l=1}^\infty$ is a sequence of independent $\chi^{2}$
random variables with one degree of freedom, and $\{\lambda_{l}\}$
are the eigenvalues of the integral operator $\mathcal{I}_{F,
\tilde{\kappa}}:\mathcal{H}\mapsto \mathcal{H}$ defined as
\begin{equation*}
\mathcal{I}_{F, \tilde{\kappa}}(\phi)(x)=\int_{\Omega} \phi(y)\tilde{\kappa}(x,y) \, dF(y).
\end{equation*}
Finally, if $\kappa$ is a universal or
characteristic kernel \citep{sriperumbudur11:_univer_charac_kernel_rkhs_embed_measur,
steinwart01:_suppor_vector_machin}, then $\mu$
is an injective map, i.e., $\mu[F] = \mu[G]$ if and only if $F = G$.
\end{theorem}
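To make the estimator in Eq.~\eqref{eq:10} concrete, the following short Python sketch computes $U_{n,m}$ from two samples; the Gaussian kernel and its bandwidth are illustrative choices of ours, not requirements of the theorem.

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    """Gram matrix of the Gaussian (radial) kernel kappa(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def mmd_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate U_{n,m}(X, Y) of MMD^2(F, G; H): off-diagonal within-sample
    means minus twice the cross-sample mean, as in Eq. (10)."""
    n, m = len(X), len(Y)
    Kxx = gaussian_gram(X, X, sigma)
    Kyy = gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            - 2.0 * Kxy.mean()
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1)))
```

On two samples from the same distribution this statistic concentrates near zero, while a location shift of the second sample produces a clearly positive value.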
\begin{remark}
A kernel $\kappa \colon \mathcal{X} \times \mathcal{X} \mapsto
\mathbb{R}$ is universal if $\kappa$ is a continuous function of
both its arguments and if the reproducing kernel Hilbert space
$\mathcal{H}$ induced by $\kappa$ is dense in the space of
continuous functions on $\mathcal{X}$ with respect to the supremum
norm. Let $\mathcal{M}$ be a family of Borel probability measures on
$\mathcal{X}$. A kernel $\kappa$ is characteristic for $\mathcal{M}$
if the map $\mu \in \mathcal{M} \mapsto \int \kappa(\cdot, z) \mu(
dz)$ is injective. If $\kappa$ is universal, then $\kappa$ is
characteristic for any $\mathcal{M}$
\citep{sriperumbudur11:_univer_charac_kernel_rkhs_embed_measur}. As
an example, let $\mathcal{X}$ be a finite dimensional Euclidean
space and define, for any $q \in (0,2)$, $k_{q}(x,y) =
\tfrac{1}{2}(\|x\|^{q} + \|y\|^{q} - \|x - y\|^{q})$. The kernels
$k_{q}$ are then characteristic for the collection of probability
distributions with finite second moments
\citep{lyons11:_distan,sejdinovic13:_equiv_rkhs}. In addition, by
Eq.~\eqref{eq:5}, the maximum mean discrepancy with reproducing
kernel $k_{q}$ can be written as
\begin{equation*}
2\,\mathrm{MMD}^{2}(F,G;k_{q}) = 2 \mathbb{E} \|X - Y\|^{q} -
\mathbb{E}\|X - X'\|^{q} - \mathbb{E} \|Y - Y'\|^{q},
\end{equation*}
where $X, X'$ are
independent with distribution $F$, $Y, Y'$ are independent with
distribution $G$, and $X, Y$ are independent. This coincides with the
notion of the energy distances of \citep{szekely13:_energ}, or, when $q
= 1$, a special case of the one-dimensional interpoint comparisons
of \citep{maa96:_reduc}. Finally, we note that $(m+n)U_{n,m}(\mathbf{X}, \mathbf{Y})$ under the null hypothesis of
$F = G$ in Theorem~\ref{thm:mmd_unbiased_limiting} depends
on the $\{\lambda_l\}$ which, in turn, depend on the distribution
$F$; thus the limiting distribution is not distribution-free.
Moreover the eigenvalues $\{\lambda_l\}$ can, at best, be estimated;
for finite $n$, they cannot be explicitly determined when $F$ is
unknown. In practice, generally the critical values are estimated
through a bootstrap resampling or permutation test.
\end{remark}
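As a concrete illustration of the distance-induced kernels $k_q$, the following sketch (function names ours) computes both the sample energy distance and the V-statistic (biased) version of the maximum mean discrepancy under $k_q$; on any pair of samples the two agree up to the factor of two induced by the $\tfrac{1}{2}$ in the definition of $k_q$.

```python
import numpy as np

def kq_gram(X, Y, q=1.0):
    """Gram matrix of the distance-induced kernel k_q(x, y) = (||x||^q + ||y||^q - ||x - y||^q) / 2."""
    nx = np.linalg.norm(X, axis=1)[:, None] ** q
    ny = np.linalg.norm(Y, axis=1)[None, :] ** q
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2) ** q
    return 0.5 * (nx + ny - d)

def energy_distance(X, Y, q=1.0):
    """Sample version of 2 E||X - Y||^q - E||X - X'||^q - E||Y - Y'||^q."""
    dxy = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2) ** q
    dxx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** q
    dyy = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2) ** q
    return 2.0 * dxy.mean() - dxx.mean() - dyy.mean()

def mmd_v(X, Y, q=1.0):
    """Biased (V-statistic) MMD^2 under k_q; equals half the sample energy distance."""
    return kq_gram(X, X, q).mean() - 2.0 * kq_gram(X, Y, q).mean() + kq_gram(Y, Y, q).mean()
```

The unbiased U-statistic of Eq.~\eqref{eq:10} differs from the V-statistic used here only in the treatment of the diagonal terms.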
We focus on the nonparametric two-sample hypothesis test of $\mathbb{H}_0 \colon F \upVdash G$ against $\mathbb{H}_A \colon F \nupVdash G$.
For our purposes, we shall assume
henceforth that $\kappa$ is a twice continuously-differentiable radial
kernel and that $\kappa$ is also universal.
To justify this assumption on our kernel, we point out that in Theorem~\ref{thm:mmd_unbiased_ase} below, we show that the test
statistic $U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}})$ based on the
estimated latent positions converges to the corresponding statistic
$U_{n,m}(\mathbf{X}, \mathbf{Y})$ for the true but unknown latent
positions. Due to the non-identifiability of the random dot product
graph under unitary transformation, \emph{any} estimate of the latent
positions is close, only up to an appropriate orthogonal transformation, to
$\mathbf{X}$ and $\mathbf{Y}$. For a radial kernel $\kappa$, this implies the approximations
$\kappa(\hat{X}_i, \hat{X}_j) \approx \kappa(X_i, X_j)$,
$\kappa(\hat{Y}_k,\hat{Y}_l) \approx \kappa(Y_k, Y_l)$ and the
convergence of $U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}})$ to
$U_{n,m}(\mathbf{X}, \mathbf{Y})$. If $\kappa$ is
not a radial kernel, the above approximations might not hold and
$U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}})$ need not converge to
$U_{n,m}(\mathbf{X}, \mathbf{Y})$. The assumption that
$\kappa$ is twice continuously-differentiable is technical. Finally, the
assumption that $\kappa$ is universal allows the test procedure to be
consistent against a large class of alternatives.
\begin{theorem}
\label{thm:mmd_unbiased_ase}
Let $(\mathbf{X}, \mathbf{A}) \sim \mathrm{RDPG}(F)$ and
$(\mathbf{Y}, \mathbf{B}) \sim \mathrm{RDPG}(G)$ be independent
random dot product graphs with latent position distributions $F$ and
$G$. Furthermore, suppose that both $F$ and $G$ satisfy the
distinct eigenvalues condition in Assumption~\ref{ass:rank_F}.
Consider the hypothesis test
\begin{align*}
H_{0} \colon F \upVdash G \quad
\text{against} \quad H_{A} \colon F \nupVdash G.
\end{align*}
Denote by $\hat{\mathbf{X}} = \{\hat{X}_1, \dots, \hat{X}_n\}$
and $\hat{\mathbf{Y}} = \{\hat{Y}_1, \dots, \hat{Y}_m\}$ the
adjacency spectral embedding of $\mathbf{A}$ and
$\mathbf{B}$, respectively.
Let $\kappa$ be a twice continuously-differentiable radial kernel and $U_{n,m}(\hat{\mathbf{X}},\hat{\mathbf{Y}})$ be defined as
\begin{equation*}
\begin{split}
U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}}) &= \frac{1}{n(n-1)}
\sum_{j \not = i}
\kappa(\hat{X}_i,
\hat{X}_j) - \frac{2}{mn} \sum_{i=1}^{n}
\sum_{k=1}^{m} \kappa(\hat{X}_i,
\hat{Y}_k) + \frac{1}{m(m-1)} \sum_{l \not = k} \kappa(\hat{Y}_k, \hat{Y}_l).
\end{split}
\end{equation*}
Let $\mathbf{W}_1$
and $\mathbf{W}_{2}$
be $d \times d$ orthogonal matrices in the eigendecomposition
$\mathbf{W}_1 \mathbf{S}_1 \mathbf{W}_{1}^{\top} = \mathbf{X}^{\top}
\mathbf{X}$ and $\mathbf{W}_{2} \mathbf{S}_{2} \mathbf{W}_{2}^{\top} =
\mathbf{Y}^{\top} \mathbf{Y}$, respectively.
Suppose that $m, n \rightarrow \infty$
and $m/(m+n) \rightarrow \rho \in (0,1)$. Then under the null
hypothesis of $F \upVdash G$, the sequence of matrices $\mathbf{W}_{n,m} = \mathbf{W}_{2} \mathbf{W}_{1}^{\top}$ satisfies
\begin{equation}
\label{eq:conv_mmdXhat_null}
(m+n) (U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}}) -
U_{n,m}(\mathbf{X}, \mathbf{Y} \mathbf{W}_{n,m})) \overset{\mathrm{a.s.}}{\longrightarrow} 0.
\end{equation}
Under the alternative hypothesis of $F \nupVdash G$, the sequence of
matrices ${\bf W}_{n,m} $ satisfies
\begin{equation}
\label{eq:conv_mmdXhat_alt}
\frac{m+n}{\log^2{\!(m+n)}}
(U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}}) - U_{n,m}(\mathbf{X},
\mathbf{Y} \mathbf{W}_{n,m})) \overset{\mathrm{a.s.}}{\longrightarrow} 0.
\end{equation}
\end{theorem}
Eq.\eqref{eq:conv_mmdXhat_null} and Eq.\eqref{eq:conv_mmdXhat_alt}
state that the test statistic $U_{n,m}(\hat{\mathbf{X}},
\hat{\mathbf{Y}})$ using the {\em estimated} latent positions is
almost identical to the statistic $U_{n,m}(\mathbf{X}, \mathbf{Y} \mathbf{W}_{n,m})$
using the true latent positions, under
both the null and alternative hypotheses. If we assume that $\kappa$ is a universal
kernel, then $U_{n,m}(\mathbf{X}, \mathbf{Y} \mathbf{W}_{n,m})$ converges
to $0$ under the null and converges to a positive number under the
alternative. The test statistic $U_{n,m}(\hat{\mathbf{X}},
\hat{\mathbf{Y}})$ therefore yields a test procedure that is
consistent against any alternative, provided that both $F$ and $G$
satisfy Assumption~\ref{ass:rank_F}, namely that the second moment
matrices have $d$ distinct eigenvalues.
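Putting the pieces together, a minimal end-to-end sketch of the test is as follows. The embedding routine, kernel bandwidth, column sign-flipping heuristic (a crude stand-in for the alignment matrix $\mathbf{W}_{n,m}$), and the permutation approximation of the null are all illustrative choices of ours; in practice one may instead calibrate critical values by bootstrapping.

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenpairs of A by magnitude, scaled.
    Columns are sign-flipped (column sum >= 0) as a crude heuristic for the
    orthogonal nonidentifiability discussed in the text."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    vecs, vals = vecs[:, idx], vals[idx]
    vecs = vecs * np.where(vecs.sum(axis=0) >= 0, 1.0, -1.0)
    return vecs * np.sqrt(np.abs(vals))

def u_stat(X, Y, sigma=0.5):
    """Unbiased Gaussian-kernel estimate U_{n,m}(X, Y) of MMD^2."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-sq / (2.0 * sigma ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            - 2.0 * Kxy.mean()
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1)))

def permutation_test(Xhat, Yhat, sigma=0.5, n_perm=200, seed=0):
    """Approximate p-value obtained by permuting the pooled embedded points."""
    rng = np.random.default_rng(seed)
    obs = u_stat(Xhat, Yhat, sigma)
    pooled = np.vstack([Xhat, Yhat])
    n = len(Xhat)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        count += u_stat(pooled[perm[:n]], pooled[perm[n:]], sigma) >= obs
    return (count + 0.5) / (n_perm + 1)
```

For two Erd\H{o}s--R\'enyi graphs of very different densities, for example, the statistic on the one-dimensional embeddings is clearly positive and the permutation p-value is small.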
We next consider the case of testing the hypothesis that the
distributions $F$ and $G$ are equal up to scaling or equal up to projection. For the test of equality up to scaling,
\begin{align*}
\quad H_{0} \colon F \upVdash G \circ c \quad \text{for
some $c > 0$}
\quad \text{against} \quad H_{A} \colon F \nupVdash G \circ c \quad
\text{for any $c > 0$},
\end{align*}
where $Y \sim F \circ c$ if $cY \sim F$,
we modify the procedure of
Theorem~\ref{thm:mmd_unbiased_ase} by first rescaling the adjacency
spectral embeddings by their root-mean-square row norms before
computing the kernel test statistic.
In particular, let
\begin{align*}
\hat{s}_{X}=n^{-1/2}\|\hat{\mathbf{X}}\|_{F}, \quad \hat{s}_{Y}=m^{-1/2}
\|\hat{\mathbf{Y}}\|_{F}, \quad
s_{X}= n^{-1/2} \|\mathbf{X}\|_{F},\quad s_{Y} =
m^{-1/2} \|\mathbf{Y}\|_{F},
\end{align*}
then the conclusions of Theorem~\ref{thm:mmd_unbiased_ase} hold
when we replace $U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}})$ and $U_{n,m}(\mathbf{X}, \mathbf{Y} \mathbf{W}_{n,m})$ with
$U_{n,m}(\hat{\mathbf{X}}/\hat{s}_{X},\hat{\mathbf{Y}}/\hat{s}_{Y})$ and
$U_{n,m}(\mathbf{X}/s_{X},\mathbf{Y} \mathbf{W}_{n,m}/s_{Y})$, respectively.
For the test of equality up to projection,
\begin{align*}
\quad H_{0} \colon F \circ \pi^{-1} \upVdash G
\circ \pi^{-1} \quad \text{against} \quad H_{A} \colon F \circ \pi^{-1} \nupVdash G \circ \pi^{-1} ,
\end{align*}
where $\pi$ is the
projection $x \mapsto x/\|x\|$ that maps $x$ onto the unit sphere
in $\mathbb{R}^{d}$,
the conclusions of
Theorem~\ref{thm:mmd_unbiased_ase} hold when we replace
$U_{n,m}(\hat{\mathbf{X}}, \hat{\mathbf{Y}})$ and $U_{n,m}(\mathbf{X}, \mathbf{Y} \mathbf{W}_{n,m})$ with
$U_{n,m}(\pi(\hat{\mathbf{X}}), \pi(\hat{\mathbf{Y}}))$ and $U_{n,m}(\pi(\mathbf{X}),\pi(\mathbf{Y})
\mathbf{W}_{n,m})$, respectively, provided that
$0$ is not an atom of either $F$ or $G$, i.e., $F(\{0\}) = G(\{0\}) = 0$.
The assumption that $0$ is not an atom is necessary,
because otherwise the problem is possibly ill-posed: specifically, $\pi(0)$ is
undefined. To contextualize the test of equality up to projection,
consider the very specific case of the degree-corrected stochastic
blockmodel \cite{karrer2011stochastic}. A degree-corrected stochastic
blockmodel can be regarded as a random dot product graph whose latent
position $X_v$ for an arbitrary vertex $v$ is of the form $X_v =
\theta_v \nu_{v}$ where $\nu_v$ is sampled from a mixture of point
masses and $\theta_v$ (the degree-correction factor) is sampled from a distribution on
$(0,1]$. Thus, given two degree-corrected stochastic blockmodel
graphs, equality up to projection tests whether the
underlying mixtures of point masses (that is, the distribution of the
$\nu_v$) are the same modulo the distribution of the degree-correction
factors $\theta_v$.
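In implementation terms, the scaling and projection variants of the test amount to simple pre-transformations of the embeddings before the kernel statistic is computed; a minimal sketch (function names ours):

```python
import numpy as np

def scale_embedding(Xhat):
    """Rescale by s_X = n^{-1/2} ||Xhat||_F, as in the equality-up-to-scaling test."""
    s = np.linalg.norm(Xhat) / np.sqrt(len(Xhat))
    return Xhat / s

def project_embedding(Xhat, eps=1e-12):
    """Row-wise projection pi(x) = x / ||x|| for the equality-up-to-projection test;
    assumes 0 is not an atom, i.e., no row is (numerically) zero."""
    norms = np.linalg.norm(Xhat, axis=1, keepdims=True)
    return Xhat / np.maximum(norms, eps)
```

After either transformation, the same kernel statistic $U_{n,m}$ is applied to the transformed point sets.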
\section{Applications}\label{sec:Applications}
We begin this section by first presenting an application of the two-sample semiparametric test procedure in Section~\ref{subsec:Testing}, demonstrating how it
can be applied to compare data from a collection of neural images.
\subsection{Semiparametric testing for brain scan data}\label{subsec:semipar_cci}
We consider neural imaging graphs obtained from the test-retest
diffusion MRI and magnetization-prepared rapid acquisition gradient echo (MPRAGE) data of \cite{landman_KKI}. The raw data consist of 42 images: namely, one pair of neural
images from each of 21 subjects. These images are generated, in part, for the
purpose of evaluating scan-rescan reproducibility of the
MPRAGE image protocol. Table 5 from \cite{landman_KKI} indicates that the
variability of MPRAGE is quite small; specifically, the cortical gray
matter, cortical white matter, ventricular cerebrospinal fluid,
thalamus, putamen, caudate, cerebellar gray matter, cerebellar white
matter, and brainstem were identified with mean volume-wise
reproducibility of $3.5\%$, with the largest variability being that of
the ventricular cerebrospinal fluid at $11\%$.
We use the MIGRAINE pipeline of \cite{gray_migraine} to convert these scans into spatially-aligned graphs, i.e., graphs in which each vertex corresponds to a particular voxel in a reference coordinate system to which the image is registered. We first consider a
collection of small graphs on seventy vertices that are generated from an atlas of
seventy brain regions and the fibers connecting them. Given these
graphs, we proceed to investigate the similarities and dissimilarities
between the scans. We first embed each graph into
$\mathbb{R}^{4}$. We then test the hypothesis of equality up to
rotation between the graphs. Since Theorem~\ref{thm:identity} is a large-sample result, the rejection region specified therein might be excessively conservative for the graphs on $n = 70$ vertices in our current context. We remedy this issue by using the rejection region and $p$-values reported by the parametric bootstrapping procedure presented in Algorithm~\ref{bootstrap-simple}.
\begin{algorithm}[tb]
\caption{Bootstrapping procedure for the test $\mathbb{H}_0 \colon \mathbf{X} =_{W} \mathbf{Y}$.}
\label{bootstrap-simple}
\begin{algorithmic}[1]
\Procedure{Bootstrap}{$\mathbf{X}, T, bs$} \Comment{Returns the p-value associated with $T$.}
\State $d \gets \mathrm{ncol}(\mathbf{X})$; \quad $\mathcal{S}_{X} \gets \emptyset$ \Comment{Set $d$ to be the number of columns of $\mathbf{X}$.}
\For{$b \gets 1 \colon bs$}
\State $\mathbf{A}_b \gets \mathrm{RDPG}(\mathbf{X}); \quad \mathbf{B}_b \gets \mathrm{RDPG}(\mathbf{X})$
\State $\hat{\mathbf{X}}_b \gets \mathrm{ASE}(\mathbf{A}_b, d); \quad \hat{\mathbf{Y}}_b \gets \mathrm{ASE}(\mathbf{B}_b, d)$
\State $T_b \gets \min_{\mathbf{W}} \| \hat{\mathbf{X}}_{b} - \hat{\mathbf{Y}}_{b} \mathbf{W} \|_{F}; \qquad \mathcal{S}_{X} \gets \mathcal{S}_X \cup T_b$
\EndFor
\State \Return $p \gets (|\{s \in \mathcal{S}_X \colon s \geq T\}| + 0.5)/bs$ \Comment{Continuity correction.}
\EndProcedure \\
\State $\hat{\mathbf{X}} \gets \mathrm{ASE}(\mathbf{A}, d)$; $\hat{\mathbf{Y}} \gets \mathrm{ASE}(\mathbf{B}, d)$ \Comment{The embedding dimension $d$ is assumed given.}
\State $T \gets \min_{\mathbf{W}} \|\hat{\mathbf{X}} - \hat{\mathbf{Y}} \mathbf{W} \|_{F}$
\State $p_{X} \gets \mathrm{Bootstrap}(\hat{\mathbf{X}}, T, bs)$ \Comment{The number of bootstrap samples $bs$ is assumed given.}
\State $p_{Y} \gets \mathrm{Bootstrap}(\hat{\mathbf{Y}}, T, bs)$
\State $p = \max\{p_X, p_Y\}$ \Comment{Returns the maximum of the two p-values.}
\end{algorithmic}
\end{algorithm}
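A direct Python transcription of Algorithm~\ref{bootstrap-simple} might look as follows; the ASE and RDPG samplers here are simplified for illustration (probabilities are clipped to $[0,1]$, and graphs are taken to be undirected and hollow), and the Procrustes minimization is solved in closed form via the singular value decomposition.

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenpairs of A by magnitude, scaled."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def procrustes_stat(X, Y):
    """T = min_W ||X - Y W||_F over orthogonal W (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return np.linalg.norm(X - Y @ (U @ Vt))

def sample_rdpg(X, rng):
    """Sample a hollow symmetric adjacency matrix with P = clip(X X^T, 0, 1)."""
    P = np.clip(X @ X.T, 0.0, 1.0)
    A = np.triu((rng.random(P.shape) < P).astype(float), 1)
    return A + A.T

def bootstrap_pvalue(Xhat, T, bs=100, seed=0):
    """Parametric bootstrap p-value, with the continuity correction of the algorithm."""
    rng = np.random.default_rng(seed)
    d = Xhat.shape[1]
    count = 0
    for _ in range(bs):
        Tb = procrustes_stat(ase(sample_rdpg(Xhat, rng), d),
                             ase(sample_rdpg(Xhat, rng), d))
        count += Tb >= T
    return (count + 0.5) / bs
```

As in the algorithm, the procedure is run once seeded with $\hat{\mathbf{X}}$ and once with $\hat{\mathbf{Y}}$, and the larger of the two p-values is reported.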
The pairwise comparisons between the $42$ graphs are
presented in Figure~\ref{fig:kki-small-identity}. Figure~\ref{fig:kki-small-identity} indicates that, in general, the
test procedure fails to reject the null hypothesis when the two graphs
are for the same subject. This is consistent with the reproducibility
finding of \cite{landman_KKI}. Furthermore, this outcome is also
intuitively plausible; in addition to failing to reject when two scans
are from the same subject, we also frequently {\em do} reject the null
hypothesis when the two graphs are from scans of different
subjects. Note that our analysis is purely exploratory; as such, we do
not grapple with issues of multiple comparisons here.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{kki-small-poisson.pdf}
\caption{Matrix of p-values (uncorrected) for testing the hypothesis
$\mathbb{H}_0 \colon \mathbf{X} =_{W} \mathbf{Y}$ for the $42
\times 41/2$ pairs of graphs generated from the KKI test-retest
dataset \cite{landman_KKI}. The labels are arranged so
that the pairs $(2i-1,2i)$ correspond to scans from the same
subject. The $p$-values are color coded to vary in intensity from
white ($p$-value of $0$) to dark red ($p$-value of $1$). Figure duplicated from \cite{tang14:_semipar}.}
\label{fig:kki-small-identity}
\end{figure}
Finally, we note that similar results also hold when we consider the large graphs generated from
these test-retest data through the MIGRAINE pipeline. In particular, for each magnetic resonance scan, the MIGRAINE
pipeline can generate graphs with up to $10^7$ vertices and $10^{10}$
edges, with the vertices of all the graphs aligned. Bootstrapping the test statistic for these large
graphs presents some practical difficulties; one procedure proposed in \cite{tang14:_semipar} is based on bootstrapping disjoint induced subgraphs of the original graphs using the bootstrapping procedure in Algorithm~\ref{bootstrap-simple} and combining the resulting $p$-values using Fisher's combined probability test \cite{mosteller48:_quest_answer}.
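Fisher's combined probability test itself is straightforward to implement: the statistic $-2\sum_i \log p_i$ is referred to a $\chi^2$ distribution with $2k$ degrees of freedom, whose survival function is available in closed form for even degrees of freedom. A minimal sketch:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combined probability test: compare -2 sum(log p_i) to a chi^2
    distribution with 2k degrees of freedom, using the closed-form survival
    function for even degrees of freedom."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # x^2 / 2, with x^2 = -2 sum log p_i
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
```

For a single p-value the combination is the identity, and uniformly small subgraph p-values yield a small combined p-value.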
\subsection{Community detection and classification in hierarchical models}\label{subsec:HSBM}
In disciplines as diverse as social
network analysis and neuroscience, many large graphs are believed to
be composed of loosely connected communities and/or smaller graph primitives whose
structure is more amenable to analysis.
We emphasize that while the problem of community detection is very well-studied and community detection algorithms abound, these algorithms have focused
mostly on uncovering the subgraphs themselves. Recently, however, the characterization and further
{\em classification} of these subgraphs into stochastically similar motifs
has emerged as an important area of ongoing research.
The nonparametric two-sample hypothesis testing procedure in Section~\ref{subsec:nonpar} can be used in conjunction with spectral community detection algorithms to yield a robust, scalable, integrated methodology for {\em community detection} and
{\em community comparison} in graphs \cite{lyzinski15_HSBM}.
The notion of {\em hierarchical stochastic block model}---namely, a graph consisting of densely connected subcommunities which are themselves stochastic block models, with this structure iterated repeatedly---is precisely formulated in \cite{lyzinski15_HSBM}. In that work, a novel angle-based clustering method is introduced, and this clustering method allows us to isolate appropriate subgraphs. We emphasize that the angle-based clustering procedure in \cite{lyzinski15_HSBM} is designed to identify a particular affinity structure within our hierarchical block model graph. Figure \ref{fig:hsbm_cluster} illustrates how an angle-based clustering may differ from a $k$-means clustering.
\begin{figure}[tph]
\centering
\includegraphics[width=0.25\textwidth]{hsbm-Fig3-newer-crop}
\caption{Subgraphs vs. angle-based clustering: Note that if the fraction of points in the pink cone is sufficiently large, $K$-means clustering (with $K=2)$ will not cluster the vertices in the hierarchical SBM affinity model of \cite{lyzinski15_HSBM} into the appropriate subgraphs. Figure duplicated from \cite{lyzinski15_HSBM}. }
\label{fig:hsbm_cluster}
\end{figure}
Our overall community detection algorithm is summarized in Algorithm~\ref{alg:main}.
As an illustrative example of this methodology, we present an analysis of the
communities in the Friendster social network. The Friendster social network
contains roughly $60$ million users and $2$ billion
connections/edges.
In addition, there are roughly $1$ million communities at the local scale.
Because we expect social interactions to inform the function of the different communities, we expect to observe distributional repetition among the subgraphs associated with these communities.
\begin{algorithm}[t!]
\begin{algorithmic}
\State \textbf{Input}: Adjacency matrix $A\in
\{0,1\}^{n\times n}$ for a latent position random graph.
\State \textbf{Output}: Subgraphs and characterization of their dissimilarity
\While {Cluster size exceeds threshold}
\State {\em Step 1}:
Compute the adjacency spectral embedding $\hat{\mathbf{X}}$ of $A$ into $\mathbb{R}^{D}$;
\State {\em Step 2}: Cluster $\hat{\mathbf{X}}$ to obtain subgraphs $\hat{H}_1,
\cdots, \hat{H}_R$ using a novel angle-based clustering procedure given in \cite{lyzinski15_HSBM}.
\State {\em Step 3}: For each $i\in[R],$ compute the adjacency
spectral embedding for each subgraph
$\hat{H}_i$ into $\mathbb{R}^{d}$, obtaining $\hat{\mathbf{X}}_{\hat{H}_i}$;
\State \textit{Step 4}: Compute $\widehat S:=[U_{\hat n_r,\hat n_s}(\hat{\mathbf{X}}_{\hat{H}_r}, \hat{\mathbf{X}}_{\hat{H}_s})]$, where $U$ is the test statistic in Theorem~\ref{thm:mmd_unbiased_ase}, producing a pairwise dissimilarity matrix on induced subgraphs;
\State \textit{Step 5}: Cluster induced subgraphs into motifs using the dissimilarities given in $\widehat S$; e.g., use a hierarchical clustering algorithm to cluster the rows of $\widehat{S}$ or the matrix of associated $p$-values.
\State \textit{Step 6}: Recurse on a representative subgraph from each motif (e.g., the largest subgraph), embedding into $\mathbb{R}^d$ in Step 1 (not $\mathbb{R}^D$);
\EndWhile
\end{algorithmic}
\caption{Detecting hierarchical structure for graphs}
\label{alg:main}
\end{algorithm}
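To convey the flow of Steps 1 through 4, the following self-contained sketch substitutes plain $k$-means for the angle-based clustering of \cite{lyzinski15_HSBM} and uses a Gaussian-kernel $U$-statistic for the dissimilarities; all names, defaults, and these substitutions are ours and purely illustrative.

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding into R^d (top-d eigenpairs by magnitude, scaled)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def u_stat(X, Y, sigma=0.5):
    """Unbiased Gaussian-kernel MMD^2 statistic U_{n,m}."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-sq / (2.0 * sigma ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            - 2.0 * Kxy.mean()
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1)))

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; a stand-in for the angle-based clustering of Step 2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def motif_dissimilarity(A, D, d, k, sigma=0.5):
    """Steps 1-4: embed A into R^D, cluster, re-embed each induced subgraph into R^d,
    and fill the pairwise dissimilarity matrix S-hat with the U statistic."""
    labels = kmeans(ase(A, D), k)
    embeds = [ase(A[np.ix_(labels == j, labels == j)], d) for j in range(k)]
    S = np.zeros((k, k))
    for r in range(k):
        for s in range(r + 1, k):
            S[r, s] = S[s, r] = u_stat(embeds[r], embeds[s], sigma)
    return S
```

Step 5 then clusters the rows of $\widehat{S}$ (e.g., hierarchically), and Step 6 recurses on a representative subgraph from each motif.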
Implementing Algorithm \ref{alg:main} on the very large Friendster
graph presents several computational challenges and model selection quagmires.
To overcome the computational challenge
in scalability, we use the specialized SSD-based graph processing
engine \texttt{FlashGraph} \cite{zheng_flashgraph}, which is
designed to analyze graphs with billions of nodes. With
\texttt{FlashGraph}, we adjacency spectral embed the Friendster
adjacency matrix into $\mathbb{R}^{14}$---where $\widehat D=14$ is
chosen using universal singular value thresholding on the partial scree plot \cite{chatterjee2015}.
We next cluster the embedded points into $\widehat R=15$ large-scale/coarse-grained clusters ranging
in size from $10^6$ to 15.6 million vertices (note that to alleviate sparsity concerns, we projected the embedding onto the sphere before clustering). After re-embedding the induced subgraphs associated with these $15$ clusters, we use a
linear time estimate of the test statistic $U$ to compute $\widehat
S$, the matrix of estimated pairwise dissimilarities among the
subgraphs.
See Figure
\ref{fig:friend} for a heat map depicting
$\widehat{S}\in\mathbb{R}^{15\times 15}$. In the heat map, the
similarity of the communities is represented on the spectrum between
white and red, with white representing highly similar communities and
red representing highly dissimilar communities.
From the figure, we can see clear repetition in the subgraph distributions; for example, we see a repeated motif including subgraphs $\{\hat{H}_5, \hat{H}_4,\hat{H}_3,\hat{H}_2\}$ and a clear motif including subgraphs $\{\hat{H}_{10},\hat{H}_{12},\hat{H}_9\}$.
Formalizing the motif detection step, we employ hierarchical
clustering to cluster $\widehat S$ into motifs; see Figure
\ref{fig:friend} for the corresponding hierarchical clustering
dendrogram, which suggests that our algorithm does in fact uncover
repeated motif structure at the coarse-grained level in the Friendster
graph. While it may be difficult to draw meaningful inference from
repeated motifs at the scale of hundreds of thousands to millions of
vertices, if these motifs are capturing a common HSBM structure within
the subgraphs in the motif, then we can employ our algorithm
recursively on each motif to tease out further hierarchical structure.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{Shat-level1-new.pdf}
\caption{Heat map depiction of the level one Friendster estimated dissimilarity matrix $\widehat{S}\in\mathbb{R}^{15\times 15}$. In the heat map, the similarity of the communities is represented on the spectrum between white and red, with white representing highly similar communities and red representing highly dissimilar communities.
In addition, we cluster $\widehat S$ using hierarchical clustering and display the associated hierarchical clustering dendrogram. Figure duplicated from \cite{lyzinski15_HSBM}.}
\label{fig:friend}
\end{figure}
Exploring this further, we consider three subgraphs $\{\hat{
H}_2,\hat{H}_{8},\hat{H}_{15}\}$, two of which (8 and 15) are in the same
motif, with both differing significantly from subgraph 2
according to $\widehat S$.
We embed these subgraphs into
$\mathbb{R}^{26}$ (with $26$ chosen once again using the universal singular value thresholding of \cite{chatterjee2015}),
perform a Procrustes alignment of the vertex sets of the three
subgraphs, cluster each into $4$ clusters (with $4$ chosen
to optimize silhouette width in $k$-means clustering), and estimate both the block connection probability matrices,
$$\hat P_2=\begin{bmatrix}
0.000045& 0.00080& 0.00056& 0.00047\\
0.00080& 0.025& 0.0096& 0.0072\\
0.00057& 0.0096& 0.012& 0.0067\\
0.00047& 0.0072& 0.0067& 0.023
\end{bmatrix},$$
$$\hat P_8=\begin{bmatrix}
0.0000022& 0.000031& 0.000071& 0.000087\\
0.000031& 0.0097& 0.00046& 0.00020\\
0.000071& 0.00046& 0.0072& 0.0030\\
0.000087& 0.00020& 0.003& 0.016
\end{bmatrix},$$
$$\hat
P_{15}=\begin{bmatrix}
0.0000055& 0.00011& 0.000081& 0.000074\\
0.00011& 0.014& 0.0016& 0.00031\\
0.000081& 0.0016 &0.0065& 0.0022\\
0.000074& 0.00031& 0.0022& 0.019
\end{bmatrix},$$
and the block membership probabilities $\hat \pi_2,\,\hat \pi_8,\,\hat
\pi_{15},$ for each of the three graphs. We calculate
\begin{align*}
\|\hat P_2-\hat P_8\|_F=0.033; &\hspace{3mm} \|\hat \pi_2-\hat \pi_8\|=0.043 ;\\
\|\hat P_8-\hat P_{15}\|_F=0.0058; &\hspace{3mm} \|\hat \pi_8-\hat \pi_{15}\|=0.0010; \\
\|\hat P_2-\hat P_{15}\|_F= 0.027; &\hspace{3mm} \|\hat \pi_2-\hat \pi_{15}\|= 0.043;
\end{align*}
which suggests that the repeated structure our algorithm
uncovers is {\it SBM substructure}, thus ensuring that we can proceed to
apply our algorithm recursively to the subsequent motifs.
As a final point, we recursively apply Algorithm \ref{alg:main} to the
subgraph $\hat{H}_{11}$. We first embed the graph into
$\mathbb{R}^{26}$ (again, with $26$ chosen via universal singular value thresholding). We then cluster the vertices into
$\widehat R=13$ large-scale clusters of sizes ranging from 500K to
2.7M vertices. We then use a linear time estimate of the test
statistic $U$ to compute $\widehat S$ (see Figure \ref{fig:friend2}),
and note that there appear to be clear repeated motifs (for example,
subgraphs 8 and 12) among the $\widehat H$'s. We run hierarchical
clustering to cluster the $13$ subgraphs, and note that the associated
dendrogram---as shown in Figure \ref{fig:friend2}---shows that our
algorithm again uncovered some repeated level-$2$ structure in the
Friendster network. We can, of course, recursively apply our
algorithm still further to tease out the motif structure at
increasingly fine-grained scale.
Ideally, when recursively running Algorithm \ref{alg:main}, we would
like to simultaneously embed and cluster all subgraphs in the motif.
In addition to potentially reducing embedding variance,
being able to efficiently simultaneously embed all the subgraphs in a
motif could greatly increase algorithmic scalability in large networks
with a very large number of communities at the local scale. In order to
do this, we need to understand the nature of the repeated structure
within the motifs. This repeated structure can inform an estimation
of a motif average (an averaging of the subgraphs within the motif),
which can then be embedded into an appropriate Euclidean space in lieu
of embedding all of the subgraphs in the motif separately. However,
this averaging presents several novel challenges, as these subgraphs
may be of very different orders and may be errorfully obtained, which
could lead to compounded errors in the averaging step.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.55\textwidth]{friend11-crop.pdf}
\caption{Heat map depiction of the level two Friendster estimated dissimilarity matrix $\widehat{S}\in\mathbb{R}^{13\times 13}$ of $\widehat H_{11}$. In the heat map, the similarity of the communities is represented on the spectrum between white and red, with white representing highly similar communities and red representing highly dissimilar communities.
In addition, we cluster $\widehat S$ using hierarchical clustering and display the associated hierarchical clustering dendrogram.}
\label{fig:friend2}
\end{figure*}
\subsection{Structure discovery in the {\em Drosophila} connectome} \label{subsec:MBStructure}
In this subsection, we address a cutting-edge application of our techniques to neuroscience: structure discovery in the larval Drosophila connectome, comprehensively described in \cite{MBStructure}, and from which significant portions are reproduced here, with permission. This is a first-of-its-kind exploratory data analysis of a newly-available wiring diagram, and although the connectome graph we analyze is directed, weighted, and also of unknown embedding dimension, our statistical techniques can nonetheless be adapted to this setting.
Specifically, we introduce the {\it latent structure model} (LSM) for network modeling and inference. The LSM is a generalization of the stochastic block model (SBM) in that the latent positions are allowed to lie on a lower-dimensional curve, and this model is amenable to semiparametric Gaussian mixture modeling (GMM) applied to the adjacency spectral embedding (ASE).
The resulting {\it connectome code}
derived via semiparametric GMM composed with ASE, which we denote, in shorthand, by $GMM \circ ASE$,
captures latent connectome structure
and elucidates biologically relevant neuronal properties.
HHMI Janelia has
recently reconstructed the complete wiring diagram of the higher order parallel fiber system for associative learning in the larval {\it Drosophila} brain,
the mushroom body (MB).
Memories are thought to be stored as functional and structural changes in connections between neurons,
but the complete circuit architecture of a higher-order learning center involved in memory formation or storage
has not been known in any organism---until now, that is. Our MB connectome
was obtained via serial section transmission electron microscopy of an entire larval {\it Drosophila} nervous system
\cite{ohyama2015multilevel,schneider2016quantitative}.
This connectome contains the entirety of MB intrinsic neurons, called Kenyon cells, and all of their pre- and post-synaptic partners
\cite{EichlerSubmitted}.
We consider the right hemisphere MB. The connectome consists of four distinct types of neurons
-- Kenyon Cells (KC), Input Neurons (MBIN), Output Neurons (MBON), Projection Neurons (PN) --
with directed connectivity illustrated in Figure \ref{fig:Fig1}.
There are $n=213$ neurons\footnote{
There are 13 isolates, all of which are KCs; removing them leaves a (directed) graph that is a single (weakly, but not strongly) connected component with 213 vertices and 7536 directed edges.},
with
$n_{KC} = 100$,
$n_{MBIN} = 21$,
$n_{MBON} = 29$, and
$n_{PN} = 63$.
Figure \ref{fig:Fig2} displays the observed MB connectome as an adjacency matrix.
Note that, in accordance with Figure \ref{fig:Fig1},
Figure \ref{fig:Fig2} shows data (edges) in only eight of the 16 blocks.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{MBstructure/CircuitDiagramComplete.jpg}
\caption{\label{fig:Fig1} Illustration of the larval {\it Drosophila} mushroom body connectome as a directed graph on four neuron types. Figure duplicated from \cite{MBStructure}.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{MBstructure/INIslides/Fig2-new.pdf}
\caption{\label{fig:Fig2} Observed data for our MB connectome as a directed adjacency matrix on four neuron types with 213 vertices
($n_{KC} = 100$,
$n_{MBIN} = 21$,
$n_{MBON} = 29$, and
$n_{PN} = 63$)
and 7536 directed edges.
(This data matrix is available at
$<$\url{http://www.cis.jhu.edu/~parky/MBstructure.html}$>$.)
Figure duplicated from \cite{MBStructure}.}
\end{figure}
Due to its undeniable four-neuron-type connectivity structure,
we might think of our MB connectome,
to first order,
as an observation from a $(K=4)$-block directed stochastic block model (SBM) \cite{WangWong} on $n$ vertices.
This model is parameterized by ({\it i}) a block membership probability vector $\rho = [\rho_1,\cdots,\rho_K]$
such that $\rho_k \ge 0$ for all $k$ and $\sum_k \rho_k = 1$
and ({\it ii}) a $K \times K$ block connectivity probability matrix $B$ with entries $B_{k_1,k_2} \in [0,1]$
governing the probability of directed edges from vertices in block $k_1$ to vertices in block $k_2$.
For this model of our MB connectome we have
\[B = \left[ \begin{array}{cccc}
B_{11} & B_{12} & B_{13} & 0 \\
B_{21} & 0 & B_{23} & 0 \\
0 & B_{32} & B_{33} & 0 \\
B_{41} & 0 & 0 & 0 \end{array} \right]\]
where the $0$ in the $B_{31}$ entry, for example,
indicates that there are no directed connections from any MBON neuron to any KC neuron (as seen in Figures \ref{fig:Fig1} and \ref{fig:Fig2}).
Theoretical results and methodological advances suggest that Gaussian mixture modeling
(see, for example, \cite{mclust2012})
composed with adjacency spectral embedding,
denoted $\mclustase$, can be instructive in analysis of the (directed) SBM.
Since this graph is directed, we adapt our embedding technique just slightly. Given $d \geq 1$, the adjacency spectral embedding (ASE) of a {\em directed} graph
on $n$ vertices
employs the singular value decomposition
to represent the $n \times n$ adjacency matrix via ${\bf A} = \bigl[{\bf U} \mid {\bf U}^{\perp} \bigr] \bigl[{\bf S} \oplus {\bf S}^{\perp}\bigr]
\bigl[{\bf V} \mid {\bf V}^{\perp}\bigr]^{\top}$
where $\mathbf{S}$ is the $d \times d$ diagonal matrix of the $d$ largest singular values and ${\bf U}$ and ${\bf V}$ are the matrices of corresponding left and right singular vectors, and we embed the graph as $n$ points in $\mathbb{R}^{2d}$ via the concatenation
$$\Xhat = \left[ {\bU} {\bS}^{1/2} \, \mid \, {\bV} {\bS} ^{1/2} \right] \in \mathbb{R}^{n \times 2d}.$$
(The scaled left-singular vectors $\bU \bS^{1/2}$ are interpreted as the ``out-vector'' representation of the digraph,
modeling vertices' propensity to originate directed edges;
similarly, $\bV \bS^{1/2}$ are interpreted as the ``in-vectors''.)
Gaussian mixture modeling (GMM)
then fits a $K$-component $2d$-dimensional Gaussian mixture model
to the points $\Xhat_1,\cdots,\Xhat_n$ given by the rows of $\Xhat$.
If the graph is a stochastic block model, then as we have discussed previously in Section \ref{subsec:Distributional}, clustering the rows of the adjacency spectral embedding via Gaussian mixture modeling gives us consistent estimates for the latent positions.
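The directed embedding just described is a few lines of linear algebra. A minimal sketch (the two-block connectivity matrix, block sizes, and helper name below are illustrative choices, not taken from the MB analysis):

```python
import numpy as np

def directed_ase(A, d):
    """Adjacency spectral embedding of a directed graph: concatenate the
    scaled left-singular vectors (the "out-vectors") and right-singular
    vectors (the "in-vectors") for a 2d-dimensional representation."""
    U, s, Vt = np.linalg.svd(A)
    U, s, V = U[:, :d], s[:d], Vt[:d, :].T
    S_half = np.sqrt(s)
    return np.hstack([U * S_half, V * S_half])  # n x 2d

# Toy directed 2-block SBM with an asymmetric block connectivity matrix.
rng = np.random.default_rng(0)
n = 200
B = np.array([[0.5, 0.1],
              [0.3, 0.4]])
z = np.repeat([0, 1], n // 2)               # block memberships
P = B[np.ix_(z, z)]                         # edge probability matrix
A = (rng.random((n, n)) < P).astype(float)  # independent directed edges
np.fill_diagonal(A, 0)

Xhat = directed_ase(A, d=2)
print(Xhat.shape)  # (200, 4)
```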
Applying this procedure to our MB connectome
yields the clustered embedding depicted via the pairs plot presented in Figure~\ref{fig:MBpairs12},
with the associated cluster confusion matrix with respect to true neuron types presented in Table \ref{tab:mclust6}.
The clusters are clearly coincident with the four true neuron types.
(For ease of illustration, Figure \ref{fig:MBpairs12} presents just the Out1 vs.\ Out2 subspace.)
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{MBstructure/INIslides/Fig3-out1-out2.pdf}
\caption{\label{fig:MBpairs12}
Plot for the clustered embedding of our MB connectome
in the Out1 vs.\ Out2 dimensions.
For ease of illustration,
we present embedding results in this two-dimensional subspace.
Recall that this is a two-dimensional visualization of six-dimensional structure. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
There are two model selection problems inherent in spectral clustering in general,
and in obtaining our clustered embedding (Figure \ref{fig:MBpairs12}) in particular:
choice of embedding dimension ($\dhat$), and choice of mixture complexity ($\Khat$).
A ubiquitous and principled method for choosing the number of dimensions in eigendecompositions and singular value decompositions
is to examine the so-called {\em scree plot}
and look for ``elbows'' or ``knees'' defining the cut-off between the top signal dimensions and the noise dimensions.
Identifying a ``best'' method is, in general, impossible, as the bias-variance tradeoff demonstrates that,
for small $n$, subsequent inference may be optimized by choosing a dimension {\it smaller than} the true signal dimension;
see Section 3 of \cite{JainDuinMao} for a clear and concise illustration of this phenomenon.
There are a plethora of variations for automating this singular value thresholding (SVT);
Section 2.8 of \cite{Jackson} provides a comprehensive discussion in the context of principal components,
and
\cite{chatterjee2015}
provides a theoretically-justified (but perhaps practically suspect, for small $n$) universal SVT.
Using the profile likelihood SVT method of \cite{zhu06:_autom}
yields a cut-off at three singular values, as depicted in Figure \ref{fig:MBscree}.
Because this is a directed graph, we have both left- and right-singular vectors for each vertex;
thus the SVT choice of three singular values results in $\dhat=6$.
Similarly, a ubiquitous and principled method for choosing the number of clusters in,
for example, Gaussian mixture models,
is to maximize a fitness criterion penalized by model complexity.
Common approaches include
Akaike Information Criterion (AIC) \cite{akaike1974new},
Bayesian Information Criterion (BIC) \cite{BIC},
and Minimum Description Length (MDL) \cite{MDL},
to name a few.
Again, identifying a ``best'' method is, in general, impossible, as the bias-variance tradeoff demonstrates that,
for small $n$, inference performance may be optimized by choosing a number of clusters {\it smaller than} the true cluster complexity. The BIC associated with the MCLUST algorithm \cite{mclust2012}, as implemented in \texttt{R},
applied to our MB connectome embedded via ASE into $\mathbb{R}^{\dhat=6}$,
is maximized at six clusters,
as depicted in Figure \ref{fig:MBbic}, and hence $\Khat=6$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.370\textwidth]{MBstructure/INIslides/MBscree.png}
\caption{\label{fig:MBscree}
Model Selection: embedding dimension $\dhat=6$ --
the top 3 singular values and their associated left- and right-singular vectors --
is chosen by SVT. Figure duplicated from \cite{MBStructure}.}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.715\textwidth]{MBstructure/INIslides/MBbic.png}
\caption{\label{fig:MBbic}
Model Selection: mixture complexity $\Khat=6$ is chosen by BIC.
(The inset shows that the main curve --
BIC for dimensions 2 through 13 for MCLUST's most general covariance model, in red --
dominates all other dimensions and all other models.) Figure duplicated from \cite{MBStructure}.
}
\end{figure}
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrrrrr}
& & 1 & 2 & 3 & 4 & 5 & 6 \\
\text{KC} & & {\bf 25} & {\bf 57} & 0 & {\bf 16} & 2 & 0 \\
\text{MBIN} & & 0 & 1 & {\bf 19} & 1 & 0 & 0 \\
\text{MBON} & & 0 & 0 & 0 & 1 & 0 & {\bf 28} \\
\text{PN} & & 0 & 0 & 0 & 2 & {\bf 61} & 0 \\
\end{tabular}
\caption{\label{tab:mclust6}
$\mclustase$ for our MB connectome yields $\Khat=6$ clusters.
The clusters are clearly coherent with, but not perfectly aligned with,
the four true neuron types, as presented in this confusion matrix.
}
\end{table}
While BIC chooses $\Khat=6$ clusters,
it is natural to ask whether
the distribution of KC across multiple clusters
is an artifact of insufficiently parsimonious model selection.
However, choosing four or five clusters not only (substantially) decreases BIC,
but in fact leaves KC distributed across multiple clusters while splitting and/or merging other neuron types.
In the direction of less parsimony,
Figure \ref{fig:MBbic} suggests that any choice from 7 to 11 clusters is competitive, in terms of BIC, with the maximizer $\Khat=6$.
Moreover, any of these choices only slightly decreases BIC,
while leaving PN, MBIN, and MBON clustered (mostly) singularly and (mostly) purely
and distributing KC across more clusters.
Tables
\ref{tab:mclust4},
\ref{tab:mclust5}, and
\ref{tab:mclust7}
show cluster confusion matrices for other choices of $K$.
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrrr}
\hline
& & 1 & 2 & 3 & 4 \\
\hline
\text{KC} & & 26 & 56 & 16 & 2 \\
\text{MBIN} & & 0 & 20 & 1 & 0 \\
\text{MBON} & & 0 & 28 & 1 & 0 \\
\text{PN} & & 0 & 0 & 16 & 47 \\
\end{tabular}
\caption{\label{tab:mclust4}
Cluster confusion matrix for $\mclustase$ with 4 clusters.
Choosing four or five clusters not only (substantially) decreases BIC (compared to $\Khat=6$),
but in fact leaves KC distributed across multiple clusters
while splitting and/or merging other neuron types.}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrrrr}
\hline
& & 1 & 2 & 3 & 4 & 5\\
\hline
\text{KC} & & 26 & 56 & 16 & 2 & 0 \\
\text{MBIN} & & 0 & 20 & 1 & 0 & 0 \\
\text{MBON} & & 0 & 0 & 1 & 0 & 28 \\
\text{PN} & & 0 & 0 & 16 & 47 & 0 \\
\end{tabular}
\caption{\label{tab:mclust5}
Cluster confusion matrix for $\mclustase$ with 5 clusters.
Choosing four or five clusters not only (substantially) decreases BIC (compared to $\Khat=6$),
but in fact leaves KC distributed across multiple clusters
while splitting and/or merging other neuron types.}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrrrrrr}
\hline
& & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
\text{KC} & & 25 & 42 & 15 & 0 & 16 & 2 & 0 \\
\text{MBIN} & & 0 & 0 & 1 & 19 & 1 & 0 & 0 \\
\text{MBON} & & 0 & 0 & 0 & 0 & 1 & 0 & 28 \\
\text{PN} & & 0 & 0 & 0 & 0 & 2 & 61 & 0 \\
\end{tabular}
\caption{\label{tab:mclust7}
Cluster confusion matrix for $\mclustase$ with 7 clusters.
Any choice from 7 to 11 clusters only slightly decreases BIC (compared to $\Khat=6$),
while leaving PN, MBIN, and MBON clustered (mostly) singularly and (mostly) purely
and distributing KC across more clusters.}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{MBstructure/INIslides/Fig6-new.pdf}
\caption{\label{fig:MBKCageclaw}
The multiple clusters for the KC neurons are capturing neuron age.
Depicted are the first two dimensions for the KC neuron out-vectors,
with color representing $\Khat=6$ cluster membership --
recall from Table \ref{tab:mclust6} that the $n_{KC}=100$ KCs are distributed across multiple clusters,
with 25 neurons in cluster \#1,
57 in \#2,
0 in \#3,
16 in \#4,
2 in \#5, and
0 in \#6.
The size of each dot represents the number of claws associated with that neuron.
We see from the scatter plot that the embedded KC neurons arc
from oldest (one-claw, lower left, cluster 1, in red),
up and younger (more claws) through cluster 2 in green, and
back down to youngest (zero-claw, clusters 4 and 5).
See also Table \ref{tab:MBKCageclaw}.
Recall that this is a two-dimensional visualization of six-dimensional structure. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
\begin{table}[htbp]
\centering
\begin{tabular}{lrrrrrrr}
\hline
cluster & & 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
\text{\#KCs} & & 25 & 57 & 0 & 16 & 2 & 0 \\
\hline
\text{claw: 1 (oldest)} & & 15 & 4 & --- & 0 & 0 & --- \\
\text{claw: 2} & & 7 & 4 & --- & 0 & 0 & --- \\
\text{claw: 3} & & 0 & 15 & --- & 0 & 0 & --- \\
\text{claw: 4} & & 3 & 13 & --- & 0 & 0 & --- \\
\text{claw: 5} & & 0 & 8 & --- & 0 & 0 & --- \\
\text{claw: 6} & & 0 & 3 & --- & 0 & 0 & --- \\
\text{claw: 0 (youngest)} & & 0 & 10 & --- & 16 & 2 & --- \\
\end{tabular}
\caption{\label{tab:MBKCageclaw}
The multiple clusters for the KC neurons are capturing neuron age via the number of claws associated with the neuron.
We see from the $\Khat=6$ clustering table, for the $n_{KC}=100$ KC neurons, that
cluster 1 captures predominantly older neurons,
cluster 2 captures both old and young neurons, and
clusters 4 and 5 capture only the youngest neurons.
See also Figure \ref{fig:MBKCageclaw}.
}
\end{table}
We see that our spectral clustering of the MB connectome via $\mclustase$,
with principled model selection for choosing embedding dimension and mixture complexity,
yields meaningful results:
a single Gaussian cluster for each of MBIN, MBON, and PN,
and multiple clusters for KC.
That is, we have one substantial revision to
Figure \ref{fig:Fig1}'s illustration of the larval {\it Drosophila} mushroom body connectome as a directed graph on four neuron types:
significant {\em substructure} associated with the KC neurons. Indeed, this
hints at the possibility of a continuous, rather than discrete, structure for the KC. The paper \cite{EichlerSubmitted}
describes so-called ``claws'' associated with each KC neuron,
and posits that
KCs with only 1 claw are the oldest, followed in decreasing age by multi-claw KCs (from 2 to 6 claws), and finally that the youngest KCs are those with 0 claws.
Figure \ref{fig:MBKCageclaw} and Table \ref{tab:MBKCageclaw}
use this additional neuronal information to show that the multiple clusters for the KC neurons are capturing neuron age
-- and in a seemingly coherent geometry.
Indeed, precisely because the clusters for the KC neurons
are capturing neuron age
-- a continuous vertex attribute --
again, in a seemingly coherent geometry,
we define a ``latent structure model'' (LSM), a generalization of the SBM,
together with a principled semiparametric spectral clustering methodology $\smclustase$ associated thereto.
Specifically,
we fit a continuous curve to (the KC subset of) the data in latent space
and show that traversal of this curve corresponds monotonically to neuron age.
To make this precise, we begin with a directed stochastic block model:
\begin{definition}[Directed Stochastic Blockmodel (SBM)]
Let $d_{\mathrm{out}} = d_{\mathrm{in}}$, with $d=d_{\mathrm{out}} + d_{\mathrm{in}}$.
We say that an $n$ vertex graph $(\bA,\bX)\sim\mathrm{RDPG}(F)$
is a directed stochastic blockmodel (SBM) with $K$ blocks if
the distribution $F$ is a mixture of $K$ point masses,
$$dF=\sum_{k=1}^K \rho_k \delta_{x_k},$$
with block membership probability vector $\vec{\rho}$ in the unit $(K-1)$-simplex
and distinct latent positions given by $\bm{\nu}=[\nu_1,\nu_2,\ldots,\nu_K]^\top\in\mathbb{R}^{K\times d}$.
The first $d_{\mathrm{out}}$ entries of each latent position $\nu_k$ are the out-vectors, denoted $\xi_k \in \mathbb{R}^{d_{\mathrm{out}}}$,
and the remaining $d_{\mathrm{in}}$ elements are the in-vectors $\zeta_k$.
We write $G\sim SBM(n,\vec{\rho},\bm{\xi} \bm{\zeta}^\top),$
and we refer to $\bm{\xi} \bm{\zeta}^\top\in\mathbb{R}^{K \times K}$ as the block connectivity probability matrix for the model.
\end{definition}
We model the MB connectome as a four-component latent structure model (LSM),
where LSM denotes the ``generalized SBM'' where each ``block''
may be generalized from point mass latent position distribution
to latent position distribution with support on some curve
(with the ``block'' curves disconnected, as (of course) are the SBM's point masses).
So the LSM does have block structure, albeit not as simple as an SBM's;
and the LSM will exhibit clustering, albeit not quite as transparently as an SBM does. As such, it is similar to other generalizations of SBMs, including the degree-corrected and hierarchical variants.
\begin{definition}[Directed Latent Structure Model (LSM)]
Let $d_{\mathrm{out}} = d_{\mathrm{in}}$, and
let $F$ be a distribution on a set $\mathcal{X} = \mathcal{Y} \times \mathcal{Z} \subset \mathbb{R}^{d_{\mathrm{out}}} \times \mathbb{R}^{d_{\mathrm{in}}}$
such that $\langle y,z \rangle\in[0,1]$ for all $y\in\mathcal{Y}$ and $z\in\mathcal{Z}$.
We say that an $n$ vertex graph $(\mathbf{A},\mathbf{X})\sim\mathrm{RDPG}(F)$
is a directed latent structure model (LSM) with $K$ ``structure components'' if
the support of distribution $F$ is a mixture of $K$ (disjoint) curves,
$$dF=\sum_{k=1}^K \rho_k dF_k(x),$$
with block membership probability vector $\vec{\rho}$ in the unit $(K-1)$-simplex
and $F_k$ supported on $\mC_k$ and $\mC_1,\cdots,\mC_K$ disjoint.
%
We write $G\sim LSM(n,\vec{\rho},(F_1,\cdots,F_K))$.
\end{definition}
We now investigate our MB connectome as an LSM
with latent positions $X_i \overset{\mathrm{i.i.d.}}{\sim} F$
where $F$ is no longer a mixture of four point masses with one point mass per neuron type
but instead $\mathrm{supp}(F)$ is three points and a continuous curve $\CKC$.
Motivated by approximate normality of the adjacency spectral embedding of an RDPG, we consider estimating $F$ via a semiparametric Gaussian mixture model for the $\Xhat_i$'s.
Let $H$ be a probability measure on a parameter space $\Theta \subset \mathbb{R}^d \times S_{d \times d}$,
where $S_{d \times d}$ is the space of $d$-dimensional covariance matrices,
and let $\{\varphi(\cdot;\theta) : \theta \in \Theta \}$ be a family of normal densities.
Then the function given by
$$\alpha(\cdot;H) = \int_{\Theta} \varphi(\cdot;\theta) dH(\theta)$$
is a semiparametric GMM.
$H \in \mathcal{M}$ is referred to as the mixing distribution of the mixture,
where $\mathcal{M}$ is the class of all probability measures on $\Theta$.
If $H$ consists of a finite number of atoms,
then $\alpha(\cdot;H)$ is a finite normal mixture model with means, variances and proportions determined by the locations and weights of the point masses, and \cite{lindsay1983} provides theory for maximum likelihood estimation (MLE) in this context.
Thus
(ignoring covariances for presentation simplicity, so that $\theta \in \mathbb{R}^d$ is the component mean vector)
we see that the central limit theorem suggests that we estimate the probability density function of the embedded MB connectome
$\Xhat_1,\cdots,\Xhat_{n=213}$,
under the LSM assumption,
as the semiparametric GMM $\alpha(\cdot;H)$
with $\Theta=\mathbb{R}^{6}$
and
where $H=F$
is supported by three points and a continuous curve $\CKC$.
Note that in the general case, where $\Theta$ includes both means and covariance matrices, we have $H = H_{F,n}$.
The central limit theorem for the adjacency spectral embedding provides a large-sample approximation for $H_{F,n}$,
and provides a mean-covariance constraint so that
if we knew the latent position distribution $F$ we would have no extra degrees of freedom
(though perhaps a more challenging MLE optimization problem).
As it is, we do our fitting in the general case, with simplifying constraints on the covariance structure associated with $\CKC$.
Our MLE
(continuing to ignore covariances for presentation simplicity)
is given by
$$d\hat{H}(\theta) = \sum_{k=1}^3 \rhohat_k I\{\theta = \thetahat_k\} + \left(1 - \sum_{k=1}^3 \rhohat_k\right) \rhohat_{KC}(\theta) I\{\theta \in \Chat\}$$
where $\thetahat_1, \thetahat_2, \thetahat_3$ are given by the means of the $\mclustase$ Gaussian mixture components for MBIN, MBON, and PN,
and $\Chat \subset \mathbb{R}^d$ is a one-dimensional curve.
Figure \ref{fig:YPYQ19} displays the MLE results from an EM optimization
for the curve $\Chat$ constrained to be quadratic, as detailed in the Appendix.
(Model testing for $\CKC$ in $\mathbb{R}^6$ does yield quadratic:
testing the null hypothesis of linear against the alternative of quadratic yields clear rejection ($p < 0.001$),
while there is insufficient evidence to favor $H_A$ cubic over $H_0$ quadratic ($p \approx 0.1$).)
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\textwidth]{MBstructure/INIslides/YPYQ19.png}
\caption{\label{fig:YPYQ19}
Semiparametric MLE $\Chat$ for the KC latent-space curve in $\mathbb{R}^6$. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
That is, (continuing to ignore covariances for presentation simplicity)
our structure discovery
via $\smclustase$
yields an $\mathbb{R}^6$ latent position estimate for the MB connectome
-- a {\it connectome code} for the larval {\it Drosophila} mushroom body --
as a semiparametric Gaussian mixture of three point masses
and a continuous parameterized curve $\Chat$;
the three Gaussians correspond to three of the four neuron types,
and the curve corresponds to the fourth neuron type (KC), with the parameterization capturing neuron age
(see Figure \ref{fig:MBsemiparfig}). We note that \cite{EichlerSubmitted} suggests distance-to-neuropile $\delta_i$
-- the distance to the MB neuropile from the bundle entry point of each KC neuron $i$ --
as a proxy for neuron age, and analyzes this distance in terms of number of claws for neuron $i$ (see Figure \ref{fig:KE1}).
We now demonstrate that
the correlation of this distance with the KC neurons' projection onto the parameterized curve $\Chat$
is highly significant -- this semiparametric spectral model captures neuroscientifically important structure in the connectome.
To wit,
we project each KC neuron's embedding onto our parameterized $\Chat$
and study the relationship between the projection's position on the curve, $t_i$, and the neuron's age through the distance proxy $\delta_i$ (see Figures \ref{fig:YPYQ21} and \ref{fig:YPYQ22}).
We find significant correlation of $\delta_i$ with $t_i$ --
Spearman's $s = -0.271$,
Kendall's $\tau = -0.205$,
Pearson's $\rho = -0.304$,
with $p < 0.01$ in each case
-- demonstrating that our semiparametric spectral modeling captures biologically relevant neuronal properties.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\textwidth]{MBstructure/INIslides/Fig9-new.pdf}
\caption{\label{fig:MBsemiparfig}
Semiparametric spectral latent space estimate of our MB connectome as three Gaussians and a KC curve:
colors distinguish the four neuron types and
numbers distinguish the original $\Khat=6$ clusters.
Recall that this is a two-dimensional visualization of six-dimensional structure. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.60\textwidth]{MBstructure/INIslides/YPKEboxplot.png}
\caption{\label{fig:KE1}
Relationship between number of claws and distance $\delta_i$ (a proxy for age) for the KC neurons,
from \cite{EichlerSubmitted}. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.63\textwidth]{MBstructure/INIslides/YPYQ21.png}
\caption{\label{fig:YPYQ21}
Projection of KC neurons onto the quadratic curve $\Chat$, yielding projection point $t_i$ for each neuron.
Recall that this is a two-dimensional visualization of six-dimensional structure. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.63\textwidth]{MBstructure/INIslides/YPYQ22.png}
\caption{\label{fig:YPYQ22}
The correlation between the projection points $t_i$ on the quadratic curve $\Chat$ and distance $\delta_i$ (a proxy for age) for the KC neurons
is highly significant, demonstrating that our semiparametric spectral modeling captures biologically relevant neuronal properties. Figure duplicated from \cite{MBStructure}.
}
\end{figure}
In summary, motivated by the results of a spectral clustering investigation of
the recently-reconstructed synapse-level larval {\it Drosophila} mushroom body structural connectome,
which demonstrate conclusively that modeling the Kenyon Cells (KC) demands additional latent space structure,
we have developed semiparametric spectral modeling.
Exploratory data analysis suggests that the MB connectome can be productively approximated by a
four-component latent structure model (LSM),
and the resulting
MB connectome code
derived via $\smclustase$
captures biologically relevant neuronal properties. Data and code for all our analyses are available at
\url{http://www.cis.jhu.edu/~parky/MBstructure.html}.
Of course, the true connectome code is more elaborate,
and cannot be completely encompassed by any simple latent position model --
such a model precludes, for example, any propensity for transitivity --
but our semiparametric spectral modeling provides another step along the path.
In terms of a (partial) ladder of biological scales
-- e.g., {\it C.\ elegans}, {\it Drosophila}, zebrafish, mouse, primate, and humans --
this work moves us off the first rung for analysis of a complete
neurons-as-vertices and synapses-as-edges connectome.
\section{Conclusion: complexities, open questions, and future work}\label{sec:Complexities}
Our paradigm for statistical inference on random graphs is anchored by the familiar pillars of classical Euclidean inference. We exhibit estimates for graph parameters that satisfy (uniform) consistency and asymptotic normality, and we demonstrate how these estimates can be exploited in a bevy of subsequent inference tasks: community detection in heterogeneous networks, multi-sample graph hypothesis testing, and exploratory data analysis in connectomics. The random dot product graph model in which we ground our techniques has both linear algebraic transparency and wide applicability, since a hefty class of independent-edge graphs is well-approximated by RDPGs. The lynchpins of our approach are spectral decompositions of adjacency and Laplacian matrices, but many of our results and proof techniques can be applied to more general random matrices. In recent work, for example, we examine eigenvalue concentration for certain classes of random matrices \cite{cape_16_conc}, and accurate estimation of covariance matrices \cite{cape_covar}. As such, our methodology is a robust edifice from which to explore questions in graph inference, data analysis, and random matrix theory.
The results we summarize here are but the tip of the iceberg, though, and there persist myriad open problems in spectral graph inference---for random dot product graphs in particular, of course, but also for random graphs writ large. In this section, we outline some current questions of interest and describe future directions for research.
Our consistency results for the adjacency spectral embedding depend on knowing the correct embedding dimension. In real data, this optimal embedding dimension is typically not only not known, but, since the RDPG model is only an approximation to any true model, may well depend on the desired subsequent inference task.
As we have pointed out in Section \ref{subsec:MBStructure}, multiple methods exist for estimating embedding dimension, and the universal singular value thresholding of \cite{chatterjee2015} and other thresholding methods \cite{fishkind2013consistent} are theoretically justified in the large-$n$ limit.
For finite $n$, however, model selection is considerably trickier.
Underestimation of the embedding dimension can markedly---and provably---bias subsequent inference. While we do not address it here, we remark that asymptotic results can be shown for the adjacency spectral embedding of a $d$-dimensional RDPG when the chosen embedding dimension is $d'<d$. On the other hand, if the embedding dimension is overestimated, no real signal is lost; therefore, most embedding methods continue to perform well, albeit with some loss of efficiency due to increased variance. Precisely quantifying this loss of efficiency is very much an open question and is related to analogous classical questions in random matrix theory \cite{tao2012random}.
In our RDPG model, an important assumption is that the $\bP$ matrix be positive semidefinite. While this limits the model, a slight generalization of the RDPG shares many of its important properties.
Considering a matrix of latent positions $\mathbf{X} \in \mathbb{R}^{p+q}$, one can set $\mathbf{P} = \mathbf{X} \mathbf{I}_{p,q} \mathbf{X}^{\top}$ where $\mathbf{I}_{p,q}$ is the diagonal matrix of size $(p+q) \times (p+q)$ with $p$ diagonal entries being $1$ and $q$ diagonal entries being $-1$. Under this generalization, any $\bP$ can be obtained, provided that $p+q$ is appropriately chosen. This then implies that any latent position graph model, even a non-positive-semidefinite one, can be approximated arbitrarily closely by this generalized RDPG.
We also wish to adapt our procedures to weighted graphs. For simple weighting, such as Poisson-distributed weights, our existing methodology applies quite naturally to the weighted adjacency matrix. More intricate questions arise when the weights are contaminated or their distribution is heavily skewed. In such cases, one can ``pass to ranks"; that, is replace the nonzero weight by its normalized rank among all the edge weights. This mitigates skew and works well in many examples, but a deeper theoretical analysis of this procedure, as well as other approaches to weighted graph inference, remain open.
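The pass-to-ranks transformation admits a very short implementation. The sketch below is our own (assuming NumPy and SciPy) and uses one common normalization, dividing the ranks of the nonzero weights by one more than their count, so the transformed weights lie in $(0,1)$.

```python
import numpy as np
from scipy.stats import rankdata

def pass_to_ranks(W):
    """Replace each nonzero weight of the symmetric, hollow weighted
    adjacency matrix W by its normalized rank among all nonzero weights."""
    A = W.astype(float).copy()
    iu = np.triu_indices_from(A, k=1)      # each undirected edge counted once
    w = A[iu]
    nz = w > 0
    w[nz] = rankdata(w[nz]) / (nz.sum() + 1)   # ranks mapped into (0, 1)
    A[iu] = w
    A.T[iu] = w                            # restore symmetry
    return A

rng = np.random.default_rng(1)
m = 6
W = np.zeros((m, m))
iu = np.triu_indices(m, k=1)
W[iu] = np.exp(rng.normal(0.0, 3.0, size=len(iu[0])))  # heavily skewed weights
W.T[iu] = W[iu]

R = pass_to_ranks(W)
same = np.allclose(R, pass_to_ranks(W ** 2))   # squaring is monotone on weights
```

Here `same` is `True`: because ranks depend only on the ordering of the weights, any strictly increasing distortion of the raw weights (squaring, for instance) leaves the transformed graph unchanged, which is precisely why the procedure mitigates skew and contamination of edge weights.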
To address graphs with self-edges, note that the random dot product graph model does not preclude such edges.
Indeed, since $\bP$ is not constrained to be hollow, the adjacency matrix $\bA$ thus generated need not be hollow, either.
However, the generative mechanism for self-edges in a graph may be different from the mechanism for edges between two different vertices. One approach to addressing this is to set the diagonal entries of $\bA$ to zero and then {\em augment} the diagonal artificially with imputed values.
In fact, even when there are no self loops, such procedures can improve finite-sample inferential performance.
In \cite{Marchette2011VN}, it is demonstrated that augmenting the diagonal via $\bA_{ii}=d_i/(n+1)$, where $d_i$ is the degree of vertex $i$, can improve inference in certain cases.
Similarly, \cite{Scheinerman2010} describes an iterative procedure to find a diagonal augmentation consistent with the low-rank structure of $\bP$.
It is still unclear exactly what augmentation of the diagonal, if any, might be optimal for each particular inference task.
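For concreteness, here is a small sketch (our own code; the function names are ours) of both ideas: the degree-based rule $\bA_{ii} = d_i/(n+1)$, and an iterative diagonal imputation in the spirit of \cite{Scheinerman2010} that alternates between a best rank-$d$ approximation and copying its diagonal back into the matrix.

```python
import numpy as np

def degree_augment(A):
    """Replace the (zero) diagonal of A with A_ii = d_i / (n + 1)."""
    n = A.shape[0]
    B = A.astype(float).copy()
    np.fill_diagonal(B, B.sum(axis=1) / (n + 1))
    return B

def iterative_augment(A, d, n_iter=20):
    """Iteratively impute the diagonal from the best rank-d approximation
    (alternating projection, in the spirit of Scheinerman & Tucker)."""
    B = degree_augment(A)                          # initial guess
    for _ in range(n_iter):
        w, V = np.linalg.eigh(B)
        idx = np.argsort(np.abs(w))[-d:]           # top-d eigenpairs by magnitude
        low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T
        np.fill_diagonal(B, np.diag(low_rank))     # copy back only the diagonal
    return B

# Rank-one check: observe the hollow matrix A = P - diag(P) with P = x x^T;
# the iteration recovers the missing diagonal of P.
x = np.linspace(0.3, 0.7, 50)
P = np.outer(x, x)
A = P - np.diag(np.diag(P))
B = iterative_augment(A, d=1)
err = np.abs(np.diag(B) - np.diag(P)).max()
```

In this idealized rank-one example the off-diagonal of $\bP$ is observed exactly, and the iteration contracts quickly to the diagonal consistent with the low-rank structure; with a sampled adjacency matrix the imputed diagonal is of course only an estimate.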
In the case when the vertices or edges are corrupted by occlusion or noise, \cite{priebes.:_statis} and \cite{levin_lyzin_laplacian} consider approaches to vertex classification and graph recovery, demonstrating that spectral embeddings are robust to certain sources of graph error. Violations of the independent edge assumption, though, can lead to more significant hurdles, both theoretical and practical, since edge independence is a requirement for a number of the concentration inequalities on which we depend.
For joint graph inference and testing, open problems abound. We mention, for instance, the analysis of the omnibus embedding when the $m$ graphs are correlated, or when some are corrupted; a closer examination of the impact of the Procrustes alignment on power; the development of an analogue to a Tukey test for determining which graphs differ when we test for the equality of distribution for more than two graphs; the comparative efficiency of the omnibus embedding relative to other spectral decompositions; and a quantification of the trade-off for subsequent inference between a large number of independent graphs and large graph size (\cite{runze_law_large_graphs}).
In sum, the random dot product graph is a compact, manageable, and applicable model. The Euclidean nature of the adjacency and Laplacian spectral embedding for a random dot product graph allows us to approach statistical inference in this setting from a familiar Euclidean perspective. Both the adjacency spectral embedding and the Laplacian spectral embedding can be profitably leveraged for latent position estimation and single- and multi-sample graph hypothesis testing. Moreover, our distributional results for these spectral embeddings provide reassuring classical analogues of asymptotic normality for estimators, and in current ongoing work, we consider how to compare asymptotic relative efficiency of different estimators for graph parameters.
While spectral methods may not always be optimal for a given task, they are often feasible and can provide a way to accurately initialize more complex procedures. Moreover, these Euclidean representations of graph data render possible the application of techniques for analysis of Euclidean data---clustering, classification, and density estimation, for instance---to graphs.
As we have outlined above, while many important theoretical and practical challenges remain, spectral embeddings for random dot product graphs constitute an important piece of the greater puzzle of random graph inference.
\section{Appendix}\label{sec:Appendix}
In the appendix, we provide details for the proofs of our results on consistency and asymptotic normality of the adjacency spectral embedding, as well as an outline of the proof of the central limit theorem for the Laplacian spectral embedding. Our notation remains as stated in Section \ref{sec:ASE_Inference_RDPG}. We begin with a detailed proof of our result on the consistency, in the $2 \to \infty$ norm, of the ASE for latent position recovery in RDPGs.
\subsection*{Proof of Theorem~\ref{thm:minh_sparsity}}
\label{sec:minh}
Let us recall {\bf Theorem \ref{thm:minh_sparsity}}:
Let $\bA_n \sim \mathrm{RDPG}(\bX_n)$ for $n \geq 1$ be a sequence of random dot product graphs where $\bX_n$ is assumed to be of rank $d$ for all $n$ sufficiently large. Denote by $\hat{\bX}_n$ the adjacency spectral embedding of $\bA_n$ and let $(\hat{\bX}_{n})_{i}$ and $(\bX_n)_{i}$ be the $i$-th row of $\hat{\bX}_n$ and $\bX_n$, respectively. Let $E_n$ be the event that there
exists an orthogonal transformation $\bW_n \in \mathbb{R}^{d \times d}$ such that
\begin{equation*}
\max_{i} \| (\hat{\bX}_n)_{i} - \bW_n (\bX_n)_{i} \| \leq
\frac{C d^{1/2} \log^2{n}}{\delta^{1/2}(\mathbf{P}_n)}
\end{equation*}
where $C > 0$ is some fixed constant and $\mathbf{P}_n = \mathbf{X}_n \mathbf{X}_n^{\top}$. Then $E_n$ occurs asymptotically almost surely; that is, $\Pr(E_n) \rightarrow 1$ as $n \rightarrow \infty$.
The proof of Theorem~\ref{thm:minh_sparsity} will follow from a succession of supporting results.
We note that Theorem~\ref{thm:minh_frob}, which deals with the accuracy of spectral embedding estimates in Frobenius norm, may be of independent interest.
We begin with the following simple but essential proposition, in which we show that $\UP^{\top} \UA$ is close to an orthogonal transformation. For ease of exposition, in the remainder of this subsection we shall suppress the subscript index $n$ from our matrices $\mathbf{X}_n$, $\mathbf{A}_n$ and $\hat{\mathbf{X}}_n$.
\begin{proposition}
\label{prop:uptua_close_rotation}
Let $\bA \sim \mathrm{RDPG}(\bX)$ and let
$\bW_1 \bm{\Sigma} \bW_2^{\top}$ be the singular value
decomposition of $\UP^{\top}
\UA$.
Then with high probability,
\begin{equation*}
\|\UP^{\top} \UA -
\bW_1 \bW_2^{\top} \| = O(\delta^{-1}(\mathbf{P})).
\end{equation*}
\end{proposition}
\begin{proof}
Let $\sigma_1, \sigma_2, \dots, \sigma_d$ denote the singular values of
$\UP^{\top} \UA$ (the diagonal
entries of $\bm{\Sigma}$). Then $\sigma_i = \cos(\theta_i)$ where
the $\theta_i$ are the principal angles between the subspaces
spanned by $\UA$ and
$\UP$. Furthermore, by the Davis-Kahan Theorem,
\begin{align*}
\| \UA \UA^{\top} -
\UP \UP^{\top} \| &=
\max_{i} | \sin(\theta_i) |\leq
\frac{C \sqrt{d} \|\bA - \bP\|}{\lambda_{d}(\bP)}
\end{align*}
for sufficiently large $n$. Recall here $\lambda_{d}(\bP)$ denotes
the $d$-th largest eigenvalue of $\bP$. The spectral norm bound for $\bA
- \bP$ from Theorem~\ref{thm:lu_peng} along with the assumption that $\lambda_d(\mathbf{P})/\delta(\mathbf{P}) \geq c_0$ for some constant $c_0$ in Assumption~\ref{ass:max_degree_assump}
yield
$$\| \UA \UA^{\top} -
\UP \UP^{\top} \|\leq\frac{C \sqrt{d}}{\delta^{1/2}(\mathbf{P})}.$$
We thus have
\begin{equation*}
\begin{split}
\|\UP^{\top} \UA -
\bW_1 \bW_2^{\top} \| &= \|\bm{\Sigma} -
I \| = \max_i |1 - \sigma_i| \leq
\max_i (1 - \sigma_i^{2}) \\ & = \max_i \sin^{2}(\theta_i) =
\|\UA \UA^{\top} -
\UP \UP^{\top} \|^{2}
= O(\delta^{-1}(\mathbf{P}))
\end{split}
\end{equation*}
as desired.
\end{proof}
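Though not part of the proof, the proposition is easy to probe numerically. In this sketch (our own; it assumes NumPy), we sample a two-community RDPG, compute the singular values of $\UP^{\top}\UA$, which are the cosines of the principal angles between the two invariant subspaces, and form the alignment $\bW_1\bW_2^{\top}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_rdpg(X, rng):
    """Sample a symmetric, hollow adjacency matrix with P = X X^T."""
    P = X @ X.T
    A = np.triu((rng.random(P.shape) < P).astype(float), k=1)
    return A + A.T

def top_eigvecs(M, d):
    """Orthonormal eigenvectors of the d largest eigenvalues of symmetric M."""
    w, V = np.linalg.eigh(M)
    return V[:, np.argsort(w)[-d:]]

n, d = 2000, 2
z = rng.integers(0, 2, size=n)                       # two latent communities
X = np.array([[0.7, 0.2], [0.2, 0.7]])[z]
X = X + rng.uniform(-0.05, 0.05, size=(n, d))        # jitter the positions

A = sample_rdpg(X, rng)
UP = top_eigvecs(X @ X.T, d)
UA = top_eigvecs(A, d)

W1, s, W2t = np.linalg.svd(UP.T @ UA)    # s_i = cos(theta_i), principal angles
Wstar = W1 @ W2t                         # the alignment W* = W1 W2^T
gap = np.max(1.0 - s)                    # = ||U_P^T U_A - W*||
```

On runs like this, `gap` is small, on the order of $\delta^{-1}(\mathbf{P})$ as the proposition predicts, and rerunning with larger $n$ shrinks it further.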
Denote by $\bW^{*}$ the orthogonal matrix
$\bW_1 \bW_2^{\top}$ as defined in the above
proposition. We now establish the following key lemma. The lemma
allows us to exchange the order of the orthogonal transformation
$\bW^{*}$ and the diagonal scaling transformation $\SA$ or $\SP$.
\begin{lemma}
\label{lem:order_bounds_on_minh_differences}
Let $(\bA, \bX) \sim \mathrm{RDPG}(F)$ with sparsity factor $\rho_n$. Then asymptotically almost surely,
\begin{equation}\label{eq:approx_commute1_nosqrt}
\|\bW^{*} \SA - \SP \bW^{*} \|_{F} = O(\log n)
\end{equation}
and
\begin{equation}\label{eq:approx_commute2_sqrt}
\|\bW^{*} \SA^{1/2} - \SP^{1/2}
\bW^{*} \|_{F} = O((\log n) \,\delta^{-1/2}(\mathbf{P}))
\end{equation}
\end{lemma}
\begin{proof}
Let $\bR = \UA - \UP\UP^{\top} \UA$. We note
that $\bR$ is the residual after projecting
$\UA$ orthogonally onto the column space of
$\UP$, and thus
\begin{align*}
\|\UA - \UP
\UP^{\top} \UA \|_{F} \leq \min_{\mathbf{W}} \|\mathbf{U}_{\mathbf{A}} - \mathbf{U}_{\mathbf{P}} \mathbf{W} \|_{F}
\end{align*}
where the minimization is over all orthogonal matrices $\mathbf{W}$.
The variant of the Davis-Kahan Theorem given in Eq.~\eqref{eq:Davis_Kahan_variant1} then implies
$$ \|\UA - \UP
\UP^{\top} \UA \|_{F} \leq O(\delta^{-1/2}(\mathbf{P})).$$
We next derive that
\begin{align*}
\bW^{*}& \SA = (\bW^{*} -
\UP^{\top} \UA)
\SA +
\UP^{\top} \UA
\SA
= (\bW^{*} -
\UP^{\top} \UA) \SA +
\UP^{\top} \bA \UA
\\ &= (\bW^{*} -
\UP^{\top} \UA) \SA +\UP^{\top}(\bA - \bP)
\UA +
\UP^{\top}\bP \UA \\
&= (\bW^{*} -
\UP^{\top} \UA) \SA +
\UP^{\top}(\bA - \bP) \bR+ \UP^{\top}(\bA - \bP)
\UP \UP^{\top}
\UA + \UP^{\top}
\bP \UA \\
&= (\bW^{*} -
\UP^{\top} \UA) \SA +
\UP^{\top}(\bA - \bP) \bR+ \UP^{\top}(\bA - \bP)
\UP \UP^{\top}
\UA + \SP \UP^{\top}
\UA
\end{align*}
Writing $\SP \UP^{\top}
\UA = \SP
(\UP^{\top} \UA -
\bW^{*}) + \SP \bW^{*}$ and
rearranging terms, we obtain
\begin{equation*}
\begin{split}
\|\bW^{*} \SA -
\SP \bW^{*}\|_{F} & \leq \|\bW^{*}
- \UP^{\top} \UA \|_{F}
(\|\SA\| + \|\SP\|) +
\|\UP^{\top}(\bA - \bP)
\bR\|_{F}\\
&+ \|\UP^{\top}(\bA -
\bP) \UP\UP^{\top}\UA\|_{F} \\
& \leq O(1) + O(1) + \|\UP^{\top}(\bA -
\bP) \UP\|_{F}\|\UP^{\top}\UA\|
\end{split}
\end{equation*}
asymptotically almost surely. Now, $\|\UP^{\top}\UA\| \leq 1$. Hence we can focus on the term $\UP^{\top}(\bA -
\bP) \UP$, which is a $d \times d$ matrix
whose $ij$-th entry is of the form
\begin{align*}
(\UP^{\top} (\bA - \bP) \UP)_{ij} &=
\sum_{k=1}^{n} \sum_{l=1}^{n} (\bA_{kl} -
\bP_{kl}) \bU_{ki} \bU_{lj} \\
&= 2 \sum_{k,l : k < l}
(\bA_{kl} - \bP_{kl})\bU_{ki}\bU_{lj} - \sum_{k}
\bP_{kk} \bU_{ki} \bU_{kj}
\end{align*}
where $\bU_{\cdot i}$ and $\bU_{\cdot j}$ are the $i$-th and $j$-th
columns of $\UP$. Thus, conditioned on
$\bP$, $(\UP^{\top} (\bA - \bP)
\UP)_{ij}$ is a sum of independent mean $0$ random
variables and a term of order $O(1)$.
Now, by Hoeffding's inequality,
\begin{align*}
\Pr&\left[ \bigg|\sum_{k,l : k < l} 2 (\bA_{kl}- \bP_{kl}) \bU_{ki} \bU_{lj}\bigg|\geq t \right] \leq 2\exp
\Bigl( \frac{-2t^2}{\sum_{k,l : k < l} (2\bU_{ki} \bU_{lj})^2}\Bigr)
\leq 2\exp(-t^2).
\end{align*}
Therefore, each entry of
$\UP^{\top}(\bA - \bP)
\UP$ is of order
$O(\log n)$ with high probability, and as a consequence, since $\UP^{\top} (\bA-\bP)\UP$ is a $d \times d$ matrix,
\begin{equation}\label{eq:hoeffding_up_upt}
\|\UP^{\top}(\bA - \bP)
\UP\|_{F} = O(\log n)
\end{equation}
with high probability.
We establish that
$$\|\bW^{*} \SA^{1/2} -
\SP^{1/2} \bW^{*} \|_{F} = O((\log n) \lambda_d^{-1/2}(\mathbf{P}))$$
by
noting that the $ij$-th entry of $\bW^{*} \SA^{1/2} -
\SP^{1/2} \bW^{*}$ can be written as
$$\bW^{*}_{ij} (\lambda_i^{1/2}(\bA) -
\lambda_{j}^{1/2}(\bP)) = \bW^{*}_{ij} \frac{\lambda_{i}(\bA) -
\lambda_{j}(\bP)}{\lambda_{i}^{1/2}(\bA) + \lambda_{j}^{1/2}(\bP)}, $$
and the desired bound follows from the above after bounding $\lambda_i(\bA)$, either by Weyl's inequality and Theorem~\ref{thm:lu_peng}, or, alternatively, by a Kato-Temple inequality from \cite{cape_16_conc}.
\end{proof}
We next present Theorem~\ref{thm:minh_frob}, which allows us to write the Frobenius norm difference of the adjacency spectral embedding $\hat{\mathbf{X}}$ and the true latent position $\bX$ in terms of the Frobenius norm difference of $\mathbf{A} - \mathbf{P}$ and smaller order terms.
\begin{theorem}
\label{thm:minh_frob}
Let $\bA \sim \mathrm{RDPG}(\bX)$. Then there exists an orthogonal matrix $\mathbf{W}$ such that, with high probability,
\begin{align*}
\|\Xhat - \bX \bW
\|_{F} = \|(\bA - \bP) \UP
\SP^{-1/2} \|_{F} + O((\log n) \delta^{-1/2}(\mathbf{P}))
\end{align*}
\end{theorem}
\begin{proof}
Let
\begin{align*}
\bR_1 &= \UP\UP^{\top} \UA -\UP \bW^{*}\\
\bR_2 &= (\bW^{*}
\SA^{1/2} - \SP^{1/2} \mathbf{W}^{*}).
\end{align*}
We deduce that
\begin{equation*}
\begin{split}
\Xhat - \UP
\SP^{1/2} \bW^{*} &=
\UA \SA^{1/2}
-\UP \bW^{*}
\SA^{1/2} + \UP
(\bW^{*}
\SA^{1/2} - \SP^{1/2} \bW^{*}) \\
&= (\UA - \UP \UP^{\top}
\UA) \SA^{1/2} +
\bR_1 \SA^{1/2} + \UP
\bR_2
\\ &= \UA \SA^{1/2} -
\UP \UP^{\top}
\UA \SA^{1/2} + \bR_1 \SA^{1/2} + \UP
\bR_2
\end{split}
\end{equation*}
Observe that $\UP \UP^{\top}\bP = \bP$ and $\UA\SA^{1/2} = \bA \UA
\SA^{-1/2}$. Hence
\begin{align*}
\Xhat - \UP\SP^{1/2} \bW^{*} = &(\bA -
\bP) \UA \SA^{-1/2} - \UP
\UP^{\top} (\bA - \bP)
\UA \SA^{-1/2} + \bR_1 \SA^{1/2} + \UP\bR_2
\end{align*}
Writing
\begin{equation*}
\bR_3 = \UA - \UP
\bW^{*} = \UA - \UP
\UP^{\top} \UA + \bR_1,
\end{equation*}
we derive that
\begin{equation*}
\begin{split}
\Xhat - \UP\SP^{1/2} \bW^{*} =& (\bA -
\bP) \UP \bW^{*}
\SA^{-1/2}- \UP\UP^{\top}(\bA - \bP)
\UP \bW^{*}
\SA^{-1/2} \\
&+ (\bI -
\UP \UP^{\top}) (\bA - \bP)
\bR_{3} \SA^{-1/2}
+ \bR_1 \SA^{1/2} + \UP
\bR_2
\end{split}
\end{equation*}
Recalling that $\|\UA - \UP
\UP^{\top} \UA \|_{F} = O(\delta^{-1/2}(\mathbf{P}))$ with high probability, we have
\begin{align*}
\|\bR_{1}\|_{F} = O(\delta^{-1}(\mathbf{P})), \quad \|\bR_2\|_{F} = O((\log n) \, \delta^{-1/2}(\mathbf{P})), \quad \text{and} \,\,
\|\bR_3\|_{F} = O(\delta^{-1/2}(\mathbf{P}))
\end{align*}
with high probability.
Furthermore, a similar application of Hoeffding's inequality to that in the proof of Lemma~\ref{lem:order_bounds_on_minh_differences}, along with an application of Weyl's inequality and Theorem~\ref{thm:lu_peng} to bound $\lambda_i(\mathbf{A})$, ensures that
\begin{equation*}
\|\UP
\UP^{\top}(\bA - \bP)
\UP \bW^{*}
\SA^{-1/2} \|_{F} \leq \|\UP^{\top}(\bA - \bP)
\UP\|_{F} \|\SA^{-1/2}
\|_{F}= O((\log n) \, \delta^{-1/2}(\mathbf{P})).
\end{equation*}
As a consequence, with high probability
\begin{align*}
\|\Xhat - \UP\SP^{1/2} \bW^{*}\|_{F}
&= \|(\bA -
\bP) \UP \bW^{*}
\SA^{-1/2}\|_{F} + O((\log n) \, \delta^{-1/2}(\mathbf{P})) \\
&=
\|(\bA - \bP) \UP
\SP^{-1/2} \bW^{*}- (\bA - \bP) \UP
(\SP^{-1/2} \bW^{*} - \bW^{*}
\SA^{-1/2}) \|_{F}\\
&\hspace{5mm}+ O((\log n) \, \delta^{-1/2}(\mathbf{P}))
\end{align*}
A very similar argument to that employed in the proof of Lemma~\ref{lem:order_bounds_on_minh_differences} implies that
\begin{equation*}
\|\SP^{-1/2} \bW^{*} - \bW^{*}
\SA^{-1/2} \|_{F} = O((\log n) \, \delta^{-3/2}(\mathbf{P}))
\end{equation*}
with high probability.
We thus obtain
\begin{align}
\label{eq:2}
\|\Xhat - \UP
\SP^{1/2} \bW^{*}\|_{F}
&= \|(\bA - \bP) \UP
\SP^{-1/2} \bW^{*} \|_{F}+ O((\log n) \, \delta^{-1/2} (\mathbf{P}))
\notag\\ &= \|(\bA - \bP) \UP
\SP^{-1/2} \|_{F} + O((\log n) \, \delta^{-1/2}(\mathbf{P}))
\end{align}
with high probability.
Finally, to complete the proof, we note that
$\bX =
\UP \SP^{1/2} \tilde{\bW}$
for some orthogonal matrix $\tilde{\bW}$. Since $\bW^{*}$ is also
orthogonal, we conclude that there exists some orthogonal $\bW$
for which
$\bX \bW = \UP
\SP^{1/2} \bW^{*},$
as desired.
\end{proof}
We are now ready to prove the $2 \rightarrow \infty$ consistency we assert in Theorem~\ref{thm:minh_sparsity}.
\begin{proof}
To establish Theorem~\ref{thm:minh_sparsity}, we note that from Theorem~\ref{thm:minh_frob}
\begin{align*}
\|\Xhat - \bX \bW
\|_{F} &= \|(\bA - \bP) \UP
\SP^{-1/2} \|_{F}
+ O((\log n) \, \delta^{-1/2}(\mathbf{P}))
\end{align*}
and hence
\begin{align*}
\max_{i} \| \Xhat_i - \bW \bX_i \|
&\leq
\frac{1}{\lambda_{d}^{1/2}(\bP)}
\max_{i} \|((\bA - \bP) \UP)_{i} \|+ O((\log n) \, \delta^{-1/2}(\mathbf{P}))
\\
& \leq \frac{d^{1/2}}{\lambda_{d}^{1/2}(\bP)}
\max_{j} \|(\bA - \bP) \bU_{\cdot j} \|_{\infty}+ O((\log n) \, \delta^{-1/2}(\mathbf{P}))
\end{align*}
where $\bU_{\cdot j}$ denotes the $j$-th column of
$\UP$.
Now, for a given $j$ and a given index $i$, the $i$-th element of
the vector $(\bA - \bP) \bU_{\cdot j}$ is of the form
\begin{equation*}
\sum_{k} (\bA_{ik} - \bP_{ik}) \bU_{kj}
\end{equation*}
and once again, by Hoeffding's inequality, the above term is $O(\log n)$ asymptotically almost surely. Taking the union bound
over all indices $i$ and all columns $j$ of $\UP$, we conclude that with high probability,
\begin{align*}
\max_{i} \| \Xhat_i - \bW \bX_i \|& \leq
\frac{C d^{1/2}}{\lambda_{d}^{1/2}(\bP)} \log^2 n
+ O((\log n) \, \delta^{-1/2}(\mathbf{P}))\\
& \leq \frac{C d^{1/2} \log^2{n}}{\delta^{1/2}(\mathbf{P})}
\end{align*}
as desired.
\end{proof}
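As an empirical companion to the theorem (our own simulation; it is not part of the argument), the sketch below computes the adjacency spectral embedding $\hat{\bX} = \UA\SA^{1/2}$, aligns it to the true latent positions by solving the orthogonal Procrustes problem, and verifies that the worst-case row error shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def ase(A, d):
    """Adjacency spectral embedding Xhat = U_A S_A^{1/2} from the top d
    eigenpairs of A (algebraically largest, as in the PSD setting)."""
    w, V = np.linalg.eigh(A)
    idx = np.argsort(w)[-d:]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def max_row_error(Xhat, X):
    """max_i ||Xhat_i - (X W)_i|| with W the orthogonal Procrustes solution
    (a convenient proxy for the best 2-to-infinity alignment)."""
    U, _, Vt = np.linalg.svd(X.T @ Xhat)
    return np.linalg.norm(Xhat - X @ (U @ Vt), axis=1).max()

def run(n):
    z = rng.integers(0, 2, size=n)                   # two-block latent structure
    X = np.array([[0.7, 0.2], [0.2, 0.7]])[z]
    A = np.triu((rng.random((n, n)) < X @ X.T).astype(float), k=1)
    A = A + A.T
    return max_row_error(ase(A, 2), X)

err_small, err_large = run(200), run(2000)
```

On typical runs the maximum row error drops by roughly a factor of two to three in going from $n = 200$ to $n = 2000$, consistent with the $\log^2 n / \delta^{1/2}(\mathbf{P})$ rate above.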
Now, we move to our distributional results.
\subsection{Proof of the central limit theorem for the adjacency spectral embedding}\label{subsec:CLT_proofdetails}
Recall {\bf Theorem \ref{thm:clt_orig_but_better}}: Let $(\bA_n, \bX_n) \sim \mathrm{RDPG}(F)$ be a sequence of adjacency matrices and associated latent positions of a $d$-dimensional random dot product graph according to an inner product distribution $F$. Let $\Phi(\bx,\bSigma)$ denote the cdf of a (multivariate)
Gaussian with mean zero and covariance matrix $\bSigma$,
evaluated at $\bx \in \R^d$. Then
there exists a sequence of orthogonal $d$-by-$d$ matrices
$( \Wn )_{n=1}^\infty$ such that for all $\bm{z} \in \R^d$ and for any fixed index $i$,
$$ \lim_{n \rightarrow \infty}
\Pr\left[ n^{1/2} \left( \Xhat_n \Wn - \bX_n \right)_i
\le \bm{z} \right]
= \int_{\supp F} \Phi\left(\bm{z}, \bSigma(\bx) \right) dF(\bx), $$
where
\begin{equation}
\label{def:sigma}\bSigma(\bx)
= \Delta^{-1} \E\left[ (\bx^{\top} \bX_1 - ( \bx^{\top} \bX_1)^2 ) \bX_1 \bX_1^{\top} \right] \Delta^{-1}; \quad \text{and} \,\, \Delta = \mathbb{E}[\bX_1 \bX_1^{\top}].
\end{equation}
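Before beginning the proof, we note that the limiting variance can be checked by simulation. In the sketch below (our own; it assumes NumPy), we take $d = 1$ with $F = \mathrm{Uniform}(0.3, 0.7)$, in which case $\bSigma(x)$ reduces to the scalar $\E[(x X_1 - (x X_1)^2) X_1^2]/(\E X_1^2)^2$, and compare it to the empirical variance of $\sqrt{n}(\hat{X}_i - x)$ over Monte Carlo replicates.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = 0.3, 0.7           # latent positions X_i ~ Uniform(a, b), d = 1
x0 = 0.5                  # condition on X_1 = x0
n, reps = 1000, 150

# Scalar Sigma(x0) = Delta^{-1} E[(x0 X - (x0 X)^2) X^2] Delta^{-1},
# with the moments of F computed by Monte Carlo for simplicity.
xs = rng.uniform(a, b, size=200_000)
delta = np.mean(xs ** 2)
theory = np.mean((x0 * xs - (x0 * xs) ** 2) * xs ** 2) / delta ** 2

def top_eigpair(A, iters=50):
    """Leading eigenpair of a symmetric matrix by power iteration."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v

errs = []
for _ in range(reps):
    X = rng.uniform(a, b, size=n)
    X[0] = x0
    P = np.outer(X, X)
    A = np.triu((rng.random((n, n)) < P).astype(float), k=1)
    A = A + A.T
    lam, v = top_eigpair(A)
    Xhat = np.sqrt(lam) * v          # one-dimensional ASE
    if Xhat.sum() < 0:               # the alignment W_n is just a sign here
        Xhat = -Xhat
    errs.append(np.sqrt(n) * (Xhat[0] - x0))

emp_var = np.var(errs)
```

With these settings the empirical variance lands near the theoretical value `theory` (roughly $0.75$ here), up to Monte Carlo and finite-$n$ error.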
To prove Theorem~\ref{thm:clt_orig_but_better}, we begin with the following simple lemma, which shows that when the rows of $\mathbf{X}$ are sampled i.i.d. from some distribution $F$, the eigenvalues of $\mathbf{P} = \mathbf{X}\mathbf{X}^{\top}$ grow proportionally with $n$.
\begin{lemma}\label{lem:eigs_lower_bound}
With notation as above, let $F$ be an inner product distribution and suppose $\bX_1, \dots, \bX_n, \mathbf{Y}$ are i.i.d. with distribution $F$. Suppose also that $\Delta = \mathbb{E}[\bX_1 \bX_1^{\top}]$ is of rank $d$.
Then for $1 \leq i \leq d$, $\lambda_i(\bP) = \Omega( n \lambda_i(\Delta) )$ almost surely.
\end{lemma}
\begin{proof}
Recall that for any matrix $\bH$, the nonzero eigenvalues of $\bH^{\top} \bH$ are the same as those of $\bH \bH^{\top}$, so
$\lambda_i(\bX \bX^{\top}) = \lambda_i( \bX^{\top} \bX).$
In what follows, we remind the reader that $\bX$ is a matrix whose rows are the transposes of the column vectors $\bX_i$, and $\bY$ is a $d$-dimensional vector that is independent of, and has the same distribution as, the $\bX_i$. We observe that
$ (\bX^{\top} \bX - n\E \mathbf{Y} \mathbf{Y}^{\top})_{ij} = \sum_{k=1}^n (\bX_{ki} \bX_{kj} - \E \bY_i \bY_j) $
is a sum of $n$ independent
zero-mean random variables, each contained in $[-1,1]$.
Thus, Hoeffding's inequality yields, for all $i,j \in [d]$,
$$ \Pr\left[ |(\bX^{\top} \bX - n\E[\bY \bY^{\top}])_{ij}| \ge 2\sqrt{ n \log n } \right]
\le \frac{2}{n^2}. $$
A union bound over all $i,j \in [d]$ implies that
$ \|\bX^{\top} \bX - n\E \bY \bY^{\top} \|_F^2 \le 4d^2 n \log n $
with probability at least $1 - 2d^2/n^2$.
Taking square roots and noting that the Frobenius norm is an upper bound
on the spectral norm, we have that
$ \| \bX^{\top} \bX - n\E (\bY \bY^{\top}) \| \le 2d \sqrt{ n \log n } $
with probability at least $1 - 2d^2/n^2$,
and Weyl's inequality~\cite{horn85:_matrix_analy} yields that
for all $1 \le i \le d$,
$ | \lambda_i(\bX \bX^{\top} ) - n\lambda_i( \E (\bY \bY^{\top} )) | \le 2d \sqrt{ n \log n } $
with probability at least $1 - 2d^2/n^2$.
Of course, since the vector $\bY$ has the same distribution as any of the latent positions $\bX_i$, we see that $\E(\bY \bY^{\top})=\Delta$.
By the reverse triangle inequality, for any $1 \leq i \leq d$, we have
$$ \lambda_i(\bX \bX^{\top})\ge\lambda_d(\bX \bX^{\top})
\ge | n\lambda_d( \Delta) - 2d \sqrt{n \log n} |
= \Omega( n ). $$
In the sparse setting with sparsity factor $\rho_n$, multiplying through by $\rho_n$ shows that there exists a constant $c > 0$ such that for all $n$ sufficiently large, $\lambda_d(\rho_n\bX \bX^{\top}) \geq c\, n\rho_n \lambda_d(\Delta)$ with probability at least $1 - 2d^2/n^2$.
\end{proof}
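Numerically, the linear growth of $\lambda_d(\bP)$ is easy to observe. The sketch below (ours; it assumes NumPy) takes $F$ uniform on $[0.2, 0.6]^2$, computes $\Delta$ in closed form, and checks that $\lambda_d(\bX^{\top}\bX)$ tracks $n\lambda_d(\Delta)$ to within a multiple of $\sqrt{n \log n}$, as in the proof above.

```python
import numpy as np

rng = np.random.default_rng(4)

# F = Uniform([0.2, 0.6]^2); closed-form second-moment matrix Delta = E[X X^T]:
# E[x^2] = (a^2 + a b + b^2)/3 and E[x y] = ((a + b)/2)^2 for a, b = 0.2, 0.6.
a, b = 0.2, 0.6
m2 = (a * a + a * b + b * b) / 3.0
Delta = np.array([[m2, 0.16], [0.16, m2]])
lam_d = np.linalg.eigvalsh(Delta)[0]          # smallest eigenvalue of Delta

devs = []
for n in (500, 2000, 8000):
    X = rng.uniform(a, b, size=(n, 2))
    lam = np.linalg.eigvalsh(X.T @ X)[0]      # lambda_d(X^T X) = lambda_d(X X^T)
    devs.append(abs(lam - n * lam_d) / np.sqrt(n * np.log(n)))
```

The normalized deviations in `devs` stay bounded as $n$ grows, matching the $2d\sqrt{n \log n}$ fluctuation bound obtained from Hoeffding's and Weyl's inequalities.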
As described previously, to prove our central limit theorem, we require somewhat more
precise control on certain residual terms,
which we establish in the following key lemma. In the lemmas and proofs that follow, we frequently suppress the dependence of the sequence of graphs and embeddings on $n$.
\begin{lemma}
\label{lem:stringent_control_residuals}
Let $\bR_1, \bR_2, \bR_3$ be defined, as above, by
\begin{equation*} \begin{aligned}
\bR_1 &= \UP \UP^{\top} \UA - \UP \bW^* \\
\bR_2 &= \bW^* \SA^{1/2} - \SP^{1/2} \bW^*\\
\bR_3 &= \UA - \UP \UP^{\top} \UA + \bR_1 = \UA - \UP \bW^*.
\end{aligned} \end{equation*}
where, as before, $\bW^*$ is the orthogonal transformation $\bW^*=\bW_1 \bW_2^{\top}$ with $\bW_1 \Sigma \bW_2$ being the singular value decomposition of $\UP^{\top} \UA$.
Then the following convergences in probability hold:
\begin{equation} \label{eq:tozero1}
\sqrt{n}\left[ (\bA-\bP)\UP(\bW^* \SA^{-1/2} - \SP^{-1/2} \bW^*) \right]_h
\inprob \zeromx,
\end{equation}
\begin{equation} \label{eq:tozero2}
\sqrt{n} \left[ \UP \UP^{\top} (\bA-\bP) \UP \bW^* \SA^{-1/2} \right]_h
\inprob \zeromx,
\end{equation}
\begin{equation} \label{eq:tozero3}
\sqrt{n} \left[ (\bI - \UP \UP^{\top})(\bA-\bP) \bR_3 \SA^{-1/2} \right]_h
\inprob \zeromx,
\end{equation}
and with high probability,
$$\|\bR_1\SA^{1/2}+\UP \bR_2\|_F \le \frac{ C\log n}{n^{1/2}}. $$
\end{lemma}
\begin{proof}
We begin by observing that
$$ \| \bR_1 \SA^{1/2} + \UP \bR_2 \|_F
\le \| \bR_1 \|_F \| \SA^{1/2} \| + \| \bR_2 \|_F. $$
Proposition \ref{prop:uptua_close_rotation} and the trivial upper bound on the eigenvalues
of $\bA$ ensures that
$$ \| \bR_1 \|_F \| \SA^{1/2} \|
\le \frac{ C\log n }{ n^{1/2} }
\text{ w.h.p. } $$
Combining this with Eq. \eqref{eq:approx_commute2_sqrt} in Lemma \ref{lem:order_bounds_on_minh_differences}, we conclude that
$$ \| \bR_1 \SA^{1/2} + \UP \bR_2 \|_F
\le \frac{ C \log n }{ n^{1/2} } \text{ w.h.p. } $$
We will establish \eqref{eq:tozero1}, \eqref{eq:tozero2},
and \eqref{eq:tozero3} in order.
To see \eqref{eq:tozero1}, observe that
\begin{equation*}
\sqrt{n} \| (\bA-\bP)\UP(\bW^* \SA^{-1/2} - \SP^{-1/2} \bW^{*}) \|_F
\le \sqrt{n} \| (\bA-\bP) \UP \| \| \bW^* \SA^{-1/2} - \SP^{-1/2} \bW^* \|_F,
\end{equation*}
and Theorem~\ref{thm:lu_peng} together with Lemma~\ref{lem:order_bounds_on_minh_differences} imply that with high probability
$$ \sqrt{n} \| (\bA-\bP)\UP(\bW^* \SA^{-1/2} - \SP^{-1/2} \bW^*) \|_F
\le \frac{ C\log n }{ \sqrt{n} }, $$
which goes to $0$ as $n \rightarrow \infty$.
To show the convergence in \eqref{eq:tozero2}, we recall that
$\UP \SP^{1/2} \bW=\bX$ for some orthogonal matrix $\bW$ and observe that
since the rows of the latent position matrix $\bX$ are necessarily
bounded in Euclidean norm by $1$,
and since the top $d$ eigenvalues of $\bP$ are of order $n$ (recall Lemma \ref{lem:eigs_lower_bound}),
it follows that
\begin{equation}\label{eq:UP2toinfty}
\|\UP\|_{\tti} \le C n^{-1/2} \text{ w.h.p. }
\end{equation}
Next, \eqref{eq:hoeffding_up_upt} and Lemma \ref{lem:eigs_lower_bound} imply that
\begin{equation*} \begin{aligned}
\| (\UP \UP^{\top} (\bA-\bP) \UP \bW^{*} \SA^{-1/2})_h \|
&\le \|\UP\|_{\tti}\| \UP^{\top} (\bA-\bP) \UP \| \| \SA^{-1/2} \| \\
&\le \frac{ C \log n }{ n } \text{ w.h.p.},
\end{aligned} \end{equation*}
which implies~\eqref{eq:tozero2}.
Finally, to establish~\eqref{eq:tozero3},
we must bound the Euclidean norm of the vector
\begin{equation} \label{eq:toughguy}
\left[ (\bI - \UP \UP^{\top})(\bA-\bP) \bR_3 \SA^{-1/2} \right]_h,
\end{equation}
where, as defined above, $\bR_3 = \UA - \UP \bW^{*}$.
Let $\bB_1$ and $\bB_2$ be defined as follows:
\begin{equation}\label{eq:def_B1_B2}
\begin{aligned}
\bB_1&=
(\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top}) \UA \SA^{-1/2} \\
\bB_2&=(\bI - \UP \UP^{\top})(\bA-\bP)\UP(\UP^{\top} \UA - \bW^{*}) \SA^{-1/2}
\end{aligned}
\end{equation}
Recalling that $\bR_3=\UA-\UP \bW^*$, we have
\begin{equation*} \begin{aligned}
(\bI - \UP \UP^{\top})(\bA-\bP) \bR_3 \SA^{-1/2}
&= (\bI - \UP \UP^{\top})(\bA-\bP)(\UA-\UP\UP^{\top} \UA) \SA^{-1/2} \\
&~~~~~~+ (\bI - \UP \UP^{\top})(\bA-\bP)(\UP\UP^{\top} \UA -\UP \bW^*) \SA^{-1/2} \\
&= \bB_1 + \bB_2.
\end{aligned} \end{equation*}
We will bound the Euclidean norm of the
$h$-th row of each of these two matrices on the right-hand side,
from which a triangle inequality will yield our desired bound on the quantity in Equation~\eqref{eq:toughguy}.
Recall that we use $C$ to denote a positive constant,
independent of $n$, which may change from line to line.
Let us first consider
$\bB_2 = (\bI - \UP \UP^{\top})(\bA-\bP)\UP(\UP^{\top} \UA - \bW^{*}) \SA^{-1/2}$.
We have
\begin{equation*}
\| \bB_2 \|_F
\le \| (\bI - \UP \UP^{\top})(\bA-\bP)\UP \|
\| \UP^{\top} \UA - \bW^{*} \|_F \| \SA^{-1/2} \|.
\end{equation*}
By submultiplicativity of the spectral norm and Theorem~\ref{thm:oliveira},
$\| (\bI - \UP \UP^{\top})(\bA-\bP)\UP \| \le C n^{1/2} \log^{1/2} n$
with high probability (and indeed, under our degree assumptions and Theorem \ref{thm:lu_peng}, the $\log n$ factor can be dropped).
From Prop. \ref{prop:uptua_close_rotation} and Lemma \ref{lem:eigs_lower_bound},
respectively, we have with high probability
\begin{equation*}
\| \UP^{\top} \UA - \bW^{*} \|_F \le C n^{-1} \log n
\enspace \text{ and } \enspace
\| \SA^{-1/2} \| \le Cn^{-1/2}
\end{equation*}
Thus, we deduce that with high probability,
\begin{equation} \label{eq:B2bound}
\| \bB_2 \|_F \le \frac{ C \log^{3/2} n }{ n }
\end{equation}
from which it follows that $\| \sqrt{n} \bB_2 \|_F \inprob 0$,
and hence $\| \sqrt{n} (\bB_2)_h \| \inprob 0$.
Turning our attention to $\bB_1$, and recalling that $\UA^{\top} \UA= \mathbf{I}$, we note that
\begin{equation*} \begin{aligned}
\| (\bB_1)_h \| &=
\left\| \left[ (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})
\UA \SA^{-1/2} \right]_h \right\| \\
&= \left\| \left[ (\mathbf{I} - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})
\UA \UA^{\top} \UA \SA^{-1/2} \right]_h \right\| \\
&\le \left\| \UA \SA^{-1/2} \right\|
\left\| \left[ (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right]_h \right\|.
\end{aligned} \end{equation*}
Let $\epsilon > 0$ be a constant. We will show that
\begin{equation} \label{eq:inprobwewant}
\lim_{n \rightarrow \infty}
\Pr\left[ \| \sqrt{n}(\bB_1)_h \| > \epsilon \right]
= 0.
\end{equation}
For ease of notation, define
$$ \bE_1 = (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}. $$
We will show that
\begin{equation} \label{eq:E1inprob}
\lim_{n \rightarrow \infty}
\Pr\left[ \sqrt{n} \left\| \left[ \bE_1 \right]_h \right\|
> n^{1/4} \right]= 0,
\end{equation}
which will imply \eqref{eq:inprobwewant}
since, by Lemma \ref{lem:eigs_lower_bound},
$\| \UA \SA^{-1/2} \| \le Cn^{-1/2}$ with high probability.
Let $\bQ \in \R^{n \times n}$ be any permutation matrix.
We observe that
$$ \bQ \UP \UP^{\top}\bQ^{\top} \bQ\bP \bQ^{\top}=\bQ\bP \bQ^{\top}, $$
and thus $\bQ\UP \UP^{\top}\bQ^{\top}$ is a projection matrix for $\bQ\bP \bQ^{\top}$
if and only if $\UP \UP^{\top}$ is a projection matrix for $\bP$.
A similar argument applies to the matrix $\UA \UA^{\top}$.
Combining this with the exchangeability structure of the matrix
$\bA - \bP$, it follows that the Frobenius norms
of the rows of $\bE_1$ are equidistributed.
This row-exchangeability for $\bE_1$ implies that
$ n \E \| (\bE_1)_h \|^2 = \E \| \bE_1 \|_F^2 $.
Applying Markov's inequality,
\begin{equation} \label{eq:markov} \begin{aligned}
\Pr\left[ \left\|\sqrt{n}\left[ \bE_1 \right]_h \right\| > t \right]
&\le \frac{ n \E \left\| \left[
(\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right]_h \right\|^2 }{ t^2 } \\
&= \frac{ \E \left\| (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right\|_F^2 }{ t^2 }.
\end{aligned} \end{equation}
We will proceed by showing that with high probability,
\begin{equation} \label{eq:intermed}
\left\| (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right\|_F \le C \log n,
\end{equation}
whence choosing $t = n^{1/4}$ in \eqref{eq:markov} yields that
$$ \lim_{n\rightarrow \infty}
\Pr\left[ \left\|\sqrt{n}\left[ (\bI - \UP \UP^{\top})(\bA-\bP)
(\bI-\UP\UP^{\top})\UA \UA^{\top} \right]_h \right\| > n^{1/4} \right]
= 0, $$
and \eqref{eq:inprobwewant} will follow.
We have
\begin{equation*}
\left\| (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right\|_F
\le \| \bA-\bP \| \| \UA - \UP\UP^{\top} \UA \|_F \| \UA \|
\end{equation*}
Theorem \ref{thm:lu_peng} and Lemma \ref{lem:eigs_lower_bound} imply that the first term in this product
is at most $C n^{1/2}$ with high probability,
and the final term in this product is, trivially, at most $1$.
To bound the second term, we will follow reasoning similar to that
in Lemma \ref{lem:order_bounds_on_minh_differences}, combined with the Davis-Kahan theorem.
The Davis-Kahan Theorem \cite{DavKah1970,Bhatia1997}
implies that for a suitable constant $C > 0$,
\begin{equation*}
\| \UA \UA^{\top} - \UP\UP^{\top} \|
\le \frac{ C \| \bA - \bP \| }{ \lambda_d(\bP) }.
\end{equation*}
By Theorem 2 in \cite{DK_usefulvariant},
there exists an orthogonal matrix $\bW \in \R^{d \times d}$ such that
$$ \| \UA - \UP \bW \|_F \le C \| \UA \UA^{\top} - \UP \UP^{\top} \|_F. $$
We observe further that the multivariate linear least squares problem
\begin{equation*}
\min_{\bT \in \R^{d \times d}} \| \UA - \UP \bT \|_F^2
\end{equation*}
is solved by $\bT = \UP^{\top} \UA$.
Combining all of the above, we find that
\begin{equation*} \begin{aligned}
\| \UA - \UP\UP^{\top} \UA \|_F^2
&\le \| \UA - \UP \bW \|_F^2
\le C \| \UA \UA^{\top} - \UP \UP^{\top} \|_F^2 \\
&\le C \| \UA \UA^{\top} - \UP \UP^{\top} \|^2
\le \left(\frac{ C \| \bA - \bP \| }{ \lambda_d(\bP) }\right)^2
\le \frac{ C}{ n } \enspace \text{ w.h.p. }
\end{aligned} \end{equation*}
We conclude that
\begin{equation*} \begin{aligned}
\left\| (\bI - \UP \UP^{\top})(\bA-\bP)(\bI-\UP\UP^{\top})\UA \UA^{\top}
\right\|_F
&\le C\| \bA-\bP \| \| \UA - \UP\UP^{\top} \UA \|_F \| \UA \| \\
&\le C \log n \text{ w.h.p. },
\end{aligned} \end{equation*}
which implies \eqref{eq:intermed}, as required,
and thus the convergence in \eqref{eq:tozero3} is established,
completing the proof.
\end{proof}
Next, recall the inherent nonidentifiability in the RDPG model: suppose the ``true'' latent positions are some matrix $\bX$. Then
$\bX = \bU_{\mathbf{P}} \bS_{\mathbf{P}}^{1/2} \bW$ for some suitably-chosen orthogonal matrix $\bW$. We now consider the asymptotic distribution of
$$ n^{1/2} \Wn^{\top} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i$$
conditional on $\bX_i={\bf x}_i$. Because we can write suitable terms of $\bA-\bP$ as the sum of independent random variables, we can invoke the Lindeberg-Feller Central Limit Theorem to establish the asymptotic normality of
$$ n^{1/2} \Wn^{\top} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i$$
as follows.
\begin{lemma} \label{lem:inlaw}
Fix some $i \in [n]$.
Conditional on $\bX_i = \bx_i \in \R^d$,
there exists a sequence of $d$-by-$d$ orthogonal matrices $\{ \Wn \}$ such that
$$ n^{1/2} \Wn^{\top} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i
\inlaw \calN( 0, \bSigma(\bx_i) ), $$
where $\bSigma(\bx_i) \in \R^{d \times d}$ is a covariance matrix that
depends on $\bx_i$.
\end{lemma}
\begin{proof}
For each $n$, choose orthogonal $\Wn \in \R^{d \times d}$
so that $\bX = \mathbf{U}_{\mathbf{P}} \mathbf{S}_{\mathbf{P}}^{1/2} \Wn$.
At least one such $\Wn$ exists for each value of $n$, since, as discussed
previously, the true latent positions $\bX$ are specified
only up to some orthogonal transformation.
We then have
\begin{equation*} \begin{aligned}
n^{1/2} \Wn^{\top} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i
&= n^{1/2} \Wn^{\top} \SP^{-1} \Wn \left[ \bA \bX - \bP \bX \right]_i \\
&= n^{1/2} \Wn^{\top} \SP^{-1} \Wn \left(
\sum_{j=1}^n
\left(\bA_{ij} - \bP_{ij} \right)
\bX_j \right) \\
&= n^{1/2} \Wn^{\top} \SP^{-1} \Wn \left(
\sum_{j \neq i} \left(\bA_{ij} - \bP_{ij}\right) \bX_j
\right) - n^{1/2} \Wn^{\top} \SP^{-1} \Wn \bP_{ii} \bX_i \\
&= \left( n \Wn^{\top} \SP^{-1} \Wn \right)
\left[ n^{-1/2}
\sum_{j \neq i} \left(\bA_{ij} - \bP_{ij}\right) \bX_j \right] - n \Wn^{\top} \SP^{-1} \Wn \frac{ \bP_{ii} \bX_i }{ n^{1/2} }.
\end{aligned} \end{equation*}
Conditioning on $\bX_i = \bx_i \in \R^d$, we first observe that
$
\tfrac{ \bP_{ii} }{ n^{1/2} } \bX_i
= \frac{ \bx_i^{\top} \bx_i }{ n^{1/2} } \bx_i \rightarrow 0$ almost surely.
Furthermore,
\begin{equation*} \begin{aligned}
n^{-1/2} \sum_{j \neq i} \left(\bA_{ij} - \bP_{ij}\right) \bX_j
= n^{-1/2} \sum_{j \neq i} \left(
\bA_{ij} - \bX_j^{\top} \bx_i\right) \bX_j
\end{aligned} \end{equation*}
is a scaled sum of $n-1$ independent $0$-mean random variables,
each with covariance matrix given by
$$ \Sigmatilde(\bx_i) =
\E \left[ \left( \bx_i^{\top} \bX_j - (\bx_i^{\top} \bX_j)^2 \right) \bX_j \bX_j^{\top} \right].
$$
The multivariate central limit theorem thus implies that
\begin{equation} \label{eq:inlaw1}
n^{-1/2} \sum_{j \neq i} \left(
\bA_{ij} - \bX_j^{\top} \bx_i \right) \bX_j
\inlaw \calN( \zeromx, \Sigmatilde(\bx_i) ) .
\end{equation}
Finally, by the strong law of large numbers,
$$ n^{-1} \mathbf{W}_n^{\top} \mathbf{S}_{\mathbf{P}_n} \mathbf{W}_n = \frac{1}{n} \bX^{\top} \bX \rightarrow \bDelta \text{ a.s. } $$
and thus $(n \mathbf{W}_n^{\top} \mathbf{S}^{-1}_{\mathbf{P}_n} \mathbf{W}_n) \rightarrow \bDelta^{-1}$ almost surely.
Combining this fact with~\eqref{eq:inlaw1},
the multivariate version of Slutsky's theorem yields
$$ n^{1/2} \Wn^{\top} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i
\inlaw \calN\left(\zeromx, \bSigma(\bx_i) \right)
$$
where $\bSigma(\bx_i) = \bDelta^{-1} \Sigmatilde(\bx_i) \bDelta^{-1}$.
Integrating over the possible values of $\bx_i$ with respect to distribution
$F$ completes the proof.
\end{proof}
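For illustration, consider the simplest special case $d = 1$ with $F = \delta_{x_0}$ a point mass at some $x_0 \in (0,1)$, so that $\bA$ is an Erd\H{o}s--R\'enyi graph with edge probability $x_0^2$. Then $\bDelta = x_0^2$ and
\begin{equation*}
\Sigmatilde(x_0) = (x_0^2 - x_0^4)\, x_0^2,
\qquad
\bSigma(x_0) = \bDelta^{-1} \Sigmatilde(x_0) \bDelta^{-1} = 1 - x_0^2.
\end{equation*}
This agrees with a direct calculation: in this case $\UP = n^{-1/2} \bm{1}$ and $\SP = n x_0^2$, so
$$ n^{1/2} \left[ (\bA - \bP) \UP \SP^{-1/2} \right]_i
= \frac{1}{x_0 \sqrt{n}} \sum_{j} \left( \bA_{ij} - x_0^2 \right), $$
whose variance converges to $x_0^2 (1 - x_0^2)/x_0^2 = 1 - x_0^2$, as claimed.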
Lemmas \ref{lem:stringent_control_residuals} and \ref{lem:inlaw} are the main ingredients in the proof of Theorem \ref{thm:clt_orig_but_better}, which now follows easily:
\begin{proof}
[Proof of Theorem \ref{thm:clt_orig_but_better}].
We start with the following decomposition that was originally used in the proof of Theorem~\ref{thm:minh_frob}.
\begin{equation} \label{eq:expansion} \begin{aligned}
\sqrt{n}\left(\UA \SA^{1/2} - \UP \SP^{1/2} \bW^{*}\right)
&= \sqrt{n}(\bA-\bP)\UP \SP^{-1/2} \bW^{*}
+ \sqrt{n}(\bA-\bP)\UP(\bW^{*} \SA^{-1/2} - \SP^{-1/2} \bW^{*}) \\
&~~~~~~-\sqrt{n} \UP \UP^{\top} (\bA-\bP) \UP \bW^{*}\SA^{-1/2} \\
&~~~~~~+ \sqrt{n}(\bI - \UP \UP^{\top})(\bA-\bP) \bR_3 \SA^{-1/2}
+ \sqrt{n}\bR_1 \SA^{1/2} + \sqrt{n}\UP \bR_2.
\end{aligned} \end{equation}
Now given any index $i$, Lemma~\ref{lem:stringent_control_residuals} can be used to bound the $\ell_2$ norm of the $i$-th row of $(\bA-\bP)\UP(\bW^{*} \SA^{-1/2} - \SP^{-1/2} \bW^{*})$, of $\UP \UP^{\top} (\bA-\bP) \UP \bW^{*}\SA^{-1/2}$, and of $(\bI - \UP \UP^{\top})(\bA-\bP) \bR_3 \SA^{-1/2}$. The $\ell_2$ norms of the $i$-th rows of $\bR_1 \SA^{1/2}$ and $\UP \bR_2$ can be bounded from above by $\|\bR_1 \SA^{1/2}\|_{\tti}$ and $\|\UP \bR_2\|_{\tti}$, respectively. Now since $\|\mathbf{U}_{\mathbf{P}}\|_{\tti} = O(n^{-1/2})$ by Eq.~\eqref{eq:UP2toinfty}, we conclude that $\sqrt{n} \|\bR_1 \SA^{1/2}\|_{\tti} = O(n^{-1/2} \log^{1/2}{n})$ and $\sqrt{n} \|\UP \bR_2\|_{\tti} = O(n^{-1/2} \log^{1/2}{n})$. Therefore, for any fixed index $i$, we have
$$ \sqrt{n}\left(\UA \SA^{1/2} - \UP \SP^{1/2} \bW^{*}\right)_i = \sqrt{n}((\bA-\bP)\UP \SP^{-1/2} \bW^{*})_{i} + O(n^{-1/2} \log^{1/2} n)$$
with high probability. Since $\mathbf{X} = \mathbf{U}_{\mathbf{P}} \mathbf{S}_{\mathbf{P}}^{1/2} \bW$, we can rewrite the above expression as
$$ \sqrt{n}\left(\UA \SA^{1/2} (\bW^{*})^{\top} \mathbf{W} - \UP \SP^{1/2} \mathbf{W} \right)_i = \sqrt{n}((\bA-\bP)\UP \SP^{-1/2} \mathbf{W})_{i} + O(n^{-1/2} \log^{1/2} n).$$ Lemma~\ref{lem:inlaw} then establishes the asymptotic normality of $\sqrt{n}((\bA-\bP)\UP \SP^{-1/2} \mathbf{W})_{i}$ as desired.
\end{proof}
We now turn our attention to a brief sketch of the proof of the central limit theorem for the Laplacian spectral embedding.
\subsection*{Sketch of proof of Theorem~\ref{THM:LSE}}
We present in this subsection a sketch of the main ideas in the proof of Theorem~\ref{THM:LSE}; detailed proofs are given in \cite{tang_lse}.
We first introduce some additional notation. For $(\mathbf{X}_n, \mathbf{A}_n) \sim \mathrm{RDPG}(F)$, let $\mathbf{T}_n = \mathrm{diag}(\mathbf{P}_n \bm{1})$ be the $n \times n$ diagonal matrix whose diagonal entries are the {\em expected} vertex degrees. Then defining $\tilde{\bf X}_n = \mathbf{T}_n^{-1/2} \mathbf{X}_n$, and noting that $\tilde{\mathbf{X}}_n \tilde{\mathbf{X}}_n^{\top} = \mathcal{L}(\mathbf{P}_n) = \mathbf{T}_n^{-1/2} \mathbf{P}_n \mathbf{T}_n^{-1/2}$, Theorem~\ref{THM:LSE} depends on showing that there exists an orthogonal matrix $\mathbf{W}_n$ such that
\begin{equation}
\label{eq:LSE-main1}
\breve{\bX}_n {\bf W}_n - \tilde{\bf X}_n = \mathbf{T}_n^{-1/2} (\mathbf{A}_n - \mathbf{P}_n) \mathbf{T}_n^{-1/2}
\tilde{\mathbf{X}}_n (\tilde{\mathbf{X}}_n^{\top} \tilde{\mathbf{X}}_n)^{-1} + \tfrac{1}{2}(\mathbf{I} - \mathbf{D}_n \mathbf{T}_n^{-1}) \tilde{\mathbf{X}}_n + \mathbf{R}_n
\end{equation}
where $\|\mathbf{R}_n\|_{F} = O(n^{-1})$ with high probability. The motivation behind Eq.~\eqref{eq:LSE-main1} is as follows.
Given $\tilde{\mathbf{X}}_n$, the entries of the right hand side of Eq.~\eqref{eq:LSE-main1}, except for the term $\mathbf{R}_n$, can be expressed explicitly in terms of linear combinations of the entries $a_{ij} - p_{ij}$ of $\mathbf{A}_n - \mathbf{P}_n$. This is in contrast with the left hand side of Eq.~\eqref{eq:LSE-main1}, which depends on the quantities $\mathbf{U}_{\mathcal{L}(\mathbf{A}_n)}$ and $\mathbf{S}_{\mathcal{L}(\mathbf{A}_n)}$ (recall Definition~\ref{def:LSE}). Since the quantities $\mathbf{U}_{\mathcal{L}(\mathbf{A}_n)}$ and $\mathbf{S}_{\mathcal{L}(\mathbf{A}_n)}$ cannot be expressed explicitly in terms of the entries of $\mathbf{A}_n$ and $\mathbf{P}_n$, we conclude that the right hand side of Eq.~\eqref{eq:LSE-main1} is simpler to analyze.
Once Eq.~\eqref{eq:LSE-main1} is established, we can derive Theorem~\ref{THM:LSE} as follows.
Let $\xi_i$ denote the $i$-th row of $n ( \breve{\mathbf{X}}_n \bW_n - \tilde{\mathbf{X}}_n)$ and let $r_i$ denote the $i$-th row of $\mathbf{R}_n$. Eq.~\eqref{eq:LSE-main1} then implies
\begin{equation*}
\begin{split}
\xi_i &= (\tilde{\mathbf{X}}_n^{\top} \tilde{\mathbf{X}}_n)^{-1} \frac{n}{\sqrt{t_i}} \Bigl( \sum_{j} \frac{a_{ij} - p_{ij}}{\sqrt{t_j}} (\tilde{\bX}_n)_j \Bigr) + \frac{n (t_i - d_i)}{2t_i} (\tilde{\bX}_n)_i + n r_i \\
&= (\tilde{\mathbf{X}}_n^{\top} \tilde{\mathbf{X}}_n)^{-1} \frac{\sqrt{n}}{\sqrt{t_i}} \Bigl( \sum_{j} \frac{\sqrt{n} (a_{ij} - p_{ij}) (\bX_n)_j}{ t_j} \Bigr) - \frac{n (\bX_n)_i}{2 t_i^{3/2}} \sum_{j} (a_{ij} - p_{ij}) + n r_i \\
&= \frac{\sqrt{n}}{\sqrt{t_i}} \sum_{j} \frac{(a_{ij} - p_{ij})}{\sqrt{n}} \Bigl(\frac{(\tilde{\mathbf{X}}_n^{\top} \tilde{\mathbf{X}}_n)^{-1} (\bX_n)_j}{ t_j/n} - \frac{(\bX_n)_{i}}{2 t_i/n} \Bigr) + n r_i
\end{split}
\end{equation*}
where $a_{ij}$ and $p_{ij}$ are the $ij$-th entries of $\mathbf{A}$ and $\mathbf{P}$, respectively, and $t_i$ is the $i$-th diagonal entry of $\mathbf{T}_n$.
We can then show that $n r_i \overset{\mathrm{d}}{\longrightarrow} 0$. Indeed, there are $n$ rows in $\mathbf{R}_n$ and $\|\mathbf{R}_n\|_{F} = O(n^{-1})$; hence, on average, for each index $i$, $\|r_i\|^{2} = O(n^{-3})$ with high probability (a more precise argument similar to that used in proving Lemma~\ref{lem:stringent_control_residuals} is needed to establish this rigorously).
Furthermore, $$t_i/n = \sum_{j} (\bX_n)_i^{\top} (\bX_n)_j/n \overset{\mathrm{a.s.}}{\longrightarrow} (\bX_n)_{i}^{\top} \bm{\mu}$$ as $n \rightarrow \infty$. Finally, $$\tilde{\mathbf{X}}_n^{\top} \tilde{\mathbf{X}}_n = \sum_{i} \bigl((\bX_n)_{i} (\bX_n)_{i}^{\top}/(\sum_{j} (\bX_n)_i^{\top} (\bX_n)_j)\bigr),$$ and this can be shown to converge to $\tilde{\Delta} = \mathbb{E}\bigl[\tfrac{\bX_1 \bX_1^{\top}}{\bX_1^{\top} \bm{\mu}}\bigr]$ as $n \rightarrow \infty$. We therefore have, after additional algebraic manipulations, that
\begin{equation*}
\begin{split}
\xi_i &= \frac{\sqrt{n}}{\sqrt{t_i}} \sum_{j} \frac{(a_{ij} - p_{ij})}{\sqrt{n}} \Bigl(\frac{\tilde{\Delta}^{-1} (\bX_n)_j}{ (\bX_n)_j^{\top} \bm{\mu}} - \frac{(\bX_n)_i}{2 (\bX_n)_{i}^{\top} \bm{\mu}} \Bigr) + o(1) \\
&= \frac{\sqrt{n}}{\sqrt{t_i}} \sum_{j} \frac{(a_{ij} - (\bX_n)_i^{\top} (\bX_n)_j)}{\sqrt{n}} \Bigl(\frac{\tilde{\Delta}^{-1} (\bX_n)_j}{ (\bX_n)_j^{\top} \bm{\mu}} - \frac{(\bX_n)_{i}}{2 (\bX_n)_i^{\top} \bm{\mu}} \Bigr) + o(1)
\end{split}
\end{equation*}
with high probability.
Conditioning on $(\bX_n)_{i} = \bm{x}$, the above expression for $\xi_i$ is roughly a sum of independent and identically distributed mean $0$ random variables. The multivariate central limit theorem can then be applied to the above expression for $\xi_i$, thereby yielding Theorem~\ref{THM:LSE}.
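For a concrete illustration, specialize to $d = 1$ and $F = \delta_{x_0}$ for some $x_0 \in (0,1)$, i.e., the Erd\H{o}s--R\'enyi case with edge probability $x_0^2$. Then $\bm{\mu} = x_0$, $\tilde{\Delta} = 1$, and $t_i/n \rightarrow x_0^2$ almost surely, so each summand above reduces to
$$ \frac{a_{ij} - x_0^2}{\sqrt{n}} \Bigl( \frac{1}{x_0} - \frac{1}{2 x_0} \Bigr) = \frac{a_{ij} - x_0^2}{2 x_0 \sqrt{n}}, $$
and since $\sqrt{n}/\sqrt{t_i} \rightarrow 1/x_0$, the limiting distribution of $\xi_i$ is normal with mean $0$ and variance
$$ \frac{1}{x_0^2} \cdot \frac{x_0^2 (1 - x_0^2)}{4 x_0^2} = \frac{1 - x_0^2}{4 x_0^2}. $$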
We now sketch the derivation of Eq.~\eqref{eq:LSE-main1}. For simplicity of notation, we shall ignore the subscript $n$ in the matrices $\mathbf{A}_n$, $\mathbf{X}_n$, $\mathbf{P}_n$ and related matrices. First, consider the following expression.
\begin{equation}
\label{eq:sketch1}
\begin{split}
\mathbf{U}_{\mathcal{L}(\mathbf{A})} \mathbf{S}_{\mathcal{L}(\mathbf{A})}^{1/2} - \mathbf{U}_{\mathcal{L}(\mathbf{P})} &\mathbf{S}_{\mathcal{L}(\mathbf{P})}^{1/2} \mathbf{U}_{\mathcal{L}(\mathbf{P})}^{\top} \mathbf{U}_{\mathcal{L}(\mathbf{A})} = \mathcal{L}(\mathbf{A}) \mathbf{U}_{\mathcal{L}(\mathbf{A})} \mathbf{S}_{\mathcal{L}(\mathbf{A})}^{-1/2} - \mathcal{L}(\mathbf{P}) \mathbf{U}_{\mathcal{L}(\mathbf{P})} \mathbf{S}_{\mathcal{L}(\mathbf{P})}^{-1/2} \mathbf{U}_{\mathcal{L}(\mathbf{P})}^{\top} \mathbf{U}_{\mathcal{L}(\mathbf{A})} \\
&= \mathcal{L}(\mathbf{A}) \ULA \ULA^{\top} \ULA \SLA^{-1/2} - \mathcal{L}(\mathbf{P}) \ULP \SLP^{-1/2} \ULP^{\top} \ULA.
\end{split}
\end{equation}
Now $\mathcal{L}(\mathbf{A})$ is concentrated around $\mathcal{L}(\mathbf{P})$: namely, in the current setting,
$$\|\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})\| = O(n^{-1/2})$$ with high probability (see Theorem~2 in \cite{lu13:_spect}). Since $\|\mathcal{L}(\mathbf{P})\| = \Theta(1)$ and the non-zero eigenvalues of $\mathcal{L}(\mathbf{P})$ are all of order $\Theta(1)$, this again implies, by the Davis-Kahan theorem, that the eigenspace spanned by the $d$ largest eigenvalues of $\mathcal{L}(\mathbf{A})$ is ``close'' to that spanned by the $d$ largest eigenvalues of $\mathcal{L}(\mathbf{P})$. More precisely, $\|\ULA \ULA^{\top} - \ULP \ULP^{\top} \| = O(n^{-1/2})$ with high probability, and
\begin{equation*}
\begin{split}
\ULA \SLA^{1/2} - \ULP \SLP^{1/2} \ULP^{\top} \ULA &= \mathcal{L}(\mathbf{A}) \ULP \ULP^{\top} \ULA \SLA^{-1/2} \\ & - \mathcal{L}(\mathbf{P}) \ULP \SLP^{-1/2} \ULP^{\top} \ULA + \mathbf{R}_n
\end{split}
\end{equation*}
where $\|\mathbf{R}_n\| = O(n^{-1})$ with high probability. In addition, $\|\ULA \ULA^{\top} - \ULP \ULP^{\top} \| = O(n^{-1/2})$ also implies that
there exists an orthogonal matrix $\mathbf{W}^{*}$ such that $\|\ULP^{\top} \ULA - \mathbf{W}^{*} \| = O(n^{-1})$ with high probability.
We next consider the terms $\SLP^{-1/2} \ULP^{\top} \ULA$ and $\ULP^{\top} \ULA \SLA^{-1/2}$. Note that both are $d \times d$ matrices;
furthermore, since $\SLA$ and $\SLP$ are diagonal matrices, the $ij$-th entry of $\SLP^{-1/2} \ULP^{\top} \ULA - \ULP^{\top} \ULA \SLA^{-1/2}$ can be written as the product $\zeta_{ij} h_{ij}$, where $\zeta_{ij}$ is the $ij$-th entry of $\SLP \ULP^{\top} \ULA - \ULP^{\top} \ULA \SLA$ and the $h_{ij}$ are functions of $\lambda_{i}(\mathcal{L}(\bA))$ and $\lambda_{j}(\mathcal{L}(\bP))$. In particular, $|h_{ij}| < C$ for some positive constant $C$ for all $n$. We then have that
\begin{equation*}
\begin{split}\SLP \ULP^{\top} \ULA - \ULP^{\top} \ULA \SLA &= \ULP^{\top} (\mathcal{L}(\bP) - \mathcal{L}(\bA)) \ULA \\ &= \ULP^{\top} (\mathcal{L}(\bP) - \mathcal{L}(\bA)) \ULP \ULP^{\top} \ULA \\ &+ \ULP^{\top}(\mathcal{L}(\bP) - \mathcal{L}(\bA)) (\mathbf{I} - \ULP \ULP^{\top}) \ULA.
\end{split}
\end{equation*}
Now, conditioning on $\bP$, the $ij$-th entry of $\ULP^{\top} (\mathcal{L}(\bP) - \mathcal{L}(\bA)) \ULP$ can be written as a linear combination of the entries of $\mathbf{A} - \mathbf{P}$ (which are independent) and the rows of $\mathbf{X}$; hence, it can be bounded using Hoeffding's inequality.
Meanwhile, the term
$$\ULP^{\top}(\mathcal{L}(\bP) - \mathcal{L}(\bA)) (\mathbf{I} - \ULP \ULP^{\top}) \ULA$$ can be bounded using the Davis-Kahan theorem together with the bound on the spectral norm $\|\mathcal{L}(\bA) - \mathcal{L}(\bP)\|$. We therefore arrive at the important fact that
$$\|\SLP \ULP^{\top} \ULA - \ULP^{\top} \ULA \SLA\|_{F} = O(n^{-1})$$
with high probability, and hence
$$\|\SLP^{-1/2} \ULP^{\top} \ULA - \ULP^{\top} \ULA \SLA^{-1/2}\| = O(n^{-1})$$ with high probability.
We can thus interchange $\ULP^{\top} \ULA$ and $\SLA^{-1/2}$ in the expression in Eq.~\eqref{eq:sketch1} and then replace $\ULP^{\top} \ULA$ by the orthogonal matrix $\mathbf{W}^{*}$, thereby obtaining
$$ \ULA \SLA^{1/2} - \ULP \SLP^{1/2} \mathbf{W}^{*} = (\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})) \ULP \SLP^{-1/2} \mathbf{W}^{*} + \tilde{\mathbf{R}}_n $$
where $\|\tilde{\mathbf{R}}_n\| = O(n^{-1})$ with high probability.
Since $$\tilde{\mathbf{X}} \tilde{\mathbf{X}}^{\top} = \mathcal{L}(\mathbf{P}) = \ULP \SLP \ULP^{\top},$$
we have $\tilde{\mathbf{X}} = \ULP \SLP^{1/2} \tilde{\bW}$ for some orthogonal matrix $\tilde{\mathbf{W}}$.
Therefore,
\begin{equation*}
\begin{split}
\ULA \SLA^{1/2} - \tilde{\mathbf{X}} \tilde{\mathbf{W}}^{\top} \mathbf{W}^{*} & =
(\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})) \ULP \SLP^{-1/2} \mathbf{W}^{*} + \tilde{\bR}_n\\
& = (\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})) \ULP \SLP^{1/2} \tilde{\mathbf{W}} \tilde{\mathbf{W}}^{\top} \SLP^{-1} \tilde{\mathbf{W}} \tilde{\mathbf{W}}^{\top} \mathbf{W}^{*} + \tilde{\bR}_n\\
& = (\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})) \tilde{\mathbf{X}} (\tilde{\mathbf{X}}^{\top} \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{W}}^{\top} \mathbf{W}^{*} + \tilde{\bR}_n.
\end{split}
\end{equation*}
Equivalently,
\begin{equation}
\label{eq:sketch-1}
\ULA \SLA^{1/2} (\mathbf{W}^{*})^{\top} \tilde{\mathbf{W}} - \tilde{\mathbf{X}} = (\mathcal{L}(\mathbf{A}) - \mathcal{L}(\mathbf{P})) \tilde{\mathbf{X}} (\tilde{\mathbf{X}}^{\top} \tilde{\mathbf{X}})^{-1} + \tilde{\bR}_n (\mathbf{W}^{*})^{\top} \tilde{\mathbf{W}}.
\end{equation}
The right hand side of Eq.~\eqref{eq:sketch-1} --- except for the residual term $\tilde{\bR}_n (\mathbf{W}^{*})^{\top} \tilde{\mathbf{W}}$ which has norm of order $O(n^{-1})$ with high probability --- can now be written explicitly in terms of the entries of $\mathbf{A}$ and $\mathbf{P}$. However, since $$\mathcal{L}(\mathbf{A}) = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2},$$ the entries of the right hand side of Eq.~\eqref{eq:sketch-1} are neither linear nor affine combinations of the entries of $\mathbf{A} - \mathbf{P}$. Nevertheless, a Taylor-series expansion of the entries of $\mathbf{D}^{-1/2}$ allows us to conclude that
$$\|\mathbf{D}^{-1/2} - \mathbf{T}^{-1/2} - \tfrac{1}{2} \mathbf{T}^{-3/2} (\mathbf{T} - \mathbf{D})\| = O(n^{-3/2})$$ with high probability.
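To see this entrywise, write $d_i = t_i \bigl( 1 + \tfrac{d_i - t_i}{t_i} \bigr)$ for each diagonal entry and Taylor expand:
$$ d_i^{-1/2} = t_i^{-1/2} - \tfrac{1}{2} t_i^{-3/2} (d_i - t_i) + O\bigl( t_i^{-5/2} (d_i - t_i)^2 \bigr). $$
Since $t_i = \Theta(n)$ while $|d_i - t_i| = O(\sqrt{n \log n})$ with high probability by Hoeffding's inequality, the remainder term is of order $n^{-3/2}$, up to logarithmic factors, uniformly over the diagonal entries.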
Substituting this into Eq.~\eqref{eq:sketch-1} yields, after a few further algebraic simplifications, Eq.~\eqref{eq:LSE-main1}.
\bibliographystyle{plain}
namespace UnrealBuildTool.Rules
{
	// Build rules for the CodeLite source code access module, which lets the
	// engine open and navigate source files using the CodeLite IDE.
	public class CodeLiteSourceCodeAccess : ModuleRules
	{
		public CodeLiteSourceCodeAccess(TargetInfo Target)
		{
			PrivateDependencyModuleNames.AddRange(
				new string[]
				{
					"Core",
					"SourceCodeAccess",
					"DesktopPlatform",
				}
			);

			// Editor builds also need HotReload so modules can be recompiled in place.
			if (UEBuildConfiguration.bBuildEditor)
			{
				PrivateDependencyModuleNames.Add("HotReload");
			}
		}
	}
}
The 2022 Trofeo Laigueglia was the 59th edition of the Trofeo Laigueglia. It was held on 2 March 2022 over a 202 km course starting and finishing in Laigueglia, in Liguria. The race was part of the UCI ProSeries, rated as category 1.Pro.

The overall winner was the Slovenian Jan Polanc (UAE Team Emirates), who beat his teammates Juan Ayuso and Alessandro Covi, who finished second and third respectively.

Teams

The organizers invited 25 teams to take part in this race: eight WorldTeams, seven ProTeams, eight continental teams and one national selection.

Final classification

References

External links

Trofeo Laigueglia
2022 cycling competitions
# A quantum network of clocks

## Abstract

The development of precise atomic clocks plays an increasingly important role in modern society. Shared timing information constitutes a key resource for navigation with a direct correspondence between timing accuracy and precision in applications such as the Global Positioning System. By combining precision metrology and quantum networks, we propose a quantum, cooperative protocol for operating a network of geographically remote optical atomic clocks. Using nonlocal entangled states, we demonstrate an optimal utilization of global resources, and show that such a network can be operated near the fundamental precision limit set by quantum theory. Furthermore, the internal structure of the network, combined with quantum communication techniques, guarantees security both from internal and external threats. Realization of such a global quantum network of clocks may allow construction of a real-time single international time scale (world clock) with unprecedented stability and accuracy.

## Acknowledgements

We are grateful to T. Rosenband, V. Vuletić, J. Borregaard and T. Nicholson for enlightening discussions. This work was supported by NSF, CUA, ITAMP, HQOC, JILA PFC, NIST, DARPA QuASAR and Quiness programs, the Alfred P. Sloan Foundation, the Packard Foundation, ARO MURI, and the ERC grant QIOS (grant no. 306576); M.B. acknowledges support from NDSEG and NSF GRFP. It is dedicated to R. Blatt and P. Zoller on the occasion of their 60th birthday, when initial ideas for this work were formed.

## Author information

All authors contributed extensively to the work presented in this paper. Correspondence to M. D. Lukin. The authors declare no competing financial interests.

Kómár, P., Kessler, E., Bishof, M. et al. A quantum network of clocks. Nature Physics 10, 582–587 (2014). https://doi.org/10.1038/nphys3000
# Number of distinct scheme structures on a set

Given a cardinal number $|X|$, how many isomorphism classes of schemes with the cardinality of the set of points equal to $|X|$ are there?

**Answer.** For any cardinal $\kappa \neq 0$, there is a proper class of schemes of cardinality $\kappa$ up to isomorphism. (In the case $\kappa = 0$, there is a unique empty scheme up to isomorphism, namely the spectrum of the zero ring.)

Note first that there is a proper class of schemes with just $1$ point. Indeed, there are fields of every infinite cardinality, and for any field $k$, $\text{Spec}(k)$ has only one point. This observation easily extends to the case when $\kappa$ is nonzero and finite, since for any field $k$, $\text{Spec}(k^n)$ has $n$ points.

In the case when $\kappa$ is infinite, we cannot handle arbitrary $\kappa$ with an infinite product of fields. For example, the points of $\text{Spec}(k^\mathbb{N})$ are in bijection with the ultrafilters on $\mathbb{N}$, so $|\text{Spec}(k^\mathbb{N})| = 2^{2^{\aleph_0}}$.

Instead, we can use the fact that $\kappa = \kappa+1$ when $\kappa$ is infinite. Fix an algebraically closed field $F$ of cardinality $\kappa$. For any field $k$, we have $|\text{Spec}(F[x]\times k)| = |\text{Spec}(F[x])\sqcup \text{Spec}(k)| = \kappa$, since $\text{Spec}(F[x])$ has one point for every element of $F$, together with a single generic point, and $\text{Spec}(k)$ has a single point. So again we have a proper class of schemes with underlying set of cardinality $\kappa$.

• Comment (Denis Nardin): You could also just take the disjoint union of $\kappa$ copies of $\mathrm{Spec}\,k$ (of course your construction is nicer because it produces an affine scheme)
Livia Drusilla (Iulia Augusta; born 30 January 58 BC, died AD 29) was a Roman patrician, the daughter of Marcus Livius Drusus Claudianus and Alfidia.

Her mother, Alfidia, came from a family of little distinction, but her father, born Appius Claudius Pulcher and adopted in childhood by Marcus Livius Drusus, tribune in 91 BC, secured for her the prestige of the Livii and the Claudii. Livia Drusilla married her cousin, Tiberius Claudius Nero, when she was about 15 years old. They had a son, the future emperor Tiberius, and while she was six months pregnant with their second son, Drusus, she married Octavian (in 38 BC). The divorce and remarriage were arranged by Octavian, who needed the political connections of the Livii and the Claudii. Their marriage lasted more than fifty years and was marked by mutual loyalty and respect. It was, however, childless; their only child together was stillborn.

Her position as the first lady of the imperial court, her family connections, and her private wealth allowed her to play a significant role throughout her life. All the emperors of the Julio-Claudian dynasty were her descendants. She was accused of criminally disposing of successive pretenders to the succession after Augustus in order to secure the throne for her son. Through her efforts, Augustus adopted Tiberius and designated him as his successor. After the death of Augustus, in accordance with his will, Livia received the title of Augusta, became a member of his family (gens Iulia), and was thereafter called Julia Augusta. Tiberius resented her influence and did not return to Rome even for her funeral; she was buried in the Mausoleum of Augustus.

Julia Augusta was deified in AD 42 as Diva Augusta by Claudius, her grandson.

Marriages and children:
Tiberius Claudius Nero
Tiberius, Roman emperor (AD 14–37)
Drusus, father of Claudius, emperor (AD 41–54), and of Germanicus
Octavian Augustus
\section{Introduction}
Spintronic devices may provide a path to achieving high data density, ultralow power consumption, and high-speed operation in beyond-CMOS data storage and computing technologies~\cite{Hirohata2014}. Magnetic domain walls (DWs)~\cite{Neel1955,Koyama2011} and skyrmions~\cite{Belavin1975,Nagaosa2013}, localized twists of the magnetization with particle-like characteristics, are of high interest as potential information carriers in spintronic devices, owing to their topological properties and facile manipulation by electric currents. In particular, the small size, enhanced stability, and ability to follow two-dimensional trajectories make skyrmions extremely promising for racetrack storage~\cite{Parkin2008,Fert2013,Sampaio2013,Tomasello2014,Wiesendanger2016} or novel non-von Neumann computing architectures~\cite{Zazvorka2019,Pinna2018,Bourianoff2018,Prychynenko2018,Kyung2019}. Pioneering early work on magnetic skyrmions focused on bulk noncentrosymmetric materials~\cite{Jeong2004,Uchida2006,Yu2010} with low ordering temperatures, or ultrathin metal films in which nanoscale skyrmions can be stabilized at a low temperature~\cite{Heinze2011}. Recently, it has been found that multilayers with perpendicular magnetic anisotropy (PMA) can host magnetic skyrmions at room temperature~\cite{Jiang2015,Woo2016,Moreau-Luchaire2016,Boulle2016}. Enhanced skyrmion stability in multilayer films is afforded by the increased skyrmion volume when the total film thickness is increased~\cite{Buttner2018}. Interfaces in such films give rise to the Dzyaloshinskii-Moriya interaction (DMI), which promotes the N\'eel character of spin textures~\cite{Haazen2013,Emori2013,Ryu2013,Yang2015}, and to a dampinglike spin-orbit torque (SOT), which provides for their efficient current-driven motion.
Although the static behaviors of multilayer skyrmions are now reasonably well-understood, their dynamics has been less studied, despite the critical role that the dynamics plays in terms of potential applications. In single ferromagnet/heavy-metal bilayers, DWs driven by SOT tend to maintain dynamic equilibrium, i.e., the DW plane is characterized by a fixed (but current-dependent~\cite{Boulle2013}) angle $\psi$. This is in sharp contrast with the phenomenon of Walker breakdown (DW precession) that occurs in bubbles and straight DWs driven by magnetic fields~\cite{Schryer1974,Malozemoff1979,Beach2005} or spin-transfer torques (STT)~\cite{Berger1978,Zhang2004,Thiaville2004,Mougin2007} that exceed a critical threshold. Walker breakdown is precluded by symmetry for SOT-driven motion in conventional single ultrathin ferromagnetic layers~\cite{Linder2013,Risinggard2017}, since at high drive, $\psi$ tends asymptotically toward the hard axis but is never driven into precession. However, it has recently been found that in multilayers of ferromagnets and heavy metals, DWs and skyrmions can exhibit through-thickness twists~\cite{Dovzhenko2016,Montoya2017,Legrand2017a,Lemesh2018,Legrand2018,Montoya2018} such that the statics and dynamics can no longer be described using a single value of $\psi$. Micromagnetic simulations of such twisted multilayer skyrmions~\cite{Lemesh2018} have evidenced dynamical instabilities reminiscent of Walker breakdown during SOT-driven motion, wherein Bloch line nucleation and motion in a subset of layers leads to a significantly diminished skyrmion velocity and skyrmion Hall angle. These behaviors are a result of complex surface-volume stray field interactions whose influence on the dynamics remains largely unexplored.
In this work, we show that DWs in asymmetrically stacked ferromagnetic multilayers with PMA, in contrast to single-layer and bilayer thin films~\cite{Linder2013,Risinggard2017}, generally exhibit a Walker-breakdown-like phenomenon even when driven solely by dampinglike SOT. This breakdown occurs when certain (Bloch-like) layers reach a critical velocity, beyond which precession sets in, leading to an oscillatory trajectory and a diminished mobility. For typical material parameters, this velocity is of the order of tens of m/s, corresponding to the velocity range in recent experiments~\cite{Woo2016,Litzius2016,Legrand2017,Hrabec2017,Woo2017}. The breakdown originates from the interplay of SOT, DMI, and magnetostatic interactions~\cite{Lemesh2018}, thanks to which DWs in some layers can be driven toward a Bloch configuration amenable to precession (because of the surface-volume stray field interactions~\cite{Lemesh2018}) while others maintain a N\'eel (or transient~\cite{Lemesh2017}) character. This, in turn, allows the dampinglike SOT, which acts as an effective field $\propto\sin(\psi)$, to continue to drive the magnetostatically-coupled composite structure even though the Bloch layers are not directly susceptible to the driving torque. These results hence identify a critical deficiency in proposals to utilize magnetostatically-coupled multilayers for room-temperature skyrmion-based devices, thus, providing a materials engineering framework for maximizing the dynamical stability of skyrmions.
Here, we present three-dimensional (3D) micromagnetic simulations of the current-driven dynamics of multilayer DWs and skyrmions and develop an analytical model that describes the key features. The DW velocity and precession onset predicted by our model are in good qualitative agreement with full three-dimensional micromagnetic simulations of DW and skyrmion dynamics. We hence provide essential analytical insight and predictive capability that allow for a mechanistic understanding of these newly discovered complex dynamical phenomena. Our results have important implications for the potential use of multilayer-based skyrmions in racetrack devices in which high-speed motion is desired and provide a framework for designing the dynamics of multilayer skyrmions to enable optimal behaviors.
\begin{figure*}
\includegraphics[width=1.0\linewidth]{Figure1}
\caption{{\bf Time evolution of current-driven isolated twisted domain walls.} {\bf(a, b)} Micromagnetic snapshots that depict precessional motion for $j=\SI{4.5e11}{A/m^2}$ (just above the critical current). {\bf(a)} The $x$-$z$-cross section and {\bf(b)} schematic side view of transient layers. Time $t=0$ corresponds to the static profile and the precession initiates in layer $i=13$ (arrows depict the direction of DW rotation). In the layer below it ($i=12$), the DW oscillates (without precession), while in all other layers, the DW profile remains unchanged (although quick readjustments of $\psi_i$ are still present as discussed at the end of section~\ref{sec:five}). Flat arrows show schematically the direction of the Thiele effective force $F_{i}$ from the damping-like spin-orbit torque acting on several layers. {\bf(c)} Average DW velocity as a function of current density, with empty (filled) lines or dots indicating the presence (absence) of full $2\pi$ precession and the solid line corresponding to the steady-state DW solution~\cite{Lemesh2018}. {\bf(d)} Precessional frequency as a function of current density and layer number with dashed lines representing guides ($f\sim \sqrt{j^2-j_{cr}^2}$) to the eye. Interfacial DMI is fixed to $D=\SI{1.0}{mJ/m^2}$. The shading in panels (c) and (d) indicate which layers, if any, are precessing, as labeled in panel (c).}\label{fig:1}
\end{figure*}
\section{Methods}
Micromagnetic simulations are performed using the Mumax3~\cite{Vansteenkiste2014} software. Material parameters are $\mathcal{T}=\SI{1}{nm}$, $M_s=\SI{1.4e6}{A/m}$, $A=\SI{1.0e-11}{J/m}$. For skyrmion dynamics simulations, the modified Slonczewski-like torque module is used, with only the dampinglike torque enabled, corresponding to $\theta_{SH}=0.1$ and the fixed-layer polarization along the $-y$ direction. Unless specified otherwise, $\mathcal{P}=\SI{6}{nm}$ ($f=1/6$), $K_u=\SI{1.72e6}{J/m^3}$ ($Q=1.4$), $\alpha=0.3$.
For skyrmions (domain walls) the cell size is $1.5~\text{nm}\times 1.5~\text{nm} \times 1~\text{nm}$ ($1.5~\text{nm}\times 15~\text{nm} \times 1~\text{nm}$) with the simulation size of $1125~\text{nm}\times 1125~\text{nm}\times \mathcal{N}\mathcal{P}$ and periodic boundary conditions applied in the $x$- and $y$-directions. We note that the large cell size in the $y$ direction for the DW simulations prevents Bloch line nucleation, leading to uniform precession that is more readily treated analytically. When using smaller ($1.5~\text{nm}$) cells in such simulations, but maintaining periodic boundary conditions, Bloch line formation is much more random than in the skyrmion simulations, sometimes occurring and sometimes not, which inhibits the systematic analysis of the dynamics. This observation is attributed to the symmetry and continuity of the DW simulations (with periodic boundary conditions), which artificially inhibits the formation of Bloch lines. We note that when omitting periodic boundary conditions in the $y$ direction and using a reduced width to simulate a racetrack, the 3D simulations tend to become unstable as the DW decoupling from layer to layer is more pronounced. This problem does not occur in the full micromagnetic skyrmion simulations, as skyrmions tend to be more rigid.
The differential equations in the analytical model are solved numerically using the explicit NDSolve method of the Wolfram Mathematica 11.3 software.
\section{Micromagnetic simulations}
We first consider the dynamics of an isolated straight DW in a multilayer film with ultrathin magnetic (M) layers ($\mathcal{T}<l_{ex}$) of thickness $\mathcal{T}=1~\text{nm}$, consisting of $\mathcal{N}=15$ multilayer repeats with a period of $\mathcal{P}=6~\text{nm}$ separated by nonmagnetic spacer layers. Although the composition of spacer layers has no effect on the DW analysis, here we assume that they consist of heavy metal layers (H) and symmetry breaking layers (S) incorporated into an asymmetrically stacked heterostructure of [H/M/S]$_{\mathcal{N}}$-type, similar to those studied in a number of recent experimental works in which room-temperature skyrmions have been stabilized~\cite{Woo2016,Moreau-Luchaire2016,Litzius2016,Legrand2017,Legrand2017a,Buttner2017e,Lemesh2018a,Kyung2019}. We assume a saturation magnetization $M_s=\SI{1.4e6}{A/m}$, quality factor $Q=2K_u/\mu_0M_s^2=1.4$ (where $K_u$ is the uniaxial magnetocrystalline anisotropy constant and $\mu_0$ is the vacuum permeability), exchange stiffness $A=\SI{1.0e-11}{J/m}$, and interfacial DMI, $D=\SI{1.0}{mJ/m^2}$, representative of typical experimental skyrmion-hosting multilayers~\cite{Buttner2015b,Woo2016,Moreau-Luchaire2016a,Litzius2016,Wiesendanger2016,Buttner2017e}. The static DW profile in such a material exhibits a twisted character as shown elsewhere~\cite{Dovzhenko2016,Legrand2017a,Lemesh2018} and depicted in Fig.~\ref{fig:1}(a). The DW profile varies from N\'eel of one chirality ($\sin(\psi)=1$) at layer number $i=0$ to N\'eel of the opposite chirality ($\sin(\psi)=-1$) at $i=14$, with layers 12 and 13 exhibiting a Bloch-like character ($\sin(\psi)\approx 0$).
We perform full 3D micromagnetic simulations (see Methods) of current-driven motion by dampinglike SOT, assuming a spin Hall angle $\theta_{\text{SH}}=0.1$~\cite{Emori2013,Liu2012a} and damping constant $\alpha=0.3$~\cite{Yuan2003,Metaxas2007,Schellekens2013}. Here, the SOT is provided by the charge current $j$ that flows in the heavy metal layer along the $x$-direction (see Fig.~\ref{fig:1}(a)). The simulations reveal that for small current densities ($j<\SI{3.75e11}{A/m^2}$), the DW translates uniformly with a linear trajectory (position versus time) at a velocity proportional to $j$ (Fig.~\ref{fig:1}(c)). The DW angles ($\psi_i$) slightly readjust in all layers in accordance with the new dynamic equilibrium, but the profile of the twist remains qualitatively the same. The situation changes drastically once the current exceeds a threshold $j_{cr}\approx\SI{3.75e11}{A/m^2}$, at which point the DW at layer $i=13$ begins to precess (Figs.~\ref{fig:1}(a),(b)). The frequency of this (nonuniform) precession is of the order of 1~GHz as shown in Fig.~\ref{fig:1}(d).
Above $j_{cr}$, the DW translates with an oscillatory trajectory. The corresponding average velocity $v(j)$ becomes nonlinear in this regime, with a slight drop at $j_{cr}$ followed by a (sublinear) increase at higher $j$ (Fig.~\ref{fig:1}(c)). The precession frequency of the precessing layer increases monotonically with increasing $j$, as indicated in Fig.~\ref{fig:1}(d). At higher currents, more layers begin to precess, resulting in additional Walker breakdown-like features in $v(j)$. For instance, at $j\approx\SI{5e11}{A/m^2}$, the precession also initiates in layer $i=12$. This behavior leads to a substantial reduction in the velocity compared to that expected from extrapolation from the low-$j$ mobility (slope of $v$ vs $j$) in the absence of precession.
\begin{figure*}
\includegraphics[width=1.0\linewidth]{Figure2}
\caption{{\bf The evolution of twisted skyrmions right above the critical current.} {\bf(a, b)} $x$-$z$ cross sections of low- and high DMI skyrmions and {\bf(c, d)} in-plane micromagnetic profiles of the precessing layer (arrows depict the direction of spin rotation). The upper (lower) row corresponds to $D=\SI{0.25}{mJ/m^2}$, $j=\SI{12.5e11}{A/m^2}$, $i=9$ ($D=\SI{2.0}{mJ/m^2}$, $j=\SI{17.0e11}{A/m^2}$, $i=14$). The corresponding DW width, ($\Delta_i$) and DW angle ($\psi_i$) are depicted as a function of DMI in Fig.~1 of Ref.~\cite{Lemesh2018}.}
\label{fig:2}
\end{figure*}
Similar behavior is observed for stray field skyrmions~\cite{Buttner2018}, as seen in the micromagnetic snapshots in Fig.~\ref{fig:2} and the $v(j)$ curves in Fig.~\ref{fig:3}(a). Here, we performed 3D micromagnetic simulations using the same material parameters as above, except for the DMI, which is varied in the range $D=0$ to $D=\SI{3.0}{mJ/m^2}$. For each DMI value, the magnetic field was adjusted to yield the same equilibrium skyrmion radius of $R\approx50~\text{nm}$. At $D=0$ the Bloch layer is in the center of the film and since the SOT on the top half of the film cancels that in the bottom half, the skyrmion is immobile. Increasing $D$ shifts the position of the Bloch layer upward, as shown previously~\cite{Lemesh2018}, leading to a nonzero net Thiele effective force~\cite{Thiele1973} from the SOT to drive the skyrmion. We find that Walker breakdown-like behavior occurs generally in multilayers with $D>0$, where it is mediated by the generation and motion of Bloch lines in the DW that bounds the bubblelike skyrmion.
Figure~\ref{fig:2} shows micromagnetic snapshots during SOT-driven motion just above $j_{cr}$ for the cases $D=0.25$ and $D=\SI{2.0}{mJ/m^2}$. These simulations correspond, respectively, to cases where a Bloch layer exists near the center of the film (low DMI) and where the entire stack is N\'eel (high DMI). Corresponding $v(j)$ curves are shown in Fig.~\ref{fig:3}(a). Precessional motion tends to initiate either in the Bloch-like layer for intermediate DMI or in the topmost layer once the DMI exceeds the threshold at which all other layers are N\'eel. The critical current tends to increase for larger $D$, as seen in Fig.~\ref{fig:3}(a).
We find that the current also leads to an elongation of the skyrmion, which increases as the current density increases. This accounts for the larger distortion seen in Fig.~\ref{fig:2}(d) compared to Fig.~\ref{fig:2}(c), since a larger driving current is used in the latter simulation to drive the system into precession. The corresponding eccentricity of a skyrmion as a function of current and DMI is visualized as contours plotted in Fig.~\ref{fig:6}(a). These distorted objects maintain dynamic equilibrium during the injection of current. However, when the current is switched off, they return to their original circular shape, with a diameter set by the applied field.
From Fig.~\ref{fig:3}(a), we can also observe that the velocities of simulated skyrmions are limited to the same $v_{cr}$ as given by the analytical DW theory (as discussed next). This indicates that precessing DWs play a defining velocity-limiting role in skyrmion propagation. Indeed, such precessing DWs can be found in two radial sections of every supercritical skyrmion. Similarly to our observations for multilayer DWs, as $j$ increases past $j_{cr}$, additional layers begin to precess, in this case through the creation of Bloch lines\footnote{Micromagnetic tools can provide only a qualitative understanding of the process of Bloch line/point nucleation. For quantitative analysis, atomistic simulations should be used.} in additional layers. This implies that the topological charge for skyrmions beyond this threshold is not fixed, but rather varies with time. This is seen in Figs.~\ref{fig:3}(b) and (c), which show the time-averaged topological charge in each layer at various current densities (Fig.~\ref{fig:3}(b)) and the dynamical evolution of the topological charge (Fig.~\ref{fig:3}(c)) for an exemplary case $D=\SI{1}{mJ/m^2}$. Hence, our results imply that multilayer-based skyrmions may not be topologically well-defined dynamically, even when driven at relatively low velocities.
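The layer-resolved topological charge discussed above is evaluated directly on the simulation grid. A minimal NumPy sketch of that integral (our own illustration, assuming a unit-vector field $\mathbf{m}$ sampled on a regular grid; the function name and grid are ours, not part of the Mumax3 workflow) could read:

```python
import numpy as np

def topological_charge(m):
    """N = (1/4pi) \int m . (dm/dx x dm/dy) dx dy for a unit-vector
    field m of shape (nx, ny, 3), via central finite differences.
    N is dimensionless, so the (uniform) grid spacing cancels out."""
    dmdx = np.gradient(m, axis=0)   # unit-spacing gradient; spacing cancels
    dmdy = np.gradient(m, axis=1)
    density = np.einsum('xyi,xyi->xy', m, np.cross(dmdx, dmdy))
    return density.sum() / (4 * np.pi)
```

Applied to each magnetic layer separately, and averaged in time, this yields curves of the kind shown in Fig.~\ref{fig:3}(b); for a well-resolved isolated skyrmion it returns $|N|\approx 1$.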
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure3}
\caption{{\bf The dynamic properties of twisted skyrmions.} {\bf(a)} Skyrmion velocity as a function of current density. Continuous lines correspond to analytical DW model calculations, and symbols correspond to 3D micromagnetic simulations for skyrmions, with closed symbols indicating precession-free motion and open symbols denoting precessional motion. {\bf(b)} Average topological charge as a function of layer number and current density for $D=\SI{1.0}{mJ/m^2}$. The topological charge is computed as $N = 1/4\pi\int \mathbf{m}\cdot\left(\partial \mathbf{m}/\partial x\times \partial \mathbf{m}/\partial y \right) \d x \d y $~\cite{Belavin1975}. {\bf(c)} Instantaneous topological charge as a function of time for $D=\SI{1.0}{mJ/m^2}$ and $j=\SI{12e11}{A/m^2}$. }\label{fig:3}
\end{figure}
\section{Analytical model for multilayer domain walls}
The observed behaviors have all the hallmarks of Walker breakdown, a well-known transition from steady-state to precessional dynamics exhibited by DWs once they reach a threshold velocity $v_{cr}$~\cite{Schryer1974,Malozemoff1979,Thiaville2012}. Walker breakdown is, however, not expected for dampinglike SOT-driven motion, at least in 2D systems. This threshold is related to the ``stiffness'' of $\psi$ against rotation away from its equilibrium orientation, which can be characterized by an effective field $H_{\text{stiff}}$ acting on the DW moment. In the one-dimensional DW model \cite{Schryer1974,Malozemoff1979}, $v_{cr}=\gamma \mu_0 \Delta H_{\text{stiff}}/2$, where in the case of strong DMI, $H_{\text{stiff}}$ is approximately equal to twice the DMI effective field, $2H_D$ \cite{Thiaville2012}. Hence, the Walker threshold cannot be reached for SOT-driven DWs in single-layer films, since $\gamma \mu_0 \Delta H_D$ also corresponds to the asymptotic velocity limit imposed by the dampinglike SOT symmetry~\cite{Thiaville2012} (see Eq.~\eqref{eq:singlesdmi} in Appendix~\ref{sec:walker}).
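For orientation, a back-of-the-envelope sketch of this threshold (our own estimate, using $H_{\text{stiff}}\approx 2H_D$ with $H_D=D_{\text{eff}}/(\mu_0 M_s \Delta)$, so that $v_{cr}\approx\gamma D_{\text{eff}}/M_s$, independent of $\Delta$):

```python
# Hedged single-layer estimate: v_cr = gamma*mu0*Delta*H_stiff/2 ~ gamma*D_eff/Ms
gamma, Ms = 1.76e11, 1.4e6                # s A/kg, A/m (Methods values)
for D_eff in (1.0e-3, 0.3e-3, 0.1e-3):    # effective DMI in J/m^2
    print(f"D_eff = {D_eff*1e3:.1f} mJ/m^2 -> v_cr ~ {gamma*D_eff/Ms:6.1f} m/s")
```

For the full interfacial DMI $D=\SI{1}{mJ/m^2}$ this gives $\sim$126~m/s, well above the breakdown velocities found here; as discussed next, it is the strongly reduced $D_{\text{eff}}$ of the Bloch-most layers of a multilayer that brings $v_{cr}$ down to tens of m/s.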
In multilayers, however, we see that the Walker limit can indeed be reached. The process, which we treat in detail below, can be understood qualitatively as follows. We find that precession initiates in the layer at which the sum of the interfacial DMI ($D$) and the DMI-like surface-volume stray field energy component ($D_{sv}$~\cite{Lemesh2018}),
\begin{align}
D_{\text{eff}}(i) &=D+D_{sv}(i)\label{eq:thr1},\\
D_{sv} (i)&=-\frac{\mathcal{N}}{\pi f}\sum_{j=0}^{\mathcal{N}-1} F_{sv, ij}(\Delta)\text{sgn}(i-j)\label{eq:thr2}.
\end{align}
is minimum. This corresponds to the most Bloch-like layer ($i_{cr}=i_{\text{Bloch}}$) in the case of a twisted DW structure, or to the topmost layer ($i_{cr}=(\mathcal{N}-1)\delta_{1, \text{sgn}(D)}$) when all other layers are N\'eel.
Since $D_{\text{eff}}(i)$ is reduced from $D$, and is close to zero for certain layers in twisted DWs, the effective field that supplies the restoring torque on a driven DW in those layers is small. Although the SOT cannot directly drive such Bloch layers due to its symmetry, in the case of a multilayer, as depicted in Fig.~\ref{fig:1}(b), there is a net Thiele effective force due to the magnetostatic coupling and the dampinglike SOT acting on the other layers. Thus, as long as there is finite DMI that ensures there are more N\'eel-like DWs of one orientation than the other, then the least ``stiff'' layers can be driven beyond $v_{cr}$.
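The selection rule of Eqs.~\eqref{eq:thr1},~\eqref{eq:thr2} is straightforward to evaluate numerically once the kernel $F_{sv,ij}(\Delta)$ is known. The sketch below uses a toy exponentially decaying kernel in place of the actual $F_{sv,ij}$ of Ref.~\cite{Lemesh2018} ($F_0$ and $\lambda$ are arbitrary placeholder values); any positive kernel decaying in $|i-j|$ reproduces the qualitative behavior:

```python
import numpy as np

N, f, D = 15, 1 / 6, 1.0e-3        # repeats, scaling factor, DMI (J/m^2)
idx = np.arange(N)
dij = np.subtract.outer(idx, idx)  # i - j

# Placeholder for F_{sv,ij}(Delta): positive, decaying with layer separation.
F0, lam = 3.0e-5, 3.0              # toy amplitude (J/m^2) and range (layers)
F_sv = F0 * np.exp(-np.abs(dij) / lam)

D_sv = -(N / (np.pi * f)) * (F_sv * np.sign(dij)).sum(axis=1)  # Eq. (2)
D_eff = D + D_sv                                               # Eq. (1)
i_cr = int(np.argmin(np.abs(D_eff)))
print("precession expected to initiate in layer", i_cr)
```

With these toy numbers the minimum of $|D_{\text{eff}}|$ sits above mid-film for $D>0$ and moves to the film center as $D\to 0$, mirroring the trends of Figs.~\ref{fig:5}(a),(b).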
To model this behavior analytically, we treat the DW as a classical object with the Lagrangian density $\mathcal{L}$ and the Rayleigh dissipation functional $\mathcal{F}$ expressed similarly to the single-layer case~\cite{Boulle2011,Boulle2013},
\begin{align}
\mathcal{L} & = \sigma_{tot}^{ 1, \mathcal{N}} - \frac{M_sf}{\gamma\mathcal{N}}\sum_{i=0}^{\mathcal{N}-1} \int_{-\infty}^{+\infty} \d x\left\{\left(\psi_i-\pi/2\right) \sin(\theta_i)\frac{{\d \theta_i}}{\d t} \right\},\label{eq:lagrangian}\\
\mathcal{F} & = \frac{\alpha M_sf}{2\gamma \mathcal{N}}\sum_{i=0}^{\mathcal{N}-1}\int_{-\infty}^{+\infty} \d x\left[\frac{\d\mathbf{m_i}}{\d t}-\frac{\gamma}{\alpha}B_{\text{DL}}\mathbf{m_i}\times \mathbf{\hat{y}}\right]^2\label{eq:diss}.
\end{align}
where $\gamma=\SI{1.76e11}{sA/kg}$ is the gyromagnetic ratio, $f=\mathcal{T}/\mathcal{P}$ is the scaling factor, $B_{\text{DL}}=\frac{\hbar\theta_{\text{SH}}}{2eM_s \mathcal{T}}j$ is the damping like SOT effective field, $\hbar$ is the reduced Planck constant, and $\sigma_{tot}^{ 1, \mathcal{N}} = \int_{-\infty}^{+\infty}\mathcal{E}_{tot}^{ 1, \mathcal{N}}(x)\d x $ is the total cross-sectional energy density of the DW normalized per single layer repeat. In Appendix~\ref{sec:derivations}, we consider a more general equation in the presence of STT and magnetic fields, while here we focus on the specific case of a field-free system with an in-plane current that carries only the dampinglike SOT.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure4}
\caption{
{\bf The dynamics of twisted DWs given by analytical theory.} {\bf(a)} DW velocity and precessional frequency as a function of current density. {\bf(b)} DW position as a function of time and current density in the proximity of the critical current ($j_{cr}=\SI{6.4e11}{A/m^2}$). {\bf(c)} The same as in {\bf(b)} but after subtracting the linear $\langle v \rangle\, t$ contribution, where $\langle v \rangle$ is the time-averaged velocity and $t$ is the time. For all cases, $D=\SI{1.0}{mJ/m^2}$. }
\label{fig:4}
\end{figure}
We assume that the DW in every layer follows a Walker profile~\cite{Schryer1974}, with a constant DW angle $\psi_i$ and a polar angle $\theta_i(x) = \arctan\{\exp[\mp(x-q_i)/\Delta_i]\} $, where upper (lower) sign stands for $\downarrow|\uparrow$ ($\uparrow|\downarrow)$ DW state. Assuming that all the layers are perfectly coupled ($q_i\equiv q$) and have identical width ($\Delta_i\equiv\Delta$), we can use the result of our earlier work~\cite{Lemesh2018}, where we showed that in magnetic multilayers, the total energy of twisted straight DWs can be expressed as
\begin{align} \sigma_{tot}^{ 1, \mathcal{N}} (\Delta,& \psi_i)=\frac{2A}{\Delta} f +2K_{u} \Delta f \mp \frac{\pi D f}{\mathcal{N}}\sum_{i = 0}^{\mathcal{N}-1} \sin(\psi_i)\notag\\
&+ \sum_{i = 0}^{\mathcal{N}-1} \sum_{j = 0}^{\mathcal{N}-1} \left\{F_{s, ij} (\Delta) +\sin(\psi_i)\sin(\psi_{j}) F_{v, ij} (\Delta)\notag \right.\\&\left.\pm \sin(\psi_i)\text{sgn}(i-j)F_{sv, ij} (\Delta)\right\} \label{eq:exactstray}
\end{align}
with generic functions $F_{\alpha, ij}$ defined in Ref.~\cite{Lemesh2018} (and Eq.~\eqref{eq:A2} in Appendix~\ref{sec:derivations}). Using the Walker profile, we can also integrate Eqs.~\eqref{eq:lagrangian},~\eqref{eq:diss}, which results in analytical expressions for $\mathcal{L}(\Delta, \psi_i)$ and $\mathcal{F}(\Delta, \psi_i) $ as shown in Appendix~\ref{sec:derivations}. The equation of motion can then be obtained from the Lagrange-Rayleigh equations using the generalized coordinates $X=q, \psi_i, \Delta$:
\begin{align}
\frac{\partial\mathcal{L}}{\partial X}-\frac{\partial}{\partial t}\frac{\partial\mathcal{L}}{\partial \dot{X}}-\frac{\partial}{\partial x}\frac{\partial\mathcal{L}}{\partial X'}+\frac{\partial\mathcal{F}}{\partial \dot{X}}&=0,
\end{align}
Considering the case of a {\it freely propagating} DW ($\dot{q}\neq0$) with time independent width ($\dot{\Delta}=0$), these equations reduce to
\begin{align}
\dot{q}&= \frac{\mp \Delta}{\alpha \mathcal{N}\cos(\chi)} \sum_{j=0}^{\mathcal{N}-1}\left[\frac{\pi \gamma B_{\text{DL}} }{2} \sin(\psi_j)- \dot{\psi_j}\right]\label{eq:qdyn3}
\end{align}
where $\psi_i(t)$ can be found from the following $\mathcal{N}$ equations:
\begin{align}
&\sum_{j=0}^{\mathcal{N}-1} \left[\dot{\psi_j}\left( \frac{1}{\mathcal{N}} + \alpha^2\delta_{ij}\right) -\sin(\psi_j) \frac{\pi \gamma B_{\text{DL}}}{2\mathcal{N}} \right]\notag \\ &= \frac{\alpha \gamma \cos(\psi_i-\chi)}{2M_s\Delta} \left(\pm \pi D- \frac{\mathcal{N}}{f}\times \notag \right.\\ & \left. \times \sum_{j=0}^{\mathcal{N}-1}\left[(1+\delta_{ij})\sin(\psi_{j}-\chi) F_{v, ij} \pm \text{sgn}(i-j)F_{sv, ij}\right]\right)\label{eq:psidyn3}.
\end{align}
Here, $\Delta$ is the average DW width (described by Eq.~\eqref{eq:deltadyn0} in Appendix~\ref{sec:derivations}), which can be approximated from static equations (see Eqs.~(5, 6) in Ref.~\cite{Lemesh2018}), since it depends only weakly on $j$. Note that we also introduced the DW tilt $\chi$~\cite{Boulle2013}, although in contrast to Ref.~\cite{Boulle2013}, $\chi$ here is a fixed parameter rather than a conjugate variable. We find from numerics that the critical current (when present) takes a minimum value at $\chi=0$, which corresponds to the straight transverse DW. This is in line with the fact that for a skyrmion, the precession typically initiates near its front and back edges, as seen in Fig.~\ref{fig:2}. Hence, for our further analysis, we focus on the case $\chi=0$.
The steady state analysis of Eqs.~\eqref{eq:qdyn3},~\eqref{eq:psidyn3} predicts that the Walker breakdown phenomenon is generally present in films with $\mathcal{N}>2 $ and is absent for single or bi-layers (as shown in Appendix~\ref{sec:walker}). The resulting numerical solution of Eqs.~\eqref{eq:qdyn3},~\eqref{eq:psidyn3} is depicted in Fig.~\ref{fig:4}, using the same material parameters as those used for the simulations in Fig.~\ref{fig:1}. Our theoretical model accurately captures the critical layer number in which precession originates, although at slightly higher current densities compared with micromagnetic simulations. It also captures the monotonic increase of the precession frequency with current. Above the precessional threshold, we see a transition from stationary translational motion to oscillatory motion (Figs.~\ref{fig:4}(b), (c)), as occurs in conventional Walker breakdown, and is evidenced in our micromagnetic simulations. We note that at higher currents, micromagnetic simulations generally result in a larger number of precessing layers than predicted by our model, which we attribute to three factors: (i) at high currents, DWs tend to decouple laterally, as is evident from Fig.~\ref{fig:1}(a) and Fig.~\ref{fig:2} (also see Figs.~\ref{fig:7}(b),(c) in Appendix~\ref{sec:extra}), which fundamentally affects their dynamics, (ii) in simulations, the cell size is finite, and (iii) our analytical equations are generally more constrained compared with micromagnetic simulations.
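The coupled system of Eqs.~\eqref{eq:qdyn3},~\eqref{eq:psidyn3} can equally be integrated in Python. The sketch below (our own port of the NDSolve procedure, restricted to $\chi=0$ and the upper sign, with the stray-field kernels $F_v$, $F_{sv}$ passed in as matrices and a fixed, assumed DW width $\Delta$) exploits the fact that the mass matrix $M_{ij}=1/\mathcal{N}+\alpha^2\delta_{ij}$ has a closed-form inverse:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, Ms, alpha, f = 1.76e11, 1.4e6, 0.3, 1 / 6   # Methods values
Delta = 5e-9                                       # assumed static DW width (m)

def psi_rhs(t, psi, B_dl, D, Fv, Fsv):
    """Eq. (9) at chi = 0, upper sign, recast as psidot = M^{-1} g."""
    N = psi.size
    s, c = np.sin(psi), np.cos(psi)
    sgn = np.sign(np.subtract.outer(np.arange(N), np.arange(N)))
    g = (np.pi * gamma * B_dl / (2 * N)) * s.sum() \
        + (alpha * gamma * c / (2 * Ms * Delta)) * (
            np.pi * D
            - (N / f) * (((1 + np.eye(N)) * Fv * s[None, :]).sum(axis=1)
                         + (sgn * Fsv).sum(axis=1)))
    # M = J/N + alpha^2 I  (J: all-ones matrix)  =>  M^{-1} g in closed form
    return (g - g.sum() / (N * (alpha**2 + 1))) / alpha**2

def wall_velocity(psi, dpsi, B_dl):
    """Eq. (7) at chi = 0, upper sign."""
    return -Delta / (alpha * psi.size) * np.sum(
        0.5 * np.pi * gamma * B_dl * np.sin(psi) - dpsi)
```

In the single-layer limit with the kernels set to zero, the stationary solution of this system is $\tan\psi^{*}=-\alpha D/(M_s\Delta B_{\text{DL}})$, i.e., a fixed point exists at every drive and no breakdown occurs, consistent with Appendix~\ref{sec:walker}; breakdown appears only once realistic $F_v$, $F_{sv}$ render some layer Bloch-like.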
\section{Onset of precessional dynamics}~\label{sec:five}
Our model, though developed for straight DWs, also accurately predicts the $v(j)$ characteristics for skyrmions, as seen in Fig.~\ref{fig:3}(a), where our analytical results are overlaid with the 3D micromagnetically-modeled results, using the same material parameters. We note that the velocities are substantially lower than those predicted previously~\cite{Lemesh2018,Legrand2018} with models that imposed stationary dynamics (dot-dashed lines in Fig.~\ref{fig:7}(a) of Appendix~\ref{sec:extra}) on twisted DWs, emphasizing the qualitative and quantitative impact of precession on $v(j)$.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure5}
\caption{{\bf Critical parameters as a function of DMI constant $D$ and number of multilayer repeats $\mathcal{N}$.} {\bf(a)} Critical layer number $i_{cr}$ for the precession onset and {\bf(b)} the Bloch layer number $i_{\text{Bloch}}$~\cite{Lemesh2018}. {\bf(c)} Critical current $j_{cr}$ for the onset of the first Walker breakdown event and {\bf(d)} the corresponding critical velocity $v_{cr}$. Film parameters are for a film with $f=1/6$, $Q=1.4$, $\alpha=0.3$. Brown color in Figs.~(a),(c) separates regions for which the critical current (if exists) exceeds $j=\SI{50e11}{A/m^2}$, in Fig.~(d) the brown color indicates regions with $v_{cr}$ higher than $\SI{50}{m/s}$, and in Fig.~(b) it indicates regions with no Bloch layers.}\label{fig:5}
\end{figure}
We generally find that Walker breakdown tends to start in the layer whose DW profile is the closest to being Bloch. This is evident from Fig.~\ref{fig:5}(a), which depicts the location of the Walker layer, $i_{cr}$, as a function of the number of multilayer repeats $\mathcal{N}$ and DMI. Comparing with Fig.~\ref{fig:5}(b), which depicts the location of the Bloch layer~\cite{Lemesh2018} (if it exists), we can conclude that the correlation between $i_{cr}$ and $i_{\text{Bloch}}$ is indeed very high. There are two notable differences: (i) at zero DMI, there is no Walker breakdown since the system is completely immobile, and (ii) at high DMI, all the DWs are N\'eel. However, in the latter case, the precession can still occur, although it would always initiate in the topmost layer. This is a consequence of the fact that at high DMI, high currents have the largest impact on the DW angle for the layer that has the smallest $D_{\text{eff}}$, i.e., the one that is near the top of the multilayer stack. Once its DW angle deviates sufficiently from the N\'eel-like configuration, precession ensues. One can thus expect that for low DMI, precession starts close to the middle of the multilayer (approaching $i_{cr}=\mathcal{N}/2$ as $D\rightarrow 0$), while for high DMI, it always begins in the top layer, $i_{cr}=\mathcal{N}-1$ (or $i_{cr}=0$ for negative DMI).
Figure~\ref{fig:5}(c) depicts the critical current as a function of DMI and $\mathcal{N}$. We see that $j_{cr}$ diverges for very small and large values of DMI, with the effect being more dominant for smaller $\mathcal{N}$. For higher $\mathcal{N}$, $j_{cr}(D)$ generally exhibits a wide plateau. The corresponding critical velocity is depicted in Fig.~\ref{fig:5}(d), from which one can find that unless the DMI is very strong, the critical velocity of the DW remains more or less constant. This suggests that $v_{cr}$ is largely independent of the position of the Bloch layer within the multilayer. This is reasonable, since the Bloch layer is defined by a (near) vanishing of $D_{\text{eff}}$, and the restoring torque in this layer should be the same regardless of its position in a multilayer. This is in general agreement with the established model for Walker breakdown~\cite{Malozemoff1979}, according to which a spin texture exhibits a nonuniform precession once its velocity exceeds a certain critical value.
Figure~\ref{fig:6} examines the onset of precession in more detail, focusing on multilayers with $\mathcal{N}=15$ (i.e., focusing on a vertical cut in Figs.~\ref{fig:5}(c) and \ref{fig:5}(d)). The critical velocity and current are illustrated in Figs.~\ref{fig:6}(a) and \ref{fig:6}(b) as a function of $D$. One can see that the analytically computed trends for both $j_{cr} (D)$ and $v_{cr}(D)$ are in very good agreement with micromagnetic simulations (depicted with points for DWs and stars for skyrmions). $v_{cr}$ is approximately constant up to the value of DMI at which the Bloch layer reaches the top of the film. At this point, $v_{cr}$ increases approximately linearly with DMI, as a consequence of the disappearance of the Bloch-like layers, in which case it is more difficult to drive the precession. This corresponds to a behavior akin to that in single-layer films shown previously~\cite{Thiaville2012}, so $v_{\text{cr}}\propto (D-D_{cr})$. We note that the jagged appearance of the analytical calculation (solid lines in Figs.~\ref{fig:6}(a) and \ref{fig:6}(b) and in the corresponding contour plots of Fig.~\ref{fig:5}) results from the fact that $i$ is a discrete variable, so that the Bloch-most layer is in general not located exactly at the node in $D_{\text{eff}}$.
In Fig.~\ref{fig:6}(b), we plot the analytical model solutions for $v_{cr} (D)$ for different quality factors, $Q$, and scaling factors, $f$. We find that the only difference between the resulting $v_{cr}(D)$ curves is in the transition point ($D_{cr}$). Both the offset ($v_{0}$) of the curve $v_{cr}(D)$ (when $D<D_{cr}$) and the slope of the $v_{cr} (D)$ curve (when $D>D_{cr}$) remain approximately independent of $f$ and $Q$ (and $\Delta$). We also find that $v_0$ scales linearly with $M_s$ and with $\mathcal{T}$, but is independent of $\mathcal{N}$ (once there are more than a few layers).
Finally, the critical current and velocity also depend on the Gilbert damping parameter, $\alpha$. As evidenced from Fig.~\ref{fig:6}(c), $j_{cr}(\alpha)$ increases approximately linearly with $\alpha$ (except for very small $\alpha$). The critical velocity, on the other hand, is constant for $\alpha \gtrsim 0.1$, as it is in conventional Walker breakdown, while for smaller $\alpha$, it varies approximately linearly with current. This dependence of $v_{cr}$ on $\alpha$ originates from the dynamical readjustment of the angles $\psi_i$ of all non-precessing layers after every $2\pi$ cycle of precession (resulting in DW oscillations and spread DW angles, as depicted in Fig.~\ref{fig:8}(c) for layers 0 and 1 in a trilayer heterostructure). When the damping is low, this readjustment occurs over a timescale comparable with the precession period, so all the layers contribute to the $\sum_i \dot{\psi_i}$ term of Eq.~\eqref{eq:qdyn3}. At this point, a further decrease of damping leads to a smaller fraction of time that the non-precessing layers spend in a steady state, which leads to a smaller net DW velocity. In contrast, for high damping, this dynamical readjustment becomes essentially instantaneous, so only the critical precessing layer contributes to the $\sum_i \dot{\psi_i}$ term. In this case, since its precession is not a function of $\alpha$, the critical velocity of the DW is independent of $\alpha$.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure6}
\caption{ {\bf(a)} Critical current $j_{\text{cr}}$ and {\bf(b)} critical velocity $v_{\text{cr}}$ as a function of DMI for $\alpha = 0.3$. {\bf(c)} Critical current $j_{\text{cr}}$ and {\bf(d)} critical velocity $v_{\text{cr}}$ for $D = \SI{1.0}{mJ/m^2}$ as a function of $\alpha$. Results in panels (b)–(d) are shown for several values of the quality factor $Q$ and scaling factor $f$. Continuous lines represent the results obtained from the analytical model, and discrete symbols represent the results of 3D micromagnetic simulations of straight DWs (circles) and skyrmions (stars). Dotted lines in panel (b) represent $v_{cr}$ given by our simplified model (Eqs.~\eqref{eq:ourwalker},~\eqref{eq:ourfieldwalker}). The color shading in panel (a) indicates the map of skyrmion eccentricity. In all panels, $\mathcal{N}=15$.}\label{fig:6}
\end{figure}
\section{Simplified model for the precessional onset threshold}
Since the observed precessional onset in many ways mimics conventional Walker breakdown in the presence of DMI, we provide a practical and physically intuitive model for estimating $v_{cr}$, which is valid for $\alpha\gtrsim 0.1$. Similarly to field-driven DW precessional onset (see Appendix~\ref{sec:walkerfield}), the critical velocity can be found from
\begin{align}
v_{cr}=\Delta\gamma \mu_0 H_\text{stiff}/2\label{eq:ourwalker},
\end{align}
where $H_\text{stiff}$ is the strength of the effective ``stiffness'' field that is proportional to the energy difference between states with the critical layer having the Bloch and the N\'eel configurations, defined as
\begin{align}
H_\text{stiff}&=\left|\sigma_{tot}^{ 1, \mathcal{N}} (\Delta, \psi_i=\psi_{\text{stat}}, \psi_{cr}=0)\notag \right.\\ & \left. -\sigma_{tot}^{ 1, \mathcal{N}} (\Delta, \psi_i=\psi_{\text{stat}}, \psi_{cr}=k \pi/2)\right|/(\mu_0 M_s f/\mathcal{N}).\label{eq:ourfieldwalker}
\end{align}
Here, $\sigma_{tot}^{ 1, \mathcal{N}}$ is taken from Eq.~\eqref{eq:exactstray}, $\psi_{\text{stat}}$ and $\Delta$ are defined from Eqs.~(5, 6) in Ref.~\cite{Lemesh2018}, and $k$ is either $+1$ or $-1$ (chosen for maximum $H_\text{stiff}$). The resulting $v_{cr}(D)$ curve (dotted lines in Fig.~\ref{fig:6}(b) for the case of $Q=1.4$, $f=1/6$) closely follows the numerical and micromagnetic results, even though it was estimated purely from static energy considerations.
Note that Eq.~\eqref{eq:ourfieldwalker} gives a reasonable value of $v_{cr}$ even at $D=0$ ($v_{cr}=\SI{32.3}{m/s}$). Since in this case, the ``critical'' layer is exactly in the middle of the multilayer stack, surface-volume stray field interactions play no role due to symmetry. Indeed, $D_{sv}(i)$ given by Eq.~\eqref{eq:thr2} at point $ i=(\mathcal{N}-1)/2$ is zero~\cite{Lemesh2018}. Thus, since interlayer interactions vanish for the critical layer, the stiffness field can be approximated from the in-plane shape anisotropy field for a single magnetic layer ($\mathcal{N}=1$, $f=1$), i.e.,
\begin{align}
H_{\text{stiff}}\approx \frac{\left|\sigma_{tot}^{ 1, 1}(\psi=0)-\sigma_{tot}^{ 1, 1}(\psi=k\pi/2)\right|}{\mu_0 M_s}\equiv \frac{2K_{\perp}}{\mu_0 M_s},
\end{align}
where $K_{\perp}=\pi\mu_0M_s^2\Delta^2\mathcal{T}^{-1}G_v(\mathcal{T}/2\pi \Delta)$ is the effective transverse anisotropy~\cite{buttner2015}, with $G_v$ defined in Eq.~\eqref{eq:A3} and in Ref.~\cite{Lemesh2018}. In our case, the magnetic layer is ultrathin ($\mathcal{T}<l_{ex}$), so we can use a thin film approximation for $K_{\perp}$ given by Refs.~\cite{Tarasenko1998,buttner2015},
\begin{align}
K_{\perp}=\frac{\ln(2)\mathcal{T}\mu_0M_s^2}{2\pi\Delta}.
\end{align}
Since Eq.~\eqref{eq:ourwalker} is a general expression for the Walker velocity (see Ref.~\cite{Tarasenko1998} and Appendix~\ref{sec:walker}), we can use it to finally express the critical velocity as
\begin{align}
v_{0}\approx \gamma \mu_0M_s\mathcal{T}\ln(2)/(2\pi),
\end{align}
which equals $v_{0}=\SI{34.2}{m/s}$ for our film parameters, in close agreement with the values we observed in Fig.~\ref{fig:6}(b). This expression is valid for multilayers with $|D|<D_{cr}$.
Once $|D|>D_{cr}$, we can use the high-DMI limit, assuming that all layers are homochiral N\'eel at equilibrium. In this case, the surface-volume interactions are also absent. Ignoring the interlayer interactions with the upper layer, the upper DW can evolve from N\'eel to the Bloch-like configuration only upon reaching $v_{\text{max}}=\pi \gamma D/(2 M_s)$ as discussed in Appendix~\ref{sec:walkerdmi} and in Ref.~\cite{Thiaville2012}. Thus, when $|D|>D_{cr}$ we can finally obtain
\begin{align}
v_{cr}\approx v_{0}+\frac{\pi \gamma }{2M_s}\left(|D|-D_{cr}\right),\label{eq:vcr}
\end{align}
where $D_{cr}$ (which is positive in our notation) can be roughly approximated by $D_{sv}(0)$ given by Eq.~\eqref{eq:thr2} or, more accurately, from the static equations (5) and (6) given in Ref.~\cite{Lemesh2018}, wherein one needs to find the $D$ that yields $\sin(\psi_{\mathcal{N}-1})=0$. The latter approach gives $D_{cr}=\SI{1.55}{mJ/m^2}$ for our film parameters, so for the exemplary case of $D=\SI{2.0}{mJ/m^2}$, Eq.~\eqref{eq:vcr} results in $v_{cr}=\SI{123}{m/s}$, in agreement with simulations and numerical data plotted in Fig.~\ref{fig:6}(b).
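The piecewise estimate above is easy to evaluate numerically. The sketch below computes $v_0$ and $v_{cr}(D)$; the values of $M_s\approx\SI{1.4}{MA/m}$ and $\mathcal{T}\approx\SI{1}{nm}$ are illustrative assumptions (they are not quoted in the text), chosen here only because they reproduce the reported $v_0\approx\SI{34.2}{m/s}$.

```python
import math

# Assumed illustrative parameters (NOT taken from the text): chosen so that
# v0 matches the ~34.2 m/s value reported above.
gamma_g = 1.76e11      # gyromagnetic ratio (rad s^-1 T^-1)
mu0 = 4e-7 * math.pi   # vacuum permeability (T m / A)
Ms = 1.4e6             # saturation magnetization (A/m), assumed
T_layer = 1e-9         # magnetic layer thickness (m), assumed
D_cr = 1.55e-3         # critical DMI (J/m^2), from the static condition quoted above

def v_cr(D):
    """Piecewise-linear estimate of the critical velocity, Eq. (vcr)."""
    v0 = gamma_g * mu0 * Ms * T_layer * math.log(2) / (2 * math.pi)
    if abs(D) <= D_cr:
        return v0
    return v0 + math.pi * gamma_g * (abs(D) - D_cr) / (2 * Ms)

print(round(v_cr(1.0e-3), 1))  # below D_cr: v0 ~ 34.2 m/s
print(round(v_cr(2.0e-3), 1))  # D = 2.0 mJ/m^2: ~ 123 m/s
```

With these assumed film parameters the sketch reproduces both reference numbers of the text, which makes it a convenient sanity check when exploring other material parameters.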
\section{Conclusion}
In conclusion, we show that the phenomenon of Walker breakdown is generally expected to occur in multilayer ferromagnetic films with $\mathcal{N}\geq3$ and finite DMI, both for DWs and for stray field skyrmions. It occurs due to the combined effect of complex surface-volume stray field interactions, interfacial DMI, and SOT. In this current-induced effect, DWs precess with frequencies in the GHz range. Through simple energetic considerations, we find that the critical velocity for precession for twisted DWs and skyrmions is approximately the same as the Walker velocity for field-driven precession of DWs in a single-layer film with the same properties as each layer in the multilayer. Although damping-like SOT can drive DWs and skyrmions in single-layer films far beyond the Walker velocity without precession owing to the SOT symmetry, when such layers are incorporated into a multilayer with stray field interactions, precession is generally predicted to occur.
These results have important implications for potential applications of room-temperature skyrmions in racetrack devices. Although magnetic multilayers of the type treated here have been widely used to demonstrate stable magnetic skyrmions at room temperature~\cite{Woo2016,Litzius2016,Legrand2017,Hrabec2017,Woo2017}, the critical velocities for precession for typical material parameters are only of the order of tens of m/s. This result implies that in ferromagnetic multilayers, even when the DMI is capable of statically stabilizing skyrmions with a well-defined topological charge, the topological properties are ill-defined during translation, even at relatively modest velocities. Our predictions hence have important technological implications for the use of multilayer-based skyrmions in racetrack devices, since the upper limit for uniform translational velocities is quite low. For this reason, the use of ferromagnetic films for such applications is not technologically viable. However, since it is the stray field interactions that are ultimately responsible for the precessional dynamics identified here, our work points to low-magnetization materials such as ferrimagnets and antiferromagnets~\cite{Buttner2018,Caretta2018} as an alternative path toward realizing practical devices. Finally, while this work provides analytical tools to identify material parameters to allow for optimization of skyrmions for such applications, it may also point to new applications, such as current-driven tunable nano-oscillators~\cite{Carpentieri2015,Garcia-Sanchez2016} based on engineered precessional frequencies in skyrmions.
\begin{acknowledgments}
This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award \#DE-SC0012371 (initial development of domain wall models) and by the DARPA TEE program (examination of instabilities in current-driven dynamics).
\end{acknowledgments}
Glasstream is a famous boat designer and builder. Every product of this builder is an authentic and stylish hit. At BoatBuys.com, you can find 50 new and used Glasstream boats for sale by owners and by confirmed dealers in all price ranges. To make the search process effortless for you, we offer different filters that can be applied to the catalog: not only by price range, but also by model, length, year, fuel type, location, etc. The Glasstream brand is a well-known sign of quality, and we invite you to find your next best boat deal by this brand.
"redpajama_set_name": "RedPajamaC4"
} | 4,407 |
require "spec_helper"
describe AvataxClient::Request::Address, type: :avatax do
let(:required) { { label: "SingleLocation" } }
let(:address_attributes) do
{ city: "Austin", country: "USA", line1: "300 Main", region: "TX", postal_code: 78_758 }
end
it "requires a label" do
key = "label"
msg = required_message(property: key)
expect { described_class.new(latitude: 123, longitude: 456) }.to raise_error(ArgumentError, msg)
end
  it "address attributes are not required when geocoded" do
    # the original loop variable was unused, so a single expectation is equivalent
    expect {
      described_class.new(label: "SingleLocation",
                          latitude: "123",
                          longitude: "123")
    }.to_not raise_error
  end
%i[city country line1 region postal_code].each do |key|
it "requires a #{key} when latitude and longitude are not present" do
msg = required_message(property: key)
missing = required.merge(address_attributes.except(key))
expect {
described_class.new(missing)
}.to raise_error(ArgumentError, msg)
end
end
it "translates postalCode to postal_code" do
address_attributes.delete(:postal_code)
postal_code = 77_450
address_attributes[:postalCode] = postal_code
address_attributes[:label] = "ShipFrom"
trans = described_class.new(address_attributes)
expect(trans.postal_code).to eq postal_code
end
describe ".collection_to_body" do
let(:from) do
{
city: "Austin",
country: "USA",
label: "ShipFrom",
line1: "300 Main",
region: "TX",
postal_code: 78_758
}
end
let(:to) do
{
city: "Katy",
country: "USA",
label: "ShipTo",
line1: "123 Main",
region: "TX",
postalCode: 77_450
}
end
it "transforms an Array of addresses to a hash" do
addresses = [
described_class.new(from),
described_class.new(to)
]
hash = described_class.collection_to_body(addresses)
expect(hash.key?("ShipFrom")).to be true
expect(hash["ShipFrom"][:postalCode]).to eq from[:postal_code]
expect(hash.key?("ShipTo")).to be true
expect(hash["ShipTo"][:postalCode]).to eq to[:postalCode]
end
end
end
\section{Introduction}
\label{sec:Introduction}
The last few years have seen an upsurge of interest in the dynamics of modulated nonlinear oscillators \cite{Dykman2012b}. There have emerged several new areas of research where this dynamics plays a central role, such as nanomechanics, cavity optomechanics, and circuit quantum electrodynamics. The vibrational systems of the new generation are mesoscopic. On the one hand, they can be individually accessed, similar to macroscopic systems, and are well-characterized. On the other hand, since they are small, they experience comparatively strong fluctuations of thermal and quantum origin. This makes their dynamics interesting on its own and also enables using modulated oscillators to address a number of fundamental problems of physics far from thermal equilibrium.
Many nontrivial aspects of the oscillator dynamics are related to the nonlinearity. Essentially all currently studied mesoscopic vibrational systems display nonlinearity. For weak damping, even small nonlinearity becomes important. It makes the frequencies of transitions between adjacent oscillator energy levels different. Where several levels are occupied, the dynamics strongly depends on the interrelation between the width of the ensued frequency comb and the oscillator decay rate. An important consequence of the nonlinearity is that, when an oscillator is resonantly modulated, it can have coexisting states of forced vibrations, i.e., display bistability \cite{LL_Mechanics2004}.
One of the general physics problems addressed with modulated nonlinear oscillators is fluctuation-induced switching in systems that lack detailed balance, see \cite{Dykman1979a,Dmitriev1986a,Kautz1988,Vogel1990,Dykman1998,Lapidus1999,Siddiqi2005,Kim2005,Aldridge2005,Stambaugh2006,Almog2007,Mahboob2010} for the classical and \cite{Dykman1988a,Vogel1988,Kinsler1991,Marthaler2006,Peano2006a,Katz2007,Serban2007,Vijay2009,Mallet2009,Peano2010,Wilson2010,Verso2010a} for the quantum regime. A remarkable property of the switching rate in the quantum regime is fragility. The rate $W_{\rm sw}$ calculated for $T=0$, where the system has detailed balance \cite{Drummond1980c}, is exponentially different from the rate calculated for $T>0$, where the detailed balance is broken \cite{Dykman1988a,Marthaler2006}. Recently the effect of fragility of the rates of rare events was also found in the problem of population dynamics \cite{Khasin2009}. There, too, a small change of the control parameter (infinitesimal, in the semiclassical limit) leads to an exponentially strong rate change. The nature of the dynamics and the sources of fluctuations in a quantum oscillator and in population dynamics are totally different, and it is important to understand how it happens that they display common singular features.
An important source of quantum fluctuations is the coupling of the oscillator to a thermal bath. It leads to oscillator relaxation via emission of excitations in the bath accompanied by transitions between the oscillator energy levels. The transitions lead to relaxation only on average, in fact they happen at random, giving rise to a peculiar quantum noise. For a resonantly modulated oscillator, the noise causes diffusion over the oscillator quantum states in the external field, which are the quasienergy (Floquet) states. As a result, even where the bath temperature is $T=0$, the distribution over the states has a finite width, the effect of quantum heating \cite{Dykman2011}.
We discuss quantum heating for a resonantly modulated oscillator and compare the predictions with the recent experiment \cite{Ong2013} where the effect was observed. The spectral manifestation of quantum heating is considered, with the focus on the influence of dissipation on the oscillator spectral characteristic of interest for sideband spectroscopy, the technique which was nicely implemented in the experiment \cite{Ong2013} using a microwave cavity with an embedded qubit.
We also study switching between the stable states of forced vibrations of an oscillator modulated close to its eigenfrequency. As quantum heating, switching occurs because of the quantum-noise induced diffusion over the oscillator states. It reminds switching of a classical Brownian particle over the potential barrier due to diffusion over energy \cite{Kramers1940} and therefore is called quantum activation. Generally, the rate of quantum activation largely exceeds the rate of switching via quantum tunneling. We develop an approach to calculating the rate of quantum activation, which naturally connects to the conventional formulation of the rare events theory in chemical and biological reaction systems and in population dynamics \cite{Touchette2009,Kamenev2011}. This approach provides a new insight into the fragility of the switching rate of the oscillator.
The dynamics of a periodically modulated harmonic quantum oscillator coupled to a thermal reservoir is one of exactly solvable problems of physical kinetics \cite{Schwinger1961,Zeldovich1969,Zeldovich1970}. However, the solution disregards the fact that nonresonant modulation can open a new channel for oscillator relaxation, where a transition between the oscillator energy levels is accompanied by an emission (absorption) of an excitation in the medium, while the energy deficit is compensated by the modulation. Alternatively, the role of the medium can be played by another mode with the relaxation rate higher than that of the oscillator \cite{Dykman1978}. This mechanism underlies the cooling studied in cavity optomechanics. We provide a brief comment in order to unify various mechanisms of the change of the quantum distribution of the oscillator by nonresonant modulation.
\section{Quasienergy spectrum and the master equation}
\label{sec:q_heating}
\subsection{Hamiltonian in the rotating frame}
\label{subsec:model}
A major type of the internal oscillator nonlinearity of interest for the effects we will discuss is the Duffing nonlinearity, where the potential energy has a term quartic in the oscillator displacement $q$; in quantum optics, it corresponds to the Kerr nonlinearity. The simplest types of resonant modulation that lead to the bistability of the oscillator are additive modulation at frequency $\omega_F$ close to the oscillator eigenfrequency $\omega_0$ and parametric modulation (modulation of the oscillator frequency) at frequency $\approx 2\omega_0$ \cite{LL_Mechanics2004}. The analysis of quantum fluctuations in these two systems has much in common \cite{Dykman2012}, and the method that we will develop here applies to the both types of systems. For concreteness, we will consider here additive modulation. The oscillator Hamiltonian is
\begin{eqnarray}
\label{eq:H_0(t)}
H_0=\frac{1}{2}p^2+\frac{1}{2}\omega_0^2q^2 +\frac{1}{4}\gamma q^4 + H_F(t), \qquad H_F=-qA\cos\omega_Ft,
\end{eqnarray}
where $q$ and $p$ are the oscillator coordinate and momentum, the mass is set equal to one,
$\gamma$ is the anharmonicity parameter, and $A$ is the modulation amplitude. We assume that the modulation is resonant and not too strong, so that
\begin{equation}
\label{eq:delta_omega}
|\delta\omega|\ll \omega_0, \qquad \delta\omega=\omega_{F}-\omega_0; \qquad |\gamma|\langle q^2
\rangle\ll\omega_0^2.
\end{equation}
A periodically modulated oscillator is described by the Floquet, or quasienergy states $\Psi_{\varepsilon}(t)$. They provide a solution of the Schr\"odinger equation $i\hbar\partial_t\Psi=H_0(t)\Psi$ that satisfies the condition $\Psi_{\varepsilon}(t+t_F)=\exp(-i\varepsilon t_F/\hbar)\Psi_{\varepsilon}(t)$, where $t_F=2\pi/\omega_F$. This expression defines quasienergy $\varepsilon$. To find quasienergies and to describe the oscillator dynamics it is convenient to change to the rotating frame. This is done by the standard canonical transformation $U(t)=\exp\left(-ia^{\dag}a\,\omega_Ft\right)$, where $a^{\dag}$ and $a$ are the raising and lowering operators of the oscillator. We introduce slowly varying dimensionless coordinate $Q$ and momentum $P$ in the rotating frame,
\[
\fl
U^{\dag}(t)q U(t) = C(Q\cos\varphi+P\sin\varphi), \qquad
U^{\dag}(t)p U(t) = -C\omega_F(Q\sin\varphi - P\cos\varphi).\]
Here, $\varphi =\omega_Ft$ and the scaling factor is $C=\left|8\omega_F(\omega_F-\omega_0)/3\gamma\right|^{1/2}$. The commutation relation between $P$ and $Q$ has the form
\begin{equation}
\label{eq:lambda}
[P,Q]=-i\lambda,\qquad \lambda=\hbar/(\omega_F C^2)\equiv 3\hbar|\gamma|/8\omega_F^2|\omega_F-\omega_0|.
\end{equation}
Parameter $\lambda\propto \hbar$ plays the role of the Planck constant in the quantum dynamics in the rotating frame. It is determined by the oscillator nonlinearity, $\lambda\propto \gamma$. For concreteness we assume that $\gamma,\delta\omega >0$; the oscillator displays bistability for $\gamma\delta\omega>0$.
In the range (\ref{eq:delta_omega}) the oscillator dynamics can be studied in the rotating wave approximation (RWA). The RWA Hamiltonian is
\begin{eqnarray}
\label{eq:H_rotating_frame}
\tilde H_0=U^{\dag}H_0 U-i\hbar U^{\dag}\dot{U}\approx
\frac{3}{8}E_{\rm sl}\hat{g}, \nonumber\\
g(Q,P) =
\frac{1}{4}\left(P^2+Q^2 -1\right)^2 - \beta^{1/2}Q,\quad \beta = 3|\gamma|A^2/32\omega_F^3|\omega_F-\omega_0|^3,
\end{eqnarray}
where $\beta$ is the scaled modulation intensity and $E_{\rm sl}=\gamma C^4$ is the characteristic energy of motion in the rotating frame. This motion is slow on the time scale $\omega_F^{-1}$. Note that $E_{\rm sl}\ll \omega_0^2\langle q^2\rangle \sim \omega_0^2C^2$.
Operator $\hat g=g(Q,P)$ is the dimensionless Hamiltonian in the rotating frame. In the RWA, the Schr\"odinger equation for the RWA wave function $\psi(Q)$ in dimensionless slow time $\tau$ reads
\begin{equation}
\label{eq:Schrodinger_eq}
i\lambda\dot\psi \equiv i\lambda \partial_{\tau}\psi= \hat g \psi, \qquad \tau= t|\delta\omega|\equiv (\lambda E_{\rm sl}/\hbar)t.
\end{equation}
Operator $\hat g$ is independent of time and has a discrete spectrum, $\hat g|n\rangle=g_n|n\rangle$. The eigenvalues $g_n$ give the quasienergies in the RWA, $\varepsilon_n= (3E_{\rm sl}/8)g_n$ (we are using an extended $\varepsilon$-axis rather than limiting $\varepsilon$ to the analog of the first Brillouin zone $0\leq \varepsilon<\hbar\omega_F$). Function $g(Q,P)$ has the shape of a tilted Mexican hat and is shown in Fig.~\ref{fig:quasienergy}(a); the quasienergy levels are shown in Fig.~\ref{fig:quasienergy}(b).
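The discrete spectrum of $\hat g$ is straightforward to obtain numerically. Writing $Q$ and $P$ in terms of the ladder operators [$a=(2\lambda)^{-1/2}(Q+iP)$, as used below], one has $Q^2+P^2=\lambda(2a^{\dag}a+1)$ and $Q=(\lambda/2)^{1/2}(a+a^{\dag})$, so $\hat g$ is a banded matrix in the Fock basis. The sketch below diagonalizes it for the values of Fig.~1, $\lambda=0.041$ and $\beta=0.01$ (operator-ordering corrections of higher order in $\lambda$ are ignored).

```python
import numpy as np

lam, beta = 0.041, 0.01      # values used in Fig. 1
M = 120                      # Fock-space truncation (ample for these lam, beta)
n = np.arange(M)
a = np.diag(np.sqrt(n[1:]), 1)           # lowering operator in the Fock basis
Q = np.sqrt(lam / 2) * (a + a.T)         # Q = sqrt(lam/2)(a + a^dag)
# g = (1/4)(Q^2 + P^2 - 1)^2 - sqrt(beta) Q, with Q^2 + P^2 = lam(2 a^dag a + 1)
g = 0.25 * np.diag((lam * (2 * n + 1) - 1.0) ** 2) - np.sqrt(beta) * Q
g_n = np.linalg.eigvalsh(g)              # scaled quasienergies g_n
# lowest levels sit near the classical minimum of g(Q,P) (about -0.10)
# plus the zero-point contribution of order lam/2 times the local frequency
print(g_n[:3])
```

The low-lying $g_n$ returned by this sketch lie close to the classical minimum of $g(Q,P)$, with a level spacing of order $\lambda$ times the local vibration frequency about that minimum, as expected from the semiclassical picture.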
\noindent
\begin{figure}[h]
\begin{center}
\includegraphics[width=15cm]{fig_1_quasienergyc.eps}
\end{center}
\caption{(a) The Hamiltonian function in the rotating frame in the RWA. The extrema of $g(Q,P)$ correspond to the stable vibrational states in the limit of weak damping. (b) The cross-section $g(Q,0)$ and the quasienergy levels of the states localized about the extrema of $g(Q,P)$. Points $Q_{\min}, Q_{\max}$ and $Q_{\cal S}$ indicate the positions of the minimum, the local maximum, and the saddle point of $g(Q,P)$, respectively. The plots refer to $\beta = 0.01$ and $\lambda = 0.041$. (c) The transitions between the Fock states of the oscillator with energies $E_N\approx \hbar\omega_0(N+1/2)$ accompanied by emission of excitations in the bath, e.g., photons. Some of the corresponding transitions between quasienergy states are shown by small arrows in (b). The stationary state of the oscillator is formed on balance between relaxation and excitation by periodic modulation $F(t)$. }
\label{fig:quasienergy}
\end{figure}
In contrast to the Hamiltonian $H_0$, $\hat g$ is not a sum of the kinetic and potential energies. As seen from Fig.~\ref{fig:quasienergy}, the eigenstates localized near the local maximum of $g(Q,P)$ correspond to semiclassical orbits on the surface of the ``inner dome'' of $g(Q,P)$; these states become more strongly localized as $g_n$ {\em increases} toward the local maximum of $g(Q,P)$. The quasienergy level spacing $\propto \lambda E_{\rm sl}$ is small compared to the distance between the oscillator energy levels in the absence of modulation,
$|\varepsilon_n-\varepsilon_{n+1}|\sim \lambda E_{\rm sl} \ll \hbar\omega_0$.
\subsection{Master equation for linear coupling to the bath}
\label{subsec:master_equation}
The analysis of the oscillator dynamics is often done assuming that the oscillator is coupled to a thermal bath in such a way that the coupling energy is linear in the oscillator coordinate $q$ and thus in the oscillator ladder operators $a, a^{\dag}$ \cite{Schwinger1961}. In this case the coupling energy $H_i$ and the typical relaxation rate $\Gamma$ are of the form
\begin{equation}
\label{eq:linear_relaxation}
H_i= ah_{\rm b}+{\rm H.c.},\qquad \Gamma\equiv\Gamma(\omega_0)=\hbar^{-2}{\rm Re}~\int\nolimits_0^{\infty}dt\langle [h_{\rm b}^{\dag}(t),h_{\rm b}(0)]\rangle_{\rm b}e^{i\omega_0t},
\end{equation}
where $h_{\rm b}$ depends on the bath variables only and $\langle\ldots\rangle_{\rm b}$ denotes thermal averaging over the bath states. Relaxation (\ref{eq:linear_relaxation}) corresponds to transitions between neighboring energy levels of the oscillator in the lab frame, with energy transferred to bath excitations, see Fig.~\ref{fig:quasienergy}(c). The renormalization of the oscillator parameters due to the coupling is assumed to have been incorporated. For a smooth density of states of the bath, resonant modulation does not change the decay rate parameter, $\Gamma(\omega_F)\approx \Gamma(\omega_0)$. However, it excites the oscillator, as sketched in Fig.~\ref{fig:quasienergy}(c). In a stationary state of forced vibrations (in the lab frame) the energy provided by the modulation is balanced by the relaxation.
To the second order in the interaction (\ref{eq:linear_relaxation}), the master equation for the oscillator density matrix $\rho$ in dimensionless time $\tau$ reads
\begin{eqnarray}
\label{eq:master_eq}
\dot\rho\equiv\partial_{\tau}\rho=&& i\lambda^{-1}[\rho, \hat g]-\hat\kappa\rho, \qquad \hat\kappa\rho= \kappa(\bar n + 1)(a^\dagger a\rho-2a\rho a^\dagger+\rho a^\dagger a)\nonumber\\
&&+\kappa\bar n(a a^\dagger\rho-2a^\dagger\rho a+\rho aa^\dagger), \qquad \kappa = \Gamma/|\omega_F-\omega_0|.
\end{eqnarray}
Here, the term $\propto [\rho,\hat g]$ describes dissipation-free motion, cf. (\ref{eq:Schrodinger_eq}). Operator $\hat \kappa \rho$ describes dissipation and has the same form as in the absence of oscillator modulation \cite{Mandel1995,DK_review84}; $\kappa$ is the dimensionless decay rate, and $\bar n$ is the oscillator Planck number,
\begin{equation}
\label{eq:a_interms_Q}
a=(2\lambda)^{-1/2}(Q+iP),\qquad \bar n \equiv \bar n(\omega_0) =[\exp(\hbar\omega_0/k_BT)-1]^{-1}.
\end{equation}
In the classical limit $\lambda\to 0$ the oscillator described by (\ref{eq:master_eq}) can have one or two stable states of forced vibrations. Their positions in the rotating frame $(Q_{\rm a},P_{\rm a})$ are given by the stable stationary solutions of the classical equations of motion of the oscillator
\begin{equation}
\label{eq:classical}
\dot Q = \partial_Pg -\kappa Q, \qquad \dot P = -\partial_Q g -\kappa P.
\end{equation}
Equations (\ref{eq:classical}) are, essentially, the mean-field equations for the moments Tr$(Q\rho)$, Tr$(P\rho)$ for $\lambda\to 0$. For small damping $Q_{\rm a}$ and $P_{\rm a}$ are close to the extrema of $g(Q,P)$.
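As an illustration of Eq.~(\ref{eq:classical}), the sketch below locates its stationary states numerically with a grid-seeded Newton iteration and classifies their stability from the Jacobian. The values $\beta=0.01$ (as in Fig.~1) and $\kappa=0.05$ are illustrative choices for which the classical oscillator is bistable: one expects three fixed points, two of them stable.

```python
import numpy as np

beta, kappa = 0.01, 0.05   # beta as in Fig. 1; kappa is an illustrative damping
sb = np.sqrt(beta)

def F(z):
    """Right-hand side of Eq. (classical): (dQ/dtau, dP/dtau)."""
    Q, P = z
    r2 = Q * Q + P * P
    return np.array([P * (r2 - 1) - kappa * Q,
                     -(Q * (r2 - 1) - sb) - kappa * P])

def J(z):
    """Jacobian of F, used for Newton steps and stability analysis."""
    Q, P = z
    r2 = Q * Q + P * P
    return np.array([[2 * P * Q - kappa, (r2 - 1) + 2 * P * P],
                     [-((r2 - 1) + 2 * Q * Q), -2 * Q * P - kappa]])

fixed = []
for Q0 in np.linspace(-1.5, 1.5, 13):
    for P0 in np.linspace(-1.5, 1.5, 13):
        z = np.array([Q0, P0])
        for _ in range(60):                      # Newton iteration
            try:
                z = z - np.linalg.solve(J(z), F(z))
            except np.linalg.LinAlgError:
                break
        if np.linalg.norm(F(z)) < 1e-10 and \
           not any(np.linalg.norm(z - f) < 1e-4 for f in fixed):
            fixed.append(z)

stable = [z for z in fixed if np.all(np.linalg.eigvals(J(z)).real < 0)]
print(len(fixed), len(stable))
```

For small $\kappa$ the two stable fixed points found here lie close to the minimum and the local maximum of $g(Q,P)$, while the third (unstable) one is near the saddle, consistent with the discussion above.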
\section{Quantum heating}
\label{sec:quantum_heating}
\subsection{Balance equation}
\label{subsec:balance_equation}
We will concentrate on the oscillator dynamics in the case where the oscillator is strongly underdamped and its motion is semiclassical,
\begin{equation}
\label{eq:semiclassical}
\lambda \ll 1, \qquad \kappa \ll 1.
\end{equation}
In this case the number of quasienergy states localized about the extrema of $g(Q,P)$ is large, $\propto 1/\lambda$ [the scaled quasienergies of such states $g_n$ lie between the value of $g$ at the corresponding extremum and the saddle point value $g_{\cal S}$ of $g(Q,P)$ in Fig.~\ref{fig:quasienergy}]. Also, the spacing between the levels is large compared to their width, $|g_n-g_{n\pm 1}|\gg \lambda\kappa$. Where the latter condition is met, the off-diagonal matrix elements of the density matrix on the quasienergy states $\rho_{nm}\equiv \langle n|\rho|m\rangle$ ($n\neq m$) are small. To the lowest order in $\kappa$ the oscillator dynamics can be described by the balance equation for the populations $\rho_{nn}$ of quasienergy states. From (\ref{eq:master_eq})
\begin{equation}
\label{eq:balance_eqn}
\fl
\dot\rho_{nn}=\sum_m\left(W_{mn}\rho_{mm}-W_{nm}\rho_{nn}\right), \qquad W_{mn}=2\kappa\left[(\bar n + 1)|a_{nm}|^2 +\bar n |a_{mn}|^2\right],
\end{equation}
where
$a_{nm}\equiv \langle n|a|m\rangle$. We disregard tunneling when defining functions $|n\rangle\equiv \psi_n(Q)$, i.e., we use the wave functions localized about the extrema of $g(Q,P)$; the effect of tunneling is exponentially small for $\lambda \ll 1$. We count the localized states off from the corresponding extremum, i.e., for a given extremum the state with $n=0$ has $g_n$ closest to $g(Q,P)$ at the extremum.
An important feature of the rates of interstate transitions $W_{mn}$ is that, even for $T=0$ (and thus $\bar n = 0$), there are transitions both {\it toward} and {\it away from} the extrema of $g(Q,P)$. This is because the wave functions $|n\rangle$ are linear combinations of the wave functions of the oscillator Fock states, see Fig.~\ref{fig:quasienergy}(c). Therefore, even though relaxation corresponds to transitions down in the oscillator energy in Fig.~\ref{fig:quasienergy}(c), the transitions both up and down in quasienergy have nonzero rates. One can show that, for both extrema of $g(Q,P)$, the rates of transitions toward an extremum are larger than away from it. Therefore, depending on where the system was prepared initially, it will most likely move to one or the other extremum of $g(Q,P)$. This is why the extrema correspond to the stable states of forced vibrations of the modulated oscillator in the classical limit of a large number of localized states.
For small effective Planck constant $\lambda \ll 1$, the rates $W_{mn}$ can be calculated in an explicit form by finding the matrix elements $a_{mn}$ in the WKB approximation \cite{Dykman1988a,Guo2013}. The problem is then related to that of classical conservative motion with Hamiltonian $g(Q,P)$ and with equations of motion of the form $\dot Q=\partial_Pg, \dot P=-\partial_Qg$. A significant simplification comes from the fact that the classical trajectories $Q(\tau;g)$ are described by the Jacobi elliptic functions. As a result, $Q(\tau;g)$ is doubly periodic on the complex-$\tau$ plane, with real period $\tau_p^{(1)}(g)$ and complex period $\tau_p^{(2)}(g)$. For $|m-n|\ll \lambda^{-1}$ the matrix element $a_{mn}$ is given by the $(m-n)$th Fourier component of the function $a(\tau;g_n) = (2\lambda)^{-1/2}[Q(\tau;g_n)+iP(\tau;g_n)]$ \cite{LL_QM81}. It can be calculated along an appropriately chosen closed contour on the complex $\tau$-plane and is determined by the pole of $a(\tau; g_n)$. In particular, for the states localized about the local maximum of $g(Q,P)$ we obtain
\begin{equation}
\label{eq:rate_small_attractor}
|a_{n+k\;n}|^2= \frac{k^2\nu_n^4}{2\beta\lambda}
\frac{\exp[ k\nu_n\,{\rm Im}~(2\tau_*-\tau_p^{(2)})]}{|\sinh[ ik\nu_n\,\tau_p^{(2)}/2]|^2}, \quad \nu_n\equiv \nu(g_n)=2\pi/\tau_p^{(1)}(g_n).
\end{equation}
Here, $\tau_*\equiv \tau_*(g_n)$ and $\tau_p^{(2)} \equiv \tau_p^{(2)}(g_n)$ [Im~$\tau_*, {\rm Im}~\tau_p^{(2)}>0$]; $\tau_*(g)$ is the pole of $Q(\tau;g)$ closest to the real axis;
$\nu(g)$ is the dimensionless frequency of vibrations in the rotating frame with quasienergy $g$. To the leading order in $\lambda$, we have $W_{n\;n+k} = W_{n-k\;n}$ for $n,n\pm k \gg 1$.
\subsection{Effective temperature of vibrations about a stable state}
\label{subsec:T_e}
Equation~(\ref{eq:rate_small_attractor}) has to be modified for states very close to the extrema of $g(Q,P)$. Near these extrema the classical motion of the oscillator in the rotating frame reduces to harmonic vibrations. One can introduce raising and lowering operators $b$ and $b^\dagger$ for these vibrations (via a squeezing transformation) and expand $g(Q,P)$ near an extremum as
\begin{eqnarray}
\label{eq:squeezed_operators}
Q-Q_{\rm a}+iP=(2\lambda)^{1/2}(b\cosh \varphi_* - b^{\dag}\sinh \varphi_*),\nonumber\\
\hat g\approx g(Q_{\rm a},0) + \lambda\nu_0\left(b^{\dag}b+1/2\right){\rm sgn}\partial_Q^2 g,
\qquad \nu_0=\left|\partial_Q^2 g\partial_P^2 g\right|^{1/2}
\end{eqnarray}
[$(Q_{\rm a},P=0)$ is the position of the considered extremum; it is given by the equation $\partial_Qg =Q(Q^2-1)-\beta^{1/2}=0$]. The derivatives of $g$ in (\ref{eq:squeezed_operators}) are evaluated at $(Q_{\rm a},P=0)$. The squeezing parameter $\varphi_*$ is given by the equation $\tanh \varphi_*=(|\partial_Q^2 g|^{1/2}-|\partial_P^2 g|^{1/2})/(|\partial_Q^2 g|^{1/2}+|\partial_P^2 g|^{1/2})$.
From (\ref{eq:balance_eqn}) and (\ref{eq:squeezed_operators}), near an extremum of $g$ we have
\begin{eqnarray}
\label{eq:rates_near_extremum}
W_{m+1\;m}=2\kappa(m+1)(\bar n_e+1),\qquad W_{m\;m+1}=2\kappa(m+1)\bar n_e,\nonumber\\
\bar n_e=\bar n\cosh^2\varphi_* + (\bar n+1)\sinh^2\varphi_*,
\end{eqnarray}
whereas $W_{m\;m+k}=0$ for $|k|>1$. Equation (\ref{eq:rates_near_extremum}) is a familiar expression for the transition rates between the states of a harmonic oscillator coupled to a thermal bath, with $\bar n_e$ being the Planck number of the excitations of this fictitious bath at the frequency of vibrations in the rotating frame $\nu_0\delta\omega$.
From (\ref{eq:rates_near_extremum}), the stationary distribution of the modulated oscillator over its quasienergy states near an extremum of $g(Q,P)$ is of the Boltzmann type, with effective temperature ${\mathcal T}_e=\lambda\nu_0/ \ln[(\bar{n}_e+1)/\bar{n}_e]$ \cite{Marthaler2006,Dykman2012}. In agreement with the qualitative picture discussed above, this temperature is nonzero even where the temperature of the true bath is $T=0$. This is the effect of quantum heating due to quantum fluctuations in a nonequilibrium system.
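The chain extremum $\to$ $(\nu_0,\varphi_*)$ $\to$ $\bar n_e$ $\to$ ${\cal T}_e$ is straightforward to evaluate numerically. The sketch below assumes the quartic form $g(Q,P)=\frac{1}{4}(Q^2+P^2-1)^2-\beta^{1/2}Q$, which is consistent with the extremum condition $\partial_Qg=Q(Q^2-1)-\beta^{1/2}=0$ quoted above; the values of $\beta$ and $\lambda$ are illustrative, not taken from the experiments discussed below.

```python
import numpy as np

def extrema_Q(beta):
    # real roots of dg/dQ = Q*(Q**2 - 1) - sqrt(beta) = 0 at P = 0
    roots = np.roots([1.0, 0.0, -1.0, -np.sqrt(beta)])
    return np.sort(roots[np.isreal(roots)].real)

def quantum_heating_params(Qa, nbar=0.0):
    # second derivatives of the assumed quartic g(Q, P) at the extremum
    d2Q = 3.0 * Qa**2 - 1.0
    d2P = Qa**2 - 1.0
    nu0 = np.sqrt(abs(d2Q * d2P))                 # scaled vibration frequency
    r = (abs(d2Q)**0.5 - abs(d2P)**0.5) / (abs(d2Q)**0.5 + abs(d2P)**0.5)
    phi = np.arctanh(r)                           # squeezing parameter varphi_*
    n_e = nbar * np.cosh(phi)**2 + (nbar + 1.0) * np.sinh(phi)**2
    return nu0, phi, n_e

beta, lam = 0.035, 0.01     # illustrative scaled intensity and Planck constant
Qroots = extrema_Q(beta)    # for this beta: [saddle, local max, local min] of g
for Qa in (Qroots[1], Qroots[2]):
    nu0, phi, n_e = quantum_heating_params(Qa)
    T_e = lam * nu0 / np.log((n_e + 1.0) / n_e)   # effective temperature, nbar = 0
    print(f"Q_a = {Qa:+.3f}: nu0 = {nu0:.3f}, n_e = {n_e:.3f}, T_e = {T_e:.4f}")
```

Note that $\bar n_e$, and hence ${\cal T}_e$, remains nonzero for $\bar n=0$ whenever $\varphi_*\neq 0$, i.e., whenever $|\partial_Q^2g|\neq|\partial_P^2g|$.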
\noindent
\begin{figure}[h]
\begin{center}
\includegraphics[width=5cm,angle=0]{fig_2a_temperature2.eps} \hspace{2cm}
\includegraphics[width=5.2cm]{fig_2b_spectra1.eps}
\end{center}
\caption{(a) The effective Planck number $\bar n_e$ of the vibrations about the large-amplitude state of the modulated oscillator, which corresponds to the minimum of $g(Q,P)$ in the small-damping limit; $\beta$ is the scaled driving field intensity (\ref{eq:H_rotating_frame}). Squares: experimental data \cite{Ong2013}. Solid line: equation (\protect\ref{eq:rates_near_extremum}) for $\bar n=0$ (see also \cite{Dykman2012}). Triangles: the estimate of the experimentally measured parameter discussed in Appendix A. (b) The scaled power spectra of the oscillator occupation number $\hat n=a^\dagger a$ for vibrations about the large-amplitude stable state for $|\delta\omega|/\kappa = 3.9$ (the value used in \cite{Ong2013}). The black and red curves correspond to $\beta = 0.17$ and 0.8. The triangles in (a) are determined from the ratio $r_\Phi$ of the heights of the lower and higher peaks of $\Phi_{nn}$ as $r_\Phi/(1-r_\Phi)$.}
\label{fig:q_heating}
\end{figure}
Quantum heating of a resonantly modulated oscillator was recently observed in an elegant experiment \cite{Ong2013} using a mode of a microwave cavity with an embedded Josephson junction \cite{Bertet2012}. The occupation of the excited quasienergy states was revealed using a two-level system (a transmon qubit) as a probe. As seen from Fig.~\ref{fig:q_heating}, the results of the experiment are in qualitative agreement with the above theory. The agreement improves for larger scaled field intensity $\beta$ (\ref{eq:H_rotating_frame}), where the ratio $\kappa/\nu_0$ is smaller. It is in the range of small $\kappa/\nu_0$ that the quantum temperature is a good characteristic of the distribution over quasienergy states, as the lifetime of these states largely exceeds the reciprocal level spacing (scaled by $\hbar$). Still, even for not too small $\kappa/\nu_0$, the technique developed in \cite{Ong2013} makes it possible to reveal the broadening of the stationary distribution of the modulated oscillator due to quantum fluctuations far from equilibrium. A characteristic of this effect is discussed in Appendix A and the corresponding results are shown in Fig.~\ref{fig:q_heating}.
\section{Switching between the stable states}
\label{sec:switching}
The effect of diffusion over quasienergy states due to quantum fluctuations is not limited to the quantum heating described above. Along with small fluctuations, which lead to comparatively small deviations of the quasienergy from its value at an extremum of $g(Q,P)$, there occasionally occur large fluctuations. They push the oscillator far away from the initially occupied extremum. It is clear that if, as a result of such a fluctuation, the oscillator goes ``over the quasienergy barrier'' to states localized about the other extremum, then with probability $\approx 1$ it will approach this other extremum. Such a transition corresponds to switching between the stable states of forced vibrations via the quantum activation mechanism. As seen from Fig.~\ref{fig:quasienergy}(a), to within a factor $\sim 1/2$ the switching rate $W_{\rm sw}$ is determined by the probability to reach the saddle-point value $g_{\cal S}$ of $g(Q,P)$.
The switching rate is small, as switching requires that the oscillator make many interlevel transitions with rates $W_{mn}$ smaller than the rates of transitions in the opposite direction, $W_{nm}$. Therefore, before the oscillator switches, a quasistationary distribution is formed over its states localized about the initially occupied extremum of $g(Q,P)$. This is similar to what happens in thermally activated switching over a high barrier \cite{Kramers1940}. However, in contrast to systems in thermal equilibrium, a modulated oscillator generally does not have detailed balance. Its statistical distribution has a simple Boltzmann form with temperature ${\cal T}_e$ only for small damping and only close to the extrema of $g(Q,P)$. Therefore the standard technique developed for finding the switching rate in quantum equilibrium systems \cite{Langer1967,Coleman1977,Affleck1981,Caldeira1983} does not apply. Also, even for $T\to 0$ an oscillator modulated close to its eigenfrequency generally does not switch via tunneling (see \cite{Vogel1988,Peano2006a,Serban2007,Larsen1976,Sazonov1976,Dmitriev1986,Wielinga1993,Marthaler2007a} for the theory of tunneling switching for additive and parametric modulation). Switching via quantum activation is exponentially more probable.
\subsection{Relation to chemical kinetics and population dynamics}
\label{subsec:chem_kinetics}
For small scaled decay rate $\kappa$ the switching rate $W_{\rm sw}$ can be obtained from the balance equation (\ref{eq:balance_eqn}). An approach to solving this equation was discussed earlier \cite{Dykman1988a,Marthaler2006}. Here we provide a formulation that gives an insight into how the oscillator actually moves in switching and also makes a direct connection with the technique developed in chemical kinetics and population dynamics. The balance equation is broadly used in these areas. It describes chemical or biochemical reactions in stirred reactors (no spatial nonuniformity). The reactions can be thought of as resulting from molecular collisions in which molecules change, and if the collision duration is small compared to the reciprocal collision rate the kinetics is described by a Markov equation \cite{vanKampen_book}
\begin{equation}
\label{eq:chem_kin}
\dot\rho({\bf X},\tau)=\sum_{\bf r}[W({\bf X}-{\bf r},{\bf r})\rho({\bf X}-{\bf r},\tau)-W({\bf X},{\bf r})\rho({\bf X},\tau)].
\end{equation}
Here, ${\bf X}=(X_1,X_2,...)$ is the vector that gives the numbers of molecules $X_i$ of different types $i$, and $\rho$ is the probability for the system to be in a state with given ${\bf X}$; $W({\bf X},{\bf r})$ is the rate of a reaction in which the number of molecules changes from ${\bf X}$ to ${\bf X}+{\bf r}$. Typically, $X_i$ are large, $X_i\propto N\gg 1$, where $N$ is the total number of molecules. In contrast, the change of the number of molecules in an elementary collision is $|{\bf r}|\sim 1$, because it is unlikely that many molecules would collide at a time. Equation (\ref{eq:chem_kin}) is also often used in population dynamics, including epidemic models, cf. \cite{Anderson1992}. In this case the components of ${\bf X}$ give populations of different species.
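As a concrete illustration of (\ref{eq:chem_kin}), the sketch below samples one trajectory of a toy one-species scheme with the Gillespie algorithm; the reactions and their rates ($X\to X+1$ at rate $BX$, $X\to X-1$ at rate $X^2/N$) are invented for illustration and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie(X0, N, B=2.0, t_max=10.0):
    # One trajectory of the Markov chain (eq. chem_kin) for the toy scheme
    #   X -> X + 1 at rate W(X, +1) = B*X      (branching)
    #   X -> X - 1 at rate W(X, -1) = X**2/N   (pairwise annihilation)
    X, t = X0, 0.0
    ts, xs = [t], [X]
    while t < t_max and X > 0:
        w_up, w_down = B * X, X * X / N
        total = w_up + w_down
        t += rng.exponential(1.0 / total)          # waiting time to next event
        X += 1 if rng.random() < w_up / total else -1
        ts.append(t)
        xs.append(X)
    return np.array(ts), np.array(xs)

N = 500
ts, xs = gillespie(X0=50, N=N)
print(xs[-1] / N)
```

In the mean-field limit (\ref{eq:mean_field_chemical}) this scheme gives $\dot x=Bx-x^2$, with the stable state $x=B$; the sampled trajectory fluctuates about it with spread $\propto N^{-1/2}$.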
Since the number of molecules (population) is large, $N\gg 1$, fluctuations are small on average. Disregarding fluctuations corresponds to the mean-field approximation. In this approximation one can multiply (\ref{eq:chem_kin}) by ${\bf X}$ and sum over ${\bf X}$ while assuming that the width of the distribution $\rho({\bf X})$ is small. This gives the equation of motion for the scaled mean number of molecules (population)
\begin{equation}
\label{eq:mean_field_chemical}
\dot{\overline{\bf x}} = \sum_{\bf r} {\bf r} w(\overline{\bf x},{\bf r}), \qquad {\bf x} = {\bf X}/N,\qquad w({\bf x},{\bf r})=W({\bf X},{\bf r})/N.
\end{equation}
Stable solutions of (\ref{eq:mean_field_chemical}) give the stable states of chemical (population) systems. There may also be unstable stationary or periodic states. In population dynamics, an unstable stationary solution of (\ref{eq:mean_field_chemical}) can be the state where one of the species goes extinct.
Equation (\ref{eq:chem_kin}) describes diffusion in the space of variables ${\bf x}$. Along with small ($\propto N^{-1/2}$) fluctuations around the stable states, this diffusion leads to rare large deviations (of order unity in ${\bf x}$-space) and to switching between the stable states. There is an obvious similarity between diffusion over the number of molecules and diffusion over quasienergy states of a modulated oscillator, but there are also some subtle differences, which we discuss below. There is also an obvious difference, with profound consequences: in the case of an oscillator the transition rates $W_{mn}$ (\ref{eq:balance_eqn}), (\ref{eq:rate_small_attractor}) are not limited to $|m-n|\sim 1$.
\subsection{The eikonal approximation}
\label{subsec:eikonal}
The role of the large number of molecules (population) in a modulated oscillator is played by the reciprocal effective Planck constant $\lambda^{-1}$, which determines the number of states localized about the extrema of $g(Q,P)$, cf. Fig.~\ref{fig:quasienergy}. For $\lambda\ll 1$ it is convenient to switch from the state number $n$ to the classical mechanical action $I$ for the Hamiltonian orbits $Q(\tau;g),P(\tau;g)$, which are described by equations $\dot Q=\partial_Pg(Q,P), \dot P=-\partial_Qg(Q,P)$,
\begin{equation}
\label{eq:I_defined}
I=I(g)= (2\pi)^{-1}\int_0^{2\pi/\nu(g)}P(\tau;g)\dot Q(\tau;g)d\tau, \qquad \partial_gI=\nu^{-1}(g),
\end{equation}
where $\nu(g)$ is the vibration frequency for given $g$ [$2\pi I$ gives the area of the cross-section of the surface $g(Q,P)$ in Fig.~\ref{fig:quasienergy}(a) by plane $g=$const]. One can show that, in spite of the nonstandard form of $g(Q,P)$, the semiclassical quantization condition has the familiar form $I_n\equiv I(g_n)=\lambda (n+1/2)$.
In the semiclassical approximation the rates of transitions between quasienergy states $W_{mn}$ become functions of the quasicontinuous variable $I$ and can be written as $W_{mn}= W(I_m,n-m)$. The dependence of $W$ on $I$ is smooth, as seen from (\ref{eq:balance_eqn}) and (\ref{eq:rate_small_attractor}), $W(I_m,n-m)\approx W(I_n, n-m)$ for typical $|n-m|\ll 1/\lambda$.
Similar to (\ref{eq:mean_field_chemical}), in the neglect of quantum fluctuations the equation for $\overline I = \sum\nolimits_nI_n\rho_n$ has a simple form
\begin{equation}
\label{eq:mean_field_I}
\dot{\overline I}= \sum_r rw(\overline I,r),\qquad w(I,r)=\lambda W(I,r).
\end{equation}
This equation shows how the oscillator is most likely to evolve. Using that the matrix element $a_{mn}$ in the expression (\ref{eq:balance_eqn}) for the rate $W_{mn}$ is the $(n-m)$th Fourier component of the function $(2\lambda)^{-1/2}[Q(\tau;g_m) + iP(\tau;g_m)]$, one can show by invoking Stokes' theorem that the time evolution of $\overline I$ is extremely simple,
\begin{equation}
\label{eq:deterministic_I}
\dot{\overline I}=-2\kappa \overline I, \qquad \overline I< I_{\cal S},
\end{equation}
where $I_{\cal S}$ is the value of $I(g)$ for $g$ approaching the saddle-point value $g_{\cal S}$ from the side of the extremum of $g(Q,P)$ of interest; the values of $I_{\cal S}$ are different on the opposite sides of $g_{\cal S}$. Equation (\ref{eq:deterministic_I}) coincides with the result for the evolution of $\overline g$ for a classical modulated oscillator \cite{Dykman1979a}. We note that the semiclassical approximation breaks down very close to the saddle point (in particular, the relation $W(I_m,n-m)\approx W(I_n, n-m)$ clearly ceases to apply), but the width of the corresponding range of $I$ goes to zero as $\lambda\to 0$.
We now consider the quasistationary distribution $\rho_n$ about the initially occupied stable state. It is formed on times $(\kappa |\delta\omega|)^{-1}\ll t\ll W_{\rm sw}^{-1}$. To find $\rho_n$ far from the stable state we use the eikonal approximation \cite{Dykman1988a,Marthaler2006}, but in the form similar to that used in chemical kinetics and population dynamics \cite{Kamenev2011}. We set
\begin{equation}
\label{eq:eikonal_defined}
\rho_n = \exp[-R(I_n)/\lambda]
\end{equation}
and assume that $|\partial_IR| \ll \lambda^{-1}$. Then $\rho_{n+r}\approx \rho_n\exp[-r\partial_IR]$ for $|r|\ll \lambda^{-1}$ and, to the leading order in $\lambda$, the balance equation (\ref{eq:balance_eqn}) becomes
\begin{equation}
\label{eq:eikonal_Hamiltonian}
\partial_\tau R=-{\cal H}(I,\partial_IR), \qquad {\cal H}(I,p_I)=\sum_r w (I,r)[\exp(rp_I) -1].
\end{equation}
Equation (\ref{eq:eikonal_Hamiltonian}) has the form of the Hamilton-Jacobi equation for an auxiliary system with coordinate $I$, momentum $p_I$, and action variable $R(I)$ \cite{LL_Mechanics2004}. It thus maps the problem of finding the distribution of the modulated oscillator, which is formed by quantum fluctuations, onto the problem of classical mechanics. The quasistationary distribution is determined by the stationary solution of (\ref{eq:eikonal_Hamiltonian}), i.e., by the solution of equation ${\cal H}(I,\partial_IR)=0$. If there are several solutions, of physical interest is the solution with the minimal $R(I)$, as it gives the leading-order term in $\ln\rho_n$.
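Near an extremum, where the only nonzero rates are the single-step ones of (\ref{eq:rates_near_extremum}), so that $w(I,+1)=2\kappa I\bar n_e$ and $w(I,-1)=2\kappa I(\bar n_e+1)$, the stationary condition ${\cal H}(I,\partial_IR)=0$ can be solved numerically. The sketch below (plain bisection; the value of $\bar n_e$ is illustrative) recovers the slope $\partial_IR=\ln[(\bar n_e+1)/\bar n_e]$ of the effective Boltzmann distribution:

```python
import math

def H_reduced(p, n_e):
    # H(I, p_I) / (2*kappa*I) for the single-step rates near an extremum:
    # w(I, +1) = 2*kappa*I*n_e,  w(I, -1) = 2*kappa*I*(n_e + 1)
    return n_e * (math.exp(p) - 1.0) + (n_e + 1.0) * (math.exp(-p) - 1.0)

def nontrivial_root(n_e, lo=1e-9, hi=20.0, tol=1e-12):
    # H_reduced vanishes at p = 0 and at one more point p > 0; bisection
    # applies because H_reduced < 0 just above p = 0 and > 0 for large p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H_reduced(mid, n_e) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

n_e = 0.51                                  # illustrative effective Planck number
p_numeric = nontrivial_root(n_e)
p_analytic = math.log((n_e + 1.0) / n_e)    # slope of R(I) near the extremum
print(p_numeric, p_analytic)
```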
\subsection{Optimal switching trajectory}
\label{subsec:instanton}
An advantageous feature of the formulation (\ref{eq:eikonal_Hamiltonian}) is that it provides an insight into how the quantum oscillator evolves in large fluctuations that lead to the occupation of quasienergy states far from the initially occupied extremum of $g(Q,P)$. Even though diffusion over quasienergy states is a random process and different sequences of interstate transitions can bring the system to a given quasienergy state, the probabilities of such sequences are strongly different. Of physical interest is the most probable sequence, known as the optimal fluctuation. For classical fluctuating systems it has been understood theoretically and shown in experiments and simulations \cite{Kamenev2011,Luchinsky1997b,Hales2000,Ray2006,Chan2008} that the evolution of the system in the optimal fluctuation, i.e., the optimal fluctuational trajectory, is given by the classical trajectory of the auxiliary Hamiltonian system, which in the present case is described by the equations
\begin{equation}
\label{eq:optimal_trajectory}
\dot I=\partial{\cal H}(I,p_I)/\partial p_I, \; \dot p_I=-\partial {\cal H}(I,p_I)/\partial I;\quad R(I)=\int_0^Ip_IdI.
\end{equation}
The concept of the optimal fluctuational trajectory can be extended to the quantum oscillator. Such a trajectory for the action variable $I$ is well-defined, since any information about the oscillator phase is automatically erased and the range of the $I$ values largely exceeds the quantum uncertainty in $I$, which is $\propto \lambda$. Therefore the optimal fluctuational trajectory $I(t)$ can be measured in experiment in the same way as is done in classical systems.
In (\ref{eq:optimal_trajectory}) we have set $R(0)=0$ and thus ignored the normalization factor (an analog of the reciprocal partition function) in the expression (\ref{eq:eikonal_defined}) for $\rho_n$. This factor leads to a correction $\propto\lambda$ to $R(0)$. Since, to logarithmic accuracy, the switching rate is determined by the probability of approaching the saddle-point value $I_{\cal S}$ of $I$, we have
\begin{equation}
\label{eq:R_A_defined}
W_{\rm sw}\sim \kappa |\delta\omega|\exp(-R_A/\lambda), \qquad R_A=R(I_{\cal S}).
\end{equation}
Parameter $R_A$ plays the role of the effective activation energy for switching via quantum activation, with the effective Planck constant $\lambda$ replacing the temperature in the conventional expression for thermally activated switching.
Optimal fluctuations away from the extremum of $g(Q,P)$ are described by optimal trajectories that emanate from $I=0$, which is reflected in (\ref{eq:optimal_trajectory}). The value of the momentum $p_I\equiv \partial_IR$ for $I\to 0$ on the trajectory can be found by noticing that the distribution over quasienergy near the extremum of $g(Q,P)$ is of the form of the Boltzmann distribution with effective temperature ${\cal T}_e$, and thus $R\propto I/{\cal T}_e$; from (\ref{eq:rates_near_extremum}) $p_I= \ln[(\bar n_e +1)/\bar n_e]$ for $I\to 0$. Then from (\ref{eq:eikonal_Hamiltonian}), $\dot I =2\kappa I$ on the optimal trajectory for $I\to 0$. As expected, the system moves along the optimal fluctuational trajectory away from the stable state of fluctuation-free dynamics.
The facts that $p_I\neq 0$ at the starting point of the optimal trajectory and that the state $I=0$ lies on the boundary of the available values of $I$ are connected with each other and present a distinctive feature of the oscillator dynamics. In chemical kinetics and population dynamics, stable states usually lie in the middle of the space of dynamical variables ${\bf X}$. The probability distribution has a Gaussian maximum at such ${\bf X}$, and then the momentum on the optimal trajectory is equal to zero \cite{Khasin2009,Kamenev2011}. The states $(I=0,p_I=0)$ and $(I=0,p_I= \ln[(\bar n_e +1)/\bar n_e])$ are stationary points of the Hamiltonian ${\cal H}(I,p_I)$. From (\ref{eq:rates_near_extremum}), $\dot I=\dot p_I= 0$ at these points. The motion of the system near these points is exponential in time and is shown in Fig.~\ref{fig:instanton}.
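For the single-step rates near an extremum one can integrate the Hamilton equations (\ref{eq:optimal_trajectory}) explicitly. Writing ${\cal H}=2\kappa I f(p_I)$ with $f(p)=\bar n_e(e^p-1)+(\bar n_e+1)(e^{-p}-1)$, both $p_I=0$ and $p_I=\ln[(\bar n_e+1)/\bar n_e]$ are invariant lines ($f=0$, so $\dot p_I=0$), and along them $\dot I=2\kappa I f'(p_I)$ gives $\dot I=-2\kappa I$ (relaxation) and $\dot I=+2\kappa I$ (optimal fluctuation), respectively. A minimal numerical check, with illustrative parameter values:

```python
import math

def f(p, n_e):
    # H(I, p_I) = 2*kappa*I*f(p_I) for the single-step rates near an extremum
    return n_e * (math.exp(p) - 1.0) + (n_e + 1.0) * (math.exp(-p) - 1.0)

def df(p, n_e):
    # df/dp; on an invariant line p = const, dI/dt = 2*kappa*I*df(p)
    return n_e * math.exp(p) - (n_e + 1.0) * math.exp(-p)

def evolve_I(I0, p, n_e, kappa, dt=1e-4, t=5.0):
    # Euler integration of dI/dt = 2*kappa*I*df(p) at fixed p (where f(p) = 0)
    I = I0
    for _ in range(int(round(t / dt))):
        I += dt * 2.0 * kappa * I * df(p, n_e)
    return I

n_e, kappa = 0.51, 0.05                        # illustrative parameters
p_fluct = math.log((n_e + 1.0) / n_e)          # optimal-fluctuation line
I_relax = evolve_I(1.0, 0.0, n_e, kappa)       # relaxation: I ~ exp(-2*kappa*t)
I_fluct = evolve_I(1e-3, p_fluct, n_e, kappa)  # fluctuation: I ~ exp(+2*kappa*t)
print(I_relax, I_fluct)
```

Since $f'(0)=-1$ and $f'(\ln[(\bar n_e+1)/\bar n_e])=1$ identically, the exponential growth and decay rates $\pm 2\kappa$ do not depend on $\bar n_e$, in agreement with the text.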
\noindent
\begin{figure}[h]
\begin{center}
\includegraphics[width=15cm]{fig_3_instantond.eps}
\end{center}
\caption{(a) The mean-field (fluctuation free) and optimal fluctuational trajectories of the action variables. Because the system has detailed balance for $\bar n=0$, the optimal trajectory in this case is the time-reversed mean-field trajectory. The data refer to the trajectories for the local maximum of $g(Q,P)$ in Fig.~\protect\ref{fig:quasienergy} for $\beta = 0.035$. The shape of the trajectory changes discontinuously where $\bar n$ becomes nonzero; the trajectories for $\bar n=0$ and $\bar n\to 0$ coincide only for $I<I_{\bar n}$. The limit $\bar n\to 0$ is taken with the constraint $\bar n\gg \lambda^{3/2}$. (b) The phase portrait of the auxiliary Hamiltonian system that describes large fluctuations of the oscillator in the small-damping limit. The real-time instantons in (a) correspond to the trajectories in phase space of the same color. The gray area shows the region where ${\cal H}(I,p_I)$ remains finite for $\bar n\neq 0$; for $\bar n=0$, ${\cal H}$ remains finite in the whole region shown in the figure. (c) The logarithm of the probability distribution $R(I_n)\approx -\lambda\ln\rho_n$ for $\bar n=0$ and $\bar n>0$. }
\label{fig:instanton}
\end{figure}
Figure~\ref{fig:instanton}(a) shows the mean-field (fluctuation-free) trajectory $\overline I(t)$ and the optimal trajectory $I(t)$ obtained numerically from equations (\ref{eq:deterministic_I}) and (\ref{eq:optimal_trajectory}), respectively. An interesting feature of the considered model of the modulated quantum oscillator is that it satisfies the detailed balance condition for $T=0$ and thus $\bar n = [\exp(\hbar\omega_0/k_BT)-1]^{-1} =0$ \cite{Drummond1980c}. This is seen from the explicit expression for the rates (\ref{eq:balance_eqn}) and (\ref{eq:rate_small_attractor}), as for $\bar n=0$ they meet the familiar detailed balance condition $W_{n\;n+k}/W_{n+k\;n}=\exp(-k/\xi_n)$ [the explicit form of $\xi_n\equiv \xi(I_n)$ follows from (\ref{eq:rate_small_attractor})]. Therefore $p_I=1/\xi(I)$, and one can show from (\ref{eq:optimal_trajectory}) that $\dot I=2\kappa I$. As a consequence, the optimal fluctuational trajectory $I(t)$ is the time-reversed mean-field trajectory $\overline I(t)$. This is a generic feature of classical systems with detailed balance, see \cite{Luchinsky1997b}. Our results show that the symmetry also holds in quantum systems provided the notion of a trajectory is well-defined.
Of special interest is the vicinity of the saddle-point value of the action variable $I_{\cal S}$, see Fig.~\ref{fig:instanton}. In a dramatic distinction from chemical kinetics, there is no slowing down of $I(t)$ near $I_{\cal S}$. The quantity $I_{\cal S}$ is a boundary value of $I$ for states localized about a given extremum of $g(Q,P)$ in Fig.~\ref{fig:quasienergy}. The functions $\dot{\overline I}$ and $\dot I$ are discontinuous there. This is an artifact of the balance equation approximation, which applies in the weak damping limit where the dimensionless frequency $\nu(g)\gg \kappa$. For $g\to g_{\cal S}$ the frequency $\nu(g)\to 0$, and the approximation breaks down. With account taken of decay, in the region of bistability the oscillator has a ``true'' unstable stationary state in the neglect of fluctuations. Both the mean-field trajectory and the optimal trajectory in phase space move away from or approach this state exponentially in time, cf.~\cite{Dykman1988a}, but the region of $I$ where this happens is very narrow for small $\kappa$.
\subsection{Fragility in the problem of large rare fluctuations}
\label{subsec:fragility}
A striking feature of the optimal fluctuational trajectories obvious from Fig.~\ref{fig:instanton} is that these trajectories have different shapes depending on whether the oscillator Planck number is $\bar n=0$ or $\bar n > 0$. The discontinuous change of the trajectories with respect to $\bar n$, and the associated change of the logarithm of the distribution $R(I)$ and of the activation energy for switching $R_A$, show the fragility of the detailed-balance solution for $\bar n =0$ \cite{Dykman1988a,Marthaler2006}. It has been found that fragility also emerges in a very different type of problem, the problem of population dynamics described by equation (\ref{eq:chem_kin}) \cite{Khasin2009}. In particular, the well-known result for the rate of disease extinction in the presence of detailed balance \cite{Weiss1971,Leigh1981,Jacquez1993,Doering2005a} can change discontinuously with the varying elementary rates $W({\bf X},{\bf r})$ as the detailed balance is broken.
We now show that the condition for the onset of fragility proposed in \cite{Khasin2009} applies also to the modulated oscillator, even though the divergence it reveals shows up in a different fashion. The condition relies on the expression for the switching exponent. Similar to how it was done for the oscillator, this exponent can be found by seeking the solution of the master equation (\ref{eq:chem_kin}) in the eikonal form $\rho({\bf X})=\exp[-N\tilde R({\bf x})]$. To the leading order in $1/N$, the problem is then mapped onto Hamiltonian dynamics of an auxiliary system with mechanical action $\tilde R({\bf x})$. From (\ref{eq:chem_kin}), the Hamiltonian of the auxiliary system is
\begin{equation}
\label{eq:chem_Hamiltonian}
{\cal H}({\bf x},{\bf p})=\sum_{\bf r} w({\bf x},{\bf r})[\exp({\bf r}{\bf p})-1], \quad{\bf p} = \partial_{\bf x} \tilde R
\end{equation}
[as before, we use that $W({\bf X}-{\bf r},{\bf r})\approx W({\bf X},{\bf r})$]. If the system is initially near a stable state ${\bf x}_a$ [a stable solution of (\ref{eq:mean_field_chemical})], $\tilde R({\bf x})$ is determined by the Hamiltonian trajectories that emanate from ${\bf x}_a$. From (\ref{eq:chem_Hamiltonian}), $\tilde R({\bf x})=\int_{{\bf x}_a}^{\bf x} {\bf p} d{\bf x}$. The rate of switching from ${\bf x}_a$ (or extinction, in the extinction problem) is $W_{\rm sw}\propto \exp(-N\tilde R_A)$. Similar to the quantum oscillator,
\begin{equation}
\label{eq:activation_reaction}
\tilde R_A = \int_{{\bf x}_a}^{{\bf x}_{\cal S}}{\bf p}\, d{\bf x} = \int dt\, {\bf p}(t)\dot{\bf x}(t).
\end{equation}
Here ${\bf x}_{\cal S}$ is the saddle point of the deterministic dynamics (\ref{eq:mean_field_chemical}); it can be shown that it is the Hamiltonian trajectory that goes to the saddle point that provides the switching or extinction exponent $\tilde R_A$, cf. \cite{Dykman1979a,Kamenev2011,Herwaarden1995}. Both ${\bf x}_a$ and ${\bf x}_{\cal S}$ are stationary points of the Hamiltonian ${\cal H}$, and the integral over time in (\ref{eq:activation_reaction}) goes from $-\infty$ to $\infty$. This is a significant distinction from the modulated oscillator problem; there equations (\ref{eq:optimal_trajectory}) and (\ref{eq:R_A_defined}) for the activation exponent can be written as
\[R_A = \int_{-\infty}^0dt p_I(t)\dot I(t),\]
where we set the instant where $I(t)$ reaches $I_{\cal S}$ on the optimal trajectory to be $t=0$.
A small change of the reaction rates $W({\bf X},{\bf r})\to W({\bf X},{\bf r})+ \epsilon W^{(1)}({\bf X},{\bf r})$ ($\epsilon \ll 1$) leads to the linear in $\epsilon$ change of the Hamiltonian, ${\cal H} \to {\cal H} + \epsilon {\cal H}^{(1)}$, as seen from (\ref{eq:chem_Hamiltonian}). The action is then also changed. To the first order in $\epsilon$, $\tilde R_A\to \tilde R_A+\epsilon\tilde R_A^{(1)}$. The correction term is given by a simple expression familiar from the Hamiltonian mechanics \cite{LL_Mechanics2004},
\begin{equation}
\label{eq:correction_to_action}
\epsilon\tilde R_A^{(1)} = -\epsilon\int dt {\cal H}^{(1)}\Bigl({\bf x}(t),{\bf p}(t)\Bigr),
\quad {\cal H}^{(1)}=\sum_{\bf r} w^{(1)}({\bf x},{\bf r})(e^{{\bf p}{\bf r}}-1),
\end{equation}
where the integral is calculated along the {\it unperturbed} trajectory ${\bf x}(t),{\bf p}(t)$.
In the extinction problem the integral (\ref{eq:correction_to_action}) can diverge at the upper limit, $t\to\infty$. This is because in this problem ${\bf p}(t)$ remains finite for $t \to\infty$, and therefore if $w^{(1)}({\bf x}_{\cal S},{\bf r})$ is nonzero, ${\cal H}^{(1)}\neq 0$ for $t\to\infty$. The divergence indicates the breakdown of the perturbation theory; in the particular example studied in \cite{Khasin2009}, for $\epsilon\to 0$ the change of $\tilde R_A$ was $\sim \tilde R_A$.
For the modulated oscillator, the role of the small parameter $\epsilon$ is played by the Planck number $\bar n$.
If $w^{(0)}(I,r)$ is the transition rate for $\bar n=0$, then from (\ref{eq:balance_eqn}) the thermally induced term in the transition rate has the form $\bar n w^{(1)}(I,r)=\bar n[w^{(0)}(I,r)+w^{(0)}(I,-r)]$. Where the perturbation theory applies, the correction to the effective activation energy of switching reads
\begin{equation}
\label{eq:correction}
\bar n R_A^{(1)}=-\bar n\int_{-\infty}^0dt\sum_r w^{(1)}\Bigl(I(t),r\Bigr)\{\exp[rp_I(t)]-1\}.
\end{equation}
As we saw, in contrast to reaction/population systems, $p_I\neq 0$ for $t\to -\infty$. However, $ w^{(1)}(I,r)\propto I\propto \exp(2\kappa t)$ for $t\to -\infty$, therefore (\ref{eq:correction}) does not diverge for $t\to -\infty$. There is also no accumulation of perturbation for large $t$, as the integral goes to $t=0$. Therefore the cause of the fragility should be different from that in population dynamics/reaction systems.
As mentioned earlier, in contrast to reaction systems, for the oscillator the values of $r$ in (\ref{eq:correction}) can be large. Then the correction $R_A^{(1)}$ can diverge because of the divergence of the sum over $r$. This happens if, on the optimal trajectory, $w^{(1)}(I,r)$ decays with $r$ slower than $\exp(-rp_I)$. From (\ref{eq:rate_small_attractor}), $w^{(1)}(I,r)$ decays with $r$ exponentially; in particular, $w^{(1)}(I,r)\propto \exp[-2r\nu(g)\tau_*(g)]$ for $r \gg 1$. The region of the values of $p_I$ where $\sum\nolimits_r w^{(1)}(I,r)\exp(rp_I)$ remains finite is shown in Fig.~\ref{fig:instanton}(b). As seen from this figure, the value of $p_I$ on the $\bar n=0$ trajectory can be too large for the sum over $r$ to converge. Then the perturbation theory becomes inapplicable. The trajectory followed in switching changes discontinuously as $\bar n$ changes from $\bar n=0$ to $\bar n>0$. The probability distribution also changes discontinuously. We note that $|p_I|\sim 1\ll \lambda^{-1}$ on the optimal fluctuational trajectory, which justifies the approximation (\ref{eq:eikonal_Hamiltonian}) that underlies the above analysis. It is clear that the optimal fluctuational trajectory $I(t)$ corresponds to the optimal fluctuational trajectory of the quasienergy $g(t)$, since the $I$ and $g$ variables are related by $\partial_gI=\nu^{-1}(g)$.
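The divergence mechanism can be illustrated with a scalar toy model. If along the optimal trajectory $w^{(1)}(I,r)$ falls off as $e^{-cr}$ (with $c$ standing in for the decay constant read off (\ref{eq:rate_small_attractor})), the sum $\sum_r w^{(1)}(I,r)\exp(rp_I)$ is geometric and converges only for $p_I<c$. The partial sums below (all values illustrative) show the finite limit for $p_I<c$ and the blow-up for $p_I>c$:

```python
import math

def partial_sum(p, c, R):
    # partial sums of sum_{r>=1} w1(r)*exp(r*p) with the toy decay
    # law w1(r) = exp(-c*r); the series is geometric with ratio exp(p - c)
    return sum(math.exp((p - c) * r) for r in range(1, R + 1))

c = 1.0
tails_conv = [partial_sum(0.6, c, R) for R in (10, 20, 40)]   # p < c: converges
tails_div  = [partial_sum(1.4, c, R) for R in (10, 20, 40)]   # p > c: diverges
geometric_limit = math.exp(-(c - 0.6)) / (1.0 - math.exp(-(c - 0.6)))
print(tails_conv[-1], geometric_limit)
print(tails_div)
```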
The instanton approximation relies on the assumption that the mean square fluctuations provide the smallest scale in the problem, similar to the wavelength in the WKB approximation \cite{LL_QM81}. If the system is perturbed and the perturbation is small, it can be incorporated into the prefactor of the rate of rare large fluctuations. If the perturbation is still small but exceeds the small parameter of the theory, it can be incorporated into the instanton Hamiltonian and leads to a correction to the exponent of the rare event rates. This correction is generically linear in the perturbation. However, this is apparently not a universal behavior, as the unperturbed solution can be fragile with respect to a perturbation. So far the fragility has been found in cases where the perturbation breaks the time-reversal symmetry.
An important problem is the crossover between the instanton solutions without and in the presence of the perturbation. For a modulated quantum oscillator it was recently addressed in \cite{Guo2013} (the most probable fluctuational trajectories, however, were not studied in that paper). The analysis of \cite{Guo2013} shows that the instanton approximation itself is broken by thermal fluctuations: the function $\partial_IR$ is not smooth for $\bar n >0$, but rather displays a kink. The threshold for the onset of this behavior is exponentially low in $\bar n$, with $|\ln\bar n|\lesssim \lambda^{-1}$. It corresponds to the regime where the rate of transitions between oscillator states induced by absorption of thermal excitations, which is $\propto \bar n$, becomes comparable with the switching rate $W_{\rm sw}$ calculated for $\bar n=0$. The region where the instanton approximation is inapplicable extends to $\bar n\lesssim \lambda^{3/2}$. This is why we indicate that the optimal trajectories in Fig.~\ref{fig:instanton} for $\bar n\to 0$ correspond to $\bar n$ vanishingly small compared to the small parameter of the theory $\lambda$, yet $\bar n \gtrsim \lambda^{3/2}$.
\section{Nonresonant modulation: a brief summary}
\label{sec:nonresonant}
The possibility of cooling mesoscopic oscillators has recently attracted much attention, and a whole new area, cavity optomechanics, has emerged; see \cite{Aspelmeyer2013} for a recent review. The cooling is performed by nonresonant modulation, with frequency $\omega_F$ significantly different from the oscillator frequency $\omega_0$. The very idea of cooling different types of quantum systems by a high-frequency field goes back to the mid-70s \cite{Dykman1978,Zeldovich1974,Shapiro1976}, about the same time when the laser cooling of atomic motion was proposed \cite{Hansch1975,Wineland1975}. The change of the distribution can be understood from Fig.~\ref{fig:heating} \cite{DK_review84,Zeldovich1974}. It refers to a system coupled by the modulating field to another system, which can be a thermal bath or a mode with a relaxation time much shorter than that of the system of interest, so that it effectively serves as a narrow-band thermal reservoir. The modulation provides a new relaxation channel for the comparatively slowly relaxing system of interest.
Figure~\ref{fig:heating} indicates possible transitions between the states of the system accompanied by energy exchange with the thermal reservoir. For example, in (a) a transition of the system from the excited to the ground state is accompanied by a transition of the reservoir to the excited state with energy $\hbar\omega_{\rm b}=\hbar (\omega_0 + \omega_F)$, with the energy deficit compensated by the modulation. On the other hand, a transition of the system from the ground to the excited state requires absorbing an excitation in the thermal reservoir, which is possible only when such an excitation is present in the first place. The ratio of the state populations of the system is determined by the ratio of the rates of transitions up and down in energy, and thus by the population of the excited states of the thermal reservoir with energy $\hbar\omega_{\rm b}$. If the corresponding process is the leading relaxation process, the effective temperature of the system becomes $T^*=(\omega_0/\omega_{\rm b})T$. This means that effective cooling occurs for $\omega_0\ll \omega_{\rm b}$. Similarly, for $\omega_0\gg \omega_{\rm b}$ the modulation leads to heating of the system, see Fig.~\ref{fig:heating}(b). In the case sketched in Fig.~\ref{fig:heating}(c), the induced transitions from the ground to the excited state of the system are more probable than from the excited to the ground state, which leads to a negative effective temperature for strong modulation.
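As a toy illustration of the scaling $T^*=(\omega_0/\omega_{\rm b})T$ stated above (not from the paper; the numbers below are arbitrary):

```python
# Toy sketch (arbitrary numbers): when the modulation-induced coupling to a
# narrow-band reservoir at frequency w_b dominates relaxation, the slowly
# relaxing system acquires the effective temperature T* = (w0 / wb) * T.
def effective_temperature(T, w0, wb):
    """Effective temperature T* = (w0 / wb) * T of the slowly relaxing system."""
    return (w0 / wb) * T

T, w0 = 300.0, 1.0                             # bath temperature and system frequency
T_cool = effective_temperature(T, w0, 5.0)     # w_b > w0, as in panel (a): cooling
T_heat = effective_temperature(T, w0, 0.5)     # w_b < w0, as in panel (b): heating
print(T_cool, T_heat)                          # 60.0 600.0
```

The same ratio appears below in the explicit expressions for the modulation-induced relaxation channels.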
\noindent
\begin{figure}[h]
\begin{center}
\includegraphics[width=7cm]{fig_4_heating_NJP_color.eps}
\end{center}
\caption{Modulation-induced relaxation processes leading to cooling (a), heating (b), and population inversion (c); $\omega_0$, $\omega_F$, and $\omega_{\rm b}$ are the frequency of the system (the oscillator, in the present case), the modulation frequency, and the frequency of the mode (or a thermal bath excitation) to which the oscillator is coupled by the modulation, respectively; the relaxation time of the mode is much shorter than that of the oscillator. Strong modulation imposes on the oscillator the probability distribution of the fast-decaying mode in (a) and (b) and leads to population inversion in (c). If, in the absence of modulation, oscillator relaxation is described by the standard model (\ref{eq:linear_relaxation}), (\ref{eq:master_eq}), the distribution over the Fock states in the presence of modulation is of the form of the Boltzmann distribution \cite{Dykman1978}; in (c) the distribution over low-energy Fock states is described by negative temperature and the oscillator vibrates close to its eigenfrequency. }
\label{fig:heating}
\end{figure}
In the case of an oscillator, the system has many levels, but the above picture still applies. The goal of this section is to outline and compare different microscopic mechanisms of the coupling of the oscillator to the modulation and the bath. The unexpected feature is that the distribution of the oscillator over its Fock states can be of the Boltzmann form with an effective temperature determined by the strength and frequency of the modulation \cite{Dykman1978}. However, this is the case only provided the major mechanism of oscillator relaxation in the absence of modulation is the conventional mechanism (\ref{eq:linear_relaxation}), which in a phenomenological classical description of oscillator dynamics corresponds to a friction force proportional to the oscillator velocity.
A simple model of the modulation-induced dissipation is where the external field parametrically modulates the coupling of the oscillator to a thermal bath. The coupling Hamiltonian is
\begin{equation}
\label{eq:coupling_parametric}
H_i^{(F)}=-qh_{\rm b}^{(F)}A\cos\omega_Ft.
\end{equation}
Here $h_{\rm b}^{(F)}$ depends on the variables of a thermal bath, or it can be the coordinate of a comparatively quickly decaying mode coupled to a thermal bath. The interaction (\ref{eq:coupling_parametric}) has the same structure as the interaction (\ref{eq:linear_relaxation}), except that it can lead to decay processes with the energy transfer $\hbar(\omega_0\pm\omega_F)$, cf. Fig.~\ref{fig:heating}(a) and (b). Therefore the structure of the master equation for the oscillator should not change, but the decay parameters and the Planck numbers of excitations created in decay should change appropriately.
The interaction can also lead to decay processes with energy transfer $\hbar(\omega_F-\omega_0)$, for the appropriate modulation frequencies. In this case absorption of bath excitations is accompanied by oscillator transitions down in energy. Respectively, in the master equation (\ref{eq:master_eq}), in the expression for the rates of transitions due to excitation absorption one has to formally replace $\bar n(\omega_0) \to \bar n(\omega_0-\omega_F) = -\bar n(\omega_F-\omega_0)-1$, which means that the friction coefficient becomes negative.
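The sign change can be traced to an elementary identity of the Bose function, $\bar n(-\omega)=-\bar n(\omega)-1$; a quick numerical check (illustration only):

```python
# Numerical check (illustration) of the Bose-function identity n(-x) = -n(x) - 1,
# with x = hbar*omega/(kB*T); it underlies the formal replacement
# nbar(w0 - wF) -> -nbar(wF - w0) - 1 used above when wF > w0.
import math

def planck(x):
    """Bose occupation n(x) = 1 / (exp(x) - 1)."""
    return 1.0 / math.expm1(x)

for x in (0.3, 1.0, 2.5):
    assert abs(planck(-x) - (-planck(x) - 1.0)) < 1e-12
print("identity holds")
```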
The above qualitative arguments can be confirmed by a formal analysis similar to that in \cite{Dykman1978}. It shows that in the RWA the master equation for the oscillator with account taken of the modulation-induced relaxation processes has the form (\ref{eq:master_eq}) with the relaxation parameter $\Gamma$ and the Planck number $\bar n$ replaced by $\Gamma_F=\Gamma + \Gamma_+ + \Gamma_- - \Gamma_{\rm inv}$ and $\bar n_F$,
\begin{equation}
\label{eq:master_heating}
\partial_t\rho = -\Gamma_F(\bar n_F + 1)(a^\dagger a\rho-2a\rho a^\dagger +\rho a^\dagger a)-\Gamma_F\bar n_F(a a^\dagger \rho-2a^\dagger\rho a + \rho a a^\dagger),
\end{equation}
where
\begin{eqnarray}
\label{eq:Gamma_heating}
\Gamma_{\pm,{\rm inv}}=\frac{A^2}{8\hbar\omega_0}\left|{\rm Re}~\!\int\nolimits_0^{\infty}dt\langle [h_{\rm b}^{(F)}(t),h_{\rm b}^{(F)}(0)]\rangle_{\rm b}e^{i(\omega_0\pm \omega_F)t} \right|,\nonumber\\
\fl
\bar n_F=\left\{\Gamma\bar n(\omega_0)+\Gamma_+\bar n(\omega_0+\omega_F)+\Gamma_-\bar n(\omega_0-\omega_F)+\Gamma_{\rm inv}\left[\bar n(\omega_F-\omega_0)+1\right]\right\}/\Gamma_F.
\end{eqnarray}
Here, $\Gamma_{\pm}$ give the rates of transitions at frequencies $\omega_0 \pm\omega_F$, which correspond to the processes sketched in Fig.~\ref{fig:heating}(a) and (b); $\Gamma_{\rm inv}$ gives the rate of processes sketched in Fig.~\ref{fig:heating}(c), where excitation of the oscillator is accompanied by excitation of the thermal bath. If these processes dominate, they lead to vibrations of the oscillator at frequency $\approx \omega_0$, with amplitude determined by other mechanisms of losses \cite{DK_review84}. Parameters $\Gamma_-$ and $\Gamma_{\rm inv}$ in (\ref{eq:Gamma_heating}) refer to the cases where $\omega_0>\omega_F$ and $\omega_0<\omega_F$, respectively. From (\ref{eq:master_heating}) and (\ref{eq:Gamma_heating}), the probability distribution of the oscillator is characterized by the effective temperature $T_F=\hbar\omega_0/k_B\ln[(\bar n_F+1)/\bar n_F]$.
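A numerical sketch of how the channels in (\ref{eq:Gamma_heating}) combine into $\bar n_F$ and an effective temperature (all rates and frequencies below are made-up illustration values with $\Gamma_{\rm inv}=0$; as an assumption, the temperature is obtained by inverting the Bose distribution at the oscillator frequency $\omega_0$):

```python
# Made-up numbers (illustration only): combining the relaxation channels into
# an effective Planck number nF and an effective temperature TF.
import math

hbar, kB = 1.054571817e-34, 1.380649e-23

def planck(omega, T):
    """Bose occupation number n(omega) at temperature T."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

T = 0.05                      # bath temperature, K
w0 = 2 * math.pi * 5e9        # oscillator frequency, rad/s
wF = 2 * math.pi * 4e9        # modulation frequency, rad/s (wF < w0: Gamma_- channel heats)
G, Gp, Gm = 1e3, 5e4, 2e4     # Gamma, Gamma_+, Gamma_- in 1/s; Gamma_inv = 0 here

GF = G + Gp + Gm
nF = (G * planck(w0, T) + Gp * planck(w0 + wF, T) + Gm * planck(w0 - wF, T)) / GF
TF = hbar * w0 / (kB * math.log((nF + 1.0) / nF))   # invert the Bose distribution at w0
print(nF, TF)   # the Gamma_- channel dominates here and heats: TF > T
```

With $\Gamma_+$ dominant instead, the same combination rule gives $\bar n_F\ll\bar n(\omega_0)$, i.e. cooling.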
A similar behavior occurs if the modulation is performed by an additive force $A\cos\omega_Ft$, but the interaction with the narrow-band thermal reservoir is nonlinear in the oscillator coordinate, $H_i^{(2)}=q^2h_{\rm b}^{(2)}$. This case was considered in \cite{Dykman1978}. It reduces to the above formulation if one makes a canonical transformation $U(t)=\exp[ v^*(t)a-v(t)a^\dagger ]$ with $v(t)=A_{\rm osc}(2\hbar\omega_0)^{-1/2}(\omega_0\cos\omega_Ft+i\omega_F\sin\omega_Ft)$, where $A_{\rm osc}=A/(\omega_0^2-\omega_F^2)$ is the amplitude of forced vibrations of the oscillator. Indeed, as a result of this transformation $H_i^{(2)}$ transforms into $H_i^{(F)}$ in which the field amplitude $A$ is replaced with $-2A_{\rm osc}$ and $h_{\rm b}^{(F)}$ is replaced with $h_{\rm b}^{(2)}$.
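A quick check (arbitrary numbers) that $A_{\rm osc}=A/(\omega_0^2-\omega_F^2)$ is indeed the forced-vibration amplitude used in the canonical transformation, i.e. that $x(t)=A_{\rm osc}\cos\omega_Ft$ solves the undamped driven-oscillator equation $\ddot x+\omega_0^2x=A\cos\omega_Ft$:

```python
# Check (arbitrary numbers) that x(t) = A_osc*cos(wF*t), with
# A_osc = A/(w0**2 - wF**2), solves x'' + w0**2 * x = A*cos(wF*t).
import math

w0, wF, A = 3.0, 2.0, 1.0
A_osc = A / (w0**2 - wF**2)   # = 1/(9 - 4) = 0.2 here

def residual(t):
    x = A_osc * math.cos(wF * t)
    xdd = -wF**2 * A_osc * math.cos(wF * t)   # second time derivative of x(t)
    return xdd + w0**2 * x - A * math.cos(wF * t)

assert max(abs(residual(0.1 * k)) for k in range(100)) < 1e-12
print("A_osc =", A_osc)
```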
The analysis of cooling of a vibrating mirror in an optical cavity can also often be mapped onto the analysis for the interaction (\ref{eq:coupling_parametric}). A quantum theory for this case was developed in \cite{Wilson-Rae2007,Marquardt2007}. It considers an oscillator (the mirror) coupled to a cavity mode driven by external radiation. If the radiation is classical, in appropriately scaled variables the coupling and modulation are described by the Hamiltonians $H_i^{\rm (m)}$ and $H_F^{\rm (m)}$, respectively,
\begin{equation}
\label{eq:cavity_mirror}
H_i^{\rm (m)}=c_{\rm b}qq_{\rm b}^2, \qquad H_F^{\rm (m)}=-q_{\rm b}A{}\cos\omega_Ft,
\end{equation}
where $q$ and $q_{\rm b}$ are the coordinates of the mirror and the mode. In cavity optomechanics one usually writes $H_i^{\rm (m)}=c_{\rm b}qa_{\rm b}^{\dag}a_{\rm b}$; the following discussion immediately extends to this form of the interaction.
In the absence of coupling to the mirror, the cavity mode is a linear system, hence $q_{\rm b}(t) = q_{{\rm b}\,0}(t) + [\chi_{\rm b}(\omega_F)\exp(-i\omega_Ft) + {\rm c.c.}]A{}/2$, where $q_{{\rm b}\,0}(t)$ is the mode coordinate in the absence of modulation and $\chi_{\rm b}(\omega)$ is the susceptibility of the mode \cite{Marquardt2007}. The coupling $H_i^{\rm (m)}$ in the interaction representation then has a cross-term $\propto q_0(t)q_{{\rm b}\,0}(t)\exp(\pm i\omega_Ft)$. Since the cavity mode serves as a thermal bath for the mirror, this term is fully analogous to $H_i^{(F)}$, with $q_{{\rm b}\,0}$ playing the role of $h_{\rm b}^{(F)}$.
\section{Conclusions}
\label{sec:Conclusions}
It follows from the results of this paper that a modulated nonlinear oscillator displays a number of quantum fluctuation phenomena that have no analog in systems in thermal equilibrium. Oscillator relaxation is accompanied by a nonequilibrium quantum noise. It leads to a finite-width distribution of the oscillator over its quasienergy states even for the bath temperature $T\to 0$. For resonant modulation, the distribution is Boltzmann-like near the maximum. We have discussed the recent experiment that confirmed this prediction and the effect of oscillator damping on the outcome of a sideband-spectroscopy based measurement of the distribution.
The quantum noise also leads to large rare events that form the far tail of the distribution of the oscillator over quasienergy, and to switching between the coexisting states of forced vibrations. We have developed an approach to the analysis of the distribution tail and the switching rate, which makes a direct connection with the analysis of the corresponding problems in chemical and biological systems and in population dynamics. We show that, in a large deviation, the quasienergy of an underdamped oscillator most likely follows a well-defined {\it real} trajectory in {\it real} time. This trajectory is accessible to measurement. For $T=0$, where the oscillator has detailed balance, the most probable fluctuational trajectory is the time-reversed trajectory of the fluctuation-free (mean-field) relaxation of the oscillator to the stable state. Thermal fluctuations break the detailed balance condition and, even where the thermal Planck number $\bar n$ is small compared to the effective Planck constant, lead to an $\bar n$-independent change of the most probable fluctuational trajectory. We show that the criterion of the fragility, i.e., of a discontinuous change of the optimal fluctuational trajectory with a varying parameter, can be formulated in a general form that applies both to reaction systems with classical fluctuations and to the modulated quantum oscillator.
An interesting effect of nonresonant modulation of the oscillator is that it can significantly change the oscillator distribution over the Fock states, leading to heating, cooling, or excitation of self-sustained vibrations depending on the modulation frequency and the coupling to the thermal bath or a mode with a relaxation time shorter than that of the oscillator. We show that different coupling and modulation mechanisms can be described in a similar way and derive explicit expressions for the effective decay rate and temperature of a modulated oscillator.
\ack
We are grateful to P. Bertet and V. N. Smelyanskiy for the discussion. This research was supported in part by the ARO, grant W911NF-12-1-0235, and the Dynamics Enabled Frequency Sources program of DARPA. VP also acknowledges support from the ERC Starting Grant OPTOMECH.
# Quantum teleportation with moving Alice and Bob

I have questions regarding quantum teleportation, which keep confusing me.

1. Suppose Alice and Bob are in the same inertial frame $K$, and at time $t$ (in $K$) Alice teleports a quantum state to Bob. What I always hear is that at time $t$ Bob has then got one of four states, although he does not yet know exactly which one of the four. Is this true?

2. Now, what if Alice and Bob are both moving along the $x$-axis of $K$, in the same direction, both with the same speed $v$? If Alice does her part of the protocol at time $t$ (again, as seen in $K$), then if Bob is behind Alice (w.r.t. their common direction of movement in $K$), he must get the quantum state before $t$ in $K$, due to special relativity (as calculated by the Lorentz transformation, assuming his quantum state "arrives" at the same time as Alice sends it, in the inertial frame where both of them are at rest). This sounds weird, as if the cause had happened after the effect.

3. And what if Alice and Bob are not in the same inertial frame? Then the point in time at which Alice executes her part in her inertial frame does not correspond to any single point in time in Bob's inertial frame. So what can we say about the arrival time of the quantum state at Bob?

Note: Cross-posted to Physics. I've accepted this answer there.

• Btw, I think the "arrival time" is well-defined, e.g. in $K$ via the following experiment: let Alice teleport |0> at time $t$, and let Bob measure at time $t+dt$ in the computational basis. Let them repeat this 100 times, where $dt>0$ is a small constant. Then they run another 100 times using $-dt$ (i.e. Bob will measure before $t$). Using Alice's records, let them later select those cases when Bob really got |0>: they will see that during the first 100 runs Bob always got |0> as the measurement result in those cases, but not during the second 100 runs. So they conclude that the "arrival time" was at $dt=0$. Feb 10 '19 at 21:21
• Hmm, this experiment for the "arrival time" would not work; there would be no qualitative difference between the results of the first and second 100 runs. So I think the answer to my question is that there is no such thing as an arrival time. But then why do the books I read talk about an immediate effect on Bob's qubit? Feb 10 '19 at 21:45
• I'm voting to close this question as off-topic because the OP already accepted an answer on physics.SE – glS Feb 12 '19 at 9:55
• @glS I don't think this should be closed as off-topic just because it has received an answer elsewhere. I've elaborated on my reasoning in chat. Feb 12 '19 at 16:36
• Answered here: physics.stackexchange.com/questions/459986/… Jul 27 at 17:50
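The relativity-of-simultaneity computation behind question 2 can be sketched numerically (with $c=1$ and arbitrary speed and separation; it assumes, as the question does, instantaneous "arrival" in the common rest frame, a notion the comments above call into question):

```python
# Sketch of question 2's kinematics (c = 1, arbitrary numbers).  Two events
# simultaneous in the common rest frame K' -- Alice "sends" at x' = 0 and
# Bob "receives" at x' = -L, behind her -- are mapped to frame K by a boost.
import math

def lorentz_to_K(tp, xp, v):
    """Boost event (t', x') from the rest frame K' to K (boost speed v, c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (tp + v * xp), g * (xp + v * tp)

v, L = 0.6, 1.0
t_alice, _ = lorentz_to_K(0.0, 0.0, v)    # Alice's event at t' = 0
t_bob, _ = lorentz_to_K(0.0, -L, v)       # Bob's event at the same t' = 0

print(t_alice, t_bob)   # t_bob < t_alice: in K, Bob's event precedes Alice's
```

This only shows that the two events are spacelike separated and hence have frame-dependent ordering; it does not by itself establish a well-defined "arrival time".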
Other names of Allu Arjun: Bunny
DOB : 08-04-1983
Star Sign : Taurus
The actor was born to producer Allu Arvind and Nirmala on April 8, 1983. Hence, it makes sense to say that the acting bug ran in him too. Allu Arjun is related to some pretty big-time Tollywood heroes (Chiranjeevi, Pawan Kalyan and Ram Charan are maternal uncles) and has kept their flame burning on the silver screen. Considering big heroes like Chiranjeevi, Pawan and Charan as famous actors on the Indian screen could have earned him passage to Tollywood royalty. Allu Arjun still realizes that he needs his own space on the Indian screen. Today, he is undoubtedly a popular name in Telugu cinema. He is also the paternal grandson of comedian Allu Rama Lingaiah. As a two-year-old boy, the actor began his journey in Chiranjeevi's film 'Vijetha' and appeared in two scenes as actress Subha's son. Later he also appeared as a dancer in 'Daddy' and was appreciated.
In 'Gangotri' (2003), he made his first appearance as a hero. He was just one of the younger actors then who had 'the spark' when he arrived on the screen. His next movies, like 'Arya', 'Bunny', 'Desamuduru' and 'Parugu', were super-duper hits in Telugu circuits. He established an image of his own and was even crowned the best dancing star of the current era of Tollywood.
One of his films, 'Arya-2', was released in Telugu and was dubbed into Malayalam, Tamil and Oriya with the same title, becoming a blockbuster in Kerala. The film gained him a huge fan base in Kerala too. Later, the film was remade in Bengali as well. Over the years, the actor matured in his acting skills and tried to play each character flawlessly. His characterization evoked sympathy from audiences, and thus he strengthened his position with every release he had in his hands. His rise to the top of the competitive world of Tollywood turned studio projects into box office hits and went on to garner award nominations for his performances. In 2011, the film 'Badrinath' hit the screens and turned out to be a big success at the box office; for this particular movie, he learned martial arts. The next year, another Telugu movie of his, named 'Julayi', was dubbed into Malayalam, Tamil and Hindi as well.
Later, in 2010, he appeared in experimental films too ('Varudu' and 'Vedam'). He has won the Nandi award twice ('Arya' and 'Parugu') and Filmfare awards thrice ('Parugu', 'Vedam' and 'Race Gurram'). The actor even appeared in a 3-D film, 'Rudhramadevi', which was among the highest-grossing films of 2015. Allu Arjun married Sneha Reddy on March 6, 2011, and has a son, Allu Ayaan, born in 2014. Allu Arjun is unanimously regarded as a wonderful actor and a true professional. In many of his films, his performances have been undeniably exceptional.
Aditya Babu
Aditya Babu is a Tollywood producer and actor. He was born on November 18, 1984, and is the son of the late Tollywood producer J.D. Sompalli. He was interested in becoming a hero from the age of 8 and is also a die-hard fan of Kamal Hassan and Chiranjeevi. Before coming to the acting field, Aditya learnt film making at the Satyanand University of Acting in Visakhapatnam, started the Aditya Arts banner, and produced his first film, Jagadam, in 2007, followed later by Arya 2. In Kannada, Aditya produced remakes of Bhadra and Ready. He has even trained the actors Prabhas, Pawan Kalyan and Mahesh Babu in the past. He debuted in a Kannada movie titled Anthu Inthu Preethi Banthu in 2008, and he produced and acted in the movie titled Chalaki in 2010. He prefers to do college-going roles in his future projects.
Allu Sirish
Allu Sirish is a South Indian actor from a family in which two or more members have been involved in the film industry. Many of his relatives (Chiranjeevi, Pawan Kalyan, Ram Charan) have featured predominantly in South Indian movies. His elder brother is the Telugu actor Allu Arjun. Getting started in Tollywood is tough for newcomers, but not for one who comes from a family of the film fraternity. It can be a huge advantage if show business runs in the family; the only thing is that the actor has to work twice as hard to prove himself. He is the youngest son in his family and was born to Allu Aravind and Nirmala Allu on May 30, 1987. His father, Allu Aravind, is a noted film producer. He is the nephew of Chiranjeevi, and Ram Charan is his cousin. Allu Sirish grew up in the shadow of one famous sibling, Allu Arjun, who skyrocketed to fame and was enormously popular. It seems only natural that a son of film producer Allu Aravind would seek the limelight of his own. He had a brief foray as a child artist in two films earlier. Later he returned to acting as a hero with the bilingual film 'Gowravam/Gouravam' (Telugu/Tamil), which released in 2013. His second film, 'Kotha Janta' (Telugu), released in 2014. After his short stint as a child artist, his roles as a hero in these two films (Kotha Janta and Gowravam) failed to impress at the box office. The film fraternity is hopeful that movie fans will accept Allu Sirish as a promising newcomer. His brother Allu Arjun is very famous in Tollywood, and movie fans would recognize his face if they saw it in the trailer or poster of a film. It is true that Allu Sirish would also benefit from his dad's and brother's success. Of late, the actor has teamed up with Lavanya Tripathi for an upcoming Telugu movie, and director Parasuram is making another movie with Allu Sirish, which has been launched. As a child artiste, he acted in the film 'Pratibandh' (1990), which was a debut film for Chiranjeevi in the Hindi heartlands. Later, he acted in another Tamil movie, 'Mayabaza' (1995), as a child actor. After making his debut as a child artiste, he completed his graduation in Mass Communication from R.D. National College, Hyderabad. The actor is currently settled in Hyderabad; he moved there from Chennai when he was in the tenth standard. The actor will be hosting the International Indian Film Academy Awards (IIFA), which are going to be held for the first time in Hyderabad in the month of December. His father was also a producer of Aamir Khan's movie Ghajini.
# [texhax] xymatrix question

Justin C. Walker justin at mac.com
Mon Jan 28 19:23:49 CET 2008

Thanks for your reply!

On Jan 28, 2008, at 5:58 AM, Michael Barr wrote:

> First, let me suggest that such questions are more appropriate for
> the xy-pic help group (xy-pic at tug.org) than texhax.

Agreed; I didn't think to look for 'xy' mailing lists. I'll check
there for any such queries in the future.

> The answer to your specific question is to enclose the argument
> in a \vcenter:
> $\left.
> \vcenter{\xymatrix{A\ar[r]&B\\C\ar[r]&D}}
> \right\}$
>
> This is also useful if you want to put an \eqno that is vertically
> centered, e.g.:
> $$\left.
> \vcenter{\xymatrix{A\ar[r]&B\\C\ar[r]&D}}
> \right\}
> \eqno(*)$$

Just what I needed.

Thanks again.

Justin

--
Justin C. Walker, Curmudgeon-At-Large
Institute for the Enhancement of the Director's Income
--------
When LuteFisk is outlawed,
Only outlaws will have LuteFisk
--------
Do you like golf? Then this one's for you! Today on the NH Business Show I speak with Rob Zimmerman, owner of 3 Up Golf, about a really cool product he offers.
Running a business is tough, so whenever you get a chance to add revenue with a business that works easily alongside yours, it's worth taking. Today on the NH Business Show I speak with Katie Booker, owner of Rosewood Chalk, as she talks about Chalk Couture and how she uses it.
Today on the NH Business Show I speak with Phillip Falardeau, co-owner of Live Free Chiropractic. This isn't your normal interview! Phillip and I have a great chat about his history in the business and many other off-topic things.
Not all care facilities are the same. If you want the best care for your elderly loved ones, you should speak with Jeff Swanton of Care Patrol.
Insurance is a widely talked about topic. It can be controversial and is often the focus of lots of anger. Today on the NH Business Show I speak with Timothy Hirsch, agent for Planstin Inc, and we're discussing an alternative to health insurance that isn't covered by the media or insurance companies.
Crafts are a great way to make your home feel cozier, and they also make great gifts. Today on the NH Business Show I speak with Katie Booker owner of RKB Primitive Signs & Gifts about her company and how it's become what it is today.
Tonight on the NH Business Show I speak with Bob and Sue Brusa about Sue's 24 years as a rep for Reliv and their combined experience within the company.
There are 3 lawyers in South Royalton, VT. Below is a list of the 10 most popular lawyers on Lawyer Map. Are you looking for a lawyer specialized in a specific legal issue? Refine your search by selecting a legal issue.
NEED A SOUTH ROYALTON LAWYER?
Lawyer Map is one of the world's largest lawyer directories, listing a total of 3 lawyers in South Royalton, Vermont.
Each South Royalton lawyer profile covers office location, office hours, contact details, area of practice, education, client recommendations, lawyer biography and more.
Our goal is to make your search for the right lawyers, attorneys & law firms in South Royalton, Vermont quick and easy. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,993 |
Sabre Celebrates a Decade of Growth, Innovation and Technology in India
Bangalore, India – Sept. 24, 2015 – Sabre Corporation (NASDAQ: SABR), the leading technology provider to the global travel industry, celebrated the tenth anniversary of its technology center in Bangalore, India. Opened in 2005, the center has helped Sabre expand and enhance its technology solutions to its global customer base of more than 425,000 travel agents, 400 airlines and 176,000 hotel properties. Sabre's strategic center in Bangalore supports multiple technology functions including product development, enterprise data and analytics and operations research. It also provides customer care and support, implementation and consulting services, and product and solutions to clients globally.
"I am fortunate to have been a part of the original nine-member team that set-up the Bangalore center in 2005. Ten years on, our commitment to quality, innovation and customer-centric solutions has set a benchmark from which Sabre has grown globally," said Shail Maniar, vice president and managing director, Sabre India. "Over the years, we have built a global center of excellence with a focus on technology innovation, acquiring, nurturing and retaining top talent, and staying at the forefront of our competition," said Maniar. Set up to enhance Sabre's existing global operations and capabilities, the center now plays a critical role in Sabre's commitment to providing customers with quality products and solutions, based on innovative technology and driven by best-in-class customer support.
Innovation highlights:
The development of market-leading forecasting and optimization models for Sabre AirVision Revenue Manager, which have enabled over 30 of Sabre's low-cost carrier customers to accurately manage their revenue. A market leader in this segment, Sabre is now focused on building next-generation revenue management solutions, integrated with data analytics and business intelligence, to help airlines further optimize their revenue in real time.
The development of Sabre's award-winning Hotel RFP (Request for Proposal) solution, one of the industry's top online hotel sourcing and rate negotiation tools for corporations and agencies. The solution won Travel Weekly's 2014 Gold Magellan Award and the Global Business Travel Association (GBTA) 2011 Business Traveler Innovation People's Choice Award.
The development of Sabre AirVision Market Intelligence, the industry's first online tool powered by global demand data that helps airlines make informed flight operation decisions, such as when to fly, where to fly, and how frequently to fly. The Bangalore team gathered 10 years of airline data and developed algorithms to help Sabre's airline customers operate more efficiently and profitably.
The Bangalore center played a key role in developing GetThere, Sabre's corporate travel booking tool, which provides unmatched cost control for more than 3000 corporations in the world, including the majority of Fortune 200 companies. GetThere processes more than 12 million transactions annually, delivers $2 billion in annual savings to our customers, and is currently used in 95 countries.
Customer highlights
The Bangalore center developed mission-critical software to ensure a seamless travel experience for passengers post-merger of US Airways into American Airlines in October 2015.
The center regularly supports major airline reservations migrations to SabreSonic Customer Sales and Service (CSS). Key migrations include Cambodia Angkor Air, Aero Mexico, Philippine Airlines and Kulula Air.
The center has been instrumental in upgrading the SabreSonic Check-in platform with industry-leading flight management capabilities to support airlines during flight disruptions. The platform has helped airlines improve productivity, increase targeted merchandising opportunities, and reduce training costs. Over 28 airlines have been migrated to SabreSonic Check-in, and currently more than 50 airlines across Asia, Australia, Europe and Central America are a part of the updated check-in system.
The Bangalore team led the introduction of a new user experience for Sabre Cruise. The enhancements reduced the booking workflow by 30%, helping agents complete bookings faster. The team also scaled the capabilities of Sabre Cruise to offer new content from Viking River Cruises (2013), AmaWaterways and Avalon Waterways (2014-15), bringing travel agents more cruise options for their customers.
The Bangalore team supported Air India's project to deploy a fully integrated flight control system, with solutions from the Sabre AirVision and Sabre AirCentre suites. The products are expected to offer Air India increased efficiencies, minimal disruptions, enhanced punctuality and passenger connections, and better scheduling. Having a local footprint has helped Sabre drive close engagement with Air India.
Growth & Culture
Named one of 'India's Best Companies to Work For' in 2009 by the Great Place to Work (GPTW) Institute, the center provides employees a working environment that encourages personal growth and development, including opportunities to pursue higher education from India's leading university, Birla Institute of Technology and Science (BITS).
Center team members collaborated with the Ministry of Civil Aviation, Government of India to set up India's First Aviation University in 2015. Sabre will play a central role in guiding the formation of the curriculum and providing tools and platforms for training. This collaboration with the University will help build a stronger aviation workforce in the APAC region.
In June, the Bangalore center expanded its facility to accommodate its growing employee base. The decision to invest and grow the center is further testament to the quality of talent that have been nurtured through the center over the past ten years. The center currently has over 1000 employees.
Employees have been actively involved in community-building initiatives as a part of Sabre's global Corporate Social Responsibility program. Since 2007, Sabre has funded the education of underprivileged children at the Parikrma Humanity Foundation in Bangalore.
Sabre employees in Bangalore being acknowledged for their contribution to the company during the 10th anniversary celebration.
About Sabre Corporation
Sabre Corporation is the leading technology provider to the global travel industry. Sabre's software, data, mobile and distribution solutions are used by hundreds of airlines and thousands of hotel properties to manage critical operations, including passenger and guest reservations, revenue management, flight, network and crew management. Sabre also operates a leading global travel marketplace, which processes more than $110 billion of estimated travel spend annually by connecting travel buyers and suppliers. Headquartered in Southlake, Texas, USA, Sabre serves customers in more than 160 countries around the world.
India technology travel | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,825 |
Kyle Buis
Words and Images With Experience
TV News Website Writing
Daily Newspaper Design
California: A State of Survival
The Rim Fire: One Year Later
Appeal-Democrat Archive Writing
Taking care of the hill people
July 3, 2007 Kyle Buis
When mobile home park owners Bob and Dee Kearns lost a customer in the early 1970s, a new community care center was born. Thirty-three years later, that dream is kept alive by the first two doctors to staff what is now the Sutter North Medical Foundation's Brownsville branch.
Dr. William Hoffman and Dr. John Rose first came to the area through the National Health Service Corps. In July 1974, a California Department of Forestry barracks was converted into the main health building. In the coming years, a kitchen became a dental office and the bathroom became a lab.
The changeover was done by community members eager to avoid a 40-minute drive to the nearest hospital in Yuba City. That drive is what cost the Kearns a customer and led them to get together with the rest of the Brownsville community to pitch in and make the medical center a reality.
Both doctors said it was that spirit that keeps it open to this day.
"If we didn't have their backing, we wouldn't be open," Hoffman said.
Even with that support, the first year was rough for the two doctors. To keep in touch with the outside world, they needed a private line, which someone generously gave up. Neither of them knew that calls to the hospital in Yuba City were long distance until the first phone bill came.
To add to the chaos, neither doctor had been trained in billing procedures, and Hoffman had to try to set up the lab by himself.
They were able to get by with surplus goods from the NHSC, including sutures, casting equipment and a sterilizer.
Since they were the only medical center in the area, the clinic received many calls from the Yuba County sheriff and the California Highway Patrol to get patients patched up and ready for the drive down the hill.
One call had Rose driving to a scene 10 minutes away where the sheriff told him a man had been stabbed in the heart. During that drive, Rose was ready for one of three scenarios. If the man was dead, there was nothing he could do. If it was only a superficial wound, it would be a simple stitch job. And if the man was bleeding internally, Rose would have to stabilize him for the rush down the hill.
When Rose showed up, the man was sitting up and looked fine. After checking his blood pressure though, Rose knew it was the third option he prepared for. He was bleeding internally and needed to get to the emergency room as soon as possible. Rose did what he could and gave very simple instructions.
"I put the hoses in him and told the driver to drive like hell," Rose said.
The man was both lucky and unlucky when he got to the hospital. The knife barely nicked his heart and was between two major blood vessels less than an inch apart, Rose said. Either one would have made him bleed. But that's where his luck ended.
Deputies found he had a few outstanding warrants and the man ended up in jail next to the person who stabbed him.
Not all of the calls take the doctors to houses. Hoffman remembered when a drunken driver went down a ravine late at night during a raging storm.
While the rain poured and the winds whipped around, Hoffman had to slide down on a rope where he couldn't see any farther than 10 feet. The conditions didn't faze him.
"I just wanted to get the job done," Hoffman said. "Of course, I was more athletic back then."
He was met down at the wrecked car by two rescuers with flashlights trained on the man inside, who was bleeding from his scalp. Dangling from a rope, Hoffman cleaned and stitched his patient's head so he could be pulled out of the ravine.
"A few months later you couldn't tell it happened," Hoffman said of the man's injury.
In either case, the patient would not have survived the trip to the hospital. That care has evolved since then to become the Yuba-Feather Medical Group and then, in 1993, part of Sutter North Medical Group.
With the support of the community and the medical group, the two still see patients today who would otherwise face a long drive to get care.
"You can be a shut-in, not drive and still get care," Hoffman said.
The combination of the first few years of chaos and the remote location was far from the best of circumstances. Rather than making more money somewhere else, Hoffman and Rose stuck it out and still see patients at the Brownsville Medical Clinic to this day.
"People said we were crazy, but here we are 33 years later," Rose said.
Previous Post Up, down and away
Next Post Quick action stymies blaze
Writing Highlights
Why Isn't Proposition 9 On The California Election Ballot?
California Senators Hold On To Pay Because Of New Jersey Court Decision
How A California Supreme Court Ruling Changed Independent Contractors
'Channel 13 News' Facebook Privacy Hoax Spreading Again
California: A State of Survival – A five-part series on California's drought.
The Rim Fire: One Year Later – A day-by-day look back at one of the more devastating fires in California history.
Social Media Editor
CBS Television Stations
Writing and editing content for our news website (CBS13.com) and social media channels. This was most recently spotlighted in our coverage of the Camp Fire in Paradise. I tracked evacuation orders and fire updates through multiple agencies while fielding questions from the public.
Educating reporters and staff on best social media practices.
Managing content flow of social media channels in the afternoon and evening hours.
Writing, editing and gathering information in breaking news situations, including the Oroville Dam Spillway evacuations.
Maintaining social media presence on Facebook and Twitter.
Engaging in social listening through Google Analytics, Facebook Insights and Signal. Video production on CBS website and Facebook Live using a Tricaster system and Anvato.
Lead Copy Editor
Sacramento News & Review
Editing copy from staff writers and freelancers
Proofreading layouts, managing deadlines while working within Associated Press and local style.
Leading the copy team and training writers and freelancers
Technology Writer
Appeal-Democrat
I was initially brought on as a reporting intern; the company then created a copy editing position that involved layout and working on our site. I started a technology blog and column, and covered Macworld in 2009 and 2010. The newspaper was named one of the 10 Newspapers That Do It Right by Editor & Publisher Magazine in 2010.
© Kyle Buis. All rights reserved. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 144 |
package io.astefanutti.metrics.cdi.se;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasItem;
import static org.junit.Assert.assertThat;
import java.util.Arrays;
import java.util.Set;
import javax.inject.Inject;
import org.eclipse.microprofile.metrics.MetricRegistry;
import org.eclipse.microprofile.metrics.Timer;
import org.hamcrest.Matchers;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.junit.InSequence;
import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import io.astefanutti.metrics.cdi.se.util.MetricsUtil;
@RunWith(Arquillian.class)
public class OverloadedTimedMethodBeanTest {
    private static final String[] TIMER_NAMES = { "overloadedTimedMethodWithNoArguments", "overloadedTimedMethodWithStringArgument",
            "overloadedTimedMethodWithListOfStringArgument", "overloadedTimedMethodWithObjectArgument" };
private Set<String> absoluteMetricNames() {
return MetricsUtil.absoluteMetricNames(OverloadedTimedMethodBean.class, TIMER_NAMES);
}
@Deployment
static Archive<?> createTestArchive() {
return ShrinkWrap.create(JavaArchive.class)
// Test bean
.addClasses(OverloadedTimedMethodBean.class, MetricsUtil.class)
// Bean archive deployment descriptor
.addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
}
@Inject
private MetricRegistry registry;
@Inject
private OverloadedTimedMethodBean bean;
@Test
@InSequence(1)
public void overloadedTimedMethodNotCalledYet() {
Assert.assertTrue("Metrics are not registered correctly", registry.getMetrics().keySet().containsAll(absoluteMetricNames()));
// Make sure that all the timers haven't been called yet
assertThat("Timer counts are incorrect", registry.getTimers().values(), hasItem(Matchers.<Timer> hasProperty("count", equalTo(0L))));
}
@Test
@InSequence(2)
public void callOverloadedTimedMethodOnce() {
Assert.assertTrue("Metrics are not registered correctly", registry.getMetrics().keySet().containsAll(absoluteMetricNames()));
// Call the timed methods and assert they've all been timed once
bean.overloadedTimedMethod();
bean.overloadedTimedMethod("string");
bean.overloadedTimedMethod(new Object());
bean.overloadedTimedMethod(Arrays.asList("string1", "string2"));
assertThat("Timer counts are incorrect", registry.getTimers().values(), hasItem(Matchers.<Timer> hasProperty("count", equalTo(1L))));
}
} | {
"redpajama_set_name": "RedPajamaGithub"
} | 3,381 |
{"url":"https:\/\/habr.com\/en\/company\/postgrespro\/blog\/486104\/","text":"MVCC in PostgreSQL-7. Autovacuum\n\n\u2022 Translation\nTo remind you, we started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.\n\nThen we explored in-page vacuum (and HOT updates) and vacuum. Now we'll look into autovacuum.\n\nAutovacuum\n\nWe've already mentioned that normally (i.\u00a0e., when nothing holds the transaction horizon for a long time) VACUUM usually does its job. The problem is how often to call it.\n\nIf we vacuum a changing table too rarely, its size will grow more than desired. Besides, a next vacuum operation may require several passes through indexes if too many changes were done.\n\nIf we vacuum the table too often, the server will constantly do maintenance rather than useful work \u2014 and this is no good either.\n\nNote that launching VACUUM on schedule by no means resolves the issue because the workload can change with time. If the table starts to change more intensively, it must be vacuumed more often.\n\nAutovacuum is exactly the technique that enables us to launch vacuuming depending on how intensive the table changes are.\n\nWhen autovacuum is turned on (the autovacuum configuration parameter set), the autovacuum launcher daemon process is started, which plans the work. Vacuuming itself is done by autovacuum worker processes, several instances of which can run in parallel.\n\nThe autovacuum launcher process composes a list of databases where any activity takes place. The activity is determined from statistics, and to collect it, the track_counts parameter must be set. Never turn off autovacuum and track_counts, otherwise, the autovacuum feature won't work.\n\nOnce every autovacuum_naptime seconds, autovacuum launcher starts (using the postmaster process) a worker process for each database on the list. 
In other words, if there is some activity in a database, worker processes will be sent to it at an interval of autovacuum_naptime seconds. To this end, if a few (N) active databases are available, worker processes are launched N times as often as every autovacuum_naptime seconds. But the total number of simultaneously running worker processes is limited by the autovacuum_max_workers parameter.\n\nWhen started, a worker process connects to the database assigned to it and starts with composing a list of:\n\n\u2022 All the tables, materialized views and TOAST tables that require vacuuming.\n\u2022 All tables and materialized views that require analysis (TOAST tables are not analyzed since they are always reached with index access).\n\nThen the worker process vacuums and\/or analyzes objects on the list one at a time and completes when vacuuming is finished.\n\nIf the process could not do all the work planned in autovacuum_naptime seconds, the autovacuum launcher process will send one more worker process to this database, and they will work together. \u00abTogether\u00bb just means that the second process will build its own list and work through it. So, only different tables will be processed in parallel, but there is no parallelism at the level of one table \u2014 if one of the worker processes is already handling a table, another process will skip it and proceed further.\n\nNow let's clarify in more detail what is meant by \u00abrequires vacuuming\u00bb and \u00abrequires analysis\u00bb.\n\nRecently the patch was committed that allows the vacuum to process indexes in parallel with background workers.\n\nWhat tables require vacuuming?\n\nVacuuming is considered to be required if the number of dead (i.\u00a0e., outdated) tuples exceeds the specified threshold. The statistics collector is permanently keeping track of the number of dead tuples, which is stored in the pg_stat_all_tables table. 
And two parameters specify the threshold:\n\n\u2022 autovacuum_vacuum_threshold defines an absolute value (the number of tuples).\n\u2022 autovacuum_vacuum_scale_factor defines the share of rows in the table.\n\nIn summary: vacuuming is required if pg_stat_all_tables.n_dead_tup >= autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * pg_class.reltuples.\n\nWith the default settings, autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2. autovacuum_vacuum_scale_factor is, certainly, the most important here \u2014 it's this parameter that is critical for large tables (and it's them that possible issues are associated with). The value of 20% seems unduly high, and most likely it will need to be considerably reduced.\n\nOptimal values of the parameters may vary for different tables and depend on the table sizes and specifics of the changes. It makes sense to set generally suitable values and, if the need arises, do special tweaking of the parameters at the level of certain tables by means of storage parameters:\n\n\u2022 autovacuum_vacuum_threshold and toast.autovacuum_vacuum_threshold.\n\u2022 autovacuum_vacuum_scale_factor and toast.autovacuum_vacuum_scale_factor.\n\nTo avoid getting confused, this is reasonable to do only for a few tables that are distinguished among the rest by the amount and intensity of changes and only when the globally set values fail to work fine.\n\nBesides, you can turn autovacuum off at the table level (although we can hardly think of a reason why it could be necessary):\n\n\u2022 autovacuum_enabled and toast.autovacuum_enabled.\n\nFor example, last time we created the vac table with autovacuum turned off in order to manually control vacuuming for demo purposes. The storage parameter can be changed as follows:\n\n=> ALTER TABLE vac SET (autovacuum_enabled = off);\n\n\nTo formalize all the above, let's create a view that shows which tables need vacuuming at this point in time. 
It will use the function that returns the current value of the parameter and takes into account that the value can be redefined at the table level:\n\n=> CREATE FUNCTION get_value(param text, reloptions text[], relkind \"char\")\nRETURNS float\nAS $$SELECT coalesce(\n -- if the storage parameter is set, we take its value\n (SELECT option_value FROM pg_options_to_table(reloptions)\n WHERE option_name = CASE\n -- for TOAST tables, the parameter name differs\n WHEN relkind = 't' THEN 'toast.' ELSE ''\n END || param\n ),\n -- otherwise, we take the value of the configuration parameter\n current_setting(param)\n)::float;$$ LANGUAGE sql;\n\n\nAnd this is the view:\n\n=> CREATE VIEW need_vacuum AS\nSELECT st.schemaname || '.' || st.relname tablename,\nst.n_dead_tup dead_tup,\nget_value('autovacuum_vacuum_threshold', c.reloptions, c.relkind) +\nget_value('autovacuum_vacuum_scale_factor', c.reloptions, c.relkind) * c.reltuples\nmax_dead_tup,\nst.last_autovacuum\nFROM pg_stat_all_tables st,\npg_class c\nWHERE c.oid = st.relid\nAND c.relkind IN ('r','m','t');\n\n\nWhat tables require analysis?\n\nThe situation with automatic analysis is similar. Those tables are considered to require analysis whose number of updated (since the last analysis) tuples exceeds the threshold specified by two similar parameters: pg_stat_all_tables.n_mod_since_analyze >= autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * pg_class.reltuples.\n\nThe default settings of automatic analysis are somewhat different: autovacuum_analyze_threshold = 50 and autovacuum_analyze_scale_factor = 0.1. They can also be defined at the level of storage parameters of separate tables:\n\n\u2022 autovacuum_analyze_threshold\n\u2022 autovacuum_analyze_scale_factor\n\nSince TOAST tables are not analyzed, they do not have such parameters.\n\nLet's also create a view for analysis:\n\n=> CREATE VIEW need_analyze AS\nSELECT st.schemaname || '.' 
|| st.relname tablename,\nst.n_mod_since_analyze mod_tup,\nget_value('autovacuum_analyze_threshold', c.reloptions, c.relkind) +\nget_value('autovacuum_analyze_scale_factor', c.reloptions, c.relkind) * c.reltuples\nmax_mod_tup,\nst.last_autoanalyze\nFROM pg_stat_all_tables st,\npg_class c\nWHERE c.oid = st.relid\nAND c.relkind IN ('r','m');\n\n\nExample\n\nLet's set the following parameter values for experiments:\n\n=> ALTER SYSTEM SET autovacuum_naptime = '1s'; -- to avoid waiting long\n=> ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.03; -- 3%\n=> ALTER SYSTEM SET autovacuum_vacuum_threshold = 0;\n=> ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.02; -- 2%\n=> ALTER SYSTEM SET autovacuum_analyze_threshold = 0;\n\n=> SELECT pg_reload_conf();\n\n pg_reload_conf\n----------------\nt\n(1 row)\n\n\nNow let's create a table similar to the one used last time and insert one thousand rows into it. Autovacuum is turned off at the table level, and we will be turning it on by ourselves. Without this, the examples will not be reproducible since autovacuuming can be triggered at a bad time.\n\n=> CREATE TABLE autovac(\nid serial,\ns char(100)\n) WITH (autovacuum_enabled = off);\n=> INSERT INTO autovac SELECT g.id,'A' FROM generate_series(1,1000) g(id);\n\n\nThis is what our view for vacuuming will show:\n\n=> SELECT * FROM need_vacuum WHERE tablename = 'public.autovac';\n\n tablename | dead_tup | max_dead_tup | last_autovacuum\n----------------+----------+--------------+-----------------\npublic.autovac | 0 | 0 |\n(1 row)\n\n\nAttention here should be given to two things. First, max_dead_tup = 0 although 3% of 1000\u00a0rows make 30\u00a0rows. The thing is that we do not have statistics on the table yet since INSERT does not update it on its own. Until the table gets analyzed, zeros will remain since pg_class.reltuples = 0. 
But let's look at the second view for analysis:\n\n=> SELECT * FROM need_analyze WHERE tablename = 'public.autovac';\n\n tablename | mod_tup | max_mod_tup | last_autoanalyze\n----------------+---------+-------------+------------------\npublic.autovac | 1000 | 0 |\n(1 row)\n\n\nSince 1000\u00a0rows have been changed (added) in the table, which is greater than zero, automatic analysis must be triggered. Let's check this:\n\n=> ALTER TABLE autovac SET (autovacuum_enabled = on);\n\n\nAfter a short pause we can see that the table has been analyzed and correct 20\u00a0rows are shown in max_dead_tup instead of zeros:\n\n=> SELECT * FROM need_analyze WHERE tablename = 'public.autovac';\n\n tablename | mod_tup | max_mod_tup | last_autoanalyze\n----------------+---------+-------------+-------------------------------\npublic.autovac | 0 | 20 | 2019-05-21 11:59:48.465987+03\n(1 row)\n\n\n=> SELECT reltuples, relpages FROM pg_class WHERE relname = 'autovac';\n\n reltuples | relpages\n-----------+----------\n1000 | 17\n(1 row)\n\n\nLet's get back to autovacuuming:\n\n=> SELECT * FROM need_vacuum WHERE tablename = 'public.autovac';\n\n tablename | dead_tup | max_dead_tup | last_autovacuum\n----------------+----------+--------------+-----------------\npublic.autovac | 0 | 30 |\n(1 row)\n\n\nAs we can see, max_dead_tup has already been fixed. Another thing to pay attention to is that dead_tup = 0. The statistics show that the table does not have dead tuples..., and this is true. There is nothing to vacuum in the table yet. Any table used exclusively in append-only mode will not be vacuumed and therefore, the visibility map won't be updated for it. 
But this makes use of index-only scan impossible.\n\n(Next time we will see that vacuuming will sooner or later reach an append-only table, but this will happen too rarely.)\n\nA lesson learned: if index-only scan is critical, it may be required to manually call a vacuum process.\n\nNow let's turn autovacuum off again and update 31 lines, which is one line greater than the threshold.\n\n=> ALTER TABLE autovac SET (autovacuum_enabled = off);\n=> UPDATE autovac SET s = 'B' WHERE id <= 31;\n=> SELECT * FROM need_vacuum WHERE tablename = 'public.autovac';\n\n tablename | dead_tup | max_dead_tup | last_autovacuum\n----------------+----------+--------------+-----------------\npublic.autovac | 31 | 30 |\n(1 row)\n\n\nNow the condition of vacuum triggering is met. Let's turn autovacuum on and after a short pause we will see that the table has been processed:\n\n=> ALTER TABLE autovac SET (autovacuum_enabled = on);\n=> SELECT * FROM need_vacuum WHERE tablename = 'public.autovac';\n\n tablename | dead_tup | max_dead_tup | last_autovacuum\n----------------+----------+--------------+-------------------------------\npublic.autovac | 0 | 30 | 2019-05-21 11:59:52.554571+03\n(1 row)\n\n\nVACUUM does not block other processes since it works page by page, but it does produce additional load on the system and can considerably affect the performance.\n\nThrottling for VACUUM\n\nTo be able to control the vacuum intensity and therefore, its effect on the system, the process alternates work and waiting. The process will do about vacuum_cost_limit conventional units of work and then it will sleep for vacuum_cost_delay\u00a0ms.\n\nThe default settings are vacuum_cost_limit = 200 and vacuum_cost_delay = 0. The last zero actually means that VACUUM does not sleep, so a specific value of vacuum_cost_limit does not matter at all. 
The reasoning behind this is that if an administrator did have to manually launch VACUUM, he is likely to wish vacuuming to be done as fast as possible.\n\nNevertheless, if we do set the sleeping time, the amount of work specified in vacuum_cost_limit will be composed of the costs of work with pages in the buffer cache. Each page access is estimated as follows:\n\n\u2022 If the page is found in the buffer cache, vacuum_cost_page_hit = 1.\n\u2022 If the page has to be read from disk, vacuum_cost_page_miss = 10.\n\u2022 If on top of that a dirty page has to be evicted from the buffer cache, vacuum_cost_page_dirty = 20.\n\nThat is, with the default settings of vacuum_cost_limit, 200 cache pages or 20 disk pages or 10 pages with eviction can be processed in one go. It's clear that these figures are pretty tentative, but it does not make sense to select more accurate ones.\n\nThrottling for autovacuuming\n\nFor vacuum processes, load throttling works the same way as for VACUUM. But for autovacuum processes and manually launched VACUUM to work with different intensity, autovacuum has its own parameters: autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay. If these parameters have the value of -1, the value of vacuum_cost_limit and\/or vacuum_cost_delay is used.\n\nBy default autovacuum_vacuum_cost_limit = -1 (i.\u00a0e., the value of vacuum_cost_limit = 200 is used) and autovacuum_vacuum_cost_delay = 20\u00a0ms. On modern hardware, autovacuum will be really slow.\n\nIn version 12, the value of autovacuum_vacuum_cost_delay is reduced to 2\u00a0ms, which can be taken as a more appropriate first approximation.\n\nBesides, we should note that the limit specified by these settings is common for all worker processes. In other words, when the number of simultaneous worker processes is changed, the overall load remains unchanged. So, to increase the autovacuum performance, when adding worker processes, it makes sense to also increase autovacuum_vacuum_cost_limit.\n\nUse of memory and monitoring\n\nLast time we observed how VACUUM used RAM of size maintenance_work_mem to store tids to be vacuumed.\n\nAutovacuum does absolutely the same. 
But there can be many simultaneous worker processes if autovacuum_max_workers is set to a large value. Moreover, all the memory is allocated at once rather than as the need arises. Therefore, for a worker process, its own limitation can be set by means of the autovacuum_work_mem parameter. The default value of this parameter is -1, i.\u00a0e., it is not used.\n\nAs already mentioned, VACUUM can also work with a minimum memory size. But if indexes are created on the table, a small value of maintenance_work_mem can entail repeated index scans. The same is true for autovacuum. Ideally, autovacuum_work_mem should have a minimum value such that no repeated scans occur.\n\nWe've seen that to monitor VACUUM, the VERBOSE option can be used (which cannot be specified for autovacuum) or the pg_stat_progress_vacuum view (which, however, shows only the current information). Therefore, the main means to monitor autovacuuming is to use the log_autovacuum_min_duration parameter, which outputs the information to the server message log. It is turned off by default (set to -1). 
It is reasonable to turn this parameter on (with the value of 0, information on all autovacuum runs will be output) and watch the figures.

This is what the output information looks like:

=> ALTER SYSTEM SET log_autovacuum_min_duration = 0;
=> SELECT pg_reload_conf();

 pg_reload_conf
----------------
 t
(1 row)

=> UPDATE autovac SET s = 'C' WHERE id <= 31;

student$ tail -n 7 /var/log/postgresql/postgresql-11-main.log

2019-05-21 11:59:55.675 MSK [9737] LOG: automatic vacuum of table "test.public.autovac": index scans: 0
pages: 0 removed, 18 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 31 removed, 1000 remain, 0 are dead but not yet removable, oldest xmin: 4040
buffer usage: 78 hits, 0 misses, 0 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2019-05-21 11:59:55.676 MSK [9737] LOG: automatic analyze of table "test.public.autovac" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s

All the necessary information is available here.

To remind you, it often makes sense to lower the threshold for vacuum triggering in order to process less data at a time rather than to increase the memory size.

It may also be reasonable to use the above views to monitor the length of the list of tables that require vacuuming.
Increase of the list length will indicate that the autovacuum processes lack time to do their job and the settings need to be changed. | null | null |
requirejs.config({
baseUrl: 'bower_components/oraculum/examples/gh-pages/coffee',
paths: {
// RequireJS plugins
'cs': '../../../../require-cs/cs',
'text': '../../../../requirejs-text/text',
'coffee-script': '../../../../coffee-script/extras/coffee-script',
// FactoryJS
'Factory': '../../../../factoryjs/dist/Factory',
'BackboneFactory': '../../../../factoryjs/dist/BackboneFactory',
// Util libs
'jquery': '../../../../jquery/dist/jquery',
'backbone': '../../../../backbone/backbone',
'bootstrap': '../../../../bootstrap-css/js/bootstrap',
'underscore': '../../../../underscore/underscore',
// Other stuff
'md': '../markdown',
'mu': '../../../../../',
// Markdown
'marked': '../../../../marked/lib/marked',
'highlight': '../../../../highlightjs/highlight.pack'
},
shim: {
bootstrap: {deps: ['jquery']},
marked: { exports: 'marked' },
highlight: { exports: 'hljs' },
jquery: { exports: 'jQuery' },
'jquery-ui/sortable': ['jquery'],
'jquery-ui/draggable': ['jquery'],
underscore: { exports: '_' },
backbone: {
deps: ['jquery', 'underscore'],
exports: 'Backbone'
}
},
packages: [{
name: 'oraculum',
location: '../../../../oraculum/dist'
}, {
name: 'jquery-ui',
location: '../../../../jquery-ui/ui/'
}, {
name: 'muTable',
location: '../../../../../dist'
}],
callback: function () {
require(['cs!../../../../../examples/gh-pages/coffee/index']);
}
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,003 |
can access Val d'Europe in under 45 minutes.
Developed by Pierre & Vacances and Euro Disney, this new tourist attraction puts harmony between Man and Nature at the core of its concept. The park is structured around water, forest, land, sports and health.
Its DNA : to become a centre of excellence, research, information and training in the tourism sector. Its features: to manage and maintain the destination's appeal and reinforce the regional roots of existing and future participants.
Val d'Europe is a site for strolling as well as a place for shopping with vast paths and natural daylight. It also features a fitness centre and Les Terrasses, where a host of restaurants lead to a vast Baltard-inspired atrium.
La Vallée Village, an exceptional shopping experience in an open-air village. 110 fashion and luxury retail shops propose their collections from previous seasons at reduced prices.
Urban planning on a human scale, with ample space set aside for people, including numerous easy-to-reach commercial and leisure facilities just a few minutes from your offices. Its Neoclassical-inspired architecture won four international architecture awards between 2006 and 2008.
Val d'Europe is one of the only areas to have a clearly defined road map. With this long-term vision, the infrastructure and service needs can be anticipated thereby guaranteeing ongoing development. Phase IV is supported by an investment plan of €3.6b in private funds (€800m for Villages Nature Paris project included) and €280m in public funds. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,677 |
Torres Strait masks to be introduced in Vietnam for first time
An exhibition on Torres Strait masks will be introduced in Vietnam for the first time by the Australian Embassy in Vietnam together with the Vietnam Museum of Ethnology.
VNA Tuesday, May 15, 2018 09:56
An exhibition on Torres Strait masks will be introduced in Vietnam for the first time by the Australian Embassy to Vietnam. (Photo: Australian Embassy)
Hanoi (VNA) – An exhibition on Torres Strait masks will be introduced in Vietnam for the first time by the Australian Embassy to Vietnam together with the Vietnam Museum of Ethnology.
The two-month-long exhibition at Hanoi's Vietnam Museum of Ethnology is part of activities celebrating the 45th anniversary of diplomatic relations between the two countries.
Representatives from the Embassy, the Vietnam Academy of Social Sciences, the Vietnam Museum of Ethnology and other cultural institutions will attend the opening ceremony which will take place on May 18, according to the Australian Embassy.
One kind of Torres Strait masks (Photo: nma.gov.au)
Torres Strait is a network of islands located between Northern Australia and Papua New Guinea. The people of this area maintain vibrant artistic, spiritual and cultural traditions.
The making and wearing of beautifully decorated masks is one of the region's most distinctive traditions. While rooted in ancient protocols and spirituality, masks and mask making today are celebrated as expressions of artistic and cultural revival.-VNA
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,262 |
Q: Java ExecutorService as implementation detail I'm working on a little library that invokes a callback on each member of a set of Objects. Ideally I want these tasks to be concurrent, so I create an ExecutorService inside my class. It's a private field, ideally the user needn't know that my class has an ExecutorService.
But then, I read that ExecutorServices need to be shutdown() after use or the program won't exit. However, a private ExecutorService is created as part of a static initializer in my library. My library doesn't know when the app is done doing work, so it can't call shutdown() on the ExecutorService. And the library user may not know the ExecutorService exists, and even if they did, they couldn't call shutdown() on it, as it is private.
How do I resolve this dilemma?
A: A better approach might be to have the user of your library inject (and manage) a ExecutorService. Your library is not in the best position to decide what kind of service to use (e.g. how many threads to use). This decision is ultimately one of the users of your library, who have a better understanding of the overall life-cycle of the application and in what kind of environment it is deployed.
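A minimal sketch of that injection pattern (the class and method names here are hypothetical, not taken from the asker's library):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

// The library borrows a pool supplied by the caller instead of owning one,
// so it never has to decide when shutdown() should happen.
final class CallbackRunner<T> {
    private final ExecutorService executor;

    CallbackRunner(ExecutorService executor) {
        this.executor = executor;
    }

    // Invokes the callback on each member concurrently, then waits for all to finish.
    void forEach(List<T> items, Consumer<T> callback) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(items.size());
        for (T item : items) {
            executor.submit(() -> {
                try {
                    callback.accept(item);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
    }
}
```

The application creates the pool at startup, passes it in, and calls shutdown() (or shutdownNow()) itself when it is finished; the library stays oblivious to the pool's life-cycle.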
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,808 |
Daniel Cesar Cabrera, 25, passed away unexpectedly on March 5, 2019. He was born on September 14, 1994, was a longtime resident of Watsonville, and was a graduate of Renaissance High School.
He is survived by his Mother, Lelani Lisiola, his Daughter, Athena Cabrera, his siblings; Robert Justin and Jessica. He was a beloved grandson to Evelyn and Dennis Custard, Robert and Lonna Blodgett, and Albina Cabrera. He was also the great grandson of Efigenio Ybarra, and is also survived by many aunts, uncles, and cousins. Daniel was preceded in death by his Father, Cesar Cabrera, and Grandfather, Santiago Ybarra. Daniel was a proud Father and loved spending time with his daughter Athena. He was a talented artist, enjoyed singing and was an aspiring rapper who performed locally. He loved being around friends and spending time with his family.
He brought so much love and joy to our lives and to those around him. His laid-back attitude and great sense of humor was contagious. We enjoyed our time with him. Daniel will always be remembered for his generosity, hard-work ethic, and kindness towards others. He had a good heart, a wonderful sense of humor and was the most forgiving person you could ever meet.
A visitation will be held Tuesday, March 12, 2019 from 5-9pm at Mehl's Colonial Chapel with a Rosary held at 7pm. Mass will be held Wednesday, March 13, 2019 at 1pm at Assumption Church. Burial to follow at Valley Public Cemetery. Mehl's Colonial Chapel was entrusted with the arrangements. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,097 |
Q: Advantage 8.1 vs 7.1 I am in the process of upgrading a few in-house applications from ADS 7.1 to 8.1.
I was told a while back that there are changes in return values of the AVG() function as well as some division calculations, but I cannot find any documentation on these changes.
Does anyone know what I'm talking about or have a link that explains the details?
A: The "Effects of Upgrading to Version 8.1" topic in the help file has a small paragraph about the change, but doesn't go into any details.
Basically, as of version 8.1 Advantage now adheres to the SQL standard with regards to integer division. Integer division expressions have the fractional portion truncated, where in the past they would result in a floating point result.
To address this change, you may have to cast certain expressions if you still want them to result in a floating point data type. For example:
This:
select int1 / int2 from mytable;
Would need to change to:
select cast( int1 as sql_float ) / int2 from mytable;
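For a concrete illustration of the change (a sketch; system.iota is Advantage's built-in one-row dummy table, assuming it is available in your version):

```sql
-- Under 7.1, integer division produced a floating point result:
SELECT 7 / 2 FROM system.iota;                     -- 3.5 in 7.1
-- Under 8.1, the fractional portion is truncated per the SQL standard:
SELECT 7 / 2 FROM system.iota;                     -- 3 in 8.1
-- Casting one operand restores the old floating point behavior:
SELECT CAST(7 AS sql_float) / 2 FROM system.iota;  -- 3.5 in 8.1
```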
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,443 |
-- Upgrade the TASKANA schema to version 1.0.6 (%schemaName% is substituted at deploy time).
SET SCHEMA %schemaName%;

-- Record the new schema version.
INSERT INTO TASKANA_SCHEMA_VERSION (VERSION, CREATED) VALUES ('1.0.6', CURRENT_TIMESTAMP);

-- Add the new EXTERNAL_ID column, initially empty for every task.
ALTER TABLE TASK ADD COLUMN EXTERNAL_ID VARCHAR(64) NOT NULL DEFAULT '';

-- Derive an external id from each task id: drop the 4-character prefix and prepend 'ETI:'.
UPDATE TASK T1 SET(EXTERNAL_ID) = ('ETI:' CONCAT (SELECT RIGHT(ID, (LENGTH(ID) - 4) ) FROM TASK T2 WHERE T1.ID = T2.ID));

-- External ids must be unique.
ALTER TABLE TASK ADD CONSTRAINT UC_EXTERNAL_ID UNIQUE (EXTERNAL_ID);
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,472 |
Q: I get an error while installing Windows XP "could not find place for swap file" When installing Windows XP from MS-DOS (HBCD-HirenBootCD) i get the following message:
"An internal Setup error has occoured.
Could not find place for swap file."
I can't boot from CD/USB so i've copied the cd to the HDD (from HBCD)
A: I guess you are translating the message, and what it actually means is "Could not find space for swap file".
It's some time since I installed XP, but I think it tries initially to create a swap file of a size related to your installed memory (eg twice the RAM size). The message, if I'm interpreting it correctly, means that you don't have enough free space on your install disc.
For example, if you have the maximum 4GB of RAM (for a 32-bit OS), you will need a file this size for hibernation (hiberfil.sys), plus an 8GB file for swap (pagefile.sys), meaning that you need at least 12GB more than what the installed files occupy, so I would suggest a target disc of 32GB or more.
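The rough arithmetic above can be sketched as follows (install-file size excluded, matching the "12GB more than what the installed files occupy" figure):

```python
# Extra free space (in GB) the XP installer wants beyond the installed files,
# per the rule of thumb above: hiberfil.sys ~= RAM, pagefile.sys ~= 2x RAM.
def extra_space_needed_gb(ram_gb):
    hiberfil = ram_gb       # hibernation file, about the size of RAM
    pagefile = 2 * ram_gb   # swap file, about twice RAM by default
    return hiberfil + pagefile

print(extra_space_needed_gb(4))  # 4 GB RAM -> 12 GB beyond the installed files
```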
If you are installing in a VM, it is easy to configure a larger virtual drive - you could also make it expandable. You can also easily reduce the VM's memory to reduce the swap/hibernation overhead.
If you are installing to a hard disc, then you need a bigger disc, or you have to remove or disable RAM modules.
Once the system has been installed and booted, you can change the swap file location and size (though not the hibernation file).
By the way, the rule of thumb that swap size is double RAM size is a very poor one: the swap space needed depends on the number and size of the programs you run, and will normally need to increase if the RAM size is reduced.
A: this worked in the end:
* format drive c: (fat32)
* boot under freeDOS
* run setup (/i386/winnt.exe)
* when the setup asks, convert the drive to ntfs
PS: HBCD creates extra drives on that drive, which corrupts the installation (it also creates big swap files)
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 291 |
Letter From America: The Myth of 'Bengali Migration' to Arakan
By Habib Siddiqui
Genocidal crimes don't occur by chance and require years of preparation. The rationale for justifying such horrendous crimes is often provided by the evil geniuses or the ideologues within the dominant group. This ominous task is sometimes shared by other powerful groups within the perpetrating community, e.g., religious, business, social and political leaders, let alone a chauvinist government, stimulating the general population to participate in the elimination campaign against the targeted group, which is viewed as 'different'. As a matter of fact, ideologues like Julius Streicher acted as the catalysts to expedite the 'Jewish solution' in Nazi Germany. They popularized Hitler's fascist agenda and made his mission a national project inside Germany.
In the context of Burma (now called Myanmar), we see similar state and non-state actors at play to eliminate the indigenous Rohingyas of Arakan (renamed the Rakhine state). Evil, ultra-racist intellectuals like Aye Chan, (late) Aye Kyaw and Khin Maung Saw played the role of Julius Streicher portraying the Rohingya as the infiltrators or intruders and the 'enemies within' from the next-door Bangladesh who needed to be killed or wiped out as a 'virus', planting unfathomed intolerance against them while the bigoted monks like Wirathu provided the religious rationale for 'purifying' the Buddhist motherland from the minority Muslims who are falsely depicted as a 'threat' to Buddhism and its way of life. The rest is history! With the direct involvement of all the organs of the government – from top to bottom, from the central to the local level – the genocide of the Rohingya became a national project inside Myanmar, which witnessed the massacre of some 24,000 and mass rape of tens of thousands in the last couple of years alone, plus the forced exodus of nearly a million of them to Bangladesh. More Rohingyas now live outside their ancestral land than inside the Rakhine state of Myanmar.
In spite of the decades of persecution against them since the Japanese occupation of British Burma during the Second World War, the Rohingya Muslims and Hindus have maintained their continuous existence in Arakan that is older than the ethnic Rakhine Buddhist majority who also share the same landmass. [The interested readers may like to read this author's book - The Forgotten Rohingya: Their Struggle for Human Rights in Burma, available in the Amazon.com.]
In my book – Muslim Identity and Demography in the Arakan State of Burma (Myanmar) – I have analyzed the British era data and put to rest the xenophobic claims of Bengali (or more particularly Chittagonian) infiltration to Arakan by showing that there is no credible evidence supporting that myth, propagated by the criminal Burmese government and its brainwashed supporters within the Buddhist community. And yet, seemingly, there is no shortage of such Buddhist enthusiasts who continue to justify the genocidal crimes of their criminal government against the Rohingya with their half-baked theories and faulty analyses.
Consider, e.g., the work of U Ne Oo (dated 18th August 2014) who claims to be the Coordinator for an organization called the Network for International Protection of Refugees (www.netipr.org/policy/node/39). The pdf file had a catchy title - Rohingya/Bengali: Migration After First Anglo-Burman War – a subject of much interest to me for nearly two decades.
Oo claims to examine competing claims from the pro-Rohingya and Burmese government sources and concludes "that the majority of present day Rohingya/Bengali group had migrated after 1824..."
A reading of Ne Oo's article shows that it was purely a propaganda attempt aimed at justifying the xenophobic treatment of the Rohingyas of Myanmar. It was not surprising that Oo uses Myanmar government source, quoting U Myint Thien's 2009 paper. The latter is shown as the Director of Historical Research Department, Burma, which had the task of justifying government position on contentious issues like the Rohingyas.
As a highly biased author, Oo has the habit of cherry-picking texts to suit his purpose. He does not surprise us, therefore, when he quotes Rev. Comstock who allegedly was on a mission to Arakan between 1834 and 1844. The Christian missionary or priest is presented as a 'reliable source' and quoted by Oo about demography inside Arakan (now called the Rakhine state). Oo writes, "On Page 224, describing the numbers of inhabitants:
"The population of Arakan at the present time (1842) is estimated at about 250,000. Of these, about 167,000 are Mugs, 40,000 are Burmese, 20,000 are Mussulmans, 10,000 are Kyens, 5,000 are Bengalese, 3,000 are Toungmroos, 2,000 are Kemees, 1,250 are Karens, and the remainder are of various races, in smaller number."
Any unbiased researcher would have a problem taking such figures seriously from a priest whose task surely was neither civil administration nor population studies. The figures he cites cannot be reconciled with the population figures reported in 1826 by Mr Paton, who was the first administrator of Arakan soon after the East India Company annexed the territory in the first Anglo-Burma War. There, Mr Paton reported that the total population of Arakan did not exceed 100,000, of which 60,000 were Maghs (Arakanese Buddhists) and 30,000 (Rohingya) Muslims. [The rest being Burmese settlers from outside Arakan.]
Even if one were to assume Comstock's figures, it is unlikely that the total population had grown to 2.5 times within a mere 16-year period without migration from other territories (since it would require an absurd annual growth rate of approx. 6% for the population to grow organically by 150%). How could a Mug (Magh) population grow to 2.78 times its size from 60,000 to 167,000 within the same period, if it was not for infiltration from elsewhere in British Burma, when all those territories from Bengal to Burma were under the British rule?
However, Oo is evasive on such anomalies. He does not question why the population of Muslims in Arakan reduced to 20,000 within the same period, a 33% reduction, while the Buddhist Mug population rose by 178% and the Burmese Buddhist population quadrupled. His despicable bias could not be hidden under the rug!
If Oo was sincere and unbiased in his study, he could not have ignored such anomalies with the Buddhist population inside the Arakan state in those early years of British annexation.
Oo cites a table (courtesy of Burmese propagandist Myint Thien) of Maung Daw area (which is close to today's Bangladesh border) to show the impact of Burma Act 1935. I reproduce the table below before analyzing the data statistically (note the arithmetic error in the net migration column for 1929).
[Table not preserved in this copy: yearly figures for Indian immigration, emigration, and net migration in Maung Daw over the ten-year period discussed below.]
Based on the data above (supposedly for the Muslim population alone), Oo concludes, "On examining the pattern of Indian immigrant/emigrant from above Table in 1921 to 1937, there is a surge in 1934 by Indian immigrants who had opted to remain in Burma."
Such a conclusion regrettably betrays the underlying facts of seasonal workers and lacks common-sense logic in the British-ruled India and Burma. While on the surface, the above table may give the impression that there was a net gain in population in Maung Daw since 1934 and that there was a net gain of 12,600 people in Maung Daw over a period of ten years, one cannot ignore the fact that this net gain boils down to a very small number of only 1,260 people per year compared to the average annual immigration of 254,900 people and emigration of 242,300 (see the graphical summary reports below). That is, the net gain in population was less than 0.5 percent. The data above also fails to explain the net loss of 137,000 Muslims in the first three years. How could there be more people opting for emigration than immigration?
As I have noted in my earlier work on Muslim demography in Arakan, the so-called immigrants were all seasonal workers who returned to their ancestral land inside Bangladesh once their tenure or temporary job was completed, and/or no new employment availed them. Treating population data in an isolated or static way as if they are immobile like building structures is ludicrous. Sadly, Oo not only makes such attempts but also makes conclusions that are symptomatic of cherry-picking with data without ever asking the question – why? But more problematic is his utter lack of statistical knowledge with demographic data while claiming to draw conclusions from the data to support the criminal xenophobia against the minority Rohingya people, which has resulted into genocidal crimes against the Rohingya by his fellow Buddhist marauders inside Myanmar.
In what follows, I share statistical analysis of the data for both immigration and emigration within the said period. (Note: we are limited here by small data sets, i.e., 10 yearly data.)
The graphical summary of the immigration data shows that the data follow a normal (Gaussian) distribution, while the emigration data fail to exhibit normal behavior, with a probability (P) value far below 0.05.
The Individual and Moving Range (I-MR) charts for the immigration data do not exhibit any out-of-control points, while again the emigration data, as pointed out earlier, do show such unusual trends for the 2nd and 3rd year (i.e., 1929 and 1930).
In order to compare the two sets of data, immigration vs. emigration, we employed three statistical tests. The first of these is the 2-sample t-test (which assumes normality). The p-value of 0.665 below shows that there is not enough statistical evidence to reject the null hypothesis that the two samples are the same.
Next, we employed the Mann-Whitney non-parametric test, which, with a p-value of 0.212, shows there is not enough statistical evidence to reject the null hypothesis that the two samples are the same.
Next, we employed a pairwise comparison to test whether the two data sets are statistically different. Again, the p-value of 0.54 shows there is not enough statistical evidence to reject the null hypothesis that the two samples are the same.
This study fails to find statistical evidence in support of Oo's claim that "the majority of present day Rohingya/Bengali group had migrated after 1824..." The tabulated data shared by Oo failed to prove that during the British-era Muslims had moved into Maung Daw (and for that matter into other parts of Arakan). On the other hand, something that I have noted in my book – Muslim Identity and Demography in the Arakan State of Burma (Myanmar) – there is credible evidence to suggest that there was unnatural growth of Buddhist population inside Arakan during the early decades of the British occupation. To put succinctly: the so-called influx to Arakan was caused by the Rakhines (and other Buddhists) and not the Rohingyas (or the so-called Chittagonians from Bangladesh or British-ruled East Bengal).
Far from the mission of protecting refugees that his organization's name suggests, Oo's flawed work appears to be nothing more than a smokescreen for justifying the Myanmar government's ongoing criminal activities, which have resulted in the forced exodus of the Rohingya people.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,692 |
\section{Introduction}
All varieties are defined over the field of complex numbers $\CC$.
Let $C$ be a smooth projective curve and $E$ be a vector bundle on $C$ of rank $r=\rk E$ and degree $d=\deg E$.
Then $E$ is said to be \emph{stable} (resp. \emph{semi-stable}) if, for every nonzero proper subbundle $F$ of $E$ with torsion-free quotient $E/F$, the slope $\mu(F)$ of $F$ is less than (resp. less than or equal to) $\mu(E)$ where the \emph{slope} is defined by $\mu(F)={\deg F}/{\rk F}$ \cite[p.~87]{Friedman98}.
We may assume that the quotients are locally free since a torsion-free sheaf is locally free on $C$.\linebreak
A semi-stable vector bundle $E$ is called \emph{strictly semi-stable} if $E$ is not stable.
It is natural to investigate the stability of the symmetric power $S^kE$ of $E$ when $E$ is stable.
Using the correspondence between the stability of $E$ and the irreducibility of its associated unitary representation $\rho_E: \pi_1(C)\to U(r)$ identified by Narasimhan and Seshadri \cite{Narasimhan-Seshadri65}, it is possible to prove the following.
\begin{Main0}[{\cite[p.~53]{Hartshorne70}}]
Let $C$ be a smooth projective curve of genus $g\geq 2$ and $E$ be a stable vector bundle\linebreak on $C$.
Then
\begin{enumerate}
\item $S^k E$ is semi-stable for every $k\geq 2$ for all $E$, and
\item $S^k E$ is stable for every $k\geq 2$ for sufficiently general $E$.
\end{enumerate}
\end{Main0}
The next question would be a classification of stable $E$ for which $S^k E$ is strictly semi-stable.
Because we are interested in the case where $r=2$ and $d$ is even, we may assume that $\det E=\Oo_C$ after substituting $E$ by $E\otimes L^{-1}$ for some line bundle $L$ with $L^2=\det E$.
Then $E$ is self-dual as $E\cong E^\vee\otimes (\det E)\cong E^\vee$.
Note that $S^k(E\otimes L^{-1})=S^kE\otimes L^{-k}$ and the stability is invariant under the twist by a line bundle.
As $\det S^k E=(\det E)^{\binom{k+1}{2}}=\Oo_C$ and $\deg S^k E=0$, $S^k E$ is not stable if it has a proper nonzero subbundle $V\to S^k E$ of $\deg V=0$ with the quotient vector bundle $Q$ of $\deg Q=0$.
By taking the dual of the quotient\linebreak $S^k E\to Q$ together with the self-duality of $S^k E$ given by
\[
(S^k E)^\vee
\cong S^k E^\vee
\cong S^k E,
\]
we get a subbundle $Q^\vee\to S^k E$ of $\deg Q^\vee=-\deg Q=0$ and $\rk V+\rk Q^\vee =\rk V+\rk Q=\rk S^k E=k+1$.
Thus $S^k E$ has destabilizing subbundles $V$ and $Q^\vee$, one of which has rank less than or equal to $\frac{k+1}{2}$ (see also {\cite[Proposition 2.1]{Choe-Choi-Kim-Park18}}).
\begin{Rmk}\label{first}
Let $E$ be a stable vector bundle on $C$ of rank $2$ with even degree.
Then $S^k E$ is strictly semi-stable if and only if it is destabilized by a subbundle $V\to S^k E$ of $\rk V\leq \frac{k+1}{2}$.
Similarly, it suffices to consider the quotient bundles $S^k E\to Q$ of $\rk Q\leq \frac{k+1}{2}$ to determine the stability of $S^k E$.
\end{Rmk}
By the remark and assuming the stability of lower symmetric powers, we have the following reduction.
\begin{Main1}
Let $C$ be a smooth projective curve of genus $g\geq 2$ and $E$ be a stable vector bundle on $C$\linebreak of rank $2$ with even degree.
Then $S^k E$ is stable for all $k\geq 2$ unless one of the following cases occurs.
\begin{enumerate}
\item $S^2 E$ is destabilized by a vector bundle of rank $1$
\item $S^3 E$ is destabilized by a vector bundle of rank $2$
\item $S^4 E$ is destabilized by a vector bundle of rank $2$
\item $S^6 E$ is destabilized by a vector bundle of rank $3$
\end{enumerate}
In particular, $S^k E$ is stable for every $k\geq 2$ if $S^m E$ is stable for all $m\leq 6$.
\end{Main1}
We will classify cases (1), (2) in Section 2, 3, and give a proof for the following result in Section 4.
\begin{Main1.5}
If $S^2 E$ is stable, then $S^k E$ is stable for every $k\geq 3$ except for finitely many $E$ in the moduli of stable vector bundles on $C$ of rank $2$ with fixed determinant of even degree.
\end{Main1.5}
For $k=2$, $S^2 E$ is strictly semi-stable if and only if it is destabilized by a quotient line bundle $S^2 E\to L$ of $\deg L=0$.
We will observe that this is equivalent to saying that $E$ is \emph{orthogonal} where\linebreak an orthogonal bundle is defined by a vector bundle $E$ with a nondegenerate symmetric bilinear form $E\otimes E\to L$ for some line bundle $L$.
According to Mumford's classification \cite{Mumford71}, the orthogonal bundles of rank $2$ are given by the direct images of line bundles on an unramified double covering $B\to C$.\linebreak
We will see that they form a positive dimensional family with fixed determinant.
On the other hand, Choi and Park \cite{Choi-Park15} show that there exists a stable $E$ whose associated ruled surface $\PP_C(E)$ admits a bisection of zero self-intersection using elementary transformations.
Then its symmetric square $S^2 E$ is strictly semi-stable due to the following correspondence.
\[
\left\{\text{$k$-sections $D$ on $\PP_C(E)$ with $D^2=2kb$}\right\}
~\leftrightarrow~
\left\{\text{line subbundles $L^{-1}\to S^k E$ with $\deg L=b$}\right\}
\]
In Section $2$, we will see a relation between orthogonal bundles and such ruled surfaces,
and find that every orthogonal bundle can be obtained by the method of Choi and Park \cite{Choi-Park15}.
For $k=3$, since $S^3 E$ is of rank $4$, if not stable, it is destabilized by a subbundle of rank $1$ or $2$.
\begin{Main2}
Let $C$ be a smooth projective curve of genus $g\geq 2$ and $E$ be a stable vector bundle on $C$ of rank $2$ with trivial determinant.
Then
\begin{enumerate}
\item $S^3 E$ is destabilized by a subbundle of rank $1$ only if $S^2 E$ is strictly semi-stable, and there exist only a finite number of such $E$,
\item $S^3 E$ is destabilized by a subbundle of rank $2$ if $S^2 E$ is strictly semi-stable, and there are only finitely many such $E$ with stable $S^2 E$.
\end{enumerate}
In particular, except for a finite number of $E$, $S^3 E$ is strictly semi-stable if and only if $S^2 E$ is not stable.
\end{Main2}
In Section $3$, we will classify the exceptional cases as the ones
satisfying $S^2 E=\eta_*R$ for some unramified cyclic $3$-covering $\eta: C'\to C$ and $R\in J^0(C')\backslash \eta^*J^0(C)$ with $R^2=\Oo_{C'}$.
It also completes the description of $E$ with $S^4 E$ being destabilized by a line subbundle (see Proposition \ref{tosmall}).
For $k\geq 4$, the remaining cases for the stability of $S^k E$ are $k=4$ and $6$ as stated in Theorem \ref{higher}.
In Section 4, we will show that each case is further reduced to the case where $S^l E$ is destabilized by a line subbundle for some $l\geq k$.
Then, with the aid of the following corollary, we obtain the result that there are\linebreak only finitely many $E$ with trivial determinant where $S^2 E$ is stable but $S^k E$ is not stable for some $k\geq 3$.
\begin{Main3}
Let $C$ be a smooth projective curve of genus $g\geq 2$ and $E$ be a stable vector bundle on $C$\linebreak of rank $2$ with trivial determinant.
If $k\geq 3$, then there exist at most finitely many $E\in\SU_C(2,\Oo_C)$ where\linebreak $S^k E$ is destabilized by a line subbundle but $S^m E$ is not destabilized by a line subbundle for any $m<k$.
\end{Main3}
Under the correspondence between the line subbundles of $S^k E$ and the $k$-sections on $X=\PP_C(E)$, the result says that if $S^2 E$ is stable and $E$ has even degree, then there are only a finite number of $X$ which admits a $k$-section of zero self-intersection for some $k\geq 3$.
Notice that the class of $k$-secant divisors of zero self-intersection lies on the boundary of the closure of the cone of curves $\overline{\NE}(X)$ in $N_1(X)$ when $E$ is stable \cite[I:~p.~70]{Lazarsfeld04}.
Thus the result also answers the question of how many $X=\PP_C(E)$ have closed $\NE(X)$ among the stable vector bundles $E$ of rank $2$ with even degree when $S^2 E$ is stable.
Due to an observation by Rosoff \cite[p.~123]{Rosoff02}, when $D$ is the $k$-section on $X=\PP_C(E)$ corresponding to a destabilizing line subbundle of $S^k E$, the induced $k$-covering $\pi: D\to C$ is necessarily unramified.
Also, the covering gives a destabilizing quotient line bundle $\pi^* E\to R$ so that $\pi^*E$ is not stable on $D$.\linebreak
Further, we can derive a stronger assertion that $\pi^*E$ is not only strictly semi-stable but it also splits into the direct sum of line bundles, and the line bundles are torsion elements in the Picard group when $k\geq 3$.
By composing with a cyclic covering over which the torsion line bundles become trivial, we can conclude that $E$ is trivialized over an unramified finite covering of $C$.
Namely, $E$ satisfies the property called \emph{\'{e}tale triviality} \cite{Biswas20}, which is known to be equivalent to saying that $E$ is a \emph{finite} bundle introduced by Nori \cite{Nori76}.
\begin{Main4}
Let $E$ be a vector bundle on $C$ of rank $2$ with trivial determinant.
Assume that $S^2 E$ is stable.
Then $E$ is finite if and only if $S^k E$ is not stable for some $k\geq 3$.
\end{Main4}
\noindent
{\bf Acknowledgement.}
I would like to thank my thesis advisor, Prof. Yongnam Lee, for introducing this topic and giving valuable guidance.
I am also indebted to an anonymous reviewer of an earlier manuscript\linebreak not only for providing helpful suggestions to improve the presentation but also for pointing out a flaw in the previous proof of Theorem \ref{higher}.
This work will be part of my Ph.D. thesis.
I was partly supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1701-04.
\vspace{1em}
\section{Semi-Stable Vector Bundles Whose Symmetric Square is Not Stable}
\subsection{Unramified Finite Coverings and Prym Varieties}
As mentioned in the introduction, the strict semi-stability of $S^k E$ has a lot to do with the unramified $k$-coverings of $C$.
In this subsection, we review the theory of unramified finite coverings.
The references are \cite[Exercise III.10.3 \& Exercise IV.2.6]{Hartshorne77} and \cite[Chapter 12]{Birkenhake-Lange04}.
Let $C$ be a smooth projective curve.
We will denote by
\begin{itemize}
\item $\Pic(C)$ the Picard group of $C$,
\item $J^n(C)$ the Jacobian of line bundles on $C$ of degree $n$, and
\item $J_m(C)$ the set of line bundles on $C$ of order $m$; the elements $L\in J^0(C)$ with $L^m=\Oo_C$.
\end{itemize}
We will also abuse notation by writing $\bb\in\Pic(C)$ for a divisor $\bb$, instead of $\bb\in\Div(C)$.\pagebreak
\newgeometry{top=90pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
Let $D$ be an unramified $k$-covering of $C$, that is, there is a finite surjective morphism $\pi: D\to C$ of degree $k$ whose ramification divisor is empty.
Then $\pi^*:\Pic(C)\to \Pic(D)$ induces $\pi^*:J^n(C)\to J^{kn}(D)$ since
$\deg\pi^*L=k\deg L$ for $L\in \Pic(C)$.
As $\pi$ is unramified, we have $\omega_D=\pi^*\omega_C$.
That is, $\omega_{D/C}=\Oo_D$.
Moreover, $\pi^*$ may have a nontrivial kernel: $\pi^*L=\Oo_D$ for some $L\in J^0(C)\backslash\{\Oo_C\}$ if and only if $\pi$ factors as $\pi: D\to C'\xrightarrow{\eta}C$ for some unramified cyclic covering $\eta:C'\to C$ satisfying $\eta^*L=\Oo_{C'}$ \cite[Proposition 11.4.3]{Birkenhake-Lange04}.
Recall that a torsion line bundle $L\in J_m(C)$ defines an unramified cyclic $m$-covering $\eta:C'\to C$ and vice versa under the relations $\eta^*L=\Oo_{C'}$ and $\eta_*\Oo_{C'}=\Oo_C\oplus L^{-1}\oplus \cdots\oplus L^{-(m-1)}$.
To a finite covering $\pi:D\to C$ of degree $k$, we associate the \emph{Norm map} $\Nm_{D/C}: \Pic(D)\to \Pic(C)$ which is defined by
\[
\Nm_{D/C}\left(\sum a_ix_i\right)=\sum a_i\pi(x_i)
\]
in terms of Weil divisors.
Then $\Nm_{D/C}$ is a group homomorphism and $\Nm_{D/C}\circ\pi^*: \Pic(C)\to \Pic(C)$ is multiplication by $k$.
For $R\in \Pic(D)$, $\pi_*R$ is a vector bundle of rank $k$ whose determinant is given by
\[
\det \pi_*R\cong \det \pi_*\Oo_{D}\otimes \Nm_{D/C}(R).
\]
Notice that $(\det \pi_*\Oo_D)^2=\Oo_C(-B)$ for the branch divisor $B$ of $\pi: D\to C$.
Thus, if $\pi$ is unramified, then $(\det \pi_*\Oo_D)^2=\Oo_C$.
If $V$ is a vector bundle on $C$, then $\pi^*V$ is semi-stable if and only if $V$ is semi-stable \cite[II: Lemma 6.4.12]{Lazarsfeld04}.
Meanwhile, if $W$ is a vector bundle on $D$, then it is known that $\pi_*W$ is stable for general $W$ when
$\pi$ is unramified \cite{Beauville00}.
The following proposition shows that $\pi_*W$ is semi-stable and determines when $\pi_*W$ is stable in the case where $\deg \pi=2$ and $\rk W=1$.
Notice that any unramified double covering is a cyclic covering.
\begin{Prop}\label{img}
Let $\pi: B\to C$ be a nontrivial unramified double covering corresponding to $M\in J_2(C)$ and $R\in J^0(B)$.
Then $E=\pi_*R$ is semi-stable and is strictly semi-stable if and only if $R\in \pi^*J^0(C)$.
Moreover, $E$ splits as $E=L\oplus (L\otimes M)$ for some $L\in J^0(C)$ when it is strictly semi-stable.
\end{Prop}
\begin{proof}
Note that the natural morphism $\pi^*\pi_*R\to R$ is surjective because $\pi$ is an affine morphism.
Since the kernel of a surjection between vector bundles is a vector bundle, the sequence
\[
0\to K\to \pi^* E\to R\to 0
\]
is exact for some vector bundle $K$ on $B$ where $\rk K=1$ as $\rk \pi^*\pi_*R=\rk \pi_*R=2$ and $\rk R=1$ in this case.
By comparing the determinants using $\det\pi^*E=\pi^*\det E$, the exact sequence becomes
\[
0\to R^{-1}\otimes\pi^*\det E\to \pi^*E\to R\to 0.
\]
Then, from
\begin{align*}
\deg E
=\deg (\det\pi_*R)
=\deg (\det\pi_*\Oo_B \otimes \Nm_{B/C}(R))
=\deg (\det(\Oo_C\oplus M)\otimes \Nm_{B/C}(R))
=0,
\end{align*}
we have $\deg \pi^*E=2\deg E=0$ and $\deg (R^{-1}\otimes \pi^*\det E)=\deg \pi^*E-\deg R=0$.
If there is an injection $L\to E$ for some line bundle $L$ of degree $d$, then we get a nonzero morphism $\pi^*L\to R$ due to the adjoint property, $\Hom(\pi^*L,R)=\Hom(L,\pi_*R)=\Hom(L,E)$.
As $\deg\pi^*L=2d$, $d\leq 0$ and the equality holds if and only if $R=\pi^*L$.
Therefore, $E$ is semi-stable, and $E$ is strictly semi-stable if and only if $R\in \pi^*J^0(C)$.
Now if $E$ is strictly semi-stable so that $R=\pi^*L$ for some $L\in J^0(C)$, then, by the projection formula,
\[
\pi_*R
=\pi_*\pi^*L
=(\pi_*\Oo_B)\otimes L
=(\Oo_C\oplus M)\otimes L=L\oplus(L\otimes M).\qedhere
\]
\end{proof}
\restoregeometry
In the case where $\pi: B\to C$ is a nontrivial unramified double covering, we denote the kernel of $\Nm_{B/C}$ by $\Pr(B/C)$ in this paper.
Then $\Pr(B/C)\subseteq J^0(B)$ and it has two components as
\[
\Pr(B/C)
=\{S\otimes (\iota^*S)^{-1}\,|\,S\in \Pic(B)\}
=\{S\otimes (\iota^*S)^{-1}\,|\,S\in J^0(B)\}\cup\{S\otimes (\iota^*S)^{-1}\,|\,S\in J^1(B)\}
\]
where $\iota:B\to B$ is the involution induced by $\pi$ \cite[Lemma 1]{Mumford71}.
We denote the first summand by $\Pr^0(B/C)$ and the second one by $\Pr^1(B/C)$.
$\Pr^0(B/C)$ is known as the \emph{Prym variety} of $B$ over $C$, which is an abelian subvariety of $J^0(B)$ of dimension $g(B)-g(C)=g-1$, and $\Pr^1(B/C)$ is a translate of $\Pr^0(B/C)$ in $J^0(B)$.\linebreak
It is also known that $J^0(B)=\Pr^0(B/C)\otimes \pi^*J^0(C)=\{R\otimes \pi^*L\,|\,R\in\Pr^0(B/C),\,L\in J^0(C)\}$, and we can describe the intersection $\Pr^0(B/C)\cap \pi^*J^0(C)=\Pr^0(B/C)\cap J_2(B)$ using the following proposition.
\begin{Prop}\label{intersection}
Let $\pi: B\to C$ be a nontrivial unramified double covering.
Then
\[
\Pr(B/C)\cap \pi^*J^0(C)=\pi^*J_2(C)=\Pr(B/C)\cap J_2(B).
\]
\end{Prop}
\begin{proof}
As $\Nm_{B/C}(\pi^*L)=L^2$ for $L\in \Pic(C)$, $\pi^*L\in\Pr(B/C)$ if and only if $L\in J_2(C)$.
Thus we get $\Pr(B/C)\cap\pi^*J^0(C)=\pi^*J_2(C)$, and its order is $2^{2g-1}$ because $|J_2(C)|=2^{2g}$ and $|\ker\pi^*|=2$ for the genus $g$ of $C$.
From $\pi^*J_2(C)\subseteq J_2(B)$, we have $\pi^*J_2(C)=\Pr(B/C)\cap\pi^*J_2(C)\subseteq \Pr(B/C)\cap J_2(B)$ and the inclusion becomes equality after calculating the order $|\Pr(B/C)\cap J_2(B)|=2^{2g-1}$.
Since $\Pr^0(B/C)$ is an abelian subvariety of $J^0(B)$ of dimension $g-1$, $|\Pr^0(B/C)\cap J_2(B)|=2^{2(g-1)}$.
So there exists $L_1\in J_2(C)$ with $\pi^*L_1\in \Pr^1(B/C)$ but $\pi^*L_1\not\in\Pr^0(B/C)$.
Then, using the translation $\Pr^1(B/C)=\Pr^0(B/C)\otimes \pi^*L_1$, we can deduce that $|\Pr^1(B/C)\cap J_2(B)|=2^{2(g-1)}$.
Hence we obtain $|\Pr(B/C)\cap J_2(B)|=2^{2g-1}$ and the equality $\pi^*J_2(C)=\Pr(B/C)\cap J_2(B)$.
\end{proof}
\subsection{Classification of Orthogonal Bundles}
Orthogonal bundles have been studied by several authors:
Ramanathan \cite{Ramanathan96},
Mumford \cite{Mumford71},
Ramanan \cite{Ramanan81}, Hitching \cite{Hitching07}, and Biswas-G\'{o}mez \cite{Biswas-Gomez10} for instance.\linebreak
In this paper, we use the following definition presented in Hitching \cite{Hitching07}, which is also similar to that given in Biswas-G\'{o}mez \cite{Biswas-Gomez10}.
\begin{Def}
Let $E$ be a vector bundle and $M$ be a line bundle on $C$.
Then $E$ is said to be \emph{orthogonal with values in $M$} if there is a nondegenerate symmetric bilinear form $E\otimes E\to M$.
\end{Def}
Let $E$ be a vector bundle on $C$ of rank $2$ and degree $0$.
If $S^2 E$ is strictly semi-stable, then there is a quotient line bundle $S^2 E\to M$ of $\deg M=0$, which gives a nonzero morphism $E\to E^\vee\otimes M$ from
\[
\Hom(S^2 E, M)
\subseteq \Hom(E\otimes E,M)
\cong H^0(E^\vee\otimes E^\vee\otimes M)
\cong \Hom(E,E^\vee\otimes M).
\]
If $E$ is stable, then the morphism is necessarily an isomorphism, and the induced symmetric bilinear form $E\otimes E\to M$ is nondegenerate on each fiber.
Hence $E$ admits an orthogonal structure.
Conversely, if $E$ is an orthogonal bundle with values in $M$, then it associates a morphism $S^2 E\to M$ which must be surjective because the form $E\otimes E\to M$ is nondegenerate.
\begin{Rmk}\label{twotor}
Let $E$ be a stable vector bundle on $C$ of rank $2$ and degree $0$.
Then $E$ is orthogonal if and only if $S^2 E$ is strictly semi-stable.
If $S^2 E$ is destabilized by a quotient line bundle $S^2 E\to M$, then $E$ is orthogonal with values in $M$.
Also, by comparing the determinants in the isomorphism $E\cong E^\vee\otimes M$, we have $M^2\cong (\det E)^2$.
On the other hand, if $S^2 E$ is destabilized by a line subbundle $M^{-1}\to S^2 E$, then $M^2=(\det E)^{-2}$ follows from the isomorphism $E^\vee\cong E\otimes M$. In particular, if $E$ is orthogonal with values in $M$ and $E$ has trivial determinant, then $M\in J_2(C)$.
\end{Rmk}
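For instance, the determinant comparison in the quotient case is a one-line computation, using $\det(E^\vee\otimes M)=\det(E^\vee)\otimes M^2$ for rank $2$:
\[
\det E
\cong \det(E^\vee\otimes M)
=(\det E)^{-1}\otimes M^2,
\qquad\text{hence}\quad
M^2\cong(\det E)^2.
\]
The subbundle case is identical, with $E^\vee\cong E\otimes M$ in place of $E\cong E^\vee\otimes M$.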
\newgeometry{top=75pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
By Mumford's classification \cite{Mumford71}, if $E$ is an orthogonal bundle of rank $2$ with values in $\Oo_C$, then
\begin{enumerate}
\item[(1)] $E=L^{-1}\oplus L$ for some line bundle $L$, or
\item[(2)] $E=\pi_*R$ where $\pi: B\to C$ is an unramified double covering and $R$ is a line bundle on $B$ such that $\Nm_{B/C}(R)=\Oo_C$.
\end{enumerate}
In (2), if $\pi$ is the trivial double covering, then $E=\pi_*R$ is the direct sum of line bundles of degree~$0$, and it has trivial determinant as $\det \pi_*\Oo_B=\det(\Oo_C\oplus \Oo_C)=\Oo_C$.
So $E=L^{-1}\oplus L$ for some $L\in J^0(C)$.
This case is covered by (1).
Otherwise, if $\pi$ is a nontrivial double covering corresponding to $M\in J_2(C)\backslash\{\Oo_C\}$, then we know from Proposition~\ref{img} that $E=\pi_*R$ is semi-stable, and it is strictly semi-stable if and only if $E=L\oplus (L\otimes M)$ for some $L\in J^0(C)$.
In this case, $L$ must satisfy $L^2=\Oo_C$ as $\det E=M$.
\begin{Rmk}\label{sss}
If $E$ is strictly semi-stable, then $E$ is orthogonal with values in $\Oo_C$ if and only if
\begin{itemize}
\item $E=L^{-1}\oplus L$ if the determinant of $E$ is trivial,
\item $E=L\oplus (L\otimes M)$ for some $L\in J_2(C)$ if $M=\det E$ is nontrivial.
\end{itemize}
In particular, there is no stable $E$ with trivial determinant whose orthogonal form takes its values in $\Oo_C$.
If $E$ is strictly semi-stable, then $S^2 E$ is always strictly semi-stable because a surjection $E\to L$ induces a surjection $S^2 E\to L^2$.
In the same way, if $E$ is not stable, then $S^k E$ is not stable for any $k\geq 2$.
\end{Rmk}
We denote by $\SU_C(2,\Oo_C)$ the space of S-equivalence classes of semi-stable vector bundles on $C$ of\linebreak rank $2$ with trivial determinant.
After choices of a nontrivial unramified double covering $\pi:\,B\to C$ corresponding to $M\in J_2(C)\backslash\{\Oo_C\}$ and a line bundle $A$ satisfying $A^2=M$, we can define a map $\Phi_{A}:\,\Pr(B/C)\to \SU_C(2,\Oo_C);\,R\mapsto \pi_*R\otimes A$ as $\pi_*R$ is semi-stable by Proposition \ref{img}.
For fixed $M$, the images of $\Phi_A$ coincide under changes of $A$ because the choices of $A$ differ by a twist by some $L\in J_2(C)$, and $\Pr(B/C)$ is invariant under translation by $\pi^*L$ for $L\in J_2(C)$.
\begin{Prop}\label{dim}
If $E\in \SU_C(2,\Oo_C)$ is stable and $S^2 E$ is strictly semi-stable, then $E=\Phi_{A}(R)=\pi_*R\otimes A=\pi_*(R\otimes \pi^*A)$ for some $\pi: B\to C$, $A\in J^0(C)$, and $R\in\Pr(B/C)$ as above.
Moreover, the locus of $E\in \SU_C(2,\Oo_C)$ where $E$ is stable and $S^2 E$ is strictly semi-stable has dimension $g-1$.
\end{Prop}
\begin{proof}
Due to Remark \ref{twotor}, if $S^2 E$ is not stable, then $E$ is orthogonal with values in $M\in J_2(C)\backslash\{\Oo_C\}$.
Thus for any line bundle $A$ with $A^2=M$, $E\otimes A$ becomes an orthogonal bundle with values in $\Oo_C$.
Then, by Mumford's classification, $E\otimes A=\pi_*R$ for some nontrivial unramified double covering $\pi:B\to C$ and $R\in\Pr(B/C)$ where $\pi$ corresponds to $M$ since $\det\pi_*\Oo_B=\det\pi_*R=A^2=M$.
In order to prove the next claim, it is enough to show that $\pi_*:\Pr(B/C)\to \SU_C(2,M)$ is generically $2$-to-$1$ for a fixed nontrivial unramified double covering $\pi:B\to C$ corresponding to $M\in J_2(C)\backslash\{\Oo_C\}$ because $\dim\Pr(B/C)=g-1$ and the choice of $M$ is finite.
Let $R,\,R'\in\Pr(B/C)$ and $E=\pi_*R=\pi_*R'$.
As $\pi^*\det E=\pi^*M=\Oo_B$, we get the following exact sequences on $B$ (see the proof of Proposition \ref{img}).
\[
0\to R^{-1}\to \pi^*E\to R\to 0
\quad\text{and}\quad
0\to R'^{-1}\to \pi^*E\to R'\to 0
\]
If $R'\neq R$, then there exists a nonzero morphism $R^{-1}\to R'$ obtained by composing the morphisms $R^{-1}\to\pi^*E$ and $\pi^*E\to R'$.
Since both $R$ and $R'$ have degree $0$, $R'\cong R^{-1}$.
So we have either $R'=R$ or $R'=R^{-1}$.
Here, $R^{-1}=\iota^*R$ for the involution $\iota:B\to B$ induced by $\pi$ because $R=S\otimes (\iota^* S)^{-1}$ for some $S\in \Pic(B)$.
Note that $R=\iota^*R$ if and only if $R\in \pi^*J^0(C)$, and $\Pr(B/C)\cap\pi^*J^0(C)$ is finite by Proposition \ref{intersection}.
\end{proof}
Recall that the dimension of the moduli $\SU_C(2,\Oo_C)$ is $3g-3$, and the locus of $E\in \SU_C(2,\Oo_C)$ with $S^2 E$ being strictly semi-stable is the union of the above locus and the locus of strictly semi-stable $E$.
The latter locus is given by the image of $J^0(C)\to \SU_C(2,\Oo_C);\,L\mapsto [L^{-1}\oplus L]$, and its dimension is $g$.
\subsection{\emph{k}-Sections on a Ruled Surface}
The material of this and the next subsection is well-known, but we include it for the sake of notational clarity.
Let $E$ be a vector bundle on $C$ of rank $2$.
The projective space bundle $X=\PP_C(E)$ with projection $\Pi: X\to C$ is called a \emph{ruled surface} over $C$.
We choose the convention that $\PP_C(E)$ is regarded as the projective space of lines in the fibers.
By Tsen's theorem, there exists a section of $\Pi$, and we may regard its image as an effective divisor on $X$, again denoted by $C\subseteq X$.
The Picard group of $X$ is given by $\Pic(X)=\ZZ C\oplus\Pi^*\Pic(C)=\{kC+\bb f\,|\,k\in\ZZ,\,\bb\in\Pic(C)\}$.
A \emph{$k$-secant divisor} is a divisor $D$ on $X$ linearly equivalent to $kC+\bb f$ for some $\bb\in\Pic(C)$, and is called a \emph{$k$-section} if $D$ is effective.
If $k=1$ (resp. $2$, $3$), then a $k$-section $D$ is said to be a \emph{section} (resp. \emph{bisection}, \emph{trisection}).\linebreak
We will denote linear equivalence by $\sim$ and numerical equivalence by $\equiv$.
We fix a unisecant divisor $C_1$ on $X$ which satisfies $\Pi_*\Oo_X(C_1)=E$.
Then $\Pi_*\Oo_X(kC_1)=S^k E$ for $k\geq 0$, $\Pi_*\Oo_X(kC_1)=0$ for $k<0$, and $R^1\Pi_*\Oo_X(kC_1)=\Pi_*\Oo_X(-(k+2)C_1)^\vee\otimes (\det E)^\vee$.
We can also deduce that\linebreak ${C_1}^2=\deg E$.
There is a correspondence between $k$-sections on $X$ and line subbundles of $S^k E$ given by
\begin{align*}
\text{effective $D\sim kC_1 + \bb f$}
\ \leftrightarrow\
D\in H^0(\Oo_X(kC_1+\bb f))
\ \leftrightarrow\
s\in H^0(S^k E\otimes L)
\ \leftrightarrow\
\text{inclusion $L^{-1}\xrightarrow{s} S^kE$}
\end{align*}
for $L=\Oo_C(\bb)$, and the self-intersection number of $D$ is equal to
\[
D^2=(kC_1+\bb f)^2=k^2{C_1}^2+2k\deg\bb=k^2\deg E+2k\deg L.
\]
For $k=1$, a section $C_0$ is called a \emph{minimal section} if it attains the minimal self-intersection number $D^2$\linebreak among the sections $D\sim C_1+\bb f$ for some $\bb\in \Pic(C)$.
Though the choice of $C_0$ may not be unique, the number ${C_0}^2$ is uniquely determined by $E$, and it is called the \emph{Segre invariant} $s_1(E)$.
From the definition\linebreak of stability, it is easy to check that $E$ is stable (resp. semi-stable) if and only if ${C_0}^2>0$ (resp. ${C_0}^2 \geq 0$).
Let $E$ be a semi-stable vector bundle on $C$ of rank $2$ and degree $0$.
Then $S^kE$ is a semi-stable vector bundle of degree $0$, and if $L^{-1}\to S^kE$ is a line subbundle of $\deg L=b$, then it must follow that $b\geq 0$.
That is, $D\equiv kC_1+bf$ is effective only if $b\geq 0$.
Thus the cone of curves $\NE(X)\subseteq N_1(X)$ is contained in\linebreak the cone $\mathcal{C}\subseteq N_1(X)$ which is $\RR_{\geq0}$-spanned by the rays $[f]$ and $[C_1]$.
It is further possible to show that $\overline{\NE}(X)=\mathcal{C}$ when $E$ is semi-stable \cite[I:~p.~70]{Lazarsfeld04}.
Therefore, $S^k E$ is destabilized by a line subbundle if there\linebreak exists a $k$-section $D\equiv kC_1$ on $X$ for some $k>0$, which is equivalent to saying that $\NE(X)$ is closed.
\begin{Rmk}\label{interesting}
Let $E$ be a semi-stable vector bundle on $C$ of rank $2$ with even degree.
There are various characterizations of a $k$-section $D$ on $X$ which corresponds to a destabilizing line subbundle $L^{-1}\to S^k E$.
\begin{enumerate}
\item $D$ has zero self-intersection.
\item $D$ lies on the boundary of $\NE(X)$.
\item $\pi=\Pi|_D:D\to C$ is an unramified $k$-covering if $D$ is irreducible and reduced.
\item $D$ is a smooth curve of genus $kg-k+1$ where $g$ is the genus of $C$ if $D$ is irreducible and reduced.
\end{enumerate}
The proof of (3) is given in Rosoff \cite[p.~123: the first remark]{Rosoff02}.
By (3), $D$ is smooth, so the other equivalences can be shown using the adjunction and Hurwitz formulas.
\end{Rmk}
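The genus in (4) follows from (3) by a one-line Hurwitz computation: since $\pi: D\to C$ is an unramified $k$-covering,
\[
2g(D)-2=k(2g-2),
\qquad\text{so}\quad
g(D)=k(g-1)+1=kg-k+1.
\]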
\begin{Prop}\label{k-2}
Let $E$ be a vector bundle on $C$ of rank $2$ and degree $0$.
Let $X=\PP_C(E)$ be the ruled surface $\Pi: \PP_C(E)\to C$ and $C_1$ be a unisecant divisor on $X$ with $\Pi_*\Oo_X(C_1)\cong E$.
If there is an irreducible and reduced $k$-section $D\sim kC_1+\bb f$ for some $\bb\in\Pic(C)$ of $\deg \bb=0$,
then $\Oo_D((k-2)C_1+\bb f)=\Oo_D$.
\end{Prop}
\begin{proof}
By Remark \ref{interesting}, $D$ is smooth.
Then we can use the adjunction formula,
\[
\omega_D
=\Oo_X(K_X+D)|_D
=\Oo_X((-2C_1+K_C f)+(kC_1+\bb f))|_D,
\]
and it shows that $\omega_{D/C}=\Oo_D((k-2)C_1+\bb f)$.
Again by Remark \ref{interesting}, $D$ is unramified over $C$ so that
$\omega_{D/C}=\Oo_D$.
Thus we have $\Oo_D((k-2)C_1+\bb f)=\Oo_D$.
\end{proof}
\restoregeometry
\subsection{Elementary Transformations}
Let $E$ be a vector bundle on $C$ of rank $2$ and $X=\PP_C(E)$ be the ruled surface $\Pi:\PP_C(E)\to C$.
There are two notions of elementary transformations.
One is for vector bundles and the other is for ruled surfaces.
First, we explain the elementary transformation of vector bundles.
Let $P\in C$ and fix a line $x$ in the fiber $E|_P$.
The elementary transformation $\elm_x E$ of $E$ at $x$ is defined by the following exact sequence.
\[
0\to \elm_x E\to E\xrightarrow{\alpha(x)} \CC_P\to 0
\]
Here, $\alpha(x)$ is the composition $E\to E|_P\to\CC_P$ whose kernel on the fiber $E|_P$ is the line $x$.
Note that $\det (\elm_x E)=\det E\otimes\Oo_C(-P)$.
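This determinant formula can be read off from the defining exact sequence by comparing first Chern classes:
\[
c_1(E)=c_1(\elm_x E)+c_1(\CC_P)=c_1(\elm_x E)+[P],
\qquad\text{so}\quad
\det(\elm_x E)=\det E\otimes\Oo_C(-P).
\]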
Next, for the elementary transformation of ruled surfaces, let $x\in X$ be a closed point.
Notice that the point $x$ in the fiber $Pf=\Pi^{-1}(P)$ can be identified with a line in the fiber $E|_P$ over $P=\Pi(x)$.
Then the elementary transformation $Y:=\elm_x X$ of $X$ at $x$ is the surface given by the following process.
\begin{enumerate}
\item $\widetilde X$ is the blow-up of $X$ at $x$.
The strict transform $\widetilde{Pf}\subseteq\widetilde{X}$ of the fiber $Pf\subseteq X$ is a $(-1)$-curve.
\item $Y$ is the blow-down of $\widetilde{X}$ along $\widetilde{Pf}$.
Then, the strict transform $Z'\subseteq Y$ of the exceptional divisor $Z\subseteq\widetilde{X}$ for $\widetilde{X}\to X$ becomes a smooth rational curve of zero self-intersection.
\end{enumerate}
Then $Y$ is again a ruled surface $\Lambda: \PP_C(F)\to C$ for some vector bundle $F$ on $C$ of rank $2$.
We denote the blow-up and down by
$\varphi:\widetilde{X}\to X$ and $\psi:\widetilde{X}\to Y$, and the center of the blow-down $\psi$ by $y\in Y$.
\[\xymatrix @C=3pc @R=0pc{
&Z\subseteq\widetilde{X}\supseteq \widetilde{Pf}\ar[ld]_\varphi\ar[rd]^\psi&\\
x\in X\ar[rd]_\Pi&&Y\ni y\ar[ld]^\Lambda\\
&P\in C&
}\]
Let $C_1$ be a unisecant divisor on $X$ such that $E=\Pi_*\Oo_X(C_1)$.
Then, for a section $D\sim C_1+\bb f$ on $X$, we have $\Pi_*\Oo_X(D)=E\otimes L$ for $L=\Oo_C(\bb)$.
We now describe the strict transform $D'\subseteq Y$ of $D\subseteq X$ after the elementary transformation, according to whether $x\in D\subseteq X$ or $x\not\in D\subseteq X$.
Let $F=\Lambda_*\Oo_{Y}(D')$.\linebreak
If $x\in D\subseteq X$, then $y\not\in D'\subseteq Y$, and
\begin{align*}
F
&=\Lambda_*\Oo_{Y}(D')
=\Lambda_*(\psi_*(\psi^*\Oo_Y(D')))\\
&=(\Lambda\circ\psi)_*(\Oo_{\widetilde{X}}(\widetilde{D}))
=(\Pi\circ\varphi)_*(\Oo_{\widetilde{X}}(\widetilde{D}+Z)\otimes\Oo_{\widetilde{X}}(-Z))\\
&=\Pi_*(\varphi_*(\varphi^*\Oo_X(D)\otimes\Oo_{\widetilde{X}}(-Z)))
=\Pi_*(\Oo_X(D)\otimes \Ii_x)
\end{align*}
where $\widetilde{D}\subseteq\widetilde{X}$ is the strict transform of $D\subseteq X$.
On the other hand, consider the exact sequence
\[
0\to\Oo_X(D)\otimes \Ii_x\to\Oo_X(D)\to\CC_x\to 0
\]
on $X$.
By pushing forward the sequence, we obtain the following exact sequence on $C$.
\[
0\to F\to E\otimes L\xrightarrow{\beta}\CC_P\to 0
\]
As $\beta=\alpha(x)$, it shows that $F=\elm_x (E\otimes L)=\elm_x E\otimes L$.
Next, if $x\not\in D\subseteq X$, then $y\in D'\subseteq Y$, and
\begin{align*}
F
&=\Lambda_*\Oo_Y(D')
=\Lambda_*(\psi_*(\psi^*\Oo_Y(D')))\\
&=(\Lambda\circ\psi)_*(\Oo_{\widetilde{X}}(\widetilde{D}+\widetilde{Pf}))
=(\Pi\circ\varphi)_*(\Oo_{\widetilde{X}}(\widetilde{D}+\widetilde{Pf}+Z)\otimes\Oo_{\widetilde{X}}(-Z))\\
&=\Pi_*(\varphi_*(\varphi^*\Oo_X(D+Pf)\otimes\Oo_{\widetilde{X}}(-Z)))
=\Pi_*(\Oo_X(D)\otimes \Ii_x)\otimes\Oo_C(P)
\end{align*}
where $\widetilde{Pf}\subseteq \widetilde{X}$ and $\widetilde{D}\subseteq \widetilde{X}$ are as before.
By the same argument, we get the exact sequence
\[
0\to F\otimes\Oo_C(-P)\to E\otimes L\xrightarrow{\beta}\CC_P\to 0
\]
on $C$.
Because $\beta=\alpha(x)$, we deduce that $F=\elm_x(E\otimes L)\otimes\Oo_C(P)=\elm_x E\otimes L(P)$.
\begin{Prop}\label{subbundle_after_elementary_transformation}
Let $D$ be the section on $X=\PP_C(E)$ corresponding to a line subbundle $L^{-1}\to E$.
Then, for the elementary transformation $Y=\elm_x X$ of $X$ at a point $x$ over $P\in C$, and the strict transform $D'\subseteq Y$ of $D\subseteq X$, there exists the corresponding line subbundle
\[
\begin{cases}
\text{$L^{-1}\to \elm_x E$ to $D'\subseteq Y$ and $(D')^2=D^2-1$} & \text{if $x\in D\subseteq X$}, \\
\text{$L^{-1}(-P)\to \elm_x E$ to $D'\subseteq Y$ and $(D')^2=D^2+1$} & \text{if $x\not\in D\subseteq X$}.
\end{cases}
\]
\end{Prop}
\begin{proof}
Note that, in the case where $x\in D$, the strict transform $D'$ on $Y$ corresponds to the line subbundle $L^{-1}\to\elm_xE$.
So the self-intersection number is given by
\[
(D')^2
=\deg(\elm_x E)+2\deg L
=(\deg E-1)+2\deg L
=(\deg E+2\deg L)-1
=D^2-1.
\]
The proof is similar for the case where $x\not\in D$.
\end{proof}
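For completeness, in the case where $x\not\in D$, the strict transform $D'$ corresponds to the line subbundle $L^{-1}(-P)\to\elm_x E$, and
\[
(D')^2
=\deg(\elm_x E)+2\deg L(P)
=(\deg E-1)+(2\deg L+2)
=D^2+1.
\]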
In both cases, whether $x\in D$ or not, we can see that $\elm_x (\Pi_*\Oo_X(D))$ and $\Lambda_*\Oo_{Y}(D')$ differ by a twist of a line bundle, and hence $Y=\PP_C(F)=\PP_C(\elm_x E)$ for $Y=\elm_x X$.
The elementary transformation can be defined at multiple points of $X$ unless the set of points contains distinct points in the same fiber of $\Pi:X\to C$.
We introduce an example of an elementary transformation taken at a double point $x\leadsfrom y$ in which $x$ is a closed point of $X$ over $P\in C$ and $y$ is a point infinitely near to $x$ but not the infinitely near point of the fiber $Pf\subseteq X$.
Equivalently, $y$ is a closed point of $\elm_x X$ over the same point $P\in C$ but not the center of the blow-down $\Bl_x X\to \elm_x X$.
\begin{Ex}
Let $X=\PP_C(E)$ be a ruled surface $\Pi:\PP_C(E)\to C$ and $D$ be a section on $X$ such that $\Pi_*\Oo_X(D)=E\otimes L$ for some line bundle $L$.
Let $x$ be a closed point of $D$ and $y$ be the infinitely near point of $D$ at $x$ given by the intersection point $\widetilde{D}\cap Z$ of the exceptional fiber $Z$ of the blow-up $\text{Bl}_x X\to X$ and the strict transform $\widetilde{D}$ of $D$ on $\text{Bl}_x X$.
Let $P=\Pi(x)$ and $\Ii_{x\leadsfrom y}$ be the ideal sheaf on $X$ which defines $y$ infinitely near to $x$.
By pushing forward the exact sequence
\[
0
\to\Oo_X(D)\otimes \Ii_{x\leadsfrom y}
\to\Oo_X(D)
\to\Oo_X/\Ii_{x\leadsfrom y}
\to 0
\]
on $X$, we have the following exact sequence on $C$.
\[
0\to \elm_{x\leadsfrom y}E\otimes L\to E\otimes L\to \Oo_{2P}\to 0
\]
By Proposition \ref{subbundle_after_elementary_transformation}, we obtain that $\Lambda_*(\Oo_{Y}(D'))=\elm_{x\leadsfrom y}E\otimes L$ where $Y=\elm_{x\leadsfrom y}X$ is a ruled surface $\Lambda:Y\to C$ and $D'\subseteq Y$ is the strict transform of $D\subseteq X$.
\end{Ex}
\subsection{Generation of Orthogonal Bundles by Elementary Transformations}
Choi and Park \cite{Choi-Park15} use elementary transformations to construct a ruled surface $X=\PP_C(E)$ where $E$ is stable and $X$ admits\linebreak a bisection of zero self-intersection.
As we have seen so far, in this case, $S^2 E$ is strictly semi-stable.
Also, $E$ is orthogonal if the degree of $E$ is normalized to be $0$ since $E$ is stable.
In this subsection, we will briefly review the construction, and show that the elementary transformation construction generates all the orthogonal bundles.
Let $M\in J_2(C)$ be nontrivial and $Y$ be the ruled surface $\Lambda: \PP_C(\Oo_C\oplus M)\to C$.
Then $Y$ has only two minimal sections $C_0$ and $C_\infty$ which respectively correspond to $\Oo_C\to\Oo_C\oplus M$ and $M\to\Oo_C\oplus M$.
Because there is a $1$-dimensional family of bisections on $Y$ linearly equivalent to $2C_0$ whereas $Y$ has only finitely many sections numerically equivalent to $C_0$, there exists an irreducible bisection in the linear equivalence class of $2C_0$.
So fix an irreducible bisection $B'\sim 2C_0$ on $Y$ which corresponds to a line subbundle $\Oo_C\to S^2(\Oo_C\oplus M)=\Oo_C\oplus M\oplus\Oo_C$.
Then we can obtain the desired ruled surfaces by taking elementary transformations of $Y$ at general points of $B'$.
Let $x_1,\,x_2,\,\ldots,\,x_{2n}$ be arbitrary closed points of $B'$ on $Y$ and $X$ be the ruled surface $\Pi:X\to C$ obtained by taking elementary transformations of $Y$ at $x_1,\,x_2,\,\ldots,\,x_{2n}$.
To avoid technical issues, we will not deal with the cases where the points involve distinct closed points in the same fiber of $\Lambda$.
However, we allow repeated points.
If $x_1=x_2=\cdots=x_m$ for example, then we can take an elementary transformation at $x_1\leadsfrom x_2\leadsfrom \cdots \leadsfrom x_m$ where $x_{i+1}$ is infinitely near to $x_1$ in the direction of the $i$-th tangent to $B'$; that is, $x_{i+1}$ is given recursively as the intersection point $\widetilde{B'_i}\cap Z_i$ on the $i$-th blow-up $Y_i\to Y$, where $Z_i$ is the exceptional divisor of the blow-up $Y_i\to Y_{i-1}$ centered at $x_i$ with initial $Y_0=Y$, and $\widetilde{B'_i}\subseteq Y_i$ is the strict transform of $B'\subseteq Y$.
Since ${B'}^2=0$, the smoothness of $B'$ follows from Remark \ref{interesting}.
Then it is easy to check that the strict transform $B\subseteq X$ of $B'\subseteq Y$ satisfies $B^2=0$ as well.
Again by Remark \ref{interesting}, $B$ is smooth, so the strict transform from $B'$ to $B$ is an isomorphism.
Thus we can regard the points $x_1,\,x_2,\,\ldots,\,x_{2n}$ as points of $B$, and $\pi=\Pi|_B: B\to C$ as the unramified double covering corresponding to $M$, the same covering as $B'\to C$.
\begin{center}
\begin{figure}[h]
\begin{tikzpicture}[scale=0.92]
\pgfmathsetmacro{\wi}{3}
\pgfmathsetmacro{\he}{2}
\pgfmathsetmacro{\ri}{1.18}
\pgfmathsetmacro{\up}{0.5}
\pgfmathsetmacro{\t}{0.1}
\coordinate (a) at (0*\ri*\wi,0);
\coordinate (b) at (1*\ri*\wi,\up);
\coordinate (c) at (2*\ri*\wi,0);
\coordinate (d) at (3*\ri*\wi,\up);
\coordinate (e) at (4*\ri*\wi,0);
\tikzset{>=latex}
\draw[thick,<-]
(0*\ri*\wi+\wi+\t, \he/2+\t) --
(1*\ri*\wi-\t, \he/2-\t+\up);
\draw[thick,->]
(1*\ri*\wi+\wi+\t, \he/2-\t+\up) --
(2*\ri*\wi-\t, \he/2+\t);
\draw[thick,<-]
(2*\ri*\wi+\wi+\t, \he/2+\t) --
(3*\ri*\wi-\t, \he/2-\t+\up);
\draw[thick,->]
(3*\ri*\wi+\wi+\t, \he/2-\t+\up) --
(4*\ri*\wi-\t, \he/2+\t);
\node at (1.5+0.0*\wi,-0.3)
{\footnotesize $Y=\PP_C(\Oo_C\oplus M)$};
\node at (1.5+2.37*\wi,-0.3)
{\footnotesize $\PP_C(\elm_{x_1}(\Oo_C\oplus M))$};
\node at (1.5+4.73*\wi,-0.3)
{\footnotesize $X=\PP_C(\elm_{x_1,x_2}(\Oo_C\oplus M))$};
\def\surface
{(0,0)--(\wi,0)--(\wi,\he)--(0,\he)--cycle;}
\def\bisection
{(\wi/2,\he/2-0.1) ellipse (1.2 and 0.5)}
\begin{scope}[shift={($(a)$)}]
\draw[thick] {(0.3,\he-0.35)--(\wi-0.3,\he-0.35)};
\draw[black,densely dashed] (\wi/3,0)--(\wi/3,\he);
\draw[thick] \bisection;
\draw[thick] \surface;
\node at (\wi*0.1,\he*0.7) {\small $C_0$};
\node at (\wi*0.1,\he*0.2) {\small $B'$};
\node at (\wi*0.43,\he*0.3) {\small $x_1$};
\node[circle,fill=red,inner sep=0pt,minimum size=4pt] at (\wi/3,\he*0.22) {};
\end{scope}
\begin{scope}[shift={($(b)$)}]
\draw[thick] {(0.3,\he-0.35)--(\wi-0.3,\he-0.35)};
\draw[black,densely dashed] (\wi/3,\he) .. controls (\wi/3,\he/3*2-0.1) .. (0.27*\wi,\he/5*2-0.1);
\draw[red,densely dashed] (\wi/3,0) .. controls (\wi/3,\he/3-0.1) .. (0.27*\wi,\he/5*3-0.1);
\draw[thick] \bisection;
\draw[thick] \surface;
\end{scope}
\begin{scope}[shift={($(c)$)}]
\draw[thick] {(0.3,\he-0.35) .. controls (\wi/2,\he/2+0.1) .. (\wi-0.3,\he-0.35)};
\node[circle,draw=white,fill=white,inner sep=0pt,minimum size=5pt] at (\wi/3*2,\he/3*2) {};
\draw[red,densely dashed] (\wi/3,0)--(\wi/3,\he);
\draw[black,densely dashed] (\wi/5*4,0)--(\wi/5*4,\he);
\draw[thick] \bisection;
\draw[thick] \surface;
\node at (\wi*0.7,\he*0.35) {\small $x_2$};
\node at (\wi/3+0.1,\he*0.67+0.3) {\small $\iota(x_1)$};
\node[circle,fill=black,inner sep=0pt,minimum size=4pt] at (\wi/3,\he*0.68) {};
\node[circle,fill=green,inner sep=0pt,minimum size=4pt] at (\wi/5*4,\he*0.29) {};
\end{scope}
\begin{scope}[shift={($(d)$)}]
\draw[thick] {(0.3,\he-0.35) .. controls (\wi/2,\he/2+0.1) .. (\wi-0.3,\he-0.35)};
\node[circle,draw=white,fill=white,inner sep=0pt,minimum size=5pt] at (\wi/3*2,\he/3*2) {};
\draw[black,densely dashed] (\wi/5*4,\he) .. controls (\wi/5*4,\he/3*2-0.1) .. (0.75*\wi,\he/5*2-0.1);
\draw[green,densely dashed] (\wi/5*4,0) .. controls (\wi/5*4,\he/3-0.1) .. (0.75*\wi,\he/5*3-0.1);
\draw[thick] \bisection;
\draw[thick] \surface;
\node at (\wi/3+0.1,\he*0.67+0.3) {\small $\iota(x_1)$};
\node[circle,fill=black,inner sep=0pt,minimum size=4pt] at (\wi/3,\he*0.68) {};
\end{scope}
\begin{scope}[shift={($(e)$)}]
\draw[green,densely dashed] (\wi/3*2,0)--(\wi/3*2,\he);
\draw[thick] {(0.3,\he-0.35) .. controls (\wi/2,\he/2+0.1) .. (\wi-0.3,\he-0.35)};
\draw[thick] \bisection;
\draw[thick] \surface;
\node at (\wi*0.1,\he*0.7) {\small $D$};
\node at (\wi*0.1,\he*0.2) {\small $B$};
\node at (\wi/3+0.1,\he*0.67+0.3) {\small $\iota(x_1)$};
\node at (\wi/3*2-0.1,\he*0.67+0.3) {\small $\iota(x_2)$};
\node[circle,fill=black,inner sep=0pt,minimum size=4pt] at (\wi/3,\he*0.67) {};
\node[circle,fill=black,inner sep=0pt,minimum size=4pt] at (\wi/3*2,\he*0.67) {};
\end{scope}
\end{tikzpicture}
\vspace{-10pt}
\caption{Elementary transformation at $2n$ points for $n=1$}
\end{figure}
\end{center}
\vspace{-2em}
Let $D$ be a section on $X$ given by the strict transform of $C_0\subseteq Y$.
As we can observe from the diagram of the case $n=1$, we have
\[
\Oo_X(D)|_B=\Oo_B(\iota(x_1)+\iota(x_2)+\cdots+\iota(x_{2n}))
\]
where $\iota:B\to B$ is the involution induced by $\pi$.
Let $P_i=\Pi(x_i)$ for $i=1,2,\,\ldots,\,2n$ and $L=\Oo_C(\bb)$ be a line bundle on $C$ such that $L^2=\Oo_C(P_1+P_2+\cdots+P_{2n})$.
Then
\[
\Nm_{B/C}(\Oo_B(\iota(x_1)+\iota(x_2)+\cdots+\iota(x_{2n}))\otimes \pi^*L^{-1})=\Oo_C,
\]
and hence $E:=\pi_*(\Oo_B(\iota(x_1)+\iota(x_2)+\cdots+\iota(x_{2n}))\otimes \pi^*L^{-1})=\pi_*\Oo_B(D-\bb f)$ is an orthogonal bundle with values in $\Oo_C$ whose rank is $2$ and determinant is $M$.
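Here the determinant can be read off from the standard formula $\det\pi_*R=\Nm_{B/C}(R)\otimes\det\pi_*\Oo_B$ for the finite flat double covering $\pi$: since $\pi_*\Oo_B=\Oo_C\oplus M^{-1}$ and $M^{-1}=M$, we have
\[
\det E
=\Nm_{B/C}(\Oo_B(D-\bb f)|_B)\otimes \det\pi_*\Oo_B
=\Oo_C\otimes M
=M.
\]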
Since $B$ is effective, there exists the exact sequence
\[
0
\to \Oo_X(D-\bb f-B)
\to \Oo_X(D-\bb f)
\to \Oo_X(D-\bb f)|_B
\to 0
\]
on $X$, and by pushing forward the sequence to $C$, we get $E= \Pi_*\Oo_X(D-\bb f)= \Pi_*\Oo_X(D)\otimes L^{-1}$.
Applying Proposition \ref{subbundle_after_elementary_transformation}, we have $\elm_{x_1,x_2,\ldots,x_{2n}} (\Oo_C\oplus M)=\Pi_*\Oo_X(D)\otimes \Oo_C(-P_1-P_2-\cdots -P_{2n})$ because none of the $x_i$ is contained in $C_0$ due to $C_0.B'=0$.
Therefore,
\[
E
=\left(\Pi_*\Oo_X(D)\otimes L^{-2}\right)\otimes L
=\elm_{x_1,x_2,\ldots,x_{2n}} (\Oo_C\oplus M)\otimes L.
\]
Thanks to Proposition \ref{img}, this argument further asserts that $E$ is semi-stable, and stable for a general choice of the points $x_i$.\linebreak
The next theorem shows that this process generates all orthogonal bundles with values in $\Oo_C$ whose determinant is a nontrivial $2$-torsion line bundle.
\begin{Thm}\label{generation}
Let $M\in J_2(C)\backslash\{\Oo_C\}$ and $E$ be a vector bundle on $C$ of rank $2$ and determinant $M$.\linebreak
If $E$ is orthogonal with values in $\Oo_C$,
then there exist points $x_1,\,x_2,\,\ldots,\,x_{2n}$ of a bisection $B'$ on the ruled surface $Y=\PP_C(\Oo_C\oplus M)$ such that $E=\elm_{x_1,\,x_2,\,\ldots, x_{2n}}(\Oo_C\oplus M)\otimes L$ for some $L\in J^n(C)$.
\end{Thm}
\begin{proof}
Let $B'\subseteq Y$ be as before and $\pi: B\to C$ be the unramified double covering corresponding to $M$\linebreak with involution $\iota:B\to B$.
Then $E=\pi_*R$ for some $R\in \Pr(B/C)$.
Fix a line bundle $L$ on $C$ of degree $g$.\linebreak
Since $2g\geq 2g-1=g(B)=\dim J^{2g}(B)$, there exist points $x_1,\,x_2,\,\ldots,\,x_{2g}\in B\cong B'$ such that
\[
\Oo_B(\iota(x_1)+\iota(x_2)+\cdots+\iota(x_{2g}))=R\otimes \pi^*L
\ \text{in $J^{2g}(B)$}
\]
and $P_i=\pi(x_i)$ satisfies $L^2=\Nm_{B/C}(R\otimes\pi^*L)=\Oo_C(P_1+P_2+\cdots+P_{2g})$ because $\Nm_{B/C}(R)=\Oo_C$.
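Here the genus count comes from the Riemann--Hurwitz formula for the unramified double covering $\pi$,
\[
2g(B)-2=\deg\pi\cdot(2g(C)-2)=2(2g-2),
\]
so $g(B)=2g-1$, and the Abel--Jacobi map $B^{(2g)}\to J^{2g}(B)$ is surjective precisely because $2g\geq g(B)$.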
If there are two points of the form $x$ and $\iota(x)$ among the $x_i$, say $x_{2g}=\iota(x_{2g-1})$, then, by subtracting $\Oo_B(\iota(x_{2g-1})+\iota(x_{2g}))=\pi^*\Oo_C(P_{2g})$ from both sides as
\[
\Oo_B(\iota(x_1)+\iota(x_2)+\cdots+\iota(x_{2g-2}))=R\otimes \pi^*(L(-P_{2g}))
\ \text{in $J^{2g-2}(B)$},
\]
we may assume that there is no pair of points among the $x_i$ of the form $x$ and $\iota(x)$.
Thus we can apply the previous argument to have $E\cong \elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus M)\otimes L$.
\end{proof}
For the completeness of the exposition, we leave the following remark which states that the orthogonal bundles of the form $A^{-1}\oplus A$ are also generated by elementary transformations from $\Oo_C\oplus \Oo_C$.
\begin{Rmk}
Let $Y$ be the ruled surface $\Lambda:\PP_C(\Oo_C\oplus\Oo_C)\to C$.
We can choose two distinct sections\linebreak $C_0$ and $C_\infty$ on $Y$ corresponding to different inclusions $\Oo_C\to\Oo_C\oplus\Oo_C$.
Then they have no intersection.
For $x_1,\,\ldots,\,x_n\in C_0$ and $x_{n+1},\,\ldots,\,x_{2n}\in C_\infty$ with $P_i=\Lambda(x_i)$, we can define $\elm_{x_1,x_2,\ldots,x_{2n}} (\Oo_C\oplus\Oo_C)$ whenever $\{P_1,\,\ldots,\,P_n\}\cap \{P_{n+1},\,\ldots,\,P_{2n}\}=\emptyset$.
By Proposition \ref{subbundle_after_elementary_transformation}, we have two distinct injections
\[
\Oo_C(-P_{n+1}-\cdots-P_{2n})\to \elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus\Oo_C)
\ \ \text{and}\ \
\Oo_C(-P_1-\cdots-P_n)\to \elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus \Oo_C).
\]
Since they destabilize $\elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus \Oo_C)$, we have
\[
\elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus\Oo_C)\otimes L=L(-P_1-\cdots-P_n)\oplus L(-P_{n+1}-\cdots-P_{2n}),
\]
for any $L\in\Pic(C)$.
As the choices of $x_i$ are arbitrary, we can generate $E=A^{-1}\oplus A$ for all $A\in J^0(C)$ in this way.
Indeed, after fixing $L\in J^g(C)$, we can find points $P_1,\,P_2,\,\ldots,\,P_{2g}\in C$ which satisfy $A=L(-P_1-\cdots-P_g)$ and $A^{-1}=L(-P_{g+1}-\cdots-P_{2g})$.
If $\{P_1,\,\ldots,\,P_g\}\cap \{P_{g+1},\,\ldots,\,P_{2g}\}\neq\emptyset$, say $P_g=P_{2g}$, then we can reduce to the case $A=L'(-P_1-\cdots -P_{g-1})$ and $A^{-1}=L'(-P_{g+1}-\cdots -P_{2g-1})$ by substituting $L'=L(-P_g)$ for $L$.
Thus we can obtain that $\elm_{x_1,x_2,\ldots,x_{2n}}(\Oo_C\oplus\Oo_C)\otimes L=A^{-1}\oplus A$ for some $L\in J^n(C)$ after the substitutions.
\end{Rmk}
\vspace{1em}
\section{Semi-Stable Vector Bundles Whose Symmetric Cube is Not Stable}
\subsection{Destabilized by Rank 1}
Let $E$ be a stable vector bundle on $C$ of rank $2$.
If $S^3E$ is not stable, then it is destabilized by a subbundle of rank $1$ or $2$ by Remark \ref{first}.
We first study the case of rank $1$.
\begin{Prop}\label{tosmall}
Let $k\geq 2$.
For a vector bundle $E$ of rank $2$ and any line bundle $L$ on $C$, there exist the following exact sequences.
\begin{gather*}
0
\to \Hom(S^{k+1}E, L\otimes \det E)
\to \Hom(S^k E, E\otimes L)
\to \Hom(S^{k-1}E, L)\\
0
\to \Hom(L^{-1}\otimes \det E, S^{k+1}E)
\to \Hom(E\otimes L^{-1}, S^k E)
\to \Hom(L^{-1}, S^{k-1}E)
\end{gather*}
In particular, when $E$ is stable and has trivial determinant, for a line bundle $L$ of degree $0$,
\begin{itemize}
\item if $S^{k+1} E$ is destabilized by $S^{k+1}E\to L$, then $S^kE$ is destabilized by $S^k E\to E\otimes L$,
\item if $S^{k+1} E$ is destabilized by $L^{-1}\to S^{k+1}E$, then $S^k E$ is destabilized by $E\otimes L^{-1}\to S^k E$,
\end{itemize}
and the converses hold if $S^{k-2} E$ is stable.
\end{Prop}
\begin{proof}
Let $X=\PP_C(E)$ be the ruled surface $\Pi:\PP_C(E)\to C$ and $C_1$ be a unisecant divisor on $X$ satisfying $\Pi_*\Oo_X(C_1)=E$.
Since the natural morphism $\Pi^* E\to\Oo_X(C_1)$ is surjective, we have the following exact sequence on $X$.
\vspace{-0.5em}
\[
0\to \Oo_X(-C_1)\otimes \Pi^*\det E\to \Pi^*E\to \Oo_X(C_1)\to 0
\]
By pushing forward the sequence after twisting by $\Oo_X(kC_1)$ for $k\geq 0$, we obtain the following exact sequence on $C$.
\vspace{-0.5em}
\[\label{basic}\tag{3.1}
0\to S^{k-1}E\otimes \det E\to S^kE\otimes E\to S^{k+1}E\to 0
\]
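As a consistency check on \eqref{basic}, the ranks and determinants match term by term: the ranks satisfy $k+(k+2)=2(k+1)$, and the exponents of $\det E$ in the determinants satisfy
\[
\left(\tfrac{(k-1)k}{2}+k\right)+\tfrac{(k+1)(k+2)}{2}
=\tfrac{k(k+1)}{2}+\tfrac{(k+1)(k+2)}{2}
=(k+1)^2,
\]
which is the exponent of $\det(S^kE\otimes E)=(\det S^k E)^2\otimes(\det E)^{k+1}$.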
Then, taking the dual of the sequence and twisting by $L\otimes\det E$, we have the following exact sequences.
\begin{gather*}
0
\to (S^{k+1} E)^\vee
\to (S^k E)^\vee\otimes E^\vee
\to (S^{k-1}E)^\vee\otimes (\det E)^{-1}
\to 0\\
0
\to (S^{k+1}E)^\vee\otimes(L\otimes\det E)
\to (S^kE)^\vee\otimes (E\otimes L)
\to (S^{k-1} E)^\vee\otimes L
\to 0
\end{gather*}
By taking global sections, we get the first exact sequence of the statement.
Next, applying $E^\vee\cong E\otimes (\det E)^{-1}$ to the last sequence and twisting by $(\det E)^{k-1}$, we have the following exact sequences.
\begin{gather*}
0
\to S^{k+1}E\otimes (\det E)^{-k}\otimes L
\to S^kE\otimes (\det E)^{-(k-1)} \otimes E^\vee \otimes L
\to S^{k-1} E\otimes (\det E)^{-(k-1)}\otimes L
\to 0\\
0
\to (L^{-1}\otimes \det E)^\vee\otimes S^{k+1}E
\to (E\otimes L^{-1})^\vee\otimes S^kE
\to (L^{-1})^\vee\otimes S^{k-1}E
\to 0
\end{gather*}
By taking global sections, we obtain the second exact sequence of the statement.
\end{proof}
This fact means that if $S^{k+1}E$ is destabilized by a line bundle, then $S^k E$ is not stable either.
Later, when $E$ has degree $0$, using an exact sequence (see Lemma \ref{generalization}) which generalizes \eqref{basic}, we can deduce that if $S^{k+1}E$ is destabilized by a line subbundle, then, for all $\frac{k+1}{2}<m\leq k$, $S^m E$ is not stable.\linebreak
In the opposite direction, we can show the following.
\begin{Prop}\label{tolarge}
Let $E$ be a stable vector bundle on $C$ of rank $2$ and degree $0$.
If $S^k E$ is destabilized by\linebreak a line subbundle $L^{-1}\to S^k E$, then $S^l E$ is destabilized by a subbundle $S^{l-k}E\otimes L^{-1}\to S^l E$ for all $l\geq k$.
\end{Prop}
\begin{proof}
If there is a line subbundle $L^{-1}\to S^k E$ of degree $0$, then there exists a $k$-section $D\sim kC_1+\bb f$ on $X$ for $L=\Oo_C(\bb)$ with projection $\pi=\Pi|_D:D\to C$.
Thus it gives the exact sequence
\[
0\to\Oo_X((l-k)C_1-\bb f)\to \Oo_X(lC_1)\to \Oo_D(lC_1)\to 0
\]
on $X$, and by pushing forward the sequence, we have the following exact sequence on $C$.
\[
0\to S^{l-k}E\otimes L^{-1}\to S^lE\to \pi_*\Oo_D(lC_1)\to 0
\]
Because $\deg(S^{l-k}E\otimes L^{-1})=\deg(S^lE)=0$, $S^{l-k}E\otimes L^{-1}$ destabilizes $S^l E$.
\end{proof}
If $S^3 E$ is destabilized by a line subbundle, then $S^2 E$ is not stable by Proposition \ref{tosmall}, so $S^2 E$ has\linebreak a destabilizing line subbundle (see Remark \ref{first}).
Conversely, if $S^2 E$ is destabilized by a line subbundle, then $S^3 E$ is not stable by Proposition \ref{tolarge}, but we do not know whether $S^3 E$ has a destabilizing subbundle.
The following tells when $S^3 E$ is destabilized by a line subbundle.
\begin{Thm}\label{classification_rank_one}
Let $E$ be a stable vector bundle on $C$ of rank $2$ with trivial determinant.
If $S^3 E$ is destabilized by a line subbundle $L^{-1}\to S^3 E$, then $L\in J_4(C)$ and $E=(\pi_*R)\otimes L$ for some $R\in\Pr(B/C)\cap J_6(B)$ where $\pi:B\to C$ is the unramified double covering corresponding to $L^2\in J_2(C)$.
\end{Thm}
\newgeometry{top=80pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
\begin{proof}
Assume that $S^3 E$ is destabilized by a line subbundle $L^{-1}\to S^3 E$ of $\deg L=0$.
Then $S^2 E$ is destabilized by the subbundle $E\otimes L^{-1}\to S^2 E$ by Proposition \ref{tosmall}.
Completing the quotient by comparing the determinants, we have the following exact sequence.
\[
0\to E\otimes L^{-1}\to S^2 E\to L^2\to 0
\]
We can observe from Remark \ref{twotor} with the surjection $S^2 E\to L^2$ that $L^2\in J_2(C)$.
Notice that the dual of the surjection, $L^{-2}\to (S^2 E)^\vee\cong S^2 E$, yields a bisection $B\sim 2C_1+2\bb f$ on $X=\PP_C(E)$ for $L=\Oo_C(\bb)$. Let $\pi=\Pi|_B:B\to C$ be the induced unramified double covering.
By pushing forward the exact sequence
\[
0\to \Oo_X(C_1-2\bb f)\to \Oo_X(3C_1)\to \Oo_B(3C_1)\to 0
\]
on $X$, we obtain the following exact sequence on $C$.
\[
0\to E\otimes L^{-2}\to S^3 E\to \pi_*\Oo_B(3C_1)\to 0
\]
Since there cannot exist a nonzero morphism $L^{-1}\to E\otimes L^{-2}$ as $E$ is stable, the morphism $L^{-1}\to S^3 E$ induces a nonzero morphism $L^{-1}\to \pi_*\Oo_B(3C_1)$, and it implies that $\pi_*\Oo_B(3C_1)$ is not stable.
By pushing forward the exact sequence
\[
0\to \Oo_X(-C_1-3\bb f)\to\Oo_X(C_1-\bb f)\to\Oo_B(C_1-\bb f)\to 0
\]
on $X$ to $C$, we get $E\otimes L^{-1}=\Pi_*\Oo_X(C_1-\bb f)=\pi_*\Oo_B(C_1-\bb f)$.
Note that $E\otimes L^{-1}$ is an orthogonal bundle with values in $\Oo_C$ as there is a surjection $S^2(E\otimes L^{-1})\to \Oo_C$.
Then $\Oo_B(C_1-\bb f)\in\Pr(B/C)$ follows from Mumford's classification.
Because the stability of $\pi_*\Oo_B(3C_1-3\bb f)=(\pi_*\Oo_B(3C_1))\otimes \Oo_C(-3\bb)$ is equivalent to that of $\pi_*\Oo_B(3C_1)$, and $\Oo_B(3C_1-3\bb f)\in\Pr(B/C)$, we can deduce that $\pi_*\Oo_B(3C_1-3\bb f)$ splits by Proposition \ref{img}.
Therefore, $\Oo_B(3C_1-3\bb f)$ is $2$-torsion, and hence $\Oo_B(C_1-\bb f)$ is $6$-torsion.
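Explicitly, with $R=\Oo_B(C_1-\bb f)$, the torsion orders combine as
\[
R^6=(R^3)^2=\left(\Oo_B(3C_1-3\bb f)\right)^2=\Oo_B,
\]
which is the $6$-torsion claimed in the statement.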
\end{proof}
We can verify the converse of the theorem as in the next remark.
That is, if $R\in\Pr(B/C)\cap J_6(B)$ and $E=\pi_*R$, then $S^3 E$ is destabilized by a line subbundle, but we need to exclude $2$-torsion $R$ for $E$ to be stable.
For all nontrivial $M\in J_2(C)$ and line bundles $A$ with $A^2=M$, recall from Section 2 that the images of the maps $\Phi_A:\Pr(B/C)\to\SU_C(2,\Oo_C)$ together form the locus of $E$ with strictly semi-stable $S^2 E$.
Then, $E=\Phi_A(R)$ for some $R\in\Pr(B/C)\cap(J_6(B)\backslash J_2(B))$ if and only if $E$ is stable and $S^3 E$ is destabilized by a line subbundle.
Since the number of choices of $M\in J_2(C)$ and $R\in J_6(B)$ are finite, we can conclude that there are only finitely many such $E$ in $\SU_C(2,\Oo_C)$.
\begin{Rmk}
Let $\pi: B\to C$ be an unramified double covering corresponding to $M\in J_2(C)\backslash\{\Oo_C\}$.\linebreak
If $k\geq3$, then we can find in the same manner that $E=\pi_*R$ is stable and $S^k E$ is destabilized by a line subbundle for $R\in\Pr(B/C)\cap (J_{2k}(B)\backslash J_2(B))$.
Let $X=\PP_C(E)$ be the ruled surface $\Pi:\PP_C(E)\to C$ and $C_1$ be a unisecant divisor on $X$ such that $\Pi_*\Oo_X(C_1)=E$.
As $S^2 E$ is destabilized by $\Oo_C$, $B$ is realized as a bisection $B\sim 2C_1$ on $X$, and then $E=\pi_*\Oo_B(C_1)$.
By pushing forward the exact sequence
\[
0\to\Oo_X((k-2)C_1)\to \Oo_X(kC_1)\to \Oo_B(kC_1)\to 0
\]
on $X$, we obtain the following exact sequence on $C$.
\[
0\to S^{k-2}E\to S^k E\to \pi_*\Oo_B(kC_1)\to 0
\]
Notice that $E$ is orthogonal with values in $\Oo_C$.
Thus $\Oo_B(C_1)\in\Pr(B/C)$, and so $\Oo_B(kC_1)\in\Pr(B/C)$.
Hence $\Oo_B(kC_1)\in J_2(B)$ is equivalent to saying that $\pi_*\Oo_B(kC_1)$ splits into the direct sum of line bundles by Proposition \ref{img}.
Therefore, $S^k E$ is destabilized by a line (sub)bundle when $\Oo_B(C_1)$ is $2k$-torsion.
\end{Rmk}
\newgeometry{top=65pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
\subsection{Destabilized by Rank 2 but not by Rank 1}
In Proposition \ref{tolarge}, we observe that if $S^2 E$ is strictly semi-stable, then $S^3 E$ is destabilized by a subbundle of rank $2$.
The converse does not hold as there is an example where $S^2 E$ is stable but $S^4 E$ is destabilized by a line subbundle.
By Proposition \ref{tosmall},\linebreak if $S^4 E$ is destabilized by a line subbundle, then $S^3 E$ is destabilized by a subbundle of rank $2$.
\begin{Ex}\label{S3}
Let $\eta:C'\to C$ be an unramified cyclic triple covering corresponding to $L\in J_3(C)\backslash\{\Oo_C\}$.
Then we have $\eta_*\Oo_{C'}=\Oo_C\oplus L^{-1}\oplus L^{-2}$.
For $M\in J_2(C')$, $V=\eta_*M$ becomes an orthogonal bundle on $C$\linebreak of rank $3$.
Indeed, the surjection
\[
\eta^*S^2 V
=S^2(\eta^* V)
=S^2(\eta^*(\eta_*M))
\to M^2
=\Oo_{C'}
\]
induces a nonzero morphism $S^2 V\to \eta_*\Oo_{C'}=\Oo_C\oplus L^{-1}\oplus L^{-2}$, and it implies that $V$ is orthogonal because
one of the component morphisms $S^2 V\to L^{-k}$ is nonzero, and it is necessarily surjective due to the semi-stability of $S^2 V$.
In fact, there are surjections $S^2 V\to L^{-k}$ for all $k$ (mod $3$);
if there is a surjection $S^2V\to L^{-k}$ for one $k$,\linebreak then the isomorphism $V\otimes L=\eta_*M\otimes L\cong \eta_*(M\otimes \eta^*L)\cong \eta_*M=V$ induces another surjection $S^2 V\to L^{-k+1}$ after twisting the surjection $S^2V\otimes L^2\cong S^2 (V\otimes L)\cong S^2V\to L^{-k}$ by $L$.
According to Mumford's classification \cite{Mumford71} of orthogonal bundles of rank $3$, $V\cong A^{-1}\otimes S^2 E$ for some vector bundle $E$ of rank $2$ and line bundle $A$ satisfying $A^2=(\det E)^2$.
We will first check that $S^2 E$ is stable if and only if $M\not\in \eta^*J^0(C)$, and then show that $S^4 E$ has a destabilizing quotient line bundle.
If $\eta_*M$ is destabilized by a line subbundle $R^{-1}\to \eta_*M$ of degree $0$, then there is a nonzero morphism $\eta^*R^{-1}\to M$.
Since both $M$ and $\eta^*R^{-1}$ have degree $0$, it is possible if and only if $M=\eta^*R^{-1}\in \eta^*J^0(C)$.
Next, to show that $S^4 E$ is destabilized by the quotient line bundles $S^4 E\to A^2\otimes L^{\pm 1}$, we will use the following exact sequence obtained by completing the kernel of the natural surjection $S^2(S^2 E)\to S^4 E$ after comparing the determinants.
\vspace{-0.3em}
\[
0\to (\det E)^2\to S^2 (S^2 E)\to S^4 E\to 0
\]
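The ranks and determinants in this sequence can be verified directly: $S^2 E$ has rank $3$, so
\[
\rk S^2(S^2 E)=\binom{4}{2}=6=1+\rk S^4 E,
\qquad
\det S^2(S^2 E)=(\det S^2 E)^4=(\det E)^{12}=(\det E)^2\otimes \det S^4 E,
\]
using $\det S^2 E=(\det E)^3$ and $\det S^4 E=(\det E)^{10}$.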
Twisting the sequence by $A^{-2}\cong (\det E)^2$, we have the following exact sequence.
\[
0\to\Oo_C\to S^2(A^{-1}\otimes S^2 E)\to S^4 E\otimes A^{-2}\to 0
\vspace{-0.2em}
\]
Because there exist the surjections $S^2 V\to L^{\pm 1}$ and $L\neq \Oo_C$,
the above sequence with $V\cong A^{-1}\otimes S^2 E$ induces surjections $S^4 E\otimes A^{-2}\to L^{\pm 1}$.
That is, $S^4 E$ has destabilizing quotient bundles $A^2\otimes L^{\pm 1}$.
\end{Ex}
Note that the number of $E$ given by the above construction is finite: for each $m$ there are only finitely many $m$-torsion line bundles, and for a fixed vector bundle $F$ of rank $2$ with even degree such that $S^2 F$ is stable, there are only finitely many vector bundles $E$ with $\det E=\det F$ satisfying $S^2 E=S^2 F$.
As the first fact is well-known, we give a proof of the latter.
\begin{Prop}\label{fundamental_theorem_of_square_tensor}
Let $E$, $F$ be vector bundles on $C$ of rank $2$ with the same determinant of even degree.
If $S^2 E$, $S^2 F$ are stable and $S^2E\cong S^2 F$, then $E\cong F\otimes M$ for some $2$-torsion line bundle $M$.
\end{Prop}
\begin{proof}
We may give a proof under the assumption that $\det E=\det F=\Oo_C$.
Notice that $E$, $F$ are stable because $S^2 E$, $S^2 F$ are stable.
Due to $E\otimes E=\Oo_C\oplus S^2 E$, $E^\vee\cong E$, and the same facts for $F$, we have
\begin{align*}
\dim\Hom(E\otimes F,E\otimes F)
&=h^0(E^\vee\otimes F^\vee\otimes E\otimes F)
=\dim\Hom(E\otimes E,F\otimes F)\\
&\geq \dim\Hom(\Oo_C,\Oo_C)
+\dim\Hom(S^2 E,S^2 F)
=2.
\end{align*}
Thus $E\otimes F$ is not simple, so not stable, hence there exists a subbundle $V\to E\otimes F$ of rank $r\leq 2$ and degree $0$ by Remark \ref{first}.
If $r=1$, then it induces an isomorphism $E\cong E^\vee\cong F\otimes V^{-1}$ as $E$ and $F$ are stable.
After comparing the determinants, we can check that $M^2=\det (F\otimes V^{-1})=\det E=\Oo_C$ for $M=V^{-1}$ which gives an isomorphism $E\cong F\otimes M$.
\begin{center}\vspace{25pt}\end{center}
Otherwise, if there is no destabilizing line subbundle of $E\otimes F$ so that $r=2$, then we can derive\linebreak a contradiction as follows.
Suppose that there is an exact sequence
\[
0\to V\to E\otimes F\to W\to 0
\]
for some vector bundles $V$ and $W$ of rank $2$ and degree $0$.
If $W$ is not stable so there exists a destabilizing quotient line bundle $W\to L$, then it gives a destabilizing line subbundle $L^{-1}\to W^\vee\to (E\otimes F)^\vee\cong E\otimes F$.
If $V$ is not stable, then we have the same contradiction.
So we assume that both $V$ and $W$ are stable.
Then, using the filtration
\[
S^2(E\otimes F)=G_0\supseteq G_1\supseteq G_2\supseteq G_3=0
\]
satisfying $G_0/G_1=S^2 W$, $G_1/G_2=V\otimes W$, and $G_2/G_3=S^2 V$, we obtain that
\[
h^0(S^2 (E\otimes F))\leq h^0(S^2V)+h^0(S^2 W)+h^0(V\otimes W).
\]
On the other hand, by the isomorphism
\[
S^2(E\otimes F)
\cong(\det E\otimes \det F)\oplus (S^2E\otimes S^2 F)
=\Oo_C\oplus (S^2 E\otimes S^2 F),
\]
we have $h^0(S^2(E\otimes F))=h^0(\Oo_C)+\dim \Hom((S^2 E)^\vee, S^2 F)=1+\dim\Hom(S^2 E, S^2 F)=2$, and therefore,
\[
h^0(S^2 V)+h^0(S^2 W)+h^0(V\otimes W)\geq 2.
\]
We will first show that $h^0(S^2 V)=h^0(S^2 W)=1$.
That is, $V$ and $W$ are orthogonal with values in $\Oo_C$.\linebreak
Since $V\otimes V=\det V\oplus S^2 V$ and $V$ is stable, $h^0(S^2 V)\leq h^0(V\otimes V)=\dim\Hom(V^\vee,V)\leq 1$.
For the same reason, $h^0(S^2 W)\leq 1$.
Thus it suffices to treat the case $h^0(V\otimes W)\neq 0$.
Because both $V$ and $W$ are stable, $h^0(V\otimes W)=\dim\Hom(V^\vee,W)\leq 1$, and the equality holds if and only if $W\cong V^\vee$.
Hence if $h^0(V\otimes W)\neq 0$, then $h^0(V\otimes W)=1$, so we have either $h^0(S^2 V)=1$ or $h^0(S^2 W)= 1$ from $h^0(S^2 V)+h^0(S^2 W)\geq 1$.
Without loss of generality, we may assume that $h^0(S^2 V)= 1$, equivalently, there is a symmetric isomorphism $V^\vee\cong V$.
Then $W\cong V^\vee\cong V$ is true due to $h^0(V\otimes W)\neq 0$.
Thus we get $S^2 W\cong S^2 V$ as well, and so $h^0(S^2 W)=h^0(S^2 V)=1$.
Now suppose that $V$ and $W$ are orthogonal with values in $\Oo_C$.
By Remark \ref{sss}, we know that $\det V$ and $\det W$ are nontrivial $2$-torsion.
From $\det V\otimes \det W=\det(E\otimes F)=\Oo_C$, we also have $\det V=\det W$, and denote them by $N=\det V=\det W\in J_2(C)$.
Then, from $V\otimes N\cong V^\vee\cong V$ and $W\otimes N\cong W^\vee\cong W$, we obtain that $(E\otimes F)\otimes N\cong E\otimes F$ since $E\otimes F$ is an extension of $W$ by $V$.
Thus
\begin{align*}
(E\otimes F)\otimes (E\otimes F)
&\cong(E\otimes F)\otimes ((E\otimes F)\otimes N)
=((E\otimes E)\otimes (F\otimes F))\otimes N\\
&=((\Oo_C\oplus S^2 E)\otimes (\Oo_C\oplus S^2 F))\otimes N
=(\Oo_C\oplus S^2 E\oplus S^2 F\oplus (S^2 E\otimes S^2 F))\otimes N\\
&=N\oplus (S^2 E\otimes N)\oplus (S^2 F\otimes N)\oplus (S^2 E\otimes S^2 F\otimes N)
\end{align*}
implies that $h^0(N)+h^0(S^2 E\otimes N)+h^0(S^2 F\otimes N)+h^0(S^2 E\otimes S^2 F\otimes N)=\dim\Hom(E\otimes F,E\otimes F)=2$.
Because $S^2 E\cong S^2 F$ are stable, $h^0(S^2 E\otimes N)=h^0(S^2 F\otimes N)=0$.
Moreover, due to their stability,
\[
h^0(S^2 E\otimes S^2 F\otimes N)
=\dim\Hom((S^2 E)^\vee,S^2 F\otimes N)
=\dim\Hom(S^2 E,S^2 F\otimes N)
\leq 1.
\]
Thus it follows that $h^0(N)=1$ which is equivalent to saying that $\det V=N=\Oo_C$.
Then it contradicts the inequality $h^0(\det V)+h^0(S^2 V)=h^0(V\otimes V)=\dim\Hom(V^\vee,V)\leq 1$ satisfied by the stability of $V$.\linebreak
Therefore, there must be a line subbundle $M^{-1}\to E\otimes F$ yielding an isomorphism $E\cong E^\vee\cong F\otimes M$.
\end{proof}
The remaining part of this subsection is devoted to showing that the vector bundles constructed in Example \ref{S3} are the only vector bundles $E$ where $S^2 E$ is stable but $S^3 E$ is not stable.
\restoregeometry
\begin{Lem}\label{integral}
Let $E$ be a vector bundle on $C$ of rank $2$ and degree $0$.
Assume that $S^k E$ is destabilized by a line subbundle so there exists a $k$-section $D$ of zero self-intersection on the ruled surface $X=\PP_C(E)$.
Then $D$ is irreducible and reduced if $S^m E$ is not destabilized by a line subbundle for any $m\leq \frac{k}{2}$, or equivalently, if there is no $m$-section of zero self-intersection on $X$ for any $m\leq \frac{k}{2}$ (see Remark \ref{interesting}).
\end{Lem}
\begin{proof}
We fix a unisecant divisor $C_1$ on $X$ satisfying ${C_1}^2=0$.
Then necessarily $D\equiv kC_1$.
If $D$ is not irreducible or not reduced, then $D=D_1+D_2$ for some effective divisors $D_1\equiv mC_1+bf$ and $D_2\equiv nC_1+ df$ with $m+n=k$ for $m,\,n>0$ and $b+d=0$ for $b,\,d\geq 0$.
So we have $b=d=0$ and both $D_1$ and $D_2$ have zero self-intersection on $X$.
Thus it yields a contradiction as one of $m\leq \frac{k}{2}$ or $n \leq \frac{k}{2}$ must hold.
\end{proof}
\begin{Lem}\label{strong_and_concise}
Let $E$ be a vector bundle on $C$ of rank $2$ and degree $0$ and $X=\PP_C(E)$ be the ruled surface $\Pi: \PP_C(E)\to C$ with a unisecant divisor $C_1$ on $X$ satisfying $\Pi_*\Oo_X(C_1)\cong E$.
Assume that $k\geq 2$ and $S^m E$ is not destabilized by a line subbundle for any $m\leq k$.
If there exists an injection $V\to S^k E$ for some stable vector bundle $V$ on $C$ of rank $2$ and degree $0$, then it induces a surjection $\Pi^*V\to \Oo_X(kC_1)$~on~$X$.
\end{Lem}
\begin{proof}
There is a nonzero morphism $\morph:\Pi^*V\to\Oo_X(kC_1)$ from the adjoint correspondence
\[
\Hom(\Pi^*V,\Oo_X(kC_1))
\cong\Hom(V,\Pi_*\Oo_X(kC_1))
=\Hom(V,S^kE).
\]
Since $\im \morph$ is a torsion-free sheaf on $X$ of rank $1$, we can write $\im \morph=\Oo_X(mC_1+\dd f)\otimes \Ii_Z$ for some integer $m$, $\dd\in\Pic(C)$, and $0$-dimensional subscheme $Z$ of $X$, which may be empty.
As the injection $\im \morph\to \Oo_X(kC_1)$ induces a nonzero morphism $\Oo_X(mC_1+\dd f)\to\Oo_X(kC_1)$ between the reflexive hulls, $m\leq k$ and $\deg\dd\leq 0$ must hold.
Let $P\in C$ be arbitrary and $Pf=\Pi^{-1}(P)$ denote the $\PP^1$-fiber of $\Pi$ over $P$.
Since there is a trivialization $V|_U\cong \Oo_U\oplus \Oo_U$ in a neighborhood $U\subseteq C$ of $P\in C$, we get $\Pi^* V|_{Pf}\cong \Oo_{\PP^1}\oplus \Oo_{\PP^1}$.
Then, by restricting the surjection $\Pi^*V\to \im \morph$ onto $Pf$, we obtain a surjection $\Oo_{\PP^1}\oplus\Oo_{\PP^1}\to \Oo_{\PP^1}(m)$.
Since $\Oo_{\PP^1}(m)$ cannot be generated by global sections when $m<0$, we get $m\geq 0$.
On the other hand, by restricting the surjection $\Pi^*V\to \im \morph$ onto an $l$-section $D\sim lC_1+\cc f$ for some $\cc\in\Pic(C)$ of $\deg\cc=c$, we have a surjection $\Pi^*V|_D\to \Oo_X(mC_1+\dd f)\otimes\Ii_Z|_D$.
Because $\pi=\Pi|_D: D\to C$ is finite, $\Pi^*V|_D=\pi^*V$ is a semi-stable vector bundle on $D$ of degree $0$ \cite[II:~p.~61]{Lazarsfeld04}.
Thus
\[
0\leq \deg \Oo_X(mC_1+\dd f)\otimes\Ii_Z|_D\leq (mC_1+\dd f).(lC_1+\cc f)=mc+l\deg\dd.
\]
Since the $\RR$-ray generated by $[C_1]$ lies on the boundary of the cone of ample divisors in $N^1(X)$, we can choose a smooth curve $D\equiv lC_1+cf$ with $c/l>0$ arbitrarily small by taking sufficiently large $l\gg 0$.
Then $\deg\dd\geq -mc/l$ for arbitrarily small $c/l>0$, and together with $\deg\dd\leq 0$, this forces $\deg \dd=0$.
Let $L=\Oo_C(\bb)=\det V$ and consider the exact sequence
\[
0
\to \Oo_X(nC_1+\ee f)\otimes \Ii_W
\to \Pi^*V
\to \Oo_X(mC_1+\dd f)\otimes\Ii_Z\to 0
\]
on $X$ for some integer $n$, $\ee\in\Pic(C)$, and $0$-dimensional subscheme $W$ of $X$, which is possibly empty.
Applying Whitney's formula for coherent sheaves to the sequence, we get
\[
c_1(\Pi^*V)=(mC_1+\dd f)+(nC_1+\ee f)
\ \ \text{and}\ \
c_2(\Pi^*V)=(mC_1+\dd f).(nC_1+\ee f) +\len(Z)+\len(W).
\]
Because $c_1(\Pi^*V)=\bb f$, $c_2(\Pi^*V)=0$, and ${C_1}^2=0$, we obtain $n=-m$, $\ee=\bb-\dd$, and $Z=W=\emptyset$.
Then, from the inclusion $\Oo_X(mC_1+\dd f)\to \Oo_X(kC_1)$, we get a global section of $\Oo_X((k-m)C_1-\dd f)$, which gives a $(k-m)$-section of zero self-intersection on $X$ if $m<k$, so it contradicts the assumption.
Therefore, $m=k$ and hence $\dd=0$, which together imply that $\morph$ is surjective.
\end{proof}
\newgeometry{top=80pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
\begin{Prop}\label{last}
Let $E$ be a vector bundle on $C$ of rank $2$ with trivial determinant.
Assume that $S^2 E$ is stable and $S^3 E$ is not destabilized by a line subbundle.
If $S^3E$ is destabilized by a subbundle $V\to S^3 E$ of rank $2$, then $L=\det V$ is nontrivial $3$-torsion, and $S^3 E$ has two distinct destabilizing subbundles $E\otimes L^{\mp 1}\to S^3 E$.
\end{Prop}
\begin{proof}
Assume that there exists a nonzero morphism $V\to S^3E$ for some stable vector bundle $V$ on $C$ of rank $2$ with $\det V= L$ of $\deg L=0$.
Then, by Lemma \ref{strong_and_concise}, the induced morphism $\Pi^*V\to \Oo_X(3C_1)$ must be surjective where $X=\PP_C(E)$ is the ruled surface $\Pi:\PP_C(E)\to C$ and $C_1$ is a unisecant divisor on $X$\linebreak such that $E=\Pi_*\Oo_X(C_1)$.
Then we have the following exact sequence on $X$ by completing the kernel after comparing the determinants.
\[
0\to\Oo_X(-3C_1)\otimes \Pi^*L\to \Pi^*V\to \Oo_X(3C_1)\to 0
\]
By pushing forward the above sequence, we get the exact sequence
\[
0\to V\to S^3 E\to E^\vee\otimes L\to 0
\]
on $C$, and it gives that $L^3=\Oo_C$ by comparing the determinants.
Moreover, by taking the dual of the sequence, we obtain an injection $E\otimes L^{-1}\to (S^3 E)^\vee\cong S^3 E$.
Assume that $L$ is nontrivial.
Due to Lemma \ref{strong_and_concise} with respect to $E\otimes L^{-1}\to S^3 E$, we can say that there exists the following exact sequence on $X$.
\[
0\to \Oo_X(-3C_1)\otimes \Pi^*L^{-2}\to \Pi^*(E\otimes L^{-1})\to \Oo_X(3C_1)\to 0
\]
By pushing forward the exact sequence after twisting by $\Oo_X(-C_1)$, we have the exact sequence
\[\label{twisting_square}\tag{3.2}
0\to S^2 E\to S^2 E\otimes L^{-2}\to 0
\]
on $C$ due to $(S^2 E)^\vee\cong S^2 (E^\vee)\cong S^2 E$.
Thus $S^2E\cong S^2E\otimes L$.
Twisting both sides by $L$, we also obtain that $S^2E\otimes L\cong S^2E\otimes L^2$.
That is, $S^2E\otimes L^{-1}\cong S^2E\cong S^2E\otimes L$.
As shown in Example \ref{S3}, there is the following exact sequence.
\[
0\to \Oo_C\to S^2(S^2 E)\to S^4 E\to 0
\]
Notice that $H^0(S^2(S^2 E))\neq 0$.
Twisting the sequence by $L^{\pm 1}$, we get the following exact sequence.
\[
0\to L^{\pm 1}\to S^2(S^2 E)\to S^4E\otimes L^{\pm 1}\to 0
\]
Since $L\neq\Oo_C$, $H^0(S^2(S^2 E))\neq 0$ implies that $H^0(S^4 E\otimes L^{\pm 1})\neq0$.
So there are nonzero morphisms $L^{\mp 1}\to S^4 E$, which respectively induce destabilizations $E\otimes L^{\mp 1}\to S^3 E$ by Proposition \ref{tosmall}.
Now it remains to eliminate the case $L=\Oo_C$.
Suppose that there is an injection $E\to S^3 E$.
Applying Proposition \ref{tosmall}, we can see that there exist a $4$-section $D\sim 4C_1$ and the following exact sequence on $X$.
\[
0\to \Oo_X(-8C_1)\to \Oo_X(-4C_1)\to \Oo_D(-4C_1)\to 0
\]
By pushing forward the sequence to $C$, we obtain an injection $H^0(\pi_*\Oo_D(-4C_1))\to H^0(S^6 E)$ after taking global sections where $\pi:D\to C$ is the induced unramified $4$-covering.
Since $S^2 E$ is assumed to be stable, $E$ is stable as well, and so $D$ is irreducible and reduced by Lemma \ref{integral}.
Due to Proposition \ref{k-2}, $\Oo_D(2C_1)=\Oo_D$, and thus $H^0(S^6 E)\neq 0$ follows from $H^0(\pi_*\Oo_D(-4C_1))=H^0(\pi_*\Oo_D)=H^0(\Oo_D)\neq 0$.
Let $B\sim 6C_1$ be the $6$-section on $X$ corresponding to a nonzero global section of $S^6 E$, and we denote the induced $6$-covering by $\xi=\Pi|_B: B\to C$.
Suppose that $B$ is irreducible and reduced.
Then, from Remark \ref{interesting}, $B$ is smooth.
By pushing forward the exact sequence
\[
0\to \Oo_X(-6C_1)\to \Oo_X\to \Oo_B\to 0
\]
on $X$ to $C$, we can see that $\xi_*\Oo_B=\Oo_C\oplus S^4 E$ as there is a splitting of the natural inclusion $\Oo_C\to\xi_*\Oo_B$ \cite[I:~p.~248]{Lazarsfeld04}.
Because $H^0(S^4 E)\neq 0$, we get $h^0(\Oo_B)=h^0(\xi_*\Oo_B)\geq 2$, contradicting the hypothesis on $B$.\linebreak
Therefore, $B$ cannot be both irreducible and reduced, and so by Lemma \ref{integral}, we have a contradiction to the assumption that none of $E$, $S^2 E$, and $S^3 E$ are destabilized by a line subbundle.
\end{proof}
We continue the classification with vector bundles $E$ of rank $2$ with trivial determinant for which $S^2 E$ is stable but $S^3 E$ admits subbundles $E\otimes L^{\mp 1}\to S^3 E$ for some nontrivial $3$-torsion line bundle $L$.
\begin{Prop}\label{sumup}
Let $E$ be a vector bundle on $C$ of rank $2$ with trivial determinant.
Assume that $S^2 E$ is stable and $S^3 E$ has destabilizing subbundles $E\otimes L^{\mp 1}\to S^3 E$ for some $L\in J_3(C)\backslash\{\Oo_C\}$.
Then there exists an irreducible and reduced $6$-section $B$ on $X=\PP_C(E)$ and $\xi^*L=\Oo_B$ for the induced $6$-covering $\xi: B\to C$.
\end{Prop}
\begin{proof}
Using Lemma \ref{strong_and_concise} with respect to the inclusions $E\otimes L^{\mp 1}\to S^3 E$, we can further assert that there exist two exact sequences
\[
0\to \Oo_X(-3C_1)\otimes \Pi^*L^{\mp 2}\to \Pi^*(E\otimes L^{\mp 1})\to \Oo_X(3C_1)\to 0
\]
on $X$.
After twisting by $\Pi^*L^{\pm 1}=\Pi^*\Oo_C(\pm\bb)=\Oo_X(\pm\bb f)$, the sequences can be rewritten as
\[
0\to \Oo_X(-3C_1\mp \bb f)\to \Pi^*E\to \Oo_X(3C_1\pm\bb f)\to 0.
\]
Since $L$ is nontrivial $3$-torsion, $L^2\neq\Oo_C$, and so $\Oo_X(3C_1+\bb f)\neq \Oo_X(3C_1-\bb f)$.
So, by adjoining
\[
0\to \Oo_X(-3C_1+\bb f)\to \Pi^*E
\quad\text{and}\quad
\Pi^*E\to\Oo_X(3C_1+\bb f)\to 0,
\]
we get a nonzero morphism $\Oo_X(-3C_1+\bb f)\to \Oo_X(3C_1+\bb f)$ which yields a global section of $\Oo_X(6C_1)$.
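Here the twists by $\bb f$ cancel in the relevant Hom group:
\[
\Hom\bigl(\Oo_X(-3C_1+\bb f),\,\Oo_X(3C_1+\bb f)\bigr)
\cong H^0\bigl(\Oo_X(3C_1+\bb f)\otimes\Oo_X(3C_1-\bb f)\bigr)
\cong H^0(\Oo_X(6C_1)).
\]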
Thus if $S^3 E$ has destabilizing subbundles $E\otimes L^{\mp 1}\to S^3 E$ for some $L\in J_3(C)\backslash\{\Oo_C\}$, then there exists a~$6$-section $B\sim 6C_1$ on $X$.
Furthermore, by Lemma \ref{integral}, $B$ is irreducible and reduced as $S^2 E$ is stable.
Let $\xi=\Pi|_B:B\to C$ be the induced $6$-covering.
By restricting the three surjections
\[
\Pi^*E\to \Oo_X(3C_1+\bb f),\ \
\Pi^*E\to \Oo_X(3C_1-\bb f),\ \ \text{and}\ \
\Pi^*E\to \Oo_X(C_1)
\]
on $X$ to $B$, we have the following three quotient line bundles on $B$ of degree $0$.
\[\label{three_quotients}\tag{3.3}
\xi^*E\to \Oo_B(3C_1+\bb f),\ \
\xi^*E\to \Oo_B(3C_1-\bb f),\ \ \text{and}\ \
\xi^*E\to \Oo_B(C_1)
\]
As $B\sim 6C_1$ is effective, by pushing forward the exact sequences
\[
0\to \Oo_X(-4C_1\pm\bb f)\to \Oo_X(2C_1\pm\bb f)\to \Oo_B(2C_1\pm\bb f)\to 0
\]
on $X$, we have the exact sequences
\[\label{enclosed_by_squares}\tag{3.4}
0\to S^2 E\to \xi_*\Oo_B(2C_1\pm\bb f)\to S^2 E\to 0
\]
on $C$ because $(S^2 E)^\vee\cong S^2 E$ and $S^2 E\cong S^2E\otimes L^{\pm 1}$ from \eqref{twisting_square} in the proof of Proposition \ref{last}.
Now suppose that $\xi^*L=\Oo_B(\bb f)$ is nontrivial.
Because $\xi^*E$ is a semi-stable vector bundle on $B$ of rank~$2$ and degree $0$, it has at most two distinct isomorphism types of quotient line bundles of degree $0$.
Since the first and second quotients in \eqref{three_quotients} are distinct, the third quotient must be equal to one of the others.
Thus we get either
\[
\Oo_B(2C_1+\bb f)\cong \Oo_B
\quad \text{or}\quad
\Oo_B(2C_1-\bb f)\cong \Oo_B.
\]
So, when $\Oo_B(\bb f)\neq \Oo_B$, $H^0(S^2 E)\neq 0$ follows from one of \eqref{enclosed_by_squares} together with $h^0(\xi_*\Oo_B)=h^0(\Oo_B)=1$, and it gives rise to the strict semi-stability of $S^2 E$, contradicting the assumption.
Thus $\xi^*L=\Oo_B$.
\end{proof}
\restoregeometry
\begin{Thm}\label{classification_rank_two}
Let $E$ be a vector bundle on $C$ of rank $2$ with trivial determinant.
Assume that $S^2 E$ is stable but $S^3 E$ is not destabilized by a line subbundle.
If $S^3E$ is destabilized by a subbundle of rank $2$,\linebreak then $S^2 E=\eta_*M$ for some nontrivial unramified cyclic covering $\eta: C'\to C$ of degree $3$ and $2$-torsion line bundle $M$ on $C'$ which is not contained in $\eta^*J^0(C)$.
\end{Thm}
\begin{proof}
Let $X=\PP_C(E)$ be the ruled surface $\Pi: \PP_C(E)\to C$ and $C_1$ be a unisecant divisor on $X$ such that $\Pi_*\Oo_X(C_1)=E$.
Under the same assumption, we have seen in Propositions \ref{last} and \ref{sumup} that
\begin{enumerate}
\item there exists an irreducible and reduced $6$-section $B\sim 6C_1$ on $X$ with projection $\xi=\Pi|_B: B\to C$,
\item there exists a nontrivial $3$-torsion line bundle $L=\Oo_C(\bb)$ on $C$ and $\xi^*L$ is trivial,
\item from \eqref{three_quotients}, $\xi^*E$ has quotient line bundles $\Oo_B(C_1)$ and $\Oo_B(3C_1)$ of degree $0$, and
\item from \eqref{enclosed_by_squares}, there is the following exact sequence on $C$.
\[
0\to S^2E\to \xi_*\Oo_B(2C_1)\to S^2E\to 0
\]
\end{enumerate}
In addition, from (2), $\xi$ must factor as
\[
\xymatrix@1{
\xi:\ B\ \ar[r]^{\sigma} &
\ C'\ \ar[r]^{\eta} &
\ C
}
\]
for some unramified cyclic triple covering $\eta$ and unramified double covering $\sigma$ \cite[Proposition 11.4.3]{Birkenhake-Lange04}.
First, notice that $\eta^*E$ is stable.
For otherwise, its destabilizing quotient line bundle $\eta^*E\to S$ of degree $0$ gives a morphism $\phi:C'\to X$ over $C$ and an $m$-section $\phi(C')\equiv mC_1+\deg\phi^*\Oo_X(C_1)\cdot f=
mC_1+\deg S\cdot f=mC_1$ on $X$ for some $m\,|\,3$.
It contradicts the assumption that $S^3 E$ is not destabilized by a line subbundle.
Next, note that $\eta^*E\cong\sigma_*\Oo_B(C_1)$.
Because $B\sim 6C_1$ is effective, by pushing forward the exact sequence\linebreak
\vspace{-1em}
\[
0\to \Oo_X(-5C_1)\to \Oo_X(C_1)\to \Oo_B(C_1)\to 0
\]
on $X$ to $C$, we obtain an injection $E\to \xi_*\Oo_B(C_1)=\eta_*(\sigma_*\Oo_B(C_1))$.
Then it gives a nonzero morphism $\eta^*E\to\sigma_*\Oo_B(C_1)$ on $C'$, which is an isomorphism as $\eta^*E$ is stable and $\deg\eta^*E=\deg \Oo_B(C_1)=0$.
Then we check that $\Oo_B(2C_1)\in\Pr(B/C')$.
From $\det\sigma_*\Oo_B(C_1)\cong \det\sigma_*\Oo_{B}\otimes \Nm_{B/C'}(\Oo_B(C_1))$ and $(\det \sigma_*\Oo_B)^2=\Oo_{C'}$ as $\sigma$ is unramified,
we can observe that
\[
\Nm_{B/C'}(\Oo_B(2C_1))
=\{\Nm_{B/C'}(\Oo_B(C_1))\}^2
\cong (\det \sigma_*\Oo_B(C_1))^2.
\]
Since $\sigma_*\Oo_B(C_1)\cong \eta^*E$
and $\det \eta^*E=\eta^*\det E=\Oo_{C'}$, we get $\Nm_{B/C'}(\Oo_B(2C_1))=\Oo_{C'}$.
Also, we deduce that $\Oo_B(4C_1)=\Oo_B$ from (3).
Indeed, if $\Oo_B(C_1)=\Oo_B(3C_1)$, then $\Oo_B(2C_1)=\Oo_B$.
Otherwise, if $\Oo_B(C_1)\neq\Oo_B(3C_1)$, then $\eta^*E=\Oo_B(C_1)\oplus\Oo_B(3C_1)$ and $\Oo_B(4C_1)=\det \eta^*E=\Oo_B$.
Now we have $\Oo_B(2C_1)\in\Pr(B/C')$ and $\Oo_B(2C_1)\in J_2(B)$.
Thus $\sigma_*\Oo_B(2C_1)$ is orthogonal with values in $\Oo_{C'}$, and $\Oo_B(2C_1)\in \sigma^*J^0(C')$ by Proposition \ref{intersection}.
Due to Proposition \ref{img}, we can see that $\sigma_*\Oo_B(2C_1)$ is strictly semi-stable, and by Remark \ref{sss},
$\sigma_*\Oo_B(2C_1)=M\oplus N$ for some $M,\,N\in J_2(C')$.
Therefore, $S^2E\cong \eta_*M$ follows from the stability of $S^2 E$ and the following exact sequence obtained from (4).
\[
0
\to S^2 E
\to \eta_*(\sigma_*\Oo_B(2C_1))\cong(\eta_*M)\oplus (\eta_*N)
\to S^2 E
\to 0
\]
Finally, $M\in \eta^*J^0(C)$ if and only if $S^2 E=\eta_*M$ is strictly semi-stable as in Example \ref{S3}.
\end{proof}
This completes the classification of $E\in\SU_C(2,\Oo_C)$ with stable $S^2 E$ and strictly semi-stable $S^3 E$: by Theorem \ref{classification_rank_two}, such $E$ arise only as in Example \ref{S3}, and the number of such $E$ is finite by Proposition \ref{fundamental_theorem_of_square_tensor} and the argument preceding it.
Recall that we observed in Theorem \ref{classification_rank_one} that if $S^3 E$ is destabilized by a line subbundle, then $S^2 E$ is necessarily not stable.
If $S^2 E$ is stable, then, by Proposition \ref{last}, $S^3 E$ is not stable if and only if it is destabilized by subbundles $E\otimes L^{\mp 1}\to S^3 E$ for some line bundle $L$.
Moreover, from an exact sequence
\[
0
\to \Hom(L^{\mp 1}, S^4E)
\to \Hom(E\otimes L^{\mp 1}, S^3 E)
\to \Hom(L^{\mp 1}, S^2E)
\]
of Proposition \ref{tosmall} with $k=3$, we can see that $S^3 E$ is destabilized by subbundles $E\otimes L^{\mp 1}$ if and only if $S^4 E$ is destabilized by line subbundles $L^{\mp 1}\to S^4 E$ when $S^2 E$ is stable so that $\Hom(L^{\mp 1},S^2 E)=0$.
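In symbols, the stability of $S^2 E$ forces $\Hom(L^{\mp 1},S^2 E)=0$, and the left-exact sequence above then collapses to an isomorphism
\[
\Hom(L^{\mp 1}, S^4E)\cong\Hom(E\otimes L^{\mp 1}, S^3 E).
\]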
Therefore, we have the following corollary under the assumption on the stability of $S^2 E$.
\begin{Cor}
\label{clear_form}
If $S^2 E$ is stable, then $S^3 E$ is strictly semi-stable if and only if $S^4 E$ is destabilized by a line subbundle, and the number of such $E\in\SU_C(2,\Oo_C)$ is finite.
In particular, there exist only a finite number of ruled surfaces $X=\PP_C(E)$ for stable $E$ with even degree where $X$ contains a $4$-section of zero self-intersection whereas it has no such $m$-section for any $m<4$.
\end{Cor}
\section{Stability of Higher Symmetric Powers}
\subsection{Finiteness Result}
In the previous sections, we proved that the family of $E\in \SU_C(2,\Oo_C)$ with strictly semi-stable $S^2 E$ has positive dimension, but there are only finitely many $E\in\SU_C(2,\Oo_C)$ with strictly semi-stable $S^3 E$ outside this family.
We will strengthen this result by showing that there are only finitely many $E\in\SU_C(2,\Oo_C)$ for which $S^2 E$ is stable but $S^k E$ is not stable for some $k\geq 3$.
Throughout this and the next subsection, we assume that
\begin{itemize}
\item $E$ is a stable vector bundle on $C$ of rank~$2$ with trivial determinant,
\item $X=\PP_C(E)$ is the ruled surface $\Pi:\PP_C(E)\to C$,
\item $C_1$ is a unisecant divisor on $X$ satisfying $\Pi_*\Oo_X(C_1)=E$, and
\item $\pi=\Pi|_D: D\to C$ is the $k$-covering induced from a $k$-section $D$ on $X$.
\end{itemize}
Then, by Remark \ref{interesting}, $\pi$ is unramified if and only if $D^2=0$.
Also, from the exact sequence \eqref{basic} in the proof of Proposition \ref{tosmall} and its dual sequence with the self-duality $E^\vee\cong E$, we obtain two injections
\[
S^k E\to S^{k+1} E\otimes E
\quad\text{and}\quad
S^k E\to S^{k-1} E\otimes E.
\]
Thus if there is a subbundle $V\to S^k E$, then it induces injections $V\to S^{k-1}E\otimes E$ and $V\to S^{k+1}E\otimes E$, which respectively induce nonzero morphisms
\[\label{str}
V\otimes E\to S^{k-1}E
\quad\text{and}\quad
V\otimes E\to S^{k+1}E
\tag{4.1}\]
due to $\Hom(V\otimes E, S^{k\pm 1}E)
=H^0((V\otimes E)^\vee\otimes S^{k\pm 1}E)
=H^0(V^\vee\otimes S^{k\pm 1}E\otimes E)
=\Hom(V,S^{k\pm 1}E\otimes E)$.
\begin{Prop}\label{reduction}
Let $k\geq 2$.
If $S^{k-1}E$ is stable, then $S^k E$ is not destabilized by a subbundle $V\to S^k E$ of $\rk V<\frac{k}{2}$.
Moreover, when $S^{k-1}E$ is stable, $S^k E$ is not stable if and only if it is destabilized by\linebreak a stable subbundle $V\to S^k E$ of $\rk V=\frac{k+1}{2}$ if $k$ is odd or $\rk V=\frac{k}{2}$ if $k$ is even.
\end{Prop}
\begin{proof}
Assume that $S^k E$ is destabilized by a subbundle $V\to S^k E$ of $\rk V=r$ and $\deg V=0$.
As \eqref{str}, the injection $V\to S^k E$ gives a nonzero morphism $V\otimes E\to S^{k-1}E$.
Since $\deg (V\otimes E)=0$ and $S^{k-1}E$ is assumed to be stable, the morphism must be surjective, so $k=\rk S^{k-1}E
\leq \rk (V\otimes E)=2r$.
That is, $\frac{k+1}{2}\leq r$ if $k$ is odd or $\frac{k}{2}\leq r$ otherwise.
Due to Remark \ref{first}, when $S^k E$ is not stable, we can find a destabilizing subbundle $V\to S^k E$ of rank $r\leq \frac{k+1}{2}$ if $k$ is odd or $r\leq \frac{k}{2}$ otherwise, and each equality holds if $S^{k-1}E$ is stable as shown above.\linebreak
Because a destabilizing subbundle of $V$ destabilizes $S^k E$ in a smaller rank if it exists, $V$ must be stable.
\end{proof}
\begin{Thm}\label{higher}
Let $k\geq 2$.
If $S^{k-1}E$ is stable but $S^k E$ is not stable, then $E$ corresponds to one of the following cases.
\begin{itemize}
\item[(1)] $S^2 E$ is destabilized by a subbundle of rank $1$
\item[(2)] $S^3 E$ is destabilized by a subbundle of rank $2$
\item[(3)] $S^4 E$ is destabilized by a subbundle of rank $2$
\item[(4)] $S^6 E$ is destabilized by a subbundle of rank $3$
\end{itemize}
In particular, $S^k E$ is stable for every $k\geq 2$ if $S^m E$ is stable for all $m\leq 6$.
\end{Thm}
\begin{proof}
From the proof of Proposition \ref{reduction}, we observe that if $S^{k-1} E$ is stable but $S^k E$ is not stable, then there exist an injection $V\to S^k E$ and a surjection $V\otimes E\to S^{k-1}E$ for some stable vector bundle $V$ of degree $0$ and rank $r=\frac{k+1}{2}$ if $k$ is odd or $r=\frac{k}{2}$ otherwise.
If $k$ is odd, then $2r=k+1$, and the surjection $V\otimes E\to S^{k-1}E$ gives the exact sequence
\[
0\to L\to V\otimes E\to S^{k-1}E\to 0
\]
for some line bundle $L$ of degree $0$, and the injection $L\to V\otimes E$ induces a nonzero morphism $L\otimes E\cong L\otimes E^\vee\to V$.
This is possible only if $r=\rk V=2$ because $L\otimes E$ and $V$ are stable of the same degree.
Thus it implies that $k=3$.
Otherwise, if $k$ is even, then $2r=k$, so the surjection $V\otimes E\to S^{k-1}E$ becomes an isomorphism $V\otimes E\cong S^{k-1}E$.
On the other hand, the injection $V\to S^k E$ gives a nonzero morphism $V\otimes E\to S^{k+1}E$ as \eqref{str}.
By taking the dual, a nonzero morphism $S^{k+1}E\cong (S^{k+1}E)^\vee\to (V\otimes E)^\vee\cong (S^{k-1}E)^\vee\cong S^{k-1}E$ is obtained, which must be surjective due to the stability of $S^{k-1}E$.
Then, from the exact sequence
\[\label{S5}
0\to Q\to S^{k+1}E\to S^{k-1}E\to 0
\tag{4.2}
\vspace{-0.2em}
\]
for some vector bundle $Q$ of rank $2$ and degree $0$,
we can see that $S^{k+1}E$ is destabilized by a subbundle $Q\to S^{k+1}E$ of rank $2$.
Using the injections $S^{k+1}E\to S^k E\otimes E$ and $S^k E\to S^{k-1}E\otimes E$ twisted by $E$, the injection $Q\to S^{k+1}E$ induces an injection $Q\to S^{k-1}E\otimes E\otimes E$, and it yields a nonzero morphism $Q\otimes E\otimes E\to S^{k-1}E$ due to $\Hom(Q\otimes E\otimes E, S^{k-1}E)=\Hom(Q,S^{k-1}E\otimes E\otimes E)$ similar to \eqref{str}.
Since $S^{k-1}E$ is assumed to be stable, the morphism
\[
Q\oplus (Q\otimes S^2 E)
\cong Q\otimes (\Oo_C\oplus S^2 E)
\cong Q\otimes E\otimes E
\to S^{k-1}E
\]
is necessarily surjective, and again from the stability of $S^{k-1}E$, one of the morphisms $Q\to S^{k-1}E$ or $Q\otimes S^2 E\to S^{k-1}E$ must be surjective.
Therefore, $k=\rk S^{k-1}E\leq\max\{\rk Q,\,\rk(Q\otimes S^2E)\}=6$.
\end{proof}
Notice that cases (1) and (2) in the theorem are already studied in Sections 2 and 3, respectively.
We will show that there are only a finite number of $E$ in cases (3) and (4) as in case (2).
\begin{Prop}\label{concluding}
Let $k\geq 3$.
Assume that there is a $k$-section $\pi: D\to C$ of zero self-intersection on $X$\linebreak and no such $m$-section for any $m<k$.
Then there is a surjection $\pi^*E\to R$ for some $R\in J_{2(k-1)(k-2)}(D)$.
\end{Prop}
\begin{proof}
Let $D\sim kC_1+\bb f$ for some $\bb\in\Pic(C)$, necessarily satisfying $\deg\bb=0$.
Due to Lemma \ref{integral} and Remark \ref{interesting}, $D$ is smooth and $\pi: D\to C$ is unramified.
Also, note that the morphism $\phi: D\to X$ over $C$ gives a surjection $\pi^*E\to R$ on $D$ for $R=\phi^*\Oo_X(C_1)=\Oo_D(C_1)$ with $\deg R=D.C_1=(kC_1+\bb f).C_1=0$.
By applying \cite[I:~p.~248]{Lazarsfeld04} and comparing the determinants after pushing forward the exact sequence
\[
0\to\Oo_X(-kC_1-\bb f)\to \Oo_X\to \Oo_D\to 0
\]
on $X$ to $C$, we obtain that $L^{-2(k-1)}=(\det(\Oo_C\oplus(S^{k-2}E\otimes L^{-1})))^2=(\det \pi_*\Oo_D)^2=\Oo_C$ for $L=\Oo_C(\bb)$ because $\pi$ is unramified.
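In more detail, $S^{k-2}E\otimes L^{-1}$ has rank $k-1$ and $\det S^{k-2}E=(\det E)^{(k-1)(k-2)/2}=\Oo_C$ for $E$ of rank $2$, so the determinant comparison reads
\[
(\det \pi_*\Oo_D)^2
=\bigl(\det(\Oo_C\oplus(S^{k-2}E\otimes L^{-1}))\bigr)^2
=\bigl(\det S^{k-2}E\otimes L^{-(k-1)}\bigr)^2
=L^{-2(k-1)}.
\]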
Then, by Proposition \ref{k-2}, we have
\[
R^{k-2}=\Oo_D((k-2)C_1)
=\Oo_D(-\bb f)=\pi^*L^{-1},
\]
and hence $R^{2(k-1)(k-2)}
=(\pi^*L^{-1})^{2(k-1)}
=\pi^*L^{-2(k-1)}
=\pi^*\Oo_C
=\Oo_D$.
\end{proof}
\begin{Rmk}\label{hightor}
In the proof of Proposition \ref{concluding}, when $E$ has trivial determinant and $S^k E$ is destabilized by a quotient line bundle $S^k E\to L$, it is shown that $L^{2(k-1)}=\Oo_C$.
This is a generalization of Remark \ref{twotor}.\linebreak
It is also compatible with $L^4=\Oo_C$ for $k=3$ (Theorem \ref{classification_rank_one}) and $L^3=\Oo_C$ for $k=4$ (Proposition \ref{last}).
\end{Rmk}
\begin{Cor}\label{finite}
Let $k\geq 3$.
Then there exist at most finitely many $E\in\SU_C(2,\Oo_C)$ such that $S^k E$ is destabilized by a line subbundle but $S^m E$ is not destabilized by a line subbundle for any $m<k$.
\end{Cor}
\begin{proof}
The assertion follows from Proposition \ref{concluding} and the finiteness of the following data.
\begin{itemize}
\item the unramified $k$-coverings $\pi: D\to C$
\item the torsion line bundles on $D$ of order $2(k-1)(k-2)$
\item the direct summands of the graded bundle of a Jordan-H\"{o}lder filtration associated to $\pi_*R$
\end{itemize}
By the adjoint property, a surjection $\pi^*E\to R$ gives a nonzero morphism $E\to \pi_*R$, so we can deduce that $E$ is a subbundle of $\pi_*R$ as $E$ is stable and $\deg E=\deg \pi_*R=0$.
\end{proof}
\begin{Lem}\label{generalization}
There exists the following exact sequence on $C$.
\[
0
\to S^{n-1}E\otimes S^{m-1}E
\to S^n E\otimes S^m E
\to S^{n+m}E
\to 0
\]
\end{Lem}
\begin{proof}
By restricting the natural morphism $\Pi^*\Pi_*\Oo_X(mC_1)\to \Oo_X(mC_1)$ to the fiber $Pf=\Pi^{-1}(P)$ of $X$,\linebreak
we have the morphism $\Pi^*\Pi_*\Oo_X(mC_1)|_{Pf}
\to\Oo_X(mC_1)|_{Pf}$ over each $P\in C$,
which is identified with the evaluation morphism $H^0(\PP^1,\Oo_{\PP^1}(m))\otimes\Oo_{\PP^1}\to\Oo_{\PP^1}(m)$ from Grauert's theorem,
\[
\Pi^*\Pi_*\Oo_X(mC_1)|_{Pf}
\cong (\Pi_*\Oo_X(mC_1)\otimes \CC(P))\otimes \Oo_{Pf}
\cong H^0(Pf,\Oo_X(mC_1)|_{Pf})\otimes \Oo_{Pf}.
\]
As $\Oo_X(mC_1)|_{Pf}\cong\Oo_{\PP^1}(m)$ is generated by its global sections, the evaluation morphism is surjective over each $P\in C$, so the morphism $\Pi^*\Pi_*\Oo_X(mC_1)\to \Oo_X(mC_1)$ is surjective on $X$ by Nakayama's lemma.\linebreak
Thus we have the exact sequence
\[\label{gen}\tag{4.3}
0\to K\otimes \Oo_X(-C_1)\to \Pi^*\Pi_*\Oo_X(mC_1)\to \Oo_X(mC_1)\to 0
\]
for some vector bundle $K$ on $X$ of rank $m$ and $c_1(K)=\Oo_X$ which satisfies $K|_{Pf}\cong {\Oo_{Pf}}^{\oplus m}$ for each $P\in C$.\linebreak
Indeed, note that $K|_{Pf}\otimes\Oo_{\PP^1}(-1)\cong\Oo_{\PP^1}(a_1)\oplus\cdots\oplus \Oo_{\PP^1}(a_m)$ for some $a_i\leq 0$ with $a_1+\cdots+a_m=-m$, and from the associated long exact sequence of cohomology groups,
\[
0
\to H^0(\PP^1,K|_{Pf}\otimes\Oo_{\PP^1}(-1))
\to H^0(\PP^1,H^0(\PP^1,\Oo_{\PP^1}(m))\otimes\Oo_{\PP^1})
\to H^0(\PP^1,\Oo_{\PP^1}(m)),
\]
we can observe that $H^0(K|_{Pf}\otimes\Oo_{\PP^1}(-1))
=H^0(\Oo_{\PP^1}(a_1))\oplus\cdots\oplus H^0(\Oo_{\PP^1}(a_m))=0$ because the last morphism is bijective.
Thus $a_i\leq -1$, and since $a_1+\cdots+a_m=-m$, we have $a_i=-1$ for all $i=1,\,\ldots,\,m$.
By pushing forward exact sequence \eqref{gen} on $X$ to $C$ after twisting by $\Oo_X(-C_1)$, we have
\[
S^{m-1}E
\cong R^1\Pi_* (K\otimes \Oo_X(-2C_1))
\cong (\Pi_* K)^\vee
\]
from $\omega_{X/C}\cong\Oo_X(-2C_1)$ and the relative Serre duality.
Since $\Pi^*S^{m-1}E\cong \Pi^*(S^{m-1}E)^\vee\cong \Pi^*\Pi_*K$,\linebreak we get a morphism $\Pi^*S^{m-1}E\cong\Pi^*\Pi_*K\to K$ on $X$ where the latter morphism $\Pi^*\Pi_*K\to K$ is surjective\linebreak as $K|_{Pf}\cong {\Oo_{Pf}}^{\oplus m}$ is generated by its global sections for all $P\in C$.
Because $S^{m-1}E$ is of rank $m$, we have $K\cong \Pi^*S^{m-1}E$, and there exists the following exact sequence on $X$.
\vspace{-0.15em}
\[
0
\to \Pi^*S^{m-1}E\otimes \Oo_X(-C_1)
\to \Pi^*S^m E
\to \Oo_X(mC_1)
\to 0
\vspace{-0.15em}
\]
Therefore, we obtain the desired exact sequence by pushing forward the above sequence to $C$ after twisting by $\Oo_X(n C_1)$.
\end{proof}
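As a quick consistency check on the ranks in the sequence of Lemma \ref{generalization}, recall that $\rk S^k E=k+1$ for $E$ of rank $2$, and indeed
\[
\rk(S^{n-1}E\otimes S^{m-1}E)+\rk S^{n+m}E
=nm+(n+m+1)
=(n+1)(m+1)
=\rk(S^n E\otimes S^m E).
\]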
\begin{Rmk}
The reviewer points out that as $E$ is the associated vector bundle of some principal $\mathrm{SL}_2\CC$-bundle $P\to C$, $S^kE$ is the associated vector bundle of $P(S^kV)$ where $V=\CC^2$ is the standard representation of $\mathrm{SL}_2\CC$, hence Lemma \ref{generalization} is a consequence of \cite[Exercise 11.11]{Fulton-Harris91}.
\end{Rmk}
\begin{Thm}\label{main}
If $S^2 E$ is stable, then every $S^k E$ is stable except for finitely many $E\in \SU_C(2,\Oo_C)$.
\end{Thm}
\begin{proof}
Assume that $S^k E$ is not stable and $S^m E$ is stable for all $m<k$.
By Theorem \ref{higher}, it remains to treat $k=3$, $4$, and $6$.
For $k=3$, we know from Corollary \ref{clear_form} that $S^4 E$ is destabilized by a line subbundle.
In the even cases $k=4$, $6$, we can observe from exact sequence \eqref{S5} in the proof of Theorem \ref{higher} that there exists a surjection $S^{k+1}E\to S^{k-1} E$ so that $H^0(S^{k-1}E\otimes S^{k+1}E)=H^0((S^{k+1}E)^\vee\otimes S^{k-1}E)\neq 0$.
By taking global sections of the exact sequence of Lemma \ref{generalization} with $n={k-1}$ and $m={k+1}$,
\vspace{-0.2em}
\[
0\to S^{k-2} E\otimes S^k E\to S^{k-1} E\otimes S^{k+1} E\to S^{2k} E\to 0,
\vspace{-0.2em}
\]
we have either $H^0(S^{k-2}E\otimes S^k E)\neq 0$ or $H^0(S^{2k}E)\neq 0$ from $H^0(S^{k-1}E\otimes S^{k+1}E)\neq 0$.
If $H^0(S^{2k}E)= 0$, then $H^0(S^{k-2}E\otimes S^k E)\neq 0$, and in this case, by taking global sections of the exact sequence of the same Lemma with $n={k-2}$ and $m={k}$,
\vspace{-0.2em}
\[
0\to S^{k-3} E\otimes S^{k-1} E\to S^{k-2} E\otimes S^{k} E\to S^{2k-2} E\to 0,
\vspace{-0.2em}
\]
we have either $H^0(S^{k-3}E\otimes S^{k-1} E)\neq 0$ or $H^0(S^{2k-2}E)\neq 0$, where the former is impossible because it would give a nonzero morphism $S^{k-3}E\cong (S^{k-3}E)^\vee\to S^{k-1}E$ which destabilizes $S^{k-1}E$.
Therefore, from the assumption that $H^0(S^{k-1}E\otimes S^{k+1}E)\neq 0$, we obtain either $H^0(S^{2k}E)\neq 0$ or $H^0(S^{2k-2}E)\neq 0$.
In other words, either $S^{2k} E$ or $S^{2k-2}E$ is destabilized by a line subbundle.
Therefore, if $S^2 E$ is stable but $S^k E$ is not stable for some $k\geq 3$, then $S^l E$ is destabilized by a line subbundle for some $4\leq l\leq 12$.
However, since $S^2 E$ is stable, $S^3 E$ cannot be destabilized by a line subbundle due to Proposition \ref{tosmall}.
Thus $E$ is contained in the collection
\vspace{-0.3em}
\[
\bigcup_{l=4}^{12}
\left\{\text{$E\in\SU_C(2,\Oo_C)$}
\ |\ \text{$S^l E$ is destabilized by a line subbundle but $S^m E$ is not for any $m<l$}
\right\}
\vspace{-0.2em}
\]
which must be finite thanks to Corollary \ref{finite}.
\end{proof}
As the locus of strictly semi-stable $E\in \SU_C(2,\Oo_C)$ is given by the image of a map from the Jacobian variety, the locus is closed.
Also, the locus of $E\in \SU_C(2,\Oo_C)$ with strictly semi-stable $S^2 E$ is closed in $\SU_C(2,\Oo_C)$ because it is the union of finitely many images of maps from the Prym varieties.
Since a finite union of proper closed subsets is still a proper closed subset, the theorem gives the following fact.
\begin{Cor}\label{general}
For general $E\in \SU_C(2,\Oo_C)$, every $S^k E$ is stable.
\end{Cor}
\subsection{Relation with \'{E}tale Triviality}
$E$ is said to be \emph{\'{e}tale-trivial} if there exists an unramified finite covering $\eta:C'\to C$ with $\eta^*E=\Oo_{C'}\oplus \Oo_{C'}$.
If $E$ is \'{e}tale-trivial, then the trivialization induces a morphism $\phi: C'\to X$ over $C$ with $\deg \phi^*\Oo_X(C_1)=0$ and $\phi(C')\equiv kC_1+\deg\phi^*\Oo_X(C_1)\cdot f= kC_1$ for some $k>0$.
Therefore, the image $\phi(C')$ becomes a $k$-section of zero self-intersection on $X$ so that $S^k E$ is destabilized by a line subbundle.
We will see when the converse holds.
\newgeometry{top=85pt,bottom=80pt,left=80pt,right=80pt,footskip=30pt}
\begin{Lem}\label{pull-back}
Assume that $E$ splits over an unramified finite covering $\pi: D\to C$ as $\pi^*E=R^{-1}\oplus R$ for some line bundle $R$ on $D$.
Then $E$ is \'{e}tale-trivial if and only if $R\in J_m(D)$ for some $m>0$.
\end{Lem}
\begin{proof}
Recall that we assumed $\det E=\Oo_C$.
So if $\pi^*E$ splits, then it must be of the form $\pi^*E=R^{-1}\oplus R$.
If $E$ is \'{e}tale-trivial, then there exists an unramified finite covering $\eta: C'\to C$ over which $E$ is trivialized.
Let $D'=D\times_C C'$ be the covering of $C$ which gives the following commutative diagram.
\vspace{-0.5em}
\[
\xymatrix @R=2pc{
D' \ar[r]^{\eta'} \ar[d]_{\pi'} &
D \ar[d]^\pi \\
C' \ar[r]^\eta &
C
}\]
From
\[
\eta'^*(R^{-1}\oplus R)
=(\pi\circ\eta')^*E
=(\eta\circ\pi')^*E
=\pi'^*(\Oo_{C'}\oplus \Oo_{C'})=\Oo_{D'}\oplus \Oo_{D'},
\]
we can observe that $\eta'^* R=\Oo_{D'}$.
It is possible only if $R$ is a torsion line bundle \cite[Proposition 11.4.3]{Birkenhake-Lange04}.
The converse holds by composing $\pi$ with a cyclic covering over which $R$ is trivialized.
\end{proof}
If $E$ is stable but $S^2 E$ is not stable, then, by Proposition \ref{dim}, there exists a nontrivial unramified double covering $\pi: B\to C$ and $E=\pi_*R$ for some $R\in J^0(B)$.
If $R\neq \iota^*R$, then $\pi^*E$ splits as $\pi^*E=\iota^*R\oplus R$
because $\pi^*E$ is invariant under the involution $\iota: B\to B$ induced by $\pi$.
Otherwise, if $R=\iota^*R$, then $R\in \pi^*J^0(C)$, so $E$ already splits on $C$ by Proposition \ref{img}.
Hence $\pi^*E$ splits over an unramified double covering $\pi:B\to C$ whenever $S^2 E$ is not stable.
Since $\det E=\Oo_C$, it must be of the form $\pi^*E=R^{-1}\oplus R$.
Therefore, we have the following observation applying the previous lemma.
\begin{Rmk}
If $E$ is stable but $S^2 E$ is not stable, then $E$ is \'{e}tale-trivial if and only if $E=\pi_*R$ for some nontrivial unramified double covering $\pi: B\to C$ and $R\in J_m(B)$ for some $m>0$.
\end{Rmk}
\begin{Lem}\label{k-2_twist}
Let $k\geq 3$. Assume that $S^k E$ is destabilized by a line subbundle $L^{-1}\to S^k E$ but $S^m E$ is not destabilized by a line subbundle for any $m< k$.
Then $S^{k-2} E\otimes L^{-1}\cong S^{k-2} E\otimes L$.
\end{Lem}
\begin{proof}
Let $D\sim kC_1+\bb f$ be the $k$-section on $X$ corresponding to the line subbundle $L^{-1}\to S^k E$.
Then $D$ is irreducible and reduced by Lemma \ref{integral}.
By pushing forward the exact sequences
\begin{gather*}
0\to \Oo_X(-kC_1-\bb f)\to \Oo_X\to \Oo_D\to 0,\\
0\to \Oo_X(-2C_1)\to \Oo_X((k-2)C_1+\bb f)\to \Oo_D((k-2)C_1+\bb f)\to 0
\end{gather*}
on $X$ to $C$ and using $\Oo_D((k-2)C_1+\bb f)=\Oo_D$ from Proposition \ref{k-2}, we get the following exact sequences.
\begin{gather*}
0\to \Oo_C\to \pi_*\Oo_D\to S^{k-2}E\otimes L^{-1}\to 0\\
0\to S^{k-2}E\otimes L\to \pi_*\Oo_D\to \Oo_C\to 0
\end{gather*}
Since $D$ is smooth by Remark \ref{interesting}, $\pi_*\Oo_D$ has a splitting of the injection $\Oo_C\to\pi_*\Oo_D$ \cite[I:~p.~248]{Lazarsfeld04}, which indeed follows from the hypothesis on $S^m E$ for $m<k$.
Therefore, $\Oo_C\oplus (S^ {k-2} E\otimes L^{-1})\cong \pi_*\Oo_D\cong \Oo_C\oplus (S^ {k-2} E\otimes L)$, and hence $S^{k-2} E\otimes L^{-1}\cong S^{k-2} E\otimes L$.
\end{proof}
\begin{Thm}\label{concludingconcluding}
Let $k\geq 3$.
If there is a $k$-section $\pi: D\to C$ of zero self-intersection on $X=\PP_C(E)$ and no such $m$-section for any $m<k$, then $\pi^*E=R^{-1}\oplus R$ for some $R\in J_{2(k-1)(k-2)}(D)$.
Moreover, if $S^2 E$ is stable, then $E$ is \'{e}tale-trivial if and only if $S^k E$ is destabilized by a line subbundle for some $k\geq 3$.
\end{Thm}
\begin{proof}
Let $L^{-1}\to S^k E$ be the destabilizing line subbundle corresponding to the $k$-section $D\sim kC_1+\bb f$ on $X$ for $L=\Oo_C(\bb)$.
Then $D$ is irreducible and reduced by Lemma \ref{integral}, and so $\pi:D\to C$ is unramified by Remark \ref{interesting}.
Let $R=\Oo_D(C_1)$.
We observe in Proposition \ref{concluding} that $R$ is a torsion line bundle and there exists a surjection $\pi^*E\to R$.
In order to conclude that $\pi^*E=R^{-1}\oplus R$, it remains to show that there exists another surjection $\pi^*E\to R^{-1}$ and that $R^{-1}\neq R$.
We first claim that $R^{-1}\neq R$.
Suppose $\Oo_D(2C_1)=R^2=\Oo_D$.
By pushing forward the exact sequence
\[
0\to \Oo_X(-(k+4)C_1-\bb f)\to \Oo_X(-4C_1)\to \Oo_D(-4C_1)\to 0
\]
on $X$ to $C$, we have an injection $\pi_*\Oo_D=\pi_*\Oo_D(-4C_1)\to R^1\Pi_*\Oo_X(-(k+4)C_1-\bb f)= S^{k+2} E\otimes L^{-1}$.
Since $H^0(\pi_*\Oo_D)\neq 0$ and $H^0(\pi_*\Oo_D)\subseteq H^0(S^{k+2}E\otimes L^{-1})$, we get $H^0(S^{k+2}E\otimes L^{-1})\neq 0$.
So there exists a $(k+2)$-section $B\sim (k+2)C_1-\bb f$, and we have the following exact sequence on $X$.
\[
0\to \Oo_X(-(k+2)C_1+\bb f)\to \Oo_X\to \Oo_B\to 0
\]
By pushing forward the sequence to $C$, we obtain the isomorphism $\pi_*\Oo_B\cong \Oo_C\oplus (S^k E\otimes L)$ thanks to \cite[I:~p.~248]{Lazarsfeld04} if we suppose that $B$ is irreducible and reduced so that $B$ is smooth by Remark \ref{interesting}.
But
\[
h^0(\pi_*\Oo_B)=h^0(\Oo_C)+ h^0(S^k E\otimes L)\geq 2
\]
cannot hold if $B$ is irreducible and reduced, hence $B$ cannot be both irreducible and reduced.
So there exists an $m$-section of zero self-intersection on $X$ with $m\leq \frac{k+2}{2}$ due to Lemma \ref{integral}, and it contradicts the minimality of $k$.
Therefore, $R^2\neq \Oo_D$.
We next prove the existence of a surjection $\pi^*E\to R^{-1}$.
Because both $\pi^*E$ and $R^{-1}$ have degree $0$ and $\pi^*E$ is semi-stable, it suffices to find a nonzero morphism $\pi^*E\to R^{-1}$.
By the adjoint property, it is equivalent to show that $\Hom(E,\pi_*R^{-1})
\neq 0$.
By pushing forward the exact sequence
\[
0\to \Oo_X(-(k+1)C_1-\bb f)\to \Oo_X(-C_1)\to \Oo_D(-C_1)\to 0
\]
on $X$ to $C$, we can see that
\[
\pi_*R^{-1}
\cong R^1\Pi_*\Oo_X(-(k+1)C_1-\bb f)
\cong S^{k-1}E\otimes L^{-1}.
\]
Thus $\Hom(E,\pi_*R^{-1})\neq 0$ follows once we prove that $\Hom(E,S^{k-1}E\otimes L^{-1})\neq 0$.
For $k=3$, if $S^3 E$ is destabilized by a line subbundle $L^{-1}\to S^3 E$, then it induces a destabilizing subbundle $E\otimes L^{-1}\to S^2 E$ with the quotient bundle $S^2 E\to L^2$, so there is an isomorphism $E\cong E\otimes L^2$.
Hence we have $\Hom(E,S^2E\otimes L^{-1})=\Hom(E\otimes L^2,S^2E\otimes L^{-1})=\Hom(E\otimes L^{-1},S^2 E)\neq 0$ where the second equality is obtained from twisting by $L^{-3}$ and using the fact $L^2\in J_2(C)$ (see Example \ref{S3}).
For $k=4$, if $S^4 E$ has a destabilizing subbundle $L^{-1}\to S^4 E$, then it gives a subbundle $E\otimes L^{-1}\to S^3 E$ which destabilizes $S^3 E$, and we know from Proposition \ref{last} that there is also a subbundle $E\otimes L\to S^3 E$.
That is, $\Hom(E,S^3 E\otimes L^{-1})\neq 0$.
For the remaining $k\geq 5$, we start with the three exact sequences
\begin{gather*}
0\to S^{k-2}E\otimes A
\to E\otimes S^{k-1}E\otimes A
\to S^k E\otimes A\to 0, \\
0\to E\otimes S^{k-3}E\otimes A
\to E\otimes E\otimes S^{k-2}E\otimes A
\to S^{k-1} E\otimes A\to 0, \\
0\to S^{k-4}E \otimes A
\to E\otimes S^{k-3} E \otimes A
\to S^{k-2} E\otimes A\to 0
\end{gather*}
from \eqref{basic} being twisted by $A$ and $E\otimes A$ for a line bundle $A$ on $C$.
Thus we have
\begin{align*}
H^0(E\otimes S^{k-1} E\otimes A)\neq 0
\ &\Leftrightarrow\
H^0(S^k E\otimes A)\neq 0\ \text{or}\ H^0(S^{k-2}E\otimes A)\neq 0, \\
H^0(E\otimes E\otimes S^{k-2} E\otimes A)\neq 0
\ &\Leftrightarrow\
H^0(E\otimes S^{k-1} E\otimes A)\neq 0\ \text{or}\ H^0(E\otimes S^{k-3}E\otimes A)\neq 0, \\
H^0(E\otimes S^{k-3} E\otimes A)\neq 0
\ &\Leftrightarrow\
H^0(S^{k-2} E\otimes A)\neq 0\ \text{or}\ H^0(S^{k-4}E\otimes A)\neq 0.
\end{align*}
Then $H^0(S^k E\otimes L^{+1})=\Hom(L^{-1},S^k E)\neq 0$ implies $\Hom(E\otimes L,S^{k-1}E)=H^0(E\otimes S^{k-1}E\otimes L^{-1})\neq 0$ from the following implication diagram whose equivalence at the middle is established by Lemma~\ref{k-2_twist}.
An arrow labeled with $\times$ indicates an implication which leads to a contradiction to the assumption that there is no $m$-section of zero self-intersection on $X$ for any $m<k$.
\[
\xymatrix @C=-3.5pc @R=0pc{
H^0(S^kE\otimes L^{+1})\neq 0 \ar@{=>}[rdd] & &
H^0(S^{k-2}E\otimes L^{+1})\neq 0 & &
H^0(S^{k-4}E\otimes L^{+1})\neq 0\\ \\
& H^0(E\otimes S^{k-1} E\otimes L^{+1})\neq 0 \ar@{=>}[rdd] \ar[ruu]|\times & &
H^0(E\otimes S^{k-3} E\otimes L^{+1})\neq 0 \ar[luu]|\times \ar[ruu]|\times & \\ \\
& & H^0(E\otimes E\otimes S^{k-2}E\otimes L^{+1})\neq 0 \ar@{<=>}[ddd] \ar[ruu]|\times & & \\ \\ \\
& & H^0(E\otimes E\otimes S^{k-2}E\otimes L^{-1})\neq 0 \ar@{=>}[ldd] \ar[rdd]|\times & & \\ \\
& H^0(E\otimes S^{k-1}E\otimes L^{-1})\neq 0 \ar@{=>}[ldd] \ar[rdd]|\times & &
H^0(E\otimes S^{k-3}E\otimes L^{-1})\neq 0 \ar[ldd]|\times \ar[rdd]|\times \\ \\
H^0(S^kE\otimes L^{-1})\neq 0 & &
H^0(S^{k-2}E\otimes L^{-1})\neq 0 & &
H^0(S^{k-4}E\otimes L^{-1})\neq 0
}\]
Thus, for $k\geq 3$, $\pi^*E=R^{-1}\oplus R$ for some torsion line bundle $R$ over the unramified $k$-covering $\pi$.
Finally, we complete the proof applying Lemma \ref{pull-back}.
\end{proof}
\begin{Rmk}\label{etale}
Nori \cite[p.~35]{Nori76} defines a vector bundle $V$ on a complete, connected, reduced scheme $X$ to be \emph{finite} if there exists a finite collection $\mathcal{S}$ of vector bundles on $X$ such that for each $n\geq 0$,
\[
V^{\otimes n}=W_1\oplus W_2\oplus \cdots \oplus W_{k_n}\ \
\text{for some $W_i\in \mathcal{S}$}.
\]
It is also proven in the same paper that $V$ is finite if and only if $V$ is \emph{\'{e}tale-trivial} (over the base field of characteristic $0$).
Therefore, in the case where $X$ is a smooth projective curve $C$ of genus $g\geq 2$ and\linebreak $V$ is a vector bundle $E$ of rank $2$ with trivial determinant, Theorem \ref{concludingconcluding} says that when $S^2 E$ is stable, $E$ is finite if and only if $S^k E$ is destabilized by a line subbundle for some $k\geq 3$.
If $S^k E$ is not stable for some $k\geq 3$ which may be assumed to be minimal, then $k=3$ or $4$ or $6$ due to Theorem~\ref{higher}.
Moreover, in those cases, from Proposition \ref{last} (with Proposition \ref{tosmall}) for $k=3$, and from the proof of Theorem \ref{main} for $k=4$ and $6$, we know that $S^l E$ is destabilized by a line subbundle for some $l\geq k$, and hence $E$ becomes finite by Theorem \ref{concludingconcluding}.
As a conclusion, when $S^2 E$ is stable, we can state that $E$ is finite if and only if $S^k E$ is not stable for some $k\geq 3$.
\end{Rmk}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,032 |
module Spree
class SalePrice < ActiveRecord::Base
# TODO validations
belongs_to :price, :class_name => "Spree::Price"
has_one :calculator, :class_name => "Spree::Calculator", :as => :calculable, :dependent => :destroy
accepts_nested_attributes_for :calculator
validates :calculator, :presence => true
#attr_accessible :value, :start_at, :end_at, :enabled
scope :active, lambda {
where(enabled: true).where("(start_at <= :now OR start_at IS NULL) AND (end_at >= :now OR end_at IS NULL)", now: Time.now)
}
# TODO make this work or remove it
#def self.calculators
# Rails.application.config.spree.calculators.send(self.to_s.tableize.gsub('/', '_').sub('spree_', ''))
#end
def calculator_type
calculator.class.to_s if calculator
end
def calculator_type=(calculator_type)
clazz = calculator_type.constantize if calculator_type
# Only swap in a new calculator when the requested class differs from the current one
self.calculator = clazz.new if clazz && !self.calculator.is_a?(clazz)
end
# Computed sale amount (note: this overrides the reader for the +price+ association)
def price
calculator.compute self
end
def enable
update_attribute(:enabled, true)
end
def disable
update_attribute(:enabled, false)
end
def start(end_time = nil)
end_time = nil if end_time.present? && end_time <= Time.now # an end time that is not in the future means no end
attr = { end_at: end_time, enabled: true }
attr[:start_at] = Time.now if self.start_at.present? && self.start_at > Time.now # pull a future start_at back to now so the sale starts immediately
update_attributes(attr)
end
def stop
update_attributes({ end_at: Time.now, enabled: false })
end
end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,168 |
{"url":"https:\/\/quantumcomputing.stackexchange.com\/questions\/7067\/qiskit-measure-only-1-of-the-register-out-of-n-registers","text":"# qiskit - measure only 1 of the register out of n registers\n\nI have created a 9 qubit register:\n\nq = QuantumRegister(9)\nc = ClassicalRegister(9)\nqc = QuantumCircuit(q, c)\n\n\nI want to measure only 1 of the register(let's suppose i want to measure the 5th register.) How do I implement the code?\n\nqc.measure(q[register],c[register])\njob = execute(qc,simulator,shots = 1000)\nresult = job.result()\ncounts = result.get_counts(qc)\nprint(\"\\nTotal count for 0 and 1 are:\",counts)\n\n\nMy code seems to measure all 9 registers.\n\n\u2022 Hi @Hon Lin! Could you elaborate on \"my code seems to measure all 9 registers\"? What makes you tell us that? The results of the code snippet? If so, could you provide them? Aug 22 '19 at 7:07\n\u2022 Hi @Nelimee, the counts give '000000000' and '000000001' as the output, thus not realizing the default setting from the classical register, I jumped into a very wrong conclusion that all 9 quantum registers have been measured. Aug 30 '19 at 7:16\n\nIn qiskit, if there are classical bits that are not getting a value through a measurement gate, they will default to '0'. 
Since you set your circuit up to have 9 classical bits, the counts will always be bit strings of length 9, even though you are only measuring 1 qubit.\n\nTo only measure one qubit, and have that be the only value in the resulting bit string, you need to only have one classical bit in your circuit.\n\nq = QuantumRegister(9)\nc = ClassicalRegister(1) # Changed from 9 to 1\nqc = QuantumCircuit(q, c)\nqc.measure(q[register],c)\njob = execute(qc,simulator,shots = 1000)\nresult = job.result()\ncounts = result.get_counts(qc)\nprint(\"\\nTotal count for 0 and 1 are:\",counts)","date":"2021-09-21 10:46:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4013494551181793, \"perplexity\": 2129.30021421503}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057202.68\/warc\/CC-MAIN-20210921101319-20210921131319-00219.warc.gz\"}"} | null | null |
Progressive SPORT BEFORE is an ULTRA CLEAN and highly efficacious pre-workout supplement that helps to elevate your body and mind to bring out your absolute best. Prepare the right way to activate energy, endurance and motivation that drive you to success. Featuring 5 strategic, performance-based blends to help you energize, endure and zero-in.
When to Take It: Take BEFORE 30 minutes prior to your next workout, race or sporting event—no matter the time of day. It mixes up quickly and easily in water. | {
"redpajama_set_name": "RedPajamaC4"
} | 2,165 |
\chapter*{List of Symbols and Notation}
\section*{Notation and Conventions}
\begin{symbollist}[0.7in]
\item[$\mathbb{N}$]
The set of natural numbers, $\mathbb{N} = \{0,1,2,3,4,\ldots\}$.
\item[$\mathbb{Z}^{+}$]
The set of positive integers, $\mathbb{Z}^{+} = \{1,2,3,4,\ldots\}$.
\item[$\mathbb{Z}\lbrack x\rbrack$]
The ring of polynomials in $x$ with coefficients in the
integers, $\mathbb{Z}$.
\item[$\mathbb{Q}\lbrack x\rbrack$]
The ring of polynomials in $x$ with coefficients in the
rational numbers, $\mathbb{Q}$.
\item[$\mathbb{K}\lbrack x\rbrack$]
The ring of polynomials in $x$ with coefficients over the
field $\mathbb{K}$.
\item[$D_z^{(j)}$]
The derivative, or differential, operator with respect to $z$, i.e., where
$D_z^{(j)}[F(z)] \equiv F^{(j)}(z)$
denotes the $j^{th}$ derivative of $F(z)$,
provided that the $j^{th}$ derivative of the function exists.
\item[$\binom{n}{k}$]
The binomial coefficients.
\item[$\gkpSI{n}{k}$]
The unsigned Stirling numbers of the first kind,
also denoted by $(-1)^{n-k} s(n, k)$.
\item[$\gkpSII{n}{k}$]
The Stirling numbers of the second kind,
also denoted by $S(n, k)$.
\item[$\gkpEI{n}{k}$]
The first--order Eulerian numbers.
\item[$\gkpEII{n}{k}$]
The second--order Eulerian numbers.
\item[$H_n^{(r)}$]
The $r$--order harmonic numbers, $H_n^{(r)} := \sum_{k=1}^{n} k^{-r}$,
where the first--order harmonic numbers are denoted in the
shorthand notation, $H_n \equiv H_n^{(1)}$.
\item[$B_n$]
The Bernoulli numbers.
\end{symbollist}
\mainmatter
\chapter{Introduction}
\section{Background and Motivation}
\label{Chapter_Intro_Section_BGMotivation}
The form of composite sequences involving the
Stirling numbers of the first and second kinds are common in many applications.
The Stirling number triangles arise naturally in
formulas involving sums of factorial functions and in the
symbolic, polynomial expansions of binomial coefficients and other
factorial function variants.
The Stirling and Eulerian number triangles also both frequently occur in
applications involving finite sums and generating functions
over non--negative powers of integers.
These applications include finding closed--form expressions and formulas for
generating functions over polynomial multiples of an arbitrary sequence.
\subsection{Example I: Computing Derivatives of
Stirling Number Generating Functions}
If $p, k \in \mathbb{N}$, the
following modified series for the generating functions for
polynomial multiples of the unsigned Stirling numbers of the first kind,
denoted by $\gkpSI{n}{k}$, result in the expansions
\begin{align}
\label{eqn_ModPolyPowGFs_of_the_S1StirlingNumbers-stmts_v1}
\sum_{n=0}^{\infty} n^{k} \cdot \gkpSI{n}{p} \frac{z^n}{n!} & =
\sum_{j=0}^{k} \gkpSII{k}{j} z^j \cdot D_z^{(j)}\left[
\frac{(-1)^{p}}{p!} \cdot \Log(1-z)^{p}
\right] \\
\notag
\sum_{n=0}^{\infty} n^{k} \cdot \gkpSI{n+1}{p+1} \frac{z^n}{n!} & =
\sum_{j=0}^{k} \gkpSII{k}{j} z^j \cdot D_z^{(j)}\left[
\frac{(-1)^{p}}{p!} \cdot \frac{\Log(1-z)^{p}}{(1-z)}
\right],
\end{align}
where the derivative operator, $D_z^{(j)}$,
denotes the $j^{th}$ derivative with respect to $z$ of its input and the
\emph{Stirling numbers of the second kind} are denoted by $\gkpSII{n}{k}$.
Given enough familiarity with the
Stirling numbers of the first kind and some trial and error,
formulas for each of the $j^{th}$ derivatives involved in the
expansions of \eqref{eqn_ModPolyPowGFs_of_the_S1StirlingNumbers-stmts_v1}
are obtained by extrapolation
from the first several cases of $j \in \mathbb{N}$ to obtain the
finite sums
\begin{align}
\label{eqn_Dzj_S1pzGF_exp_form_v1}
D_z^{(j)}\left[\frac{(-1)^{p}}{p!} \cdot \Log(1-z)^{p}\right] & =
\sum_{i=0}^{j} \gkpSI{j}{i} \frac{(-1)^{p-i}}{(p-i)!} \cdot
\frac{\Log(1-z)^{p-i}}{(1-z)^{j}} \\
\notag
D_z^{(j)}\left[\frac{(-1)^{p}}{p!} \cdot \frac{\Log(1-z)^{p}}{(1-z)}
\right] & =
\sum_{i=0}^{j} \gkpSI{j+1}{j+1-i} \frac{(-1)^{p+j-i}}{(1-z)^{j+1}}
\cdot \frac{\Log(1-z)^{p-j+i}}{(p-j+i)!},
\end{align}
where the formulas in \eqref{eqn_Dzj_S1pzGF_exp_form_v1}
may be regarded as polynomials in the
function, $\Log(1-z)$.
A proof of the correctness of these formulas is then later obtained
formally by induction on $j$.
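Before giving that inductive proof, the first formula in \eqref{eqn_Dzj_S1pzGF_exp_form_v1} can be spot--checked numerically by comparing truncated power series with exact rational arithmetic. The following Python snippet (an independent illustration, not part of the package) performs this check for $p = 3$ and $j = 1, 2, 3$:

```python
from fractions import Fraction
from math import factorial

N = 12  # truncation order for all series (coefficient lists of length N)

def mul(a, b):
    # Cauchy product of two truncated series
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def spow(a, k):
    # k-th power of a truncated series
    r = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for _ in range(k):
        r = mul(r, a)
    return r

def deriv(a, j):
    # j-th termwise derivative; the last j coefficients become unreliable
    for _ in range(j):
        a = [(i + 1) * a[i + 1] for i in range(N - 1)] + [Fraction(0)]
    return a

def s1(n, k):
    # Unsigned Stirling numbers of the first kind by the standard recurrence
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

log1mz = [Fraction(0)] + [Fraction(-1, n) for n in range(1, N)]  # log(1-z)
inv1mz = [Fraction(1)] * N                                       # 1/(1-z)

p = 3
for j in (1, 2, 3):
    lhs = deriv([Fraction((-1) ** p, factorial(p)) * c
                 for c in spow(log1mz, p)], j)
    rhs = [Fraction(0)] * N
    for i in range(j + 1):
        term = mul(spow(log1mz, p - i), spow(inv1mz, j))
        coef = Fraction((-1) ** (p - i) * s1(j, i), factorial(p - i))
        rhs = [x + coef * t for x, t in zip(rhs, term)]
    # only the first N - j derivative coefficients are exact
    assert lhs[:N - j] == rhs[:N - j], j
```

The truncation bookkeeping matters here: after $j$ termwise derivatives only the first $N - j$ coefficients of the left--hand side remain exact, so only those are compared.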
\subsection{Example II: A More Challenging Application}
A more challenging, and less straightforward,
example arises in attempting to find an exact,
closed--form representation for the expansions of the
ordinary generating function for the Stirling number sequence variant,
$S_k^{(d)}(n)$, defined as in \eqref{eqn_Skdn_seq_sum_def_v1}
with respect to each fixed $d, k \in \mathbb{Z}^{+}$.
\begin{align}
\label{eqn_Skdn_seq_sum_def_v1}
S_k^{(d)}(n) & := \sum_{j=1}^{n}
\binom{n}{j} \gkpSI{j+1}{k+1} \frac{(-1)^j}{j! \cdot (j+d)} \cdot
\frac{(n+d)!}{n!},
\end{align}
The first few examples of the ordinary generating function,
$\widetilde{S}_k^{(d)}(z)$, over $d \geq 2$ for the sequence defined by
\eqref{eqn_Skdn_seq_sum_def_v1} are provided for reference as follows:
\begin{align}
\label{eqn_STildekdn_OGFs_first_cases_listings-stmts_v1}
\widetilde{S}_k^{(2)}(z) & =
-\frac{\Log\left(1-z\right)^{k-1}}{(1-z)^2} \left[
\frac{z}{(k-1)!}+\frac{(-1+z) \Log\left(1-z\right)}{k!}
\right] \\
\notag
\widetilde{S}_k^{(3)}(z) & =
\frac{\Log\left(1-z\right)^{k-2}}{(1-z)^3} \left[
\frac{z^2}{(k-2)!}+\frac{z (-4+3 z)
\Log\left(1-z\right)}{(k-1)!}+\frac{\left(2-4 z+2 z^2\right)
\Log\left(1-z\right)^2}{k!}
\right] \\
\notag
\widetilde{S}_k^{(4)}(z) & =
-\frac{\Log\left(1-z\right)^{k-3}}{(1-z)^4} \Biggl[
\frac{z^3}{(k-3)!}+\frac{z^2 (-9+6 z)
\Log\left(1-z\right)}{(k-2)!}+\frac{z \left(18-27 z+11
z^2\right) \Log\left(1-z\right)^2}{(k-1)!} \\
\notag
& \phantom{=-\frac{\Log\left(1-z\right)^{k-3}}{(1-z)^4} \Biggl[\ } +
\frac{\left(-6+18z-18 z^2+6 z^3\right) \Log\left(1-z\right)^3}{k!}
\Biggr] \\
\notag
\widetilde{S}_k^{(5)}(z) & =
\frac{\Log\left(1-z\right)^{k-4}}{(1-z)^5} \Biggl[
\frac{z^4}{(k-4)!}+\frac{z^3 (-16+10 z)
\Log\left(1-z\right)}{(k-3)!}+\frac{z^2 \left(72-96 z+35
z^2\right) \Log\left(1-z\right)^2}{(k-2)!} \\
\notag
& \phantom{=\ } +
\frac{z \left(-96+216 z-176 z^2+50 z^3\right)
\Log\left(1-z\right)^3}{(k-1)!}+\frac{\left(24-96 z+144 z^2-96
z^3+24 z^4\right) \Log\left(1-z\right)^4}{k!}
\Biggr].
\end{align}
Based on observations of the first several cases of these generating functions
in \eqref{eqn_STildekdn_OGFs_first_cases_listings-stmts_v1},
we rewrite the expansions of these generating functions as the sum
\begin{align}
\label{eqn_Skp1sn_OGF_gmdz_seq_sum_def_exp_stmt_v1}
\widetilde{S}_k^{(d)}(z) & :=
\sum_{n=0}^{\infty} S_{k}^{(d)}(n) z^n = \left(
\frac{(-1)^{d-1} \cdot \Log(1-z)^{k+1-d}}{(1-z)^{d}}
\right) \times
\sum_{m=0}^{d-1} \frac{\Log(1-z)^{m} \cdot z^{d-1-m}}{(k+1-d+m)!} \cdot
g_m^{(d)}(z).
\end{align}
It is clear from examining the sequence data in
\eqref{eqn_STildekdn_OGFs_first_cases_listings-stmts_v1} that the
formulas for the polynomials, $g_m^{(d)}(z)$, specified in
\eqref{eqn_Skp1sn_OGF_gmdz_seq_sum_def_exp_stmt_v1}
involve a sum over factors of the Stirling numbers of the first kind and the
binomial coefficients.
However, finding the precise sequence inputs in the formula for these
polynomials with the correct corresponding multiplier terms in the sum is
not immediately obvious from the first few example cases in
\eqref{eqn_STildekdn_OGFs_first_cases_listings-stmts_v1}.
We then proceed, seeking a
formula for the polynomials, $g_m^{(d)}(z)$, in the
general template form of
\begin{equation}
\label{eqn_gmdz_example_seq_searcg_exp_v2}
g_m^{(d)}(z) = \sum_{i} S_1(\cdot, \cdot) \cdot \Binom(\cdot, \cdot)
\times \RemSeq_1(i) \cdot \RemSeq_2(m+m_0-i) \times z^i,
\end{equation}
where the functions $S_1(\cdot, \cdot)$ and $\Binom(\cdot, \cdot)$ denote the
Stirling numbers of the first kind and binomial coefficients, respectively,
each over some unspecified index inputs to these sequence functions.
After a few hours of frustrating trial and error with \MmPlain,
we finally arrive at a formula for these polynomials in the form of
\begin{align}
\label{eqn_gmdz_seq_sum_def_v1}
g_m^{(d)}(z) & = \sum_{i=0}^{m} \gkpSI{d-m+i}{d-m} \binom{d-1}{m-i}^2
(-1)^{m-i} (m-i)! \cdot z^i.
\end{align}
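The guessed formula \eqref{eqn_gmdz_seq_sum_def_v1} can be checked directly against the polynomials $g_m^{(d)}(z)$ read off from the bracketed terms of the listings in \eqref{eqn_STildekdn_OGFs_first_cases_listings-stmts_v1}. The short Python check below (an illustration independent of the package) compares the coefficient lists for $d = 2, 3, 4$:

```python
from math import comb, factorial

def s1(n, k):
    # Unsigned Stirling numbers of the first kind
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

def g(m, d):
    # Coefficient list [c_0, c_1, ...] of g_m^{(d)}(z) from the guessed formula
    return [s1(d - m + i, d - m) * comb(d - 1, m - i) ** 2
            * (-1) ** (m - i) * factorial(m - i) for i in range(m + 1)]

# Polynomials read off the bracketed terms of the displayed generating
# functions for d = 2, 3, 4 (coefficients listed by ascending power of z)
expected = {
    (0, 2): [1], (1, 2): [-1, 1],
    (0, 3): [1], (1, 3): [-4, 3], (2, 3): [2, -4, 2],
    (0, 4): [1], (1, 4): [-9, 6], (2, 4): [18, -27, 11],
    (3, 4): [-6, 18, -18, 6],
}
for (m, d), coeffs in expected.items():
    assert g(m, d) == coeffs, (m, d, g(m, d))
```

For instance, $g_1^{(2)}(z) = -1 + z$ recovers the factor $(-1+z)$ visible in the displayed expansion of $\widetilde{S}_k^{(2)}(z)$.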
The motivation for constructing the package routines in the thesis is to
automate the eventual discovery of the formula in
\eqref{eqn_gmdz_seq_sum_def_v1} based on user input as to the
general template to the formula for these polynomials specified as in
\eqref{eqn_gmdz_example_seq_searcg_exp_v2}.
The automated discovery of the first pair of less
complicated formulas given in \eqref{eqn_Dzj_S1pzGF_exp_form_v1}
is then also possible using the package.
\section{High--Level Description of the Package}
The \Mm package \GPSFPkgName developed as a part of the thesis
research implements software routines for the
intelligent guessing of polynomial sequence formulas based on user input
on the expected form of the sequence formulas.
These functions for sequence recognition then rely on some degree of
user intuition to correctly find closed--form formulas
that represent the input polynomial sequence.
The logic used to construct these routines is based
on factorization data for the expected sequence factors of the input
polynomial coefficient terms suggested by the user.
The template of the polynomial sequence formulas that the
package aims to recognize satisfies an expansion of the
general form outlined in \eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1}
where $j, j_0 \in \mathbb{N}$, $r \in \mathbb{Z}^{+}$, and
$x$ is some (formal) polynomial variable that may assume different forms
in the sequences input to the package routines.
\begin{align}
\label{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1}
\Poly_j(x) & := \sum_{i=0}^{j+j_0} \left(
\prod_{t=1}^{r}
\genfracSeq{\widetilde{u}_t(j) + u_t \cdot i}{
\widetilde{\ell}_t(j) + \ell_t \cdot i}_t
\right)
\times \RS_1(i) \RS_2(j+j_0-i) \cdot x^i.
\end{align}
The product of (triangular) sequences in the first term of
\eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1} corresponds to the
factors of the expected user--defined sequences in the
polynomial coefficient terms, where the functions
$\widetilde{u}_t(j), \widetilde{\ell}_t(j)$ are prescribed functions
of the sequence index $j$ and where the $u_t, \ell_t \in \mathbb{Z}$ are
prescribed application--dependent multiples of the polynomial
summation index $i$.
The functions $\RS_i(\cdot)$ in the previous equation denote the
coefficient remainder terms in these polynomial formulas, which
should ideally correspond to comparatively simpler sequences that are
already easily recognized by existing packages discussed in
\sref{Section_Intro_ComparisonsOfSoftware} of the thesis below.
These existing packages may be called as subroutines to recognize the
sequences corresponding to the remainder terms in the
input polynomial coefficients after the forms of the sequence factors
expected by the user are determined by the package routines.
\section{Plan of Attack and Aims of the Thesis Research}
A significant part of the work for the thesis is a
``proof of concept'' implementation of the logic to find
polynomial sequence formulas of the form in
\eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1} based on
user input of the first several terms of the sequence.
In this implementation, the focus of the package development is in
constructing the logic to recognize the polynomial sequence formulas in the
form of \eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1}.
For example,
in the absence of obvious, or known, algorithms for the factorization of an
integer by an arbitrary sequence, the implementation of this
part of the algorithm employed by the package is effectively treated as
an oracle within the working source code.
The construction of this type of integer factorization algorithm is
motivated by the need for such algorithms in a more efficient
implementation of this package.
A more complete and detailed specification of these factorization
routines is described in the future research topics outlined in
\chref{Chapter_Conclusions}.
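As a rough indication of what such a factorization oracle must do, the snippet below brute--forces all positions $(n, k)$ at which a user--specified triangular sequence divides a given integer coefficient. This is only an illustrative sketch with hypothetical helper names, not the routine used by the package:

```python
from math import comb

def triangle_divisors(value, triangle, max_n):
    # All (n, k, cofactor) with triangle(n, k) a nonzero divisor of value,
    # searched over the finite triangle 0 <= k <= n <= max_n
    hits = []
    for n in range(max_n + 1):
        for k in range(n + 1):
            t = triangle(n, k)
            if t != 0 and value % t == 0:
                hits.append((n, k, value // t))
    return hits

# Example: 18 = binomial(3, 2)^2 * 2, so (3, 2, 2) is among the hits
hits = triangle_divisors(18, lambda n, k: comb(n, k) ** 2, 4)
assert (3, 2, 2) in hits
```

An efficient version of this search over multiple simultaneous sequence factors is exactly the open implementation question deferred to \chref{Chapter_Conclusions}.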
The plan is that later, once more of the machinery for generating the proposed
polynomial sequence formulas is in place, we may investigate
optimizations to the code together with a more efficient implementation of the
routines that factor a given integer over multiple sequence factors.
Several examples of usage of the
sequence recognition functions in the package, including figures of the
\Mm output, are given in \chref{Chapter_PkgUsageChapter}.
These examples provide both the working syntax of
\Mm programs that employ the package routines and serve to document the
capabilities of the package current at the time of this thesis draft.
\section{Software for Sequence Recognition}
\label{Section_Intro_ComparisonsOfSoftware}
\subsection{Software Packages for Sequence Recognition}
There are a number of notable existing software packages and online resources
geared towards guessing formulas for integer and semi--rational
sequences based on the forms of the first few terms of a sequence.
Notable and well--known examples include the \GFUN package for
\emph{Maple} \citep{GFUN-PKG-DOCS}, the
\ttemph{Rate} packages for \MmPlain \footnote{
See \url{http://www.mat.univie.ac.at/~kratt/rate/rate.html}.
}
\citep[Appendix A]{RATE.M-PKG-DOCS}, the more recently updated
\ttemph{Guess} package \footnote{
See
\url{http://axiom-wiki.newsynthesis.org/GuessingFormulasForSequences}.
}
for the \ttemph{FriCAS} fork of \emph{Axiom}
which includes enhancements to the previous packages documented in
\citep{AXIOM-GUESS-PKG-DOCS}, and a default, built--in function, \FSFFnPlain,
in \MmPlain.
There are still other software packages designed to perform related
operations aimed at recognizing auxiliary properties
such as identifying recurrence relations and generating functions for
sequences freely available online\footnote{
See the complete list of \emph{Algorithmic Combinatorics Software}
on the \emph{RISC} website at
\url{http://www.risc.jku.at/research/combinat/software/}.
}.
The \OEISPlain, and its email--based \ttemph{SuperSeeker} program,
provide lookup access to a large database of integer sequences, including the
integer--valued entries for the numerator and denominators of
rational sequences such as the Bernoulli numbers.
A more historical account of the development of software for
sequence recognition is provided in
\citep[\S 2]{AXIOM-GUESS-PKG-DOCS}.
\subsection{Polynomial and Summation Identities Involving the Stirling Numbers}
Notice that in the absence of some underlying structure to a sequence
(or satisfied by its generating function), guessing functions that attempt to
find closed--form expressions for an arbitrary sequence by
extrapolation from the input of its first few terms are inherently
limited in obtaining a proof to verify the correctness of the
formulas returned.
The routines in many software packages and in the algorithms
described in \citep{AEQUALSB-BOOK} are able to obtain computerized proofs
or certificates for closed--form identities obtained
from summations involving special functions.
The correctness of formulas obtained by packages such as \GFUN
follow if the generating function for a sequence is \emph{holonomic},
or equivalently, if the sequence, say $S_n$, itself satisfies a
\emph{$P$--recurrence} of the form
\begin{equation}
\label{eqn_holonomic_P-Rec_form-stmt_v1}
\widehat{p}_0(n) \cdot S_{n} + \widehat{p}_1(n) \cdot S_{n+1} + \cdots +
\widehat{p}_r(n) \cdot S_{n+r} = 0,
\end{equation}
whenever $n \geq n_0$ for some fixed $n_0$, with $r \geq 1$, and where the
coefficients, $\widehat{p}_i(n) \in \mathbb{C}\lbrack n \rbrack$,
are polynomials for each $0 \leq i \leq r$ \citep[\S B.4]{ACOMB-BOOK}.
As pointed out in \citep{GKP} and in \citep{STIRLING.M-PKG-DOCS},
unlike a number of other special sequences of interest in applications, the
Stirling numbers are \emph{not holonomic}, or do not satisfy a homogeneous
recurrence relation of the form in \eqref{eqn_holonomic_P-Rec_form-stmt_v1},
so it is reasonable to expect that
existing software to guess sequence formulas should be at least
somewhat limited in recognizing the exact forms of summations
involving factors of these sequences.
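By contrast, many classical sequences are holonomic. For a concrete instance of \eqref{eqn_holonomic_P-Rec_form-stmt_v1}, the first--order harmonic numbers satisfy $H_{n+1} = H_n + \tfrac{1}{n+1}$, which implies the $P$--recurrence $(n+2) H_{n+2} - (2n+3) H_{n+1} + (n+1) H_n = 0$. A quick exact check in Python (illustration only, not package code):

```python
from fractions import Fraction

# Exact harmonic numbers H_0, H_1, ..., H_59
H = [Fraction(0)]
for n in range(1, 60):
    H.append(H[-1] + Fraction(1, n))

# Verify the P-recurrence (n+2) H_{n+2} - (2n+3) H_{n+1} + (n+1) H_n = 0
for n in range(0, 58):
    assert (n + 2) * H[n + 2] - (2 * n + 3) * H[n + 1] + (n + 1) * H[n] == 0
```

It is precisely the failure of any such homogeneous polynomial--coefficient recurrence for the Stirling triangles that limits holonomic--based guessing software on the sums considered here.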
The \Mm package \ttemph{Stirling.m} by M. Kauers is still able to find
recurrences satisfied by many polynomial--like sums involving the
Stirling and Eulerian number triangles in cases of many known and new
summation identities.
However, the example cited in Kauers' article about the package shows
a seemingly simple polynomial--like summation involving the
Stirling numbers of the second kind for which a
recurrence relation in the form of
\eqref{eqn_holonomic_P-Rec_form-stmt_v1} fails to exist
\citep[See \S 4]{STIRLING.M-PKG-DOCS}.
This behavior offers some explanation as to the deficiency of
functions like \FSFFn in recognizing formulas for sequences
involving factors of the Stirling and Bernoulli numbers.
We now restrict our attention to constructing software routines that
recognize formulas for the class of polynomial sums of the form in
\eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1}
based on intelligent guesses as to the coefficient forms input by the user.
In this context, the package is intended to quickly assist the user in the
discovery of formulas that arise in practice,
like those motivated by the examples from
\sref{Chapter_Intro_Section_BGMotivation},
which we then are able to prove correct later by separate methods.
\subsection{Comparisons of the Packages to Existing Software Routines}
The treatment of the user--defined expected sequence factors
in finding formulas for input polynomial sequences
is to consider these expected sequence terms as
primitives in the matching formulas returned by the package.
This treatment of the user--defined expected sequence factors
as primitives in the search for matching formulas
is analogous to the handling of the closed--form functions
returned by \FSFFn in \MmPlain, such as for
scalar or constant values, powers of (polynomials in) a variable $n$,
factorial and gamma functions, or powers of a fixed constant, $c^n$.
For example, acceptable formulas returned by the package for the sequence of
generating functions for polynomial powers of $n$ may correspond to
either of the sums in the following equation involving the
\emph{Stirling numbers of the second kind},
$\gkpSII{n}{k}$, or the \emph{first--order Eulerian numbers}, $\gkpEI{n}{m}$,
when $p \in \mathbb{N}$ and $|z| < 1$
\citep[\S 7.4, \S 6.2]{GKP}:
\begin{align*}
\sum_{n=0}^{\infty} n^{p} z^n & =
\sum_{k=0}^{p} \gkpSII{p}{k} \frac{k! \cdot z^k}{(1-z)^{k+1}} =
\frac{1}{(1-z)^{p+1}} \sum_{i=0}^{p} \gkpEI{p}{i} z^{i+1}.
\end{align*}
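Extracting the coefficient of $z^n$ from both closed forms, using $[z^n]\, k!\, z^k (1-z)^{-(k+1)} = k! \binom{n}{k}$ and $[z^n]\, z^{i+1} (1-z)^{-(p+1)} = \binom{n+p-1-i}{p}$, yields the pointwise identities $n^p = \sum_k \gkpSII{p}{k} k! \binom{n}{k}$ and $n^p = \sum_i \gkpEI{p}{i} \binom{n+p-1-i}{p}$. The following Python snippet (an illustration, not package code) verifies both for small $n$ and $p$:

```python
from math import comb, factorial

def s2(n, k):
    # Stirling numbers of the second kind
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return s2(n - 1, k - 1) + k * s2(n - 1, k)

def euler1(n, k):
    # First-order Eulerian numbers <n, k>
    if n == 0: return 1 if k == 0 else 0
    if k < 0 or k >= n: return 0
    return (k + 1) * euler1(n - 1, k) + (n - k) * euler1(n - 1, k - 1)

for p in range(1, 7):
    for n in range(0, 12):
        assert n ** p == sum(s2(p, k) * factorial(k) * comb(n, k)
                             for k in range(p + 1))
        assert n ** p == sum(euler1(p, i) * comb(n + p - 1 - i, p)
                             for i in range(p))
```

The second identity is a reindexed form of Worpitzky's identity, which follows from the row symmetry of the first--order Eulerian numbers.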
The forms of sequence factors of other standard sequences, including the
Stirling numbers of the first and second kinds, common variants of the
binomial coefficients, $\binom{n}{k}$ and $\binom{n+m}{m}$, the
Eulerian number triangles, and other triangular sequences of interest in
application--specific contexts are handled similarly as primitives in the
desired formulas output by the package routines.
The factorization--based approach to determine factors of
expected sequences by the user in this package
differs from the methods employed to recognize
sequence formulas by existing sequence recognition software.
Since this method relies on user direction
as to what terms the
sequence formulas should contain, this approach is also useful in
determining formulas involving factors of difficult sequence forms that
are not easily recognized by existing software packages.
The package then employs a hybrid
of the complementary approaches noted in
\citep[\S 1]{AXIOM-GUESS-PKG-DOCS}
to the search for polynomial sequence formulas.
Specifically, the package routines employ existing sequence recognition
functions as a subroutine to process the reminder terms in the
sequence after the expected special sequence factors are
identified in the coefficients of the input polynomials.
\chapter{The Guess Polynomial Sequence Function Packages}
\label{Chapter_PkgUsageChapter}
\section{Features of the Package}
\label{Chapter_GPSFPkg_Section_Features}
\subsection{Overview}
The \GPSFPkgName package is designed to recognize formulas for
polynomial sequences in one variable based on input user observations on
factors of the polynomial coefficients.
The public function \GPSFPkgGuessFnName provided by the package
attempts to perform intelligent guessing of closed--form
summation representations for a polynomial sequence of elements,
$p_j(x) \in \mathbb{Z}\left[x\right]$,
based on the user insights as to the coefficient factors in the end
formula for the sequence and the first several polynomial terms passed as
input to the function.
Several particular concrete examples of uses of the package to obtain
formulas and other identities involving the Stirling numbers and
binomial coefficients are contained in the discussions of
\sref{subSection_PkgUsageChapter_Full} and
\sref{subSection_PkgUsageChapter_MoreExamples_Full} of the
thesis below.
\subsection{Specification of the Package Routines and
Polynomial Sequence Formulas}
The primary package function \GPSFPkgGuessFnName provided to the user is
implemented in \Mm code in such a way that it is able to handle multiple
coefficient factors of sequences expected by the user.
The examples provided as documentation for the
package focus on the particular cases of \quoteemph{single--factor} and
\quoteemph{double--factor} coefficient formulas for the input polynomials.
In particular, the package search routines are of interest in
obtaining sequence formulas corresponding to the following
pair of summation formulas:
\begin{align}
\label{eqn_Polyjx_single_factor_seq_formula_template_v1}
\Poly_j(x) & := \sum_{i=0}^{j+j_0}
\genfracSeq{\widetilde{u}_1(j) + u_1 i}{\widetilde{\ell}_1(j) + \ell_1 i}_1
\times \RS_1(i) \RS_2(j+j_0-i) \cdot x^i \\
\label{eqn_Polyjx_double_factor_seq_formula_template_v2}
\Poly_j(x) & := \sum_{i=0}^{j+j_0}
\genfracSeq{\widetilde{u}_1(j) + u_1 i}{\widetilde{\ell}_1(j) + \ell_1 i}_1
\genfracSeq{\widetilde{u}_2(j) + u_2 i}{\widetilde{\ell}_2(j) + \ell_2 i}_2
\times \RS_1(i) \RS_2(j+j_0-i) \cdot x^i .
\end{align}
The polynomials in
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1} and
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2} correspond to the
single--factor and double--factor sequence formula templates, respectively.
In the previous equations,
$j, j_0 \in \mathbb{N}$, $u_i, \ell_i \in \mathbb{Z}$, the
functions $\widetilde{u}_i(j)$ and $\widetilde{\ell}_i(j)$ denote some
prescribed application--dependent functions of the sequence index, and the
form of the remaining sequences in the polynomial coefficient
formulas are denoted by the functions $\RS_1(\cdot)$ and $\RS_2(\cdot)$.
The package formula search routines currently handle only linear
functions of the summation index inputs.
Also notice that it is assumed that at least one of the
$\RS_i(\cdot)$ sequence functions is identically one, and
that a formula for the remaining function is either easily obtained by an
existing sequence recognition routine such as
\MmPlain's \FSFFn function, or may be
later identified with a relevant entry in the \OEIS database
\citep{OEIS-UPDATED-URL}.
\subsection{Special Triangular Sequence Factors
Supported by the Package}
The built--in subpackage \GSDPkgName included with the current package
source code provides an \quotetext{out of the box}
implementation of several triangular sequences of
interest in my research and that are important in motivating the
development of this package.
In the current implementation of the package, these user--specified sequences
identified in the package routines include factors of the
(signed and unsigned) Stirling number triangles, variations of
triangular sequences derived from the binomial coefficients, and the
first and second--order Eulerian number triangles defined recursively as in
\citep[\S 6.1, 6.2]{GKP} \citep[\cf \S 26.8, 26.14]{NISTHB}.
Each of these respective sequences corresponds to a special case of the
following triangular recurrence relation where
$\alpha, \beta, \gamma, \alpha^{\prime}, \beta^{\prime}, \gamma^{\prime} \in \mathbb{Z}$
\citep[\S 5, \S 6.1--6.2]{GKP}:
\begin{equation}
\notag
\genfracSeq{n}{k} = \left(\alpha n+\beta k+\gamma\right) \genfracSeq{n-1}{k} +
\left(\alpha^{\prime} n+\beta^{\prime} k+\gamma^{\prime}\right)
\genfracSeq{n-1}{k-1} + \Iverson{n = k = 0}.
\end{equation}
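As a concrete illustration of this six--parameter recurrence, the following
standalone \emph{Python} sketch (separate from the package source code) builds a
triangle from a choice of
$(\alpha, \beta, \gamma, \alpha^{\prime}, \beta^{\prime}, \gamma^{\prime})$ and
recovers the special--case triangles named above:

```python
from functools import lru_cache

def triangle(alpha, beta, gamma, alphap, betap, gammap):
    """Build T(n, k) from the recurrence
    T(n,k) = (alpha*n + beta*k + gamma) T(n-1,k)
           + (alphap*n + betap*k + gammap) T(n-1,k-1) + [n = k = 0]."""
    @lru_cache(maxsize=None)
    def T(n, k):
        if n == 0 and k == 0:
            return 1            # the Iverson bracket boundary case
        if n < 0 or k < 0 or k > n:
            return 0
        return ((alpha * n + beta * k + gamma) * T(n - 1, k)
                + (alphap * n + betap * k + gammap) * T(n - 1, k - 1))
    return T

# Binomial coefficients:  C(n,k) = C(n-1,k) + C(n-1,k-1)
binom = triangle(0, 0, 1, 0, 0, 1)
# Stirling numbers of the second kind:  S(n,k) = k S(n-1,k) + S(n-1,k-1)
stirling2 = triangle(0, 1, 0, 0, 0, 1)
# Unsigned Stirling numbers of the first kind:  c(n,k) = (n-1) c(n-1,k) + c(n-1,k-1)
stirling1 = triangle(1, 0, -1, 0, 0, 1)
# First-order Eulerian numbers:  E(n,k) = (k+1) E(n-1,k) + (n-k) E(n-1,k-1)
eulerian1 = triangle(0, 1, 1, 1, -1, 0)

print([binom(5, k) for k in range(6)])      # [1, 5, 10, 10, 5, 1]
print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]
print([eulerian1(4, k) for k in range(4)])  # [1, 11, 11, 1]
```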
\Mm provides several standard, built--in functions for the
(signed) Stirling numbers of the first and second kinds, and for the
binomial coefficients. The related
\Mm package \url{Stirling.m} authored by Manuel Kauers
\footnote{
See also the \Mm package documentation at
\url{http://www.risc.jku.at/research/combinat/software/ergosum/RISC/Stirling.html}.
}
further extends the default functions for the Stirling numbers and
defines additional functions that implement the Eulerian number
triangles of both orders \citep{STIRLING.M-PKG-DOCS}.
\subsection{Some Restrictions on the Form of the Input Polynomials}
The package function \GPSFPkgGuessFnName is designed to find formulas for
polynomials, $p_j(x)$, whose coefficients are integer--valued.
The guessing function is, however, able to find formulas for semi--rational
polynomial sequences in $\mathbb{Q}\left[x\right]$ provided that the
first several terms of the sequence input to the function
\GPSFPkgGuessFnName are
normalized by a user guess function, $U_{\guess}(j, i)$, as
described in \sref{subSection_PkgUsageChapter_SpecUserGuessFns}
of the thesis below.
The difficulties in handling formulas for polynomials with rational
coefficients arise in determining strictly integer--valued factors of
rational--valued coefficient forms.
These implementation issues are outlined in
\sref{subSection_FutureFeaturesChapter_RationalSeqTransforms}.
Several transformations that pre--process polynomials with
rational coefficients are also suggested in that section
as features to be implemented in a future revision of the package.
\section{Installation and Usage of the Package Routines}
\label{subSection_PkgUsageChapter_Full}
\subsection{Installation}
\subsubsection{Mathematica Package Installation}
The package requires a working installation of \Mm and
a copy of the two source files \\
\GPSFPkgName and \GSDPkgName provided on the \emph{SageMathCloud} project page
at the URL listed in the next section.
To load the package under Linux, suppose that the package files are
located in \url{~/guess-polys-pkg}. The package is then loaded by running
\begin{center}
\url{<<"~/guess-polys-pkg/GuessPolySequenceFormulas.m"}
\end{center}
A graphical summary of the short description and revision information for the
package is printed when the package is successfully loaded from within a
\Mm notebook.
\subsubsection{Sage Package Installation}
The \Mm package routines accompanying the original Master's thesis manuscript
from 2014 now have a counterpart in the open--source \emph{SageMath}
application. The \emph{Python} source code to this updated software for the
\emph{Sage} environment is freely available for non--commercial usage online at
\url{https://github.com/maxieds/GuessPolynomialSequences}.
Provided that there is a correctly functioning version of
\emph{Sage}, or a user account on the \emph{SageMathCloud} servers,
installation of the package is as simple as copying all of the
\emph{Python}, or \texttt{*.py}, files into the current working directory
for \emph{Sage}.
\subsection{Typical Usage}
\label{subSection_PkgUsageChapter_TypicalUsage_and_Examples}
The examples given in this section illustrate both the syntax and
utility of the sequence recognition routines provided by the
package functions. Notice that the formulas returned by the function
are pure functions in \Mm with three ordered parameters:
1) The polynomial sequence index; 2) An input variable that denotes the
summation index of the formula; and 3) A parameter that specifies the
polynomial variable.
The graphical printing of the formula data provided in the
figures given in this section is disabled by
setting the runtime option \url{PrintFormulas->False}.
The runtime option \url{FSFFunction} is also available to replace the
default \Mm function \FSFFn by an alternate sequence handling function to
process the formulas for the remaining sequences in the
polynomial coefficient terms, as well as the
formulas for the coefficient indices in the
polynomial index $j$ and the upper index of summation in
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1} and
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}.
The most common and useful of these option settings are documented in the
examples below and in the sections of this chapter.
\subsubsection{Examples: Coefficient Factors Involving the
Stirling Numbers of the First Kind}
Consider the following pair of sums resulting from the
expansions of the binomial coefficients as polynomials in $n$
\begin{align}
\notag
\binom{n}{k} & =
\frac{n^{\underline{k}}}{k!} =
\frac{1}{k!} \times n \cdot (n-1) \cdot (n-2) \cdots (n-k+1) \\
\label{eqn_BinomCoeff_FallingFactFn_PolyExps-stmt_v1}
& =
\frac{1}{k!} \times \sum_{i=0}^{k} \gkpSI{k}{i} (-1)^{k-i} n^{i} \\
\notag
\binom{n+m}{m} & =
\frac{n^{\overline{m+1}}}{n \cdot m!} =
\frac{1}{m!} \times (n+1) \cdot (n+2) \cdots (n+m-1) \cdot (n+m) \\
\label{eqn_BinomCoeff_RisingFactFn_PolyExps-stmt_v2}
& =
\frac{1}{m!} \times \sum_{i=0}^{m} \gkpSI{m+1}{i+1} n^{i},
\end{align}
where $n^{\underline{k}}$ denotes the \emph{falling factorial function} and
$n^{\overline{m}}$ is the \emph{rising factorial function}
in the respective expansions of the previous equations
\citep[\S 2.6; \S 5.1]{GKP} \citep[\cf \S 26.1]{NISTHB}.
The sums for the binomial coefficient expansions
involving the Stirling numbers in each of
\eqref{eqn_BinomCoeff_FallingFactFn_PolyExps-stmt_v1} and
\eqref{eqn_BinomCoeff_RisingFactFn_PolyExps-stmt_v2}
are known closed--form identities for the
rising and falling factorial functions, respectively, stated in
\citep[\S 6.1]{GKP} \citep[\cf \S 26.8]{NISTHB}.
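The two Stirling number expansions above are easy to check numerically.
The following standalone \emph{Python} sketch (independent of the
package routines) verifies both identities for small indices using the
unsigned Stirling number recurrence:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling1u(n, k):
    """Unsigned Stirling numbers of the first kind."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0 or k > n:
        return 0
    return (n - 1) * stirling1u(n - 1, k) + stirling1u(n - 1, k - 1)

def binom_via_falling(n, k):
    # C(n,k) = (1/k!) * sum_i [k; i] (-1)^(k-i) n^i  (falling factorial expansion)
    return sum(stirling1u(k, i) * (-1) ** (k - i) * n ** i
               for i in range(k + 1)) // factorial(k)

def binom_via_rising(n, m):
    # C(n+m,m) = (1/m!) * sum_i [m+1; i+1] n^i  (rising factorial expansion)
    return sum(stirling1u(m + 1, i + 1) * n ** i
               for i in range(m + 1)) // factorial(m)

assert all(binom_via_falling(n, k) == comb(n, k)
           for n in range(10) for k in range(n + 1))
assert all(binom_via_rising(n, m) == comb(n + m, m)
           for n in range(10) for m in range(8))
```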
To see how the package can assist a user in rediscovering these
identities, consider the respective \Mm outputs given in
Figure \ref{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v1} and
Figure \ref{figure_BinomCoeff_RisingFactFn_PolyExp_Formula_v2}.
A related example involving the polynomial sequence in
\eqref{eqn_BinomCoeff_FallingFactFn_PolyExps-stmt_v1} is shown in
Figure \ref{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v3}.
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{binom-poly-exp-formula-v1.pdf}}
\end{center}
\caption{Computing a Polynomial Formula for the
Falling Factorial Function (\Mm)}
\label{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Falling factorial polynomials
sage: from GuessPolynomialSequenceFunction import *
sage: n = var('n')
sage: poly_seq_func = lambda k: expand(simplify(binomial(n, k) * factorial(k)))
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, n, index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{Computing a Polynomial Formula for the
Falling Factorial Function (\emph{Sage})}
\label{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v1.2}
\end{figure}
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{binom-poly-exp-formula-v2.pdf}}
\end{center}
\caption{Computing a Polynomial Formula for the
Rising Factorial Function (\Mm)}
\label{figure_BinomCoeff_RisingFactFn_PolyExp_Formula_v2}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Rising factorial polynomials
sage: from GuessPolynomialSequenceFunction import *
sage: n = var('n')
sage: poly_seq_func = lambda m: expand(simplify(binomial(n + m, m) * factorial(m)))
sage: pseq_data = map(poly_seq_func, range(1, 4))
sage: guess_polynomial_sequence(pseq_data, n, index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{Computing a Polynomial Formula for the
Rising Factorial Function (\emph{Sage})}
\label{figure_BinomCoeff_RisingFactFn_PolyExp_Formula_v2.2}
\end{figure}
\subsubsection{Example: Multiple Polynomial Sequence
Formulas Derived from Symmetric Sequences}
The package sequence recognition routines are able to find formulas for
polynomials involving the \emph{binomial coefficients}, $\binom{n}{k}$, and the
\emph{first--order Eulerian numbers}, $\gkpEI{n}{m}$.
These sequences both have symmetry in each row of the corresponding
triangles that satisfy the following pair of reflection identities
where $n,k,m \in \mathbb{N}$ \citep[\S 5, \S 6.2]{GKP}:
\begin{equation}
\notag
\binom{n}{k} = \binom{n}{n-k}
\qquad \text{ and } \qquad
\gkpEI{n}{m} = \gkpEI{n}{n-1-m}.
\end{equation}
The examples given in this section demonstrate the multiple formulas
obtained by the package for polynomial sequences involving these triangles
that result from the coefficient symmetry noted in the forms of the
previous equation.
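Both reflection identities can be spot--checked numerically. The following
standalone \emph{Python} sketch (not part of the package) verifies the
row--wise symmetry of each triangle over its first several rows:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def eulerian1(n, k):
    """First-order Eulerian numbers E(n, k) by their standard recurrence."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k < 0 or k > n:
        return 0
    return (k + 1) * eulerian1(n - 1, k) + (n - k) * eulerian1(n - 1, k - 1)

# Row-wise reflection identities for both triangles
assert all(comb(n, k) == comb(n, n - k)
           for n in range(10) for k in range(n + 1))
assert all(eulerian1(n, m) == eulerian1(n, n - 1 - m)
           for n in range(1, 10) for m in range(n))
```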
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{binom-derivop-formulas-v1.pdf}}
\end{center}
\caption{A Sum Involving Derivative Operators and
Squares of the Binomial Coefficients (\Mm)}
\label{figure_binom-derivop-formulas-v1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Binomial squared difference operator identity
sage: from GuessPolynomialSequenceFunction import *
sage: n, i, w = var('n i w')
sage: poly_seq_func = lambda d: sum((binomial(d, i) ** 2) * factorial(d - i) * (n ** i), i, 0, d)
sage: pseq_data = map(poly_seq_func, range(1, 8))
sage: guess_polynomial_sequence(pseq_data, n, seq_factors = ["Binom2"], index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{A Formula Involving Derivative Operators and
Squares of the Binomial Coefficients (\emph{Sage})}
\label{figure_binom-derivop-formulas-v1.2}
\end{figure}
The first example
corresponds to an identity involving squares of the
binomial coefficients and the \emph{derivative operator},
$D^{(j)}\left[F(z)\right] \equiv F^{(j)}(z)$, of a function, $F(z)$,
whose $j^{th}$ derivative with respect to $z$ exists for some
$j \in \mathbb{N}$.
In particular,
suppose that the function $F(z)$ denotes the
ordinary generating function of the sequence, $\langle f_n \rangle$, and the
function has $j^{th}$ derivatives of orders
$j \in [0, d] \subseteq \mathbb{N}$. Then for $d \in \mathbb{Z}^{+}$, the
generating function for the modified sequence,
$\langle \frac{(n+d)!}{n!} f_n \rangle$, satisfies the formula
\begin{equation}
\label{eqn_BinomSquared_npdFactOvernFact_DerivOpIdent-stmt_v1}
\sum_{n=0}^{\infty} \frac{(n+d)!}{n!} f_n z^n =
\sum_{n=0}^{\infty} (n+1) \cdots (n+d) \times f_n z^n =
\sum_{i=0}^{d} \binom{d}{i}^2 (d-i)! \times z^i D^{(i)}\left[F(z)\right].
\end{equation}
Notice that a proof of the formula given in
\eqref{eqn_BinomSquared_npdFactOvernFact_DerivOpIdent-stmt_v1}
follows easily by induction on $d \geq 1$.
A user may obtain the first several values of this sequence empirically by
evaluating \MmPlain's \url{GeneratingFunction} for the modified
sequence terms over the first few values of $d \geq 1$.
Figure \ref{figure_binom-derivop-formulas-v1} shows a use of the
package in guessing a formula for
\eqref{eqn_BinomSquared_npdFactOvernFact_DerivOpIdent-stmt_v1}
where the polynomial variable ($w^i$ in the figure listing)
corresponds to the operator form of $z^i D^{(i)}$, and where the
\emph{Pochhammer symbol}, $(1)_{j-i} \equiv (j-i)!$.
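For the special case $F(z) = 1/(1-z)$, so that $f_n \equiv 1$, comparing the
coefficients of $z^n$ on both sides of
\eqref{eqn_BinomSquared_npdFactOvernFact_DerivOpIdent-stmt_v1} reduces the
identity to
$\frac{(n+d)!}{n!} = \sum_{i=0}^{d} \binom{d}{i}^2 (d-i)!\, i! \binom{n}{i}$.
The following standalone \emph{Python} check (independent of the package)
confirms this reduced form for small parameters:

```python
from math import comb, factorial

def lhs(n, d):
    # coefficient of z^n on the left side when f_n = 1, i.e., F(z) = 1/(1-z)
    return factorial(n + d) // factorial(n)

def rhs(n, d):
    # coefficient of z^n on the right side, using D^i[1/(1-z)] = i!/(1-z)^(i+1)
    # and [z^n] z^i / (1-z)^(i+1) = C(n, i)
    return sum(comb(d, i) ** 2 * factorial(d - i) * factorial(i) * comb(n, i)
               for i in range(d + 1))

assert all(lhs(n, d) == rhs(n, d) for n in range(12) for d in range(6))
```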
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{E1-geom-poly-powers-idents-v1.pdf}}
\end{center}
\caption{Ordinary Generating Functions of Polynomial Powers (\Mm)}
\label{figure_E1-geom-poly-powers-idents-v1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## First-order Eulerian numbers in the OGFs of the polylogarithm
sage: ## functions, Li_{-m}(z), for natural numbers m >= 1
sage: from GuessPolynomialSequenceFunction import *
sage: n, z = var('n z')
sage: poly_seq_func = lambda m: expand(factor(sum((n ** m) * (z ** n), n, 0, infinity)) * ((1 - z) ** (m + 1))).subs(z = n)
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, n, seq_factors = ["E1"], index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{Ordinary Generating Functions of Polynomial Powers (\emph{Sage})}
\label{figure_E1-geom-poly-powers-idents-v1.2}
\end{figure}
The next example in this section corresponds to the sequence of
ordinary generating functions for polynomial powers of $n$,
$\sum_{n \geq 0} n^m z^n$ for $m \in \mathbb{N}$.
These generating functions satisfy well--known polynomial identities
involving the \emph{Stirling numbers of the second kind},
$\gkpSII{n}{k}$, and the first--order Eulerian numbers, $\gkpEI{n}{m}$,
stated as follows \cite[\cf \S 7.4]{GKP}:
\begin{align}
\label{eqn_PolyPowersOGFs_S2E1_sum_formulas-stmts_v1}
\sum_{n=0}^{\infty} n^{m} z^n & =
\sum_{k=0}^{m} \gkpSII{m}{k} \frac{k! \cdot z^k}{(1-z)^{k+1}} =
\frac{1}{(1-z)^{m+1}} \sum_{i=0}^{m} \gkpEI{m}{i} z^{i+1}.
\end{align}
The second example cited in this section focuses on the second
expansion in \eqref{eqn_PolyPowersOGFs_S2E1_sum_formulas-stmts_v1}
given in terms of the Eulerian number triangle.
Figure \ref{figure_E1-geom-poly-powers-idents-v1}
shows the output of the package on the second polynomial sequence
scaled by a multiple of $(1-z)^{m+1}$.
As with the first example, the row--wise symmetry in the Eulerian number
triangle results in the two separate formulas in the figure.
\subsubsection{Examples: Two--Factor Polynomial Sequence Formulas}
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{S1S2-double-factor-formula-example-v1.pdf}}
\end{center}
\caption{A Double--Factor Sequence Example Involving the
Stirling Number Triangles}
\label{figure_S1S2-double-factor-formula-example-v1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics{S1Binom-double-factor-formula-example-v2.pdf}}
\end{center}
\caption{A Double--Factor Sequence Example Involving the
Stirling Numbers of the First Kind and the
Binomial Coefficients}
\label{figure_S1Binom-double-factor-formula-example-v2}
\end{figure}
The examples cited in this section correspond to the two--factor
polynomial sequence formulas in the form of
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}.
The first example given in
Figure \ref{figure_S1S2-double-factor-formula-example-v1}
shows the output of the package function \\
\GPSFPkgGuessFnName for a formula involving the
Stirling numbers of the first and second kinds. The second example given in
Figure \ref{figure_S1Binom-double-factor-formula-example-v2}
shows the pair of formulas output for a sequence formula involving the
Stirling numbers of the first kind and the binomial coefficients.
The use of the runtime option \url{IndexOffsetPairs} in both of these
examples is explained in more detail by
\sref{subSection_PkgUsageChapter_Troubleshooting_PotentialIssues}.
\subsection{User Guess Functions}
\label{subSection_PkgUsageChapter_SpecUserGuessFns}
The guessing routines implemented in the package rely on some intuition on the
part of the user to determine a general template for the end formulas for
an input polynomial sequence with coefficients over the integers.
The user may specify an additional ``\emph{user guess function}'' that is
employed by the package to pre--process the coefficients of the
polynomial sequence terms passed to the function \GPSFPkgGuessFnName.
This construction allows semi--rational, and even non--polynomial, functions
in the input variable to be processed by the package functions.
\subsubsection{Example: A Second Formula for the Falling Factorial Function}
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics{binom-poly-exp-formula-v12.pdf}}
\end{center}
\caption{A Formula for the Falling Factorial Function
by a User Guess Function (\Mm)}
\label{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v3}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Another exponential falling factorial polynomial example
sage: ## with a user guess function
sage: from GuessPolynomialSequenceFunction import *
sage: n, z = var('n z')
sage: poly_seq_func = lambda k: binomial(n, k)
sage: user_guess_func = lambda n, k: 1 / factorial(k)
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, n, user_guess_func = user_guess_func, index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{Computing a Formula for the Falling Factorial Function
by a User Guess Function (\emph{Sage})}
\label{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v3.2}
\end{figure}
A first example of the syntax for guessing the polynomial expansions of the
binomial coefficient identity from
\eqref{eqn_BinomCoeff_FallingFactFn_PolyExps-stmt_v1} is provided in
Figure \ref{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v3}.
Notice that this example is similar to the first form of the
sequence formula computed by the package in
Figure \ref{figure_BinomCoeff_FallingFactFn_PolyExp_Formula_v1},
except that in this case the input sequence is not normalized by a factor
of $k$ to make the polynomial coefficients strictly integer--valued.
A similar computation is employed to discover an analogous sum for the
non--normalized sequence formula corresponding to the
rising factorial function from
Figure \ref{figure_BinomCoeff_RisingFactFn_PolyExp_Formula_v2}.
\subsubsection{Example: An Exponential Generating Function for the
Binomial Coefficients}
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{binom-EGF-formula-v1.pdf}}
\end{center}
\caption{An Exponential Generating Function for the
Binomial Coefficients (\Mm)}
\label{figure_binom_EGF_formula_v1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Exponential generating functions for the symmetric--indexed
sage: ## binomial coefficients
sage: from GuessPolynomialSequenceFunction import *
sage: n, z = var('n z')
sage: poly_seq_func = lambda k: sum(binomial(n + k, k) * (z ** n) / factorial(n), n, 0, infinity) * exp(-z)
sage: user_guess_func = lambda n, k: 1 / factorial(k)
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, z, seq_factors = ["Binom2"], user_guess_func = user_guess_func, index_offset = 1);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{An Exponential Generating Function for the
Binomial Coefficients (\emph{Sage})}
\label{figure_binom_EGF_formula_v1.2}
\end{figure}
A sequence of exponential generating functions for the symmetric form of the
binomial coefficients, $\binom{n+m}{m}$, taken over $m \in \mathbb{N}$
satisfies the formula given in the following equation
\citep[\cf \S 7.2]{GKP}:
\begin{equation}
\label{eqn_binom_EGF_formula_v1}
\EGF_z\left(\frac{1}{(1-z)^{m+1}}\right) \equiv
\sum_{n=0}^{\infty} \binom{n+m}{n} \frac{z^n}{n!} \equiv
\sum_{s=0}^{m} \binom{m}{s} \frac{e^{z} \cdot z^s}{s!}.
\end{equation}
A proof of this identity is given using \emph{Vandermonde's convolution}
identity for the binomial coefficients
\citep[\S Table 174; \S 5.2; \cf eq. (5.22)]{GKP}.
Figure \ref{figure_binom_EGF_formula_v1}
shows a use of the package to guess the formula in
\eqref{eqn_binom_EGF_formula_v1} by providing
a user guess function that effectively removes the
factor of $e^{z}$ in the expected formula, and that cancels out the
coefficient factors of $1/s!$ to produce an input sequence with
integer coefficients.
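Equating the coefficients of $z^n / n!$ on both sides of
\eqref{eqn_binom_EGF_formula_v1} reduces the identity to the
Vandermonde--type convolution
$\binom{n+m}{n} = \sum_{s=0}^{m} \binom{m}{s} \binom{n}{s}$, which the
following standalone \emph{Python} check (independent of the package)
confirms:

```python
from math import comb

# Equating the coefficients of z^n / n! in the EGF identity gives the
# Vandermonde-type convolution C(n+m, n) = sum_s C(m, s) * C(n, s).
assert all(comb(n + m, n) == sum(comb(m, s) * comb(n, s)
                                 for s in range(m + 1))
           for n in range(12) for m in range(10))
print("verified for n < 12, m < 10")
```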
\subsection{Troubleshooting Possible Issues}
\label{subSection_PkgUsageChapter_Troubleshooting_PotentialIssues}
\subsubsection{Inputting an Insufficient Number of Sequence Elements}
There are a couple of issues that can arise in running the package
routines when too few values of the sequence are passed to the
\GPSFPkgGuessFnName function. The first of these is that
\FSFFn may require a lower bound on the number of sequence values
necessary to compute formulas for the remaining sequence terms.
This can occur, for example, when the remaining sequence is a
polynomial in the summation index.
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{insufficient-elements-example-v1.pdf}}
\end{center}
\caption{Troubleshooting an Insufficient Number of Sequence Elements}
\label{figure_insufficient_elements_example_v1}
\end{figure}
Another quirk of \MmPlain's built--in \FSFFn is that it may return a
sequence formula matching a recurrence relation that is
accurate only for the few sequence elements input to the function.
An example of this behavior is illustrated by the output given in
Figure \ref{figure_insufficient_elements_example_v1}.
In most cases, the problem is resolved by simply passing more
polynomials from the sequence, usually at least $6$, but possibly
$8$ or more elements from the sequence.
The package is configured to warn users when fewer than $6$ initial terms
are input to the function and no matching formulas are found.
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{triangle-num-rows-example-v1.pdf}}
\end{center}
\caption{Troubleshooting Runtime Settings of the
\texttt{TriangularSequenceNumRows} Option}
\label{figure_triangle-num-rows-example-v1}
\end{figure}
\subsubsection{Number of Rows for the Expected Triangular Sequence Factors}
In some cases, the package functions may not be able to obtain a formula
for an input sequence due to an insufficient setting for the number of
rows to consider for the expected triangular sequence factors.
The runtime option to change the number of rows used to detect the
factors of the expected triangular sequence is
\url{TriangularSequenceNumRows}
(the current default setting is \url{TriangularSequenceNumRows->12}).
Figure \ref{figure_triangle-num-rows-example-v1}
provides an example involving the Stirling numbers of the second kind
where the upper index of the sequence depends quadratically on the
polynomial index $j$.
In this example, the package routines are unable to obtain a formula when the
runtime option is set to \url{TriangularSequenceNumRows->24}, but
correctly find the sequence formula when the option is reset to the
higher value of \url{TriangularSequenceNumRows->72}.
Notice that choosing a significantly higher default setting for this
option may result in much slower running times, especially if the
expected triangular sequence factors contain a large number of $1$--valued
entries, for example, as in the Stirling numbers of the first kind,
binomial coefficient, and first--order Eulerian number triangles.
\subsubsection{Handling Long Running Times with Multiple Sequence Factors}
The package function \GPSFPkgGuessFnName is able to return
sequence formulas in the single--factor form given in
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1}
in a reasonable amount of running time.
As suggested in the double--factor sequence examples of the form in
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2} from
Figure \ref{figure_S1S2-double-factor-formula-example-v1} and
Figure \ref{figure_S1Binom-double-factor-formula-example-v2}, the
runtime option \url{IndexOffsetPairs} is needed to speed up the
computations involved in these sequence cases.
The \url{IndexOffsetPairs} option is defined as a list of lists of the form
\begin{equation}
\label{eqn_IndexOffsetPairs_option_element_forms_v1}
\mathtt{
\{ \{u_1, \ell_1\}, \{u_2, \ell_2\}, \ldots, \{u_r, \ell_r\} \}
},
\end{equation}
where $r \geq 1$ denotes the expected number of sequence factors
involved in the search for the sequence formulas by \GPSFPkgGuessFnNamePlain.
In the examples cited in
Figure \ref{figure_S1S2-double-factor-formula-example-v1},
Figure \ref{figure_S1Binom-double-factor-formula-example-v2}, and in the
template form of \eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}, the
value of $r$ corresponds to $r := 2$.
For a fixed choice of $r \geq 1$,
each element of the list defined by \url{IndexOffsetPairs} passed in the
form of \eqref{eqn_IndexOffsetPairs_option_element_forms_v1}
corresponds to a search for a sequence formula of the form
\begin{align}
\notag
\Poly_j(x) & := \sum_{i=0}^{j+j_0} \left(
\prod_{t=1}^{r}
\genfracSeq{\widetilde{u}_t(j) + u_t i}{\widetilde{\ell}_t(j) + \ell_t i}_t
\right)
\times \RS_1(i) \RS_2(j+j_0-i) \cdot x^i.
\end{align}
Thus resetting the value of this option at runtime can speed up the
search for matching formulas in the cases of multiple expected
sequence factors by restricting the number of index offset pairs
considered relative to the default enumeration of these pair values.
\section{More Examples of Polynomial Sequence Types Recognized by the Packages}
\label{subSection_PkgUsageChapter_MoreExamples_Full}
The examples cited in this section are intended to document further
forms of the polynomial sequence types that the package is able to
recognize. These examples include handling polynomial sequence formulas
that depend on arithmetic progressions of indices,
coefficients that contain symbolic data, and examples of sequence formulas
obtained by the package routines when the expected sequence factors do not
depend on the summation index, i.e., when the factors only depend on the
polynomial sequence index.
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{arith-progs-of-coeffs-example-v1.pdf}}
\end{center}
\caption{Handling Arithmetic Progressions of Indices}
\label{figure_arith-progs-of-coeffs-example-v1}
\end{figure}
\subsection{Example: Arithmetic Progressions of Coefficient Indices}
The package function \GPSFPkgGuessFnName can be configured to search for
sequence formulas involving arithmetic progressions of the summation
index, $f(j)+ai$, for values besides $a := \pm 1$ by resetting the
runtime option \url{IndexMultiples}. The default setting of this option is
\url{IndexMultiples->{0,1}}.
Figure \ref{figure_arith-progs-of-coeffs-example-v1}
provides an example of recognizing sequence formulas involving squares of the
binomial coefficients where the upper index of the triangle does not
depend on the summation index (a setting of $a := 0$) and where the
lower triangle index involves an arithmetic progression of the summation
index with $a := \pm 3$.
Related sequence formulas are recognized by setting the runtime value of this
option to a list of test values that is some subset of the natural numbers.
Notice that if the list of values for the option \url{IndexMultiples}
does not contain $0$, the package routines will not find formulas like those
given in Figure \ref{figure_arith-progs-of-coeffs-example-v1}
where the upper index of the expected triangle factors only depends on the
polynomial sequence index ($j$ in the figure examples).
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics{symbolic-coeff-data-example-v2.pdf}}
\end{center}
\caption{Recognizing Sequence Formulas Involving Symbolic Coefficients}
\label{figure_symbolic-coeff-data-example-v2}
\end{figure}
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics{symbolic-coeff-data-example-v3.pdf}}
\end{center}
\caption{A Second Formula Involving Square Index Powers of
Symbolic Coefficients}
\label{figure_symbolic-coeff-data-example-v3}
\end{figure}
\subsection{Examples: Formulas Involving Symbolic Coefficient Data}
The function \GPSFPkgGuessFnName can be configured to search for
formulas where the input coefficients of the polynomial sequence contain
non--numeric factors of symbolic data through the runtime option
\url{AllowSymbolicData}.
Figure \ref{figure_symbolic-coeff-data-example-v2} and
Figure \ref{figure_symbolic-coeff-data-example-v3}
provide examples of sequence formulas involving non--numeric, symbolic terms,
named $a,b,c,q,r$, that are recognized by the package by passing \\
\url{AllowSymbolicData->True} to the \GPSFPkgGuessFnName function at runtime.
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{other-examples-v1.pdf}}
\end{center}
\caption{Expected Sequence Factors Independent of the Sum Index}
\label{figure_other-examples-v1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\fbox{\includegraphics[width=0.95\textwidth]{other-examples-v2.pdf}}
\end{center}
\caption{Another Example of Expected Sequence Factors
Independent of the Sum Index}
\label{figure_other-examples-v2}
\end{figure}
\subsection{Examples: Recognition of Other Sequence Formulas with the
\Mm Package}
Figure \ref{figure_other-examples-v1} and
Figure \ref{figure_other-examples-v2}
cite two additional examples of sequence formulas that the package is able to
recognize when the triangular sequence factors expected by the user do not
depend on the summation index, $i$, only the polynomial sequence index, $j$.
In the first example given in
Figure \ref{figure_other-examples-v1}, the expected binomial coefficient
factor corresponds to a polynomial in $j$.
In the second example given in
Figure \ref{figure_other-examples-v2}, the expected Stirling number
factor corresponds to an expansion in terms of $r$--order harmonic numbers,
$H_{j+2}^{(r)}$, that is reported as the factor of the original
Stirling number sequence.
\subsection{Examples: Recognition of Other Polynomial Sum Identities with the
\emph{Sage} Package Implementation}
\subsubsection{Example: A Legendre Polynomial Identity Involving
Squares of the Binomial Coefficients}
A finite polynomial sum over the squared powers of the binomial
coefficients is expressed through the Legendre polynomials, $P_n(x)$, and
its ordinary generating function in two variables in the following forms
\citep[\S 18]{NISTHB}:
\begin{align*}
S_n(z) & := \sum_{k=0}^{n} \binom{n}{k}^2 z^k =
(1-z)^n P_n\left(\frac{1+z}{1-z}\right) =
[t^n] \left(\frac{1}{\sqrt{1 - 2(1+z) t + (1-z)^2 t^2}}\right).
\end{align*}
Alternatively, we may obtain information about a closed--form sum for the
Legendre polynomials over these polynomial inputs
through a known recurrence relation for the sums, $S_n(z)$, given by
\cite[p. 543]{GKP}
\begin{equation*}
(n+1) (1-z)^2 S_n(z) - (2n+3)(z+1) S_{n+1}(z) + (n+2) S_{n+2}(z) = 0.
\end{equation*}
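This recurrence is straightforward to check for the sums $S_n(z)$ directly.
The following standalone \emph{Python} sketch (independent of the package)
verifies it exactly at a few rational arguments using the
\texttt{fractions} module:

```python
from fractions import Fraction
from math import comb

def S(n, z):
    """S_n(z) = sum_k C(n, k)^2 z^k."""
    return sum(Fraction(comb(n, k)) ** 2 * z ** k for k in range(n + 1))

# Check (n+1)(1-z)^2 S_n(z) - (2n+3)(1+z) S_{n+1}(z) + (n+2) S_{n+2}(z) = 0
for z in (Fraction(1, 3), Fraction(-2, 7), Fraction(5)):
    for n in range(8):
        residual = ((n + 1) * (1 - z) ** 2 * S(n, z)
                    - (2 * n + 3) * (1 + z) * S(n + 1, z)
                    + (n + 2) * S(n + 2, z))
        assert residual == 0
```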
Figure \ref{figure_LegendrePoly_SumFormula_formula_v1.2}
provides a listing of \emph{Sage} commands using the new package
implementation in \emph{Python} to obtain a sequence formula for the
right--hand--side polynomial expansions.
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## A Legendre polynomial identity involving squared
sage: ## powers of the binomial coefficients
sage: from GuessPolynomialSequenceFunction import *
sage: n, z = var('n z')
sage: poly_seq_func = lambda m: expand( factor( legendre_P(m, (1+z)/(1-z)) * ((1-z) ** m) ) )
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, z, seq_factors = ["Binom2"]);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{A Legendre Polynomial Identity Involving Squares of the
         Binomial Coefficients (\emph{Sage})}
\label{figure_LegendrePoly_SumFormula_formula_v1.2}
\end{figure}
\subsubsection{Example: An Exponential Generating Function for the
Exponential Bell Polynomials}
An exponential generating function for the Bell, or exponential,
polynomials and the corresponding finite sum expansion over the
Stirling numbers of the second kind is given in the next equation
\citep[\S 4.1.3]{UC}.
\begin{align*}
B_n(x) & = n! \cdot [t^n] \exp\left(\left(e^t - 1\right) x\right)
= \sum_{k=0}^{n} \gkpSII{n}{k} x^k
\end{align*}
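As an independent sanity check of this identity (not part of the thesis; `stirling2` and `bell_poly` are our own helper names), the right--hand--side sum can be computed in plain \emph{Python} and compared against two standard facts about the Bell polynomials: their values at $x=1$ are the Bell numbers, and they satisfy the recurrence $B_{n+1}(x) = x\sum_{k}\binom nk B_k(x)$, which follows from the exponential generating function:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recurrence
    # S(n,k) = k S(n-1,k) + S(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell_poly(n, x):
    # B_n(x) = sum_{k=0}^{n} S(n,k) x^k
    return sum(stirling2(n, k) * x ** k for k in range(n + 1))

# At x = 1 this gives the Bell numbers 1, 1, 2, 5, 15, 52, ...
assert [bell_poly(n, 1) for n in range(6)] == [1, 1, 2, 5, 15, 52]

# Check the recurrence B_{n+1}(x) = x * sum_k C(n,k) B_k(x) at integer points.
for n in range(6):
    for x in (1, 2, 3):
        assert bell_poly(n + 1, x) == x * sum(
            comb(n, k) * bell_poly(k, x) for k in range(n + 1))
```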
Figure \ref{figure_BellExpPoly_SumFormula_formula_v1.2}
provides a listing of the \emph{Sage} commands needed to
recognize the rightmost identity for this special sequence.
\begin{figure}[h]
\begin{center}
\begin{boxedminipage}{0.87\linewidth}
\begin{sagecommandline}
sage: ## Finite summation formula for the exponential
sage: ## Bell polynomials
sage: from GuessPolynomialSequenceFunction import *
sage: def series_coefficient_zpow(f, fvar, ncoeff): return f.taylor(fvar, 0, ncoeff) - f.taylor(fvar, 0, ncoeff - 1)
sage: def series_coefficient(f, fvar, ncoeff): return series_coefficient_zpow(f, fvar, ncoeff).subs_expr(fvar == 1)
sage: n, x, t = var('n x t')
sage: spoly_ogf = exp( (exp(t) - 1) * x)
sage: poly_seq_func = lambda n: series_coefficient(spoly_ogf, t, n) * factorial(n)
sage: pseq_data = map(poly_seq_func, range(1, 6))
sage: guess_polynomial_sequence(pseq_data, x, seq_factors = ["S2"]);
\end{sagecommandline}
\end{boxedminipage}
\end{center}
\caption{An Exponential Generating Function for the
	Exponential Bell Polynomials (\emph{Sage})}
\label{figure_BellExpPoly_SumFormula_formula_v1.2}
\end{figure}
\chapter{Conclusions}
\label{Chapter_Conclusions}
\section{Concluding Remarks}
The package source code portion of the thesis
provides a successful ``proof of concept'' implementation of the
package's approach to recognizing
polynomial sequence formula types of the noted forms in
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1} and
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}.
The primary deficiency of the package implementation as of
this writing is the long running time of the package function
\GPSFPkgGuessFnName when processing double--factor and
multiple--factor sequence formulas of the form outlined in
\eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1}.
Single--factor polynomial sequence formulas in the form of
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1} like those cited in
\eqref{eqn_ModPolyPowGFs_of_the_S1StirlingNumbers-stmts_v1} of the
introduction are already somewhat easy, though not trivial,
for the user to guess.
For the package to be really useful in practice, the sequence
recognition routines provided through the wrapper function \GPSFPkgGuessFnName
should be able to guess double--factor formulas of the form in
\eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}
fairly quickly and efficiently out--of--the--box.
The examples given in \chref{Chapter_PkgUsageChapter}
provide several non--trivial uses of the package
for recognizing single--factor polynomial formulas of the
first sequence form in
\eqref{eqn_Polyjx_single_factor_seq_formula_template_v1}.
These and related applications corresponding to polynomials that
satisfy a single--factor formula of this variety are easily and fairly
quickly recognized by the package given an accurate user--defined
setting of the \url{SequenceFactors} runtime option to
\GPSFPkgGuessFnNamePlain.
For polynomial sequences that satisfy a double--factor formula of the
second form in \eqref{eqn_Polyjx_double_factor_seq_formula_template_v2}, and
more generally a multiple--factor formula in the form stated in
\eqref{eqn_pmx_gen_poly_sequence_forms-intro_stmt_v1} where $r \geq 3$, the
current package implementation is unable to quickly search for matching
formulas without a somewhat manual limited setting of the
\url{IndexOffsetPairs} option provided at runtime.
The sample output for the examples given in
Figure \ref{figure_S1S2-double-factor-formula-example-v1} and
Figure \ref{figure_S1Binom-double-factor-formula-example-v2}
show the usage of the package for handling double--factor sequence
formulas with an appropriate setting of this option.
In future revisions of the package,
it should ideally be possible for the package to quickly obtain
formulas for these sequence cases without the user manually resetting the
default search options used with the \GPSFPkgGuessFnName
function provided by the package.
\section{Future Features in the Package}
\label{Chapter_PkgImpl_Section_FutureFeatures}
\subsection{Processing Polynomial Sequences with Rational Coefficients}
\label{subSection_FutureFeaturesChapter_RationalSeqTransforms}
One approach to extending the package functionality to recognize
formulas for polynomial sequences in $\mathbb{Q}\lbrack x\rbrack$
is to pre--process the rational--valued coefficients to
transform the sequence into the polynomials over the integers already
handled by the package routines. Variations of these pre--processing
transformations include normalizing the polynomials, their coefficients, or
both by exponential factors to clear the denominators of the
rational--valued input sequence. For example, let the polynomial
$p_j(x) := \sum_i c_i x^i$. Then these transformations are formulated as
obtaining the modified polynomials, $\widetilde{p}_j(x)$, as
$\widetilde{p}_j(x) := j! \cdot p_j(x)$, as
$\widetilde{p}_j(x) := \sum_i i! \cdot c_i x^i$, or in the combined form of
$\widetilde{p}_j(x) := \sum_i j! \cdot i! \cdot c_i x^i$,
whenever the resulting modified polynomial sequences are in
$\mathbb{Z}\lbrack x \rbrack$.
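The three candidate transformations above can be sketched in a few lines of \emph{Python} (a hedged illustration only; the function name `normalizations` is ours, not part of the package):

```python
from fractions import Fraction
from math import factorial

def normalizations(coeffs, j):
    # Three candidate ways to push p_j(x) = sum_i c_i x^i into Z[x]:
    #   j! * p_j(x),   sum_i i! * c_i x^i,   sum_i j! * i! * c_i x^i
    by_index = [factorial(j) * c for c in coeffs]
    by_power = [factorial(i) * c for i, c in enumerate(coeffs)]
    by_both = [factorial(j) * factorial(i) * c for i, c in enumerate(coeffs)]
    return by_index, by_power, by_both

# Example: the truncated exponential p_3(x) = sum_{i=0}^{3} x^i / i!
# is rationalized by the i!-normalization alone.
coeffs = [Fraction(1, factorial(i)) for i in range(4)]
assert normalizations(coeffs, 3)[1] == [1, 1, 1, 1]
```

In practice one would apply whichever of the three variants first lands the sequence in $\mathbb{Z}\lbrack x \rbrack$.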
Another transformation option is applied to rationalize the polynomial
sequences, $S_m(n)$, in $n$ defined through the following sums where
$B_n$ denotes the (rational) sequence of Bernoulli numbers
\citep[\S 6.5]{GKP}:
\begin{equation}
\notag
S_m(n) := \sum_{k=0}^{n-1} k^m = \sum_{k=0}^{m} \binom{m+1}{m-k}
\frac{B_{m-k}}{(m+1)} \cdot n^{k+1}.
\end{equation}
The sequences in the previous equation are normalized by multiplying
each polynomial, $S_m(n)$, by the least common multiple of the
denominators of each coefficient of $n^{k+1}$ in the formula.
Then assuming access to the lookup capabilities of the \OEIS database,
which contains sequence entries for both integer sequences of the
numerators and denominators of the Bernoulli numbers, obvious
factors, say of $691$, can be recognized in order to process the full formulas for the
sequences of $S_m(n)$ over $\mathbb{Q}\lbrack n \rbrack$.
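A minimal \emph{Python} sketch of this normalization (our own helper names; the Bernoulli numbers use the $B_1 = -1/2$ convention implicit in the displayed sum) computes the coefficients of $S_m(n)$ from the formula, verifies them against a direct sum, and clears the denominators by the least common multiple:

```python
from fractions import Fraction
from math import comb, lcm

def bernoulli_numbers(m):
    # B_0..B_m (convention B_1 = -1/2), from sum_{j<=n} C(n+1,j) B_j = 0
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

def power_sum_coeffs(m):
    # Coefficient of n^{k+1}, k = 0..m, in S_m(n) = sum_{j=0}^{n-1} j^m,
    # taken directly from the displayed formula
    B = bernoulli_numbers(m)
    return [comb(m + 1, m - k) * B[m - k] / (m + 1) for k in range(m + 1)]

def clear_denominators(coeffs):
    # Normalize by the lcm of the coefficient denominators
    d = lcm(*(c.denominator for c in coeffs))
    return d, [int(c * d) for c in coeffs]

# Verify the formula against a direct sum, then normalize S_2(n):
coeffs = power_sum_coeffs(2)
assert sum(c * 5 ** (k + 1) for k, c in enumerate(coeffs)) == 0 + 1 + 4 + 9 + 16
assert clear_denominators(coeffs) == (6, [1, -3, 2])  # 6 S_2(n) = n - 3n^2 + 2n^3
```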
\subsection{Polynomial Expansions With Respect to a Suitable Basis}
The discussion given in \citep[Appendix A]{RATE.M-PKG-DOCS} related to the
implementation of the \ttemph{Rate} package for
\Mm states a useful observation that may be
adapted to the polynomial formula searches local to this package.
Specifically, expressing input polynomial sequences with respect to a
``\emph{suitable}'' basis, like shifted factorial functions or
polynomial terms expressed by binomial coefficients, allows for
recognition of sequence formulas that are not apparent in the default
expansions of the polynomial sequence variable.
Several examples relevant to adapting this idea in the context of the
factorization--based approach in this package include the following
polynomial sequence expansions \citep[Ex. 6.78; \S 6.2; Ex. 6.68]{GKP}:
\begin{align*}
\binom{2n}{n} \frac{B_n}{(n+1)} & = \sum_{k=0}^{n} \gkpSII{n+k}{k}
\binom{2n}{n+k} \frac{(-1)^{k}}{(k+1)} \\
x^n & = \sum_{k=0}^{n} \gkpEI{n}{k} \binom{x + k}{n} \\
\gkpSI{x}{x-n} & = \sum_{k \geq 0} \gkpSII{n}{k} \binom{x+k}{2n},\ n \geq 0 \\
\gkpEII{n}{m} & = \sum_{k=0}^{m}
\binom{2n+1}{k} \gkpSII{n+m+1-k}{m+1-k} (-1)^{k},\ n > m \geq 0.
\end{align*}
These sequences provide applications related to the
polynomial expansions of the Catalan numbers (in $n$), the
Stirling convolution polynomials, $\sigma_n(x)$, and the
second--order Eulerian numbers, $\gkpEII{n}{m}$, respectively.
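For instance, the second identity above (Worpitzky's identity, expanding $x^n$ in the binomial basis through the Eulerian numbers) can be checked at nonnegative integer points in plain \emph{Python}; the recurrence used for the Eulerian numbers is the standard one, and the helper names are ours:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(n, k):
    # First-order Eulerian numbers via the standard recurrence
    # A(n,k) = (k+1) A(n-1,k) + (n-k) A(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k >= n:
        return 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

# Worpitzky's identity: x^n = sum_k A(n,k) C(x+k, n)
for n in range(1, 7):
    for x in range(0, 8):
        assert x ** n == sum(
            eulerian(n, k) * comb(x + k, n) for k in range(n))
```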
\section{Future Research Topics}
The next sections discuss several topics for future research
suggested by the implementation of the software package for the thesis.
These future research topics include
a new variation of integer factorization algorithms
motivated by the factorization--based approach to handling the
user--defined expected sequence factors in the package routines, as well as
additional topics for future exploration to extend the current
capabilities of the univariate polynomial sequence recognition in the
package.
The extension of the current package functionality to
recognizing polynomials in a single variable with rational--valued
coefficients is already considered in
\sref{Chapter_PkgImpl_Section_FutureFeatures} of the thesis above.
\subsection{Sequence--Based Integer Factorization Algorithms}
The treatment of the user--defined expected sequence factors as
``primitives'' in the formulas returned by the package functions
motivates the construction of a class of integer factorization algorithms
formulated briefly in the discussion below.
Much like computing the prime factorization of an arbitrary integer,
this class of algorithms should compute the decomposition of an
integer into a product of elements over some specified set of integer
sequences where the elements of these sequences are treated as ``atoms'' in
the factorization returned by the procedure.
Stated more precisely:
given an integer $i$ (or some set of integer--valued
polynomial coefficients) and a list of $k$ integer sequences,
$\{S_1, S_2, \ldots, S_k\}$,
we seek the most efficient way to decompose the integer into
all possible products of integer factors of the form
\begin{equation}
\label{eqn_IntFactorAlg_int_decomp_form-stmt_v1}
i := f_1 \cdot f_2 \cdot \cdots \cdot f_k \times r,
\end{equation}
where the factor $f_i$ belongs to the sequence $S_i$
(for each $1 \leq i \leq k$), and where the remaining factor term, $r$,
is reserved for later processing.
The computation of the list of all factors of the form in
\eqref{eqn_IntFactorAlg_int_decomp_form-stmt_v1} can be computed
over some specified number of elements of each sequence,
or a fixed number of rows for the case of a triangular sequence, $S_i$.
It seems reasonable to expect that such an algorithm must employ the
prime factorizations of the individual factor sequences, $S_i$.
We also seek a solution in the general case, though of course it may be
possible to derive sequence--specific procedures, say, to recognize
factors of the Stirling number or binomial coefficient triangles.
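A naive brute--force sketch of the decomposition in \eqref{eqn_IntFactorAlg_int_decomp_form-stmt_v1}, searching one factor from each of $k$ finite sequence prefixes, might look as follows (this is our own illustration, not the oracle implementation used in the package):

```python
from itertools import product

def sequence_factorizations(i, sequences):
    # Enumerate all decompositions i = f_1 * ... * f_k * r where each
    # f_j is drawn from the finite list sequences[j]; the remainder r
    # absorbs whatever is left.  Purely brute force: O(prod |S_j|).
    results = []
    for factors in product(*sequences):
        p = 1
        for f in factors:
            p *= f
        if p != 0 and i % p == 0:
            results.append(factors + (i // p,))
    return results

# Example: factor 60 over a few Catalan numbers and a few factorials.
catalan = [1, 2, 5]
factorials = [1, 2, 6]
decomps = sequence_factorizations(60, [catalan, factorials])
assert (5, 6, 2) in decomps  # 60 = 5 * 6 * 2
assert all(f1 * f2 * r == 60 for (f1, f2, r) in decomps)
```

An efficient version would presumably precompute the prime factorizations of the sequence elements rather than enumerate products directly.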
The need for this type of factorization is apparently new, as searches
for such subroutines to employ within the package returned no useful
known results, though
it is possible that there are existing prime factorization algorithms
that may be especially well--suited, or adapted, to this purpose.
This required factorization procedure is handled as an
inefficient implementation of an oracle of sorts within the current
implementation of the \Mm package.
\bibliographystyle{abbrv-mod}
"Thank you!" was what she said when Sean "largest crowd in history, period" Spicer criticized her on Twitter. Andrea Mitchell called for the White House Correspondents' Association to publicly apologize for Wolf's words, the same Andrea Mitchell who did nothing when her colleague Chuck Todd was attacked by Trump, who twice called him "sleepy eyed Chuck Todd" and "a sleepy eyed sonuvabitch." They defended Kellyanne Conway too, saying Wolf went too far with her as well, the same Conway who coined the term "alternative facts" in defense of a blatant lie that no one is really sure, to this day, why Spicey insisted on standing his ground over. And don't forget the Bowling Green Massacre. But Wolf went too far, so far that two days after the infamous dinner, we are all still asking if it was wrong or appropriate. Well, I have the answer: Flint still doesn't have clean water.
In the 19-minute roast, Wolf did what looked like amateur hour for this administration. The biggest difference: in her vulgarity-filled roast, a massive amount of truth bombs went off, making the room even more uncomfortable – something the White House and its current staff know nothing about. In fact, they spend most of their time defusing truths so they can cover them in a lie case that shields everyone from the explosion.
For all of those that want to whine about her language, I would suggest checking where she was at the time. She was at the one event that each year celebrates free speech. If you're offended, don't invite a well known risqué comedian to a first amendment celebration and tell her to keep a lid on it.
Have any of you idiots who are responsible ever actually watched her on The Daily Show? Did you miss her HBO special? Did you think that the wise mouth girl from Comedy Central would actually be nice enough to kiss your asses?
You guys are obsessed with Trump. Did you used to date him? Because you pretend like you hate him, but I think you love him. I think what no one in this room wants to admit is that Trump has helped all of you. He couldn't sell steaks or vodka or water or college or ties or Eric, but he has helped you.
He's helped you sell your papers and your books and your TV. You helped create this monster, and now you're profiting off of him.
It was at this moment that truth ran headlong into the headline makers who, all of a sudden, began tweeting that she had gone too far – about Sanders.
She shouldn't have said that about Sanders. I can't believe she'd say that with WHPS sitting right there. Sanders is brave for sitting there through it. WHCD should apologize for Wolf attacking Sanders' looks, which Wolf never mentioned.
They did just what she accused them of. And by Monday afternoon, they were making bank off Wolf's portrayal of Sanders, a part of the Trump White House. Part of the machine that is driving our culture into the ground. And they will continue to ride the wave of her 19 minutes all week.
The obsession with the Trump White House is addictive, consuming, and a vortex that leads nowhere. We're even guilty of falling into the Trump trap here at The BetaFiles. It's hard to stay out of the hole. But I think the message we can take away from her "vulgar display" is not the shots she took at Trump, but the ray of light she shined back at the press. The power elite that is supposed to be the voice of the people on Saturday and Sunday was proven to be nothing more than a power elite that speaks to and for itself. A Trump Truth come to fruition. A self-fulfilling prophecy that has hurt the press more than Trump's insistence that it's all fake news.
She didn't do it to them, as so many in the press are claiming; rather, they did it to themselves by reacting the way they did.
And they missed her harshest critique of both the Administration and the press: the story no one is covering; the same story the Trump White House is ignoring; the very symbol of multiple massive atrocities that the press should be covering into the ground until Washington listens and does something but instead has lost interest and, as a result, the story, like so many that it represents, has fallen to the wayside to be forgotten.
# Celebratio Mathematica

## Thomas Milton Liggett

### Approximating multiples of strong Rayleigh random variables

#### by Thomas M. Liggett

Thomas Liggett's last paper was unpublished at the time of his death in 2020. The text that follows is an edited draft of that paper, prepared by Wenpin Tang, who also provided the annotations throughout. Note: Strong Rayleigh random variables are also called Poisson binomial random variables. Some results in this paper are presented in Section 4 of the survey article of W. Tang and F. Tang ("The Poisson binomial distribution – old & new" (2019), arXiv:1908.10024), which provides further references.

#### 1. Introduction

Suppose that

$$\label{polyf} f(u)=\sum_{i=0}^n c_i u^i$$

is a polynomial with positive coefficients $c_i > 0$.[^1] Then:

1. $f$ is said to be strong Rayleigh (SR) if all of its roots are real (and hence negative). This is equivalent to saying that $f$ can be factored into polynomials of degree 1 with positive coefficients.
2. $f$ is said to be Hurwitz (H) if all of its roots have negative real part.
Since $(u-z)(u-\bar z)=u^2-2u\operatorname{Re}(z)+|z|^2$, this is equivalent to saying that $f$ can be factored into polynomials of degree at most 2 with positive coefficients.

In this paper we consider the following question: When can $f$ be factored into polynomials of degree at most $j$ with positive coefficients? We will call this property $P_j$.

One motivation for this is the following: Except for the normalization $f(1)=1$, the $f$ in \eqref{polyf} is the probability generating function (pgf) $Eu^X$ of a random variable $X$ with distribution $c_i=P(X=i)$. If it[^2] has property $P_j$, then there are independent random variables $X_i$ with values in $\{0,1,\dots,j\}$ so that $X$ and $\sum_i X_i$ have the same distribution. This property was exploited in [◊] in the case $j=1$ in order to obtain various probabilistic limit theorems.

Another motivation comes from [◊]. In our consideration of multivariate central limit theorems for SR vectors, we raised the following question: if $X$ has an SR pgf and $j,k\geq 1$, how well can one approximate $\frac{j}{k}X$ by an SR random variable? We were able to solve this problem for $j=1$, showing that $\bigl\lfloor \frac 1k X\bigr\rfloor$ is SR.
It turns out that $\bigl\lfloor\frac 2k X\bigr\rfloor$ is very far from being SR, in the sense that the imaginary parts of some roots of the pgf are large. Nevertheless, we will see that a combination of the results in [◊] with the Hermite–Biehler theorem (see [◊] for example) implies that $\bigl\lfloor\frac 2k X\bigr\rfloor$ is H. We then speculate about possible extensions of this to $j\geq 3$. For the applications we have in mind, this property is generally as useful as SR.

A sufficient condition for SR is that the coefficients of $f$ satisfy

$$\label{SSR} c_i^2 > 4c_{i-1}c_{i+1},\quad 1\leq i\leq n-1;$$

see [◊]. A sufficient condition for H if $n > 5$ is that the coefficients of $f$ satisfy

$$\label{SHB} c_i c_{i+1}\geq (2.147\dots)c_{i-1}c_{i+2},\quad 1\leq i\leq n-2;$$

see [◊].

These sufficient conditions are quite restrictive.
For example, if $f(u)=(u+1)^n$, so that $c_i=\binom ni$, then

$$\frac{c_i^2}{c_{i-1}c_{i+1}}=\frac{i+1}{i}\cdot\frac{n-i+1}{n-i}.$$

Therefore

$$\lim_{i\rightarrow\infty}\lim_{n\rightarrow\infty}\frac{c_i^2}{c_{i-1}c_{i+1}}=1,$$

so \eqref{SSR} fails for large $n$.

The Hermite–Biehler theorem gives an NASC[^3] for H:

Theorem 1 (Hermite–Biehler)[^4]: Given a polynomial $f$ as in \eqref{polyf} with positive coefficients, write

$$f(u)=\sum_{i=0}^{1}u^i h_i(u^2).$$

Then $f$ has property $P_2$ if and only if the roots of $h_0,h_1$ are negative and interlace, with the largest of these being a root of $h_0$.

By analogy with this result, we consider the following: Given a polynomial $f$ as in \eqref{polyf} with positive coefficients, write

$$\label{HBext} f(u)=\sum_{i=0}^{j-1}u^i h_i(u^j).$$

We will say that $f$ has property $Q_j$ if the roots of $h_0,h_1,\dots,h_{j-1}$ are negative and interlace, with the largest being a root of $h_0$, the second largest being a root of $h_1$, etc.

Note that $Q_1=P_1$. The Hermite–Biehler theorem is the statement that $Q_2=P_2$. However, $Q_3\neq P_3$, as the following example shows. Let

$$f(u)=u^5+u^4+u^3+2u^2+\tfrac32 u+\tfrac13.$$

By definition \eqref{HBext}, we have

$$h_0(u) = u + \tfrac{1}{3}, \quad h_1(u) = u + \tfrac{3}{2}, \quad h_2(u) = u+2.$$

The roots of $f$ are $z_1$, $\bar z_1$, $z_2$, $\bar z_2$, $w$, where the (rounded) values are

$$z_1=-.725+.100i,\quad z_2=.435+1.137i,\quad w=-.420.$$

Then

$$(u-w)(u-z_2)(u-\bar z_2)=.623+1.116u-.449u^2+u^3,$$

so $f$ is not $P_3$.
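These rounded values can be confirmed numerically (our own check, not part of the paper): each listed value is an approximate root of $f$, and the cubic pairing $(u-w)(u-z_2)(u-\bar z_2)$ has a negative $u^2$ coefficient, which is why this factorization attempt fails.

```python
# Rounded roots quoted above; tolerances reflect the three-digit rounding.
def f(u):
    return u**5 + u**4 + u**3 + 2*u**2 + 1.5*u + 1/3

z1 = -0.725 + 0.100j
z2 = 0.435 + 1.137j
w = -0.420

# Each listed value is an approximate root of f.
for r in (z1, z1.conjugate(), z2, z2.conjugate(), w):
    assert abs(f(r)) < 5e-3

# Coefficients of (u - w)(u - z2)(u - conj z2) = u^3 + c2 u^2 + c1 u + c0:
c2 = -(w + 2 * z2.real)                # ~ -0.449 < 0, so the pairing fails
c1 = abs(z2) ** 2 + 2 * w * z2.real    # ~ 1.116
c0 = -w * abs(z2) ** 2                 # ~ 0.623
assert c2 < 0
assert abs(c2 + 0.449) < 5e-3 and abs(c1 - 1.116) < 5e-3 and abs(c0 - 0.623) < 5e-3
```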
However, the roots of $h_0,h_1,h_2$ are $-\tfrac13$, $-\tfrac32$, $-2$ respectively, which do interlace correctly. We are not aware of a useful criterion for $P_j$ for $j\geq 3$.

In the following section, we prove that $\bigl\lfloor\frac2kX\bigr\rfloor$ is $P_2$[^5] if $X$ is $P_1$, and speculate about the likely situation for $\bigl\lfloor\frac3kX\bigr\rfloor$. In particular, we conjecture that $\bigl\lfloor\frac34X\bigr\rfloor$ is $P_3$. $\bigl\lfloor\frac35X\bigr\rfloor$ is not $P_3$, but we conjecture that it is nearly so, in that the pgf can be factored into polynomials of degree at most 3 with at most one of the factors having a negative coefficient. We will call this situation almost $P_3$. More generally, it may be the case that $\bigl\lfloor\frac3kX\bigr\rfloor$ is $P_3$ if $k\equiv1 \pmod{3}$ and almost $P_3$ if $k\equiv2 \pmod{3}$.

#### 2. Some examples

Suppose the roots of $h_0,h_1,h_2$ are negative and interlace. Let these roots be denoted by

$$t_m < t_{m-1} < \dots < t_0 < 0.$$

Then

$$f(u)=h_0(u^3)+uh_1(u^3)+u^2h_2(u^3),$$

where $h_0$ has roots $t_0,t_3,\dots,$ and $h_1$ has roots $t_1,t_4,\dots,$ and $h_2$ has roots $t_2,t_5,\dots.$ The degree of $f$ is $m+3$.

We begin with the case $m=1$, so

$$h_0(y)=a(y-t_0),\quad h_1(y)=b(y-t_1),\quad h_2(y)=c,$$

with $a,b,c > 0$. Then

$$\label{fourth} f(u)=-at_0-bt_1u+cu^2+au^3+bu^4.$$

If we write

$$f(u)=(u+d)(c_3u^3+c_2u^2+c_1u+c_0)$$

and solve, we get

$$c_3=b,\quad c_2=a-bd,\quad c_1=c-ad+bd^2,\quad c_0=-bt_1-dc+ad^2-bd^3,$$

where $f(-d)=0$. Now we seek the condition under which $f \in P_3$.
For the coefficients to be nonnegative, we need

$$b \geq0,\quad a-bd\geq0,\quad c-ad+bd^2\geq0,\quad -bt_1-dc+ad^2-bd^3\geq 0.$$

Writing

$$f(-d)=d(bd^3-ad^2+cd+bt_1)-at_0=d^2(bd^2-ad+c)-(-bdt_1+at_0),$$

we see that these inequalities are equivalent to

$$a-bd\geq0,\quad -bt_1d+at_0\geq0.$$

A necessary and sufficient condition for the existence of a $d > 0$ so that[^6]

$$f(-d)=0,\quad\frac{at_0}{bt_1}\leq d\leq \frac ab,$$

is that $f$ take on opposite signs (including 0) at two points in the interval

$$\bigg(-\frac{at_0}{bt_1},-\frac ab\bigg).$$

To find a useful sufficient condition, write

$$b^3f\bigg(-\gamma\frac ab\bigg)=a^2bc\gamma^2-ab^3(-\gamma t_1+t_0)-\gamma^3(1-\gamma)a^4.$$

If $\gamma=1$, this is

$$ab[ac-b^2(-t_1+t_0)].$$

This is $\geq0$ if and only if $-t_1+t_0\leq ac/b^2$. It is $\leq 0$ if $\gamma t_1\leq t_0$ and $bc\leq \gamma(1-\gamma)a^2$. Therefore, a sufficient condition for the existence of the required $d$ is that there exist a $\gamma\in [0,1]$ so that

$$\label{suff4} t_0-t_1\leq\frac{ac}{b^2},\quad \gamma t_1\leq t_0,\quad\text{and}\quad bc\leq\gamma(1-\gamma)a^2.$$

When is $Q_3$ satisfied? If $n=3$, $P_3$ and $Q_3$ are both automatic. If $n=4$, $Q_3$ is equivalent to $c_1c_3\geq c_0c_4$.[^7] If $f$ is $P_3$ but not $P_2$, then

$$\begin{aligned} f(u)&=(a_0+a_1u)(b_0+b_1u+b_2u^2+b_3u^3), \\ c_1c_3-c_0c_4&=a_1^2b_0b_2+a_0a_1b_1b_2+a_0^2b_1b_3, \end{aligned}$$

so $f$ is $Q_3$. However, $P_2$ does not imply $Q_3$. For example, $f(u)=(1+u^2)^2$ is $P_2$ but not $Q_3$, since $c_1c_3-c_0c_4=-1$.
More generally, if

$$f(u)=(a_0+a_1u+a_2u^2)(b_0+b_1u+b_2u^2),$$

then

$$c_1c_3-c_0c_4=(a_0b_1+a_1b_0)(a_1b_2+a_2b_1)-a_0a_2b_0b_2.$$

This is nonnegative if $a_0a_2\leq 4a_1^2$, $b_0b_2\leq 4b_1^2$.[^8]

If $n=5$, $Q_3$ is equivalent to $c_1c_3\geq c_0c_4$ and $c_2c_4\geq c_1c_5$.[^9] If

$$f(u)=(a_0+a_1u+a_2u^2)\bigl(\tfrac{25}2+\tfrac12 u^2+u^3\bigr),$$

then

$$c_1c_3-c_0c_4=\tfrac{25}2(a_1^2-a_0a_2).$$

In this case, $f$ is $P_3$ but not $P_2$, but it may not be $Q_3$.

#### 3. The approximations

We begin by showing that in general, $X$ being SR does not imply that $\bigl\lfloor\frac{2}3X\bigr\rfloor$ is SR, and in fact that this is very far from being the case.

Theorem 2: Suppose $X$ is $B\bigl(3n,\frac12\bigr)$, and $z_i$ are the roots of the pgf of $\bigl\lfloor\frac{2}3X\bigr\rfloor$. Then

$$\max_i\, [\operatorname{Im}(z_i)]^2\geq\tfrac12 (9n^2-9n-1).$$

Proof. Up to a constant multiple, the pgf of $\bigl\lfloor\frac{2}3X\bigr\rfloor$ is

$$f(u)=\sum_{i= 0}^{3n}\binom {3n}i u^{\lfloor 2i/3\rfloor}=\sum_{i\geq 0}\binom{3n+1}{3i+1}u^{2i}+\sum_{i\geq 0}\binom{3n}{3i+2}u^{2i+1},$$

which has degree $d=2n$. Let $z_1,z_2,\dots$ be the complex roots with positive imaginary part and $w_1,w_2,\dots$ be the real roots. Then

$$f(u)=\prod_j(u^2-2u\operatorname{Re}(z_j)+|z_j|^2)\prod_j(u-w_j).$$

Both of these expressions have $u^d$ coefficients equal to 1.
Identifying the coefficients of $u^{d-1}$ gives

$$\label{id1} -2\sum_j\operatorname{Re}(z_j)-\sum_jw_j=3n.$$

Identifying the coefficients of $u^{d-2}$ gives

$$4\sum_{i < j}\operatorname{Re}(z_i)\operatorname{Re}(z_j)+\sum_j|z_j|^2 +2\sum_j\operatorname{Re}(z_j)\sum_iw_i+\sum_{i < j}w_iw_j=\binom{3n+1}{3n-2}.$$

Combining this with \eqref{id1} leads to

$$\sum_i[\operatorname{Im}(z_i)]^2-\sum_i[\operatorname{Re}(z_i)]^2-\tfrac12\sum_iw_i^2 =\tfrac12 n(9n^2-9n-1).$$

Therefore, since there are at most $n$ pairs of complex conjugate roots,

$$n\max_i\, [\operatorname{Im}(z_i)]^2\geq\sum_i[\operatorname{Im}(z_i)]^2\geq \tfrac12 n(9n^2-9n-1). \quad □$$

Here is the result from [◊] that we used to show that $\bigl\lfloor \frac 1k X\bigr\rfloor$ is SR if $X$ is SR. For a polynomial $f$ of degree $n$, write

$$\label{poly} f(x)=\sum_{i=0}^{k-1}x^ig_i(x^k),$$

where $g_i$ is a polynomial of degree $\bigl\lfloor\frac{n-i}k\bigr\rfloor$.

Theorem 3 ([◊], Theorem 4.3): If $f$ is SR with degree $n$, the corresponding polynomials $g_i$ are SR as well.
Furthermore, their roots are interlaced in the sense that if the collection of all $n-k+1$ roots $s_j$ of the $g_i$'s are placed in increasing order,

$$s_{n-k} < \cdots < s_4 < s_3 < s_2 < s_1 < s_0 < 0,$$

then the roots of $g_i$ are $s_i$, $s_{i+k}$, $s_{i+2k},\dots.$

This statement is illustrated in the following array, in the case $k=3$:

$$\begin{pmatrix} &\cdots&s_{6}&s_{5}&s_{4}&s_{3}&s_{2}&s_{1}&s_0&0\\ g_0&\cdots&0&+&+&0&-&-&0&+\\ g_1&\cdots&+&+&0&-&-&0&+&+\\ g_2&\cdots&+&0&-&-&0&+&+&+ \end{pmatrix}$$

Note that each row is periodic of period 6, and each row is obtained from the previous row via a shift. The proof in [◊] is by induction on the degree of $f$. The pgf of $\bigl\lfloor\frac 1k X\bigr\rfloor$ is $\sum_i g_i$, which alternates signs at the points $s_{mk}$. Therefore, it has a root in each of the intervals $(s_{(m+1)k},s_{mk})$, thus showing that $\bigl\lfloor \frac 1k X\bigr\rfloor$ is SR $(=P_1)$.

Theorem 4: If $X$ is $P_1$, then $\bigl\lfloor\frac 2k X\bigr\rfloor$ is $P_2$.

Proof. It suffices to prove the result for $k$ odd.
If $k$ is odd, the pgf of $\bigl\lfloor \frac 2k X\bigr\rfloor$ is

$$\label{2x/k} \sum_{i=0}^{(k-1)/2}g_i(u^2)+u\sum_{i=(k+1)/2}^{k-1}g_i(u^2).$$

By the Hermite–Biehler theorem, all roots of \eqref{2x/k} have negative real parts if and only if all the roots of

$$\sum_{i=0}^{(k-1)/2}g_i(-u^2)\quad\text{and}\quad\sum_{i=(k+1)/2}^{k-1}g_i(-u^2)$$

are negative and interlace, with the largest of these being a root of

$$\sum_{i=0}^{(k-1)/2}g_i(-u^2).$$

If $k=3$, one can see from the above array that

$$g_0(\,\cdot\,)+g_1(\,\cdot\,)\ \text{has a root in each of the intervals}\ \cdots (s_7,s_6),(s_4,s_3),(s_1,s_0)$$

and

$$g_2(\,\cdot\,)\ \text{has a root in each of the intervals}\ \cdots (s_9,s_7),(s_6,s_4),(s_3,s_1),$$

since the values of the function at the two endpoints of each interval have opposite signs. Therefore, the negative roots interlace. For larger odd $k$, the argument is similar. Now

$$\sum_{i=0}^{(k-1)/2}g_i(\,\cdot\,)\ \text{has a root in each of the intervals}\ \cdots (s_{(k-1)/2+2k},s_{2k}),(s_{(k-1)/2+k},s_k),(s_{(k-1)/2},s_0)$$

and

$$\sum_{i=(k+1)/2}^{k-1}g_i(\,\cdot\,)\ \text{has a root in each of the intervals}\ \cdots (s_{3k-1},s_{(k+1)/2+2k}),(s_{2k-1},s_{(k+1)/2+k}),(s_{k-1},s_{(k+1)/2}).$$

It follows that $\bigl\lfloor \frac 2k X\bigr\rfloor$ is H $(=P_2)$. □

The situation for the case $\bigl\lfloor \frac 3k X\bigr\rfloor$, where $k$ is not a multiple of 3, is similar.
Its pgf is

$$h_0(u)+uh_1(u)+u^2h_2(u),$$

where

$$h_0(u)=\sum_{i=0}^{\lfloor k/3\rfloor}g_i(u),\quad h_1(u)=\sum_{i=\lceil k/3\rceil}^{\lfloor 2k/3\rfloor}g_i(u),\quad h_2(u)=\sum_{i=\lceil 2k/3\rceil}^{k-1}g_i(u).$$

Then

- $h_0$ has a root in each of the intervals $(s_{\lfloor k/3\rfloor+ik},s_{ik})$ for $i\geq 0$,
- $h_1$ has a root in each of the intervals $(s_{\lfloor 2k/3\rfloor+ik},s_{\lceil k/3\rceil+ik})$ for $i\geq 0$,
- $h_2$ has a root in each of the intervals $(s_{k-1+ik},s_{\lceil 2k/3\rceil+ik})$ for $i\geq 0$.

In a few cases, the interval is of the form $(a,a)$, which is interpreted as the singleton $\{a\}$. So, the roots of $h_0,h_1,h_2$ are negative and interlace. Unfortunately, as was pointed out earlier, this property does not imply $P_3$.

The case of $\bigl\lfloor \frac 3k X\bigr\rfloor$ is more challenging, since for each root with positive real part, one must find a negative root that compensates for it. It is not simply a matter of proving some property for each root, as was the case for $P_1$ and $P_2$. All we can offer at this point is speculation based on computations of roots in special cases. If $f$ is $P_3$ but not $P_2$, $f$ will have complex roots of positive real part. For a negative real root $w$ and a conjugate pair of roots $z,\bar z$ with positive real part, let

$$\langle z,w\rangle=(u-w)(u-z)(u-\bar z).$$

For this cubic polynomial to have positive coefficients, it is necessary and sufficient that

$$z+\bar z\leq -w\leq\frac{z \bar z}{z+\bar z}.$$

Here is what seems to be the case.
Let $$w_1,w_2,\\dots$$ be the real roots of $$f$$ ordered in in\u00adcreas\u00ading or\u00adder of their ab\u00adso\u00adlute val\u00adues, and $$z_1,z_2,\\dots$$ be the roots with pos\u00adit\u00adive real and ima\u00adgin\u00adary parts, ordered in in\u00adcreas\u00ading or\u00adder of the real parts. As\u00adsume that $$X$$ is SR and takes val\u00adues $$0,1,\\dots,n$$.\n\nLet $$f$$ be the pgf of $$\\bigl\\lfloor\\frac 34X\\bigr\\rfloor$$. The de\u00adgree of $$f$$ is $$d=\\bigl\\lfloor\\frac34 n\\bigr\\rfloor$$. Then $$f$$ ap\u00adpears to be $$P_3$$ in gen\u00ader\u00adal. The num\u00adber of $$z_i$$\u2019s and the num\u00adber of $$w_i$$\u2019s are close to $$\\frac n4$$. This has been checked in a num\u00adber of cases, in\u00adclud\u00ading $$X\\sim B(n,p)$$ for $$n=54$$ and 75, and $$p=\\frac1{100}$$, $$\\frac12$$ and $$\\frac{100}{101}$$.10 If $$X\\sim B\\bigl(n,\\frac12\\bigr)$$, the pgf of $$\\bigl\\lfloor\\frac 34X\\bigr\\rfloor$$ is $$P_2$$ for $$n\\leq 9$$. If $$10\\leq n\\leq 30$$, it is $$P_3$$ and can be factored in\u00adto $$\\bigl\\lfloor\\frac n4\\bigr\\rfloor$$ ir\u00adre\u00addu\u00adcible cu\u00adbics, with an ex\u00adtra lin\u00adear factor if $$d\\equiv 1\\mod 3$$ and an ex\u00adtra ir\u00adre\u00addu\u00adcible quad\u00adrat\u00adic factor if $$d\\equiv 2\\bmod 3$$. Here ir\u00adre\u00addu\u00adcible means that it can\u00adnot be fur\u00adther factored in\u00adto poly\u00adno\u00admi\u00adals with pos\u00adit\u00adive coef\u00adfi\u00adcients. The ap\u00adpro\u00adpri\u00adate pair\u00ading of real and com\u00adplex roots is $$w_i$$ with $$z_i$$. 
This pic\u00adture con\u00adtin\u00adues to hold for many $$X$$\u2019s that are sums of in\u00adde\u00adpend\u00adent Bernoulli ran\u00addom vari\u00adables with ran\u00addomly chosen para\u00admet\u00aders.\n\nHere is some more de\u00adtail when $$X$$ is $$B\\bigl(n,\\frac12\\bigr)$$ and $$f$$ is the pgf of $$\\bigl\\lfloor\\frac 34X\\bigr\\rfloor$$, omit\u00adting mul\u00adti\u00adplic\u00adat\u00adive con\u00adstants: \\eqalign{ &n=2:\\quad f(u)=u+3 \\text{ is } P_1, \\cr &n=3:\\quad f(u)=u^2+3u+4 \\text{ is } P_2, \\cr &n=4:\\quad f(u)=u^3+4u^2+6u+5=(u+a)(u^2+bu+c) \\text{ is } P_2, \\cr &\\phantom{n=4:}\\quad f(-a)=0,\\quad b=4-a,\\quad c=\\textstyle\\frac 5a,\\quad f(0)=5,\\quad f(-4)=-19, } so $$f(-a)=0$$ for some $$0 < a < 4$$. There\u00adfore $$f$$ is $$P_2$$. It is not $$P_1$$ since the dis\u00adcrim\u00adin\u00adant of the quad\u00adrat\u00adic factor is $$< 0$$ for $$0 < a < 4$$. The ac\u00adtu\u00adal val\u00adues are $$a=2.35$$, $$b=1.65$$, $$c=2.12$$. \\eqalign{ &n=5:\\quad \\textstyle f(u)=u^3+\\frac53 u^2+\\frac 53 u+1=(u+a)(u^2+bu+c) \\text{ is } P_2, \\cr &\\phantom{n=5:}\\quad f(-a)=0,\\quad c=\\textstyle\\frac 1a,\\quad b=\\frac53-a,\\quad f(0)=1,\\quad f\\bigl(-\\frac 53\\bigr)=-\\frac{16}9, } so $$f(-a)=0$$ for some $$0 < a < \\frac53$$. Again, $$f$$ is $$P_2$$ but not $$P_1$$. The ac\u00adtu\u00adal val\u00adues are $$a=1$$, $$b=\\frac23$$, $$c=1$$. \\eqalign{ &n=6:\\quad f(u)=u^4+21u^3+20 u^2+15 u+7=(u+a)(u+b)(u^2+cu+d) \\text{ is } P_2, \\cr &\\phantom{n=6:}\\quad f(-a)=f(-b)=0,\\quad d=\\textstyle\\frac 7{ab},\\quad c=21-a-b. 
} Com\u00adput\u00ading $$f(x+iy)$$ with $$y\\neq 0$$ and set\u00adting it $$=0$$ leads to $f(x)-\\textstyle \\frac12 f^{\\prime\\prime}(x)y^2+y^4=0,\\quad f^{\\prime}(x)-\\textstyle\\frac16 y^2f^{\\prime\\prime\\prime}(x)=0.$ Solv\u00ading the second equa\u00adtion for $$y^2$$ and us\u00ading that in the first equa\u00adtion leads to $f(x) [f^{\\prime\\prime\\prime}(x)]^2-3f^{\\prime\\prime}(x)f^{\\prime}(x)f^{\\prime\\prime\\prime}(x)+36[f^{\\prime}(x)]^2=0.$ Ex\u00adpand\u00ading this yields a poly\u00adno\u00admi\u00adal in $$x$$, all of whose coef\u00adfi\u00adcients are neg\u00adat\u00adive, so $$x < 0$$. It fol\u00adlows that $$f$$ is $$P_2$$. The ac\u00adtu\u00adal val\u00adues are $$a=20.04, b=.66,c=.30,d=.53$$.\n\nThe first case that is $$P_3$$ is $$n=10$$. Now \\eqalign{ f(u)&=11 + 45 u + 120 u^2 + 462 u^3 + 210 u^4 + 120 u^5 + 55 u^6 + u^7 \\cr& =(u+53)(u^3+.24u^2+.10u+.03)(u^3+1.96u^2+3.23u+7.69). } Set\u00adting $$f(x+iy)=0$$ for $$y\\neq 0$$ gives $\\sum_{j=0}^3\\frac{(-1)^jf^{(2j)}(x)y^{2j}}{(2j)!}=0,\\quad \\sum_{j=0}^3\\frac{(-1)^jf^{(2j+1)}(x)y^{2j}}{(2j+1)!}=0.$\n\nCon\u00adsider small $$n$$ with $$X$$ gen\u00ader\u00adal. Use $$e_j$$ for the ele\u00adment\u00adary sym\u00admet\u00adric func\u00adtions.\n\nFor $$n=3$$, $f(u)=u^2e_0+ue_1+(e_2+e_3)$ is $$P_2$$.\n\nFor $$n=4$$, $f(u)=u^3e_0+u^2e_1+ue_2+(e_3+e_4)$ is $$P_3$$, but may not be $$P_2$$.\n\nFor $$n=5$$, $f(u)=u^3(e_0+e_1)+u^2e_2+ue_3+(e_4+e_5)$ is $$P_3$$, but may not be $$P_2$$.\n\nFor $$n=6$$, $f(u)=u^4e_0+u^3(e_1+e_2)+u^2e_3+ue_4+(e_5+e_6).$ Set\u00adting this equal to $(u+w)(u^3+a u^2+b u+c)$ with $$w\\geq 0$$ gives $$f(-w)=0$$ and \\eqalign{ w^3a&=w^2e_3-we_4+e_5+e_6=w^3(e_1+e_2-e_0w), \\cr w^2b&=we_4-e_5-e_6, \\cr wc&=e_5+e_6. 
} So that $$a,b,c\\geq 0$$, we need $\\frac{e_5+e_6}{e_4}\\leq w\\leq\\frac{e_1+e_2}{e_0}.$ Compute $\\displaylines{ e_0^2f\\biggl(-\\frac{e_1+e_2}{e_0}\\biggr)=e_1(e_1e_3-e_0e_4)+e_2(2e_1e_3-e_0e_4)+e_2^2e_3+e_0^2(e_5+e_6)\\geq 0, \\cr 16e_0^3f\\biggl(-\\frac{e_1+e_2}{2e_0}\\biggr)=-e_1^4-4e_1^2(e_1e_2-e_0e_3)-8e_0^2(e_1e_4-2e_0e_5) \\cr -2e_1e_2(3e_1e_2-4e_0e_3)-8e_0^2(e_2e_4-2e_0e_6)-4e_2^2(e_1e_2-e_0e_3)-e_2^4\\leq 0. }$ Since $\\frac{e_5+e_6}{e_4}\\leq\\frac{e_1+e_2}{2e_0},$ it follows that $$f$$ has a root $$-w$$ which makes $$a,b,c\\geq0$$. The inequalities above follow from the Newton inequalities $\\frac16\\frac{e_1}{e_0}\\geq\\frac 25\\frac{e_2}{e_1}\\geq\\frac34\\frac{e_3}{e_2}\\geq\\frac43\\frac{e_4}{e_3}\\geq\\frac52\\frac{e_5}{e_4}\\geq\\frac61\\frac{e_6}{e_5}.$ It follows that $$f$$ is $$P_3$$.\n\nFor $$n=7$$, $f(u)=u^5e_0+u^4e_1+u^3(e_2+e_3)+u^2e_4+ue_5+(e_6+e_7).$ Set this equal to $(u^2+au+b)(u^3+cu^2+du+g).$ Solving for the coefficients gives \\eqalign{ c&=e_1-a, \\cr d&=a^2-b-ae_1+e_2+e_3, \\cr g&=-a^3+2ab+a^2e_1-be_1-a(e_2+e_3)+e_4, } with $b^2-b[3a^2+(e_2+e_3)-2ae_1]+a^4-a^3e_1+a^2(e_2+e_3)-ae_4+e_5=0$ and $(e_1-2a)b^2+b[a^3-a^2e_1+a(e_2+e_3)-e_4]+(e_6+e_7)=0.$ Combining these two equations gives (and nothing is lost if $$e_1\\neq 2a$$) $b=\\frac{2a^5-3e_1a^4+[e_1^2+2(e_2+e_3)]a^3-[e_1(e_2+e_3)+2e_4]a^2+(e_1e_4+2e_5)a-[e_1e_5-(e_6+e_7)]}{5a^3-6e_1a^2+[2e_1^2+(e_2+e_3)]a-[e_1(e_2+e_3)-e_4]}.$ Using this value of $$b$$ in either of those equations implies that $$a$$ must be a root of a degree-10 polynomial $$H(a)$$. 
Also, letting $D=5a^3-6e_1a^2+[2e_1^2+(e_2+e_3)]a-[e_1(e_2+e_3)-e_4],$ we have \\eqalign{ Dd&=3a^5-8e_1a^4+[7e_1^2+4(e_2+e_3)]a^3-[2e_1^3+7e_1(e_2+e_3)-3e_4]a^2 \\cr&\\quad +[(e_2+e_3)(e_2+e_3+3e_1^2)-2(e_1e_4+e_5)]a \\cr&\\qquad -[e_1(e_2+e_3)^2-e_4(e_2+e_3)-e_1e_5+(e_6+e_7)] } and \\eqalign{ Dg&=-a^6+3e_1a^5-[3e_1^2+2(e_2+e_3)]a^4+[e_1^3+4e_1(e_2+e_3)]a^3 \\cr&\\quad -[(e_2+e_3)(e_2+e_3+2e_1^2)+(e_1e_4-4e_5)]a^2+[e_1(e_2+e_3)^2+e_1^2e_4-4e_1e_5+2(e_6+e_7)]a \\cr&\\qquad -[e_1e_4(e_2+e_3)-e_4^2-e_1^2e_5+e_1(e_6+e_7)]. } Using PolynomialReduce, $H(a)=[a^4-a^3e_1+a^2(e_2+e_3)-ae_4+e_5]Dg-\\textstyle\\frac 13(e_6+e_7)[13Dd+K(a)],$ where \\eqalign{ K(a)&=29e_1a^4-2[20e_1^2+17(e_2+e_3)]a^3+[14e_1^3+61(e_2+e_3)-24e_4]a^2 \\cr&\\quad -[27e_1^2(e_2+e_3)+10(e_2+e_3)^2-17e_1e_4-20e_5]a \\cr&\\qquad +10[e_1(e_2+e_3)^2-e_4(e_2+e_3)-e_1e_5+e_6+e_7]. } Now the Newton inequalities are $\\frac17\\frac{e_1}{e_0}\\geq\\frac 26\\frac{e_2}{e_1}\\geq\\frac35\\frac{e_3}{e_2}\\geq\\frac44\\frac{e_4}{e_3}\\geq\\frac53\\frac{e_5}{e_4}\\geq\\frac62\\frac{e_6}{e_5}\\geq\\frac71\\frac{e_7}{e_6}.$\n\nUnfortunately, the pgf $$f$$ of $$\\bigl\\lfloor\\frac 35X\\bigr\\rfloor$$ is not necessarily $$P_3$$. Take $$X$$ to be $$B\\bigl(n,\\frac12\\bigr)$$. For $$n\\leq 16$$, $$f$$ is $$P_2$$, and for $$17\\leq n\\leq 20$$ it is $$P_3$$. For $$n=21$$, $$d=12$$, and there are 4 $$w_i$$\u2019s and 4 $$z_j$$\u2019s. For $$f$$ to be $$P_3$$, there would have to be a pairing of the $$w_i$$\u2019s and $$z_j$$\u2019s so that for each such pair, $$\\langle z_i,w_j\\rangle$$ has positive coefficients. But no such pairing exists, since $$\\langle z_i,w_1\\rangle$$ has a negative quadratic coefficient for $$i=1,2,3,4$$, so $$f$$ is not $$P_3$$. 
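Whether a given root pair contributes a usable factor comes down to expanding one cubic $$\langle z,w\rangle$$ and checking signs, which is easy to do numerically. A minimal sketch in plain Python (the sample roots are illustrative only, not the actual roots for $$B\bigl(21,\frac12\bigr)$$):

```python
# Expand <z, w> = (u - w)(u - z)(u - zbar) for a negative real root w and a
# conjugate pair z, zbar with Re z > 0, writing it as u^3 + b u^2 + c u + d.

def cubic_coeffs(w, z):
    """Coefficients (b, c, d) of <z, w>, leading coefficient 1."""
    s = 2 * z.real          # z + zbar
    p = abs(z) ** 2         # z * zbar
    return -(w + s), p + w * s, -w * p

def pairs_positively(w, z):
    """All coefficients positive, i.e. z + zbar <= -w <= z*zbar/(z + zbar)."""
    return all(t > 0 for t in cubic_coeffs(w, z))

# Illustrative roots only (not computed from any pgf in the text):
w, z = -1.0, 0.3 + 1.0j
print(cubic_coeffs(w, z))         # roughly (0.4, 0.49, 1.09)
print(pairs_positively(w, z))     # True
print(pairs_positively(-0.4, z))  # False: here -w < z + zbar, so b < 0
```

The test in pairs_positively is exactly the condition $$z+\bar z\leq -w\leq z\bar z\/(z+\bar z)$$ stated earlier.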
However, $$\\langle z_i,w_i\\rangle$$ and $$\\langle z_{i-1},w_i\\rangle$$ have pos\u00adit\u00adive coef\u00adfi\u00adcients for $$i=2,3,4$$, so one can choose a fac\u00adtor\u00adiz\u00ada\u00adtion of $$f$$ so that only one factor has a neg\u00adat\u00adive coef\u00adfi\u00adcient. Note that it is not $$P_4$$ either. It is $$P_6$$, since $(u-w_1)(u-w_2)(u-z_1)(u-\\bar z_1)(u-z_2)(u-\\bar z_2)$ has pos\u00adit\u00adive coef\u00adfi\u00adcients. This may or may not be use\u00adful.\n\nHowever, all is not lost, due to the fol\u00adlow\u00ading res\u00adult.\n\nPro\u00adpos\u00adi\u00adtion\u00a05: Sup\u00adpose $$U$$ and $$V$$ are non\u00adneg\u00adat\u00adive in\u00adteger-val\u00adued ran\u00addom vari\u00adables with $$P(U=m)=p_m$$ and $$P(V=m)=q_m$$ whose pgf\u2019s sat\u00adis\u00adfy $Eu^U=Eu^V(au^3+bu^2+cu+d).$ If $$b+c+d\\geq 0$$, $$c+d\\geq 0$$, $$d\\geq 0$$, then $$U,V$$ can be coupled so that $$U\\leq V+3$$ and then $E(V+3-U)=b+2c+3d.$\n\nProof.\u2009 The dis\u00adtri\u00adbu\u00adtions of $$U,V$$ sat\u00adis\u00adfy $p_m=aq_{m-3}+bq_{m-2}+cq_{m-1}+dq_m.$ Since $$a+b+c+d=1$$, sum\u00adming gives $P(U\\leq m)=P(V\\leq m-3)+(b+c+d)q_{m-2}+(c+d)q_{m-1}+dq_m.$ There\u00adfore, if $$b+c+d\\geq 0$$, $$c+d\\geq 0$$, $$d\\geq 0$$, $$U$$ and $$V$$ can be coupled so that $$U\\leq V+3$$ al\u00admost surely.11 Com\u00adput\u00ading the first mo\u00adment, we see that \\eqalign{ EU&=\\sum_m m[aq_{m-3}+bq_{m-2}+cq_{m-1}+dq_m] \\cr& =a(EV+3)+b(EV+2)+c(EV+1)+dEV=EV+(3a+2b+c), } so us\u00ading again $$a+b+c+d=1$$, we con\u00adclude that $E(V+3-U)=b+2c+3d. \\quad \u25fb$\n\nFor the above ex\u00adample, $$X\\sim B\\bigl(21,\\frac12\\bigr)$$, we in\u00addic\u00adate the situ\u00adation be\u00adlow, with the $$(i,j)$$ entry cor\u00adres\u00adpond\u00ading to $$\\langle z_i,w_j\\rangle$$. If the entry is $$+$$, all coef\u00adfi\u00adcients of $$\\langle z_i,w_j\\rangle$$ are pos\u00adit\u00adive. 
Otherwise, the entry indicates which coefficient is negative, and gives the value of $$b+2c+3d$$: $\\begin{pmatrix} b;.036&+&c;.48&c;.92\\\\ b;.61\\phantom{0}&+&+&c;1.5\\\\ b;1.80&b;1.99&+&+\\\\ b;2.01&b;2.20&+&+ \\end{pmatrix}.$ Writing $f=\\prod_{i=1}^4\\langle z_i,w_i\\rangle,$ we see that there is a coupling of $$\\bigl\\lfloor\\frac35X\\bigr\\rfloor$$ with $$V$$, a sum of three independent random variables with values $$0,1,2,3$$, so that $$\\bigl\\lfloor\\frac35X\\bigr\\rfloor\\leq V+3$$ and $\\textstyle E\\bigl(V+3-\\bigl\\lfloor\\frac35X\\bigr\\rfloor\\bigr)=.035\\dots.$\n\nLet $$f$$ be the pgf of $$\\bigl\\lfloor\\frac 35X\\bigr\\rfloor$$ where $$X$$ is a general SR random variable taking values $$0,1,\\dots,n$$. The degree of $$f$$ is $$d=\\bigl\\lfloor\\frac35 n\\bigr\\rfloor$$. Then the following appears to be the case. There is a (not in general unique) index $$m$$ so that $$\\langle z_i,w_{i+1}\\rangle$$ has positive coefficients if $$i < m$$ and $$\\langle z_i,w_{i}\\rangle$$ has positive coefficients if $$i > m$$. Then the proposition can be applied to the case $$\\langle z_m,w_1\\rangle$$. In many cases, this partition of the $$z_i$$\u2019s is not necessary, and $$f$$ is actually $$P_3$$. This appears to be the case, for example, if $$X$$ is $$B(21,p)$$ for $$p$$ close to 0 or 1. 
In one of these cases, $$\\langle z_i,w_{i+1}\\rangle$$ has pos\u00adit\u00adive coef\u00adfi\u00adcients for all $$i$$, and in the oth\u00ader $$\\langle z_i,w_{i}\\rangle$$ has pos\u00adit\u00adive coef\u00adfi\u00adcients for all $$i$$.\n\nFor $$21\\leq n\\leq 26$$, one can choose all but one factor to have pos\u00adit\u00adive coef\u00adfi\u00adcients, and the (nor\u00admal\u00adized) cu\u00adbics cor\u00adres\u00adpond\u00ading to $$w_1$$ are \\eqalign{ n&=21:\\quad .015+.984u-.002u^2+.003u^3, \\cr n&=22:\\quad .013+.990u-.011u^2+.007u^3, \\cr n&=23:\\quad .012+.997u-.024u^2+.015u^3, \\cr n&=24:\\quad .011+1.003u-.040u^2+.026u^3, \\cr n&=25:\\quad .011+1.007u-.059u^2+.042u^3, \\cr n&=26:\\quad .010+1.008u-.0819u^2+.063u^3. }\n\nHere is a lar\u00adger ex\u00adample, with $$X$$ $$B\\bigl(33,\\frac12\\bigr)$$, $$d=19$$. Now there are 7 $$w_i$$\u2019s and 6 $$z_i$$\u2019s: $\\begin{pmatrix} - & + & - & - & - & - & -\\\\ - & b & + & c & - & - & -\\\\ b & b & + & + & c & c & c\\\\ b & b & b & + & + & c & c\\\\ b & b & b & b & + & c & c\\\\ b & b & b & b & b & + & c \\end{pmatrix}.$ Now, $$+$$ means that all four coef\u00adfi\u00adcients are pos\u00adit\u00adive, $$b$$ or $$c$$ means that the coef\u00adfi\u00adcient is neg\u00adat\u00adive, but the hy\u00adpo\u00adtheses of the pro\u00adpos\u00adi\u00adtion are sat\u00adis\u00adfied, and $$-$$ means that the hy\u00adpo\u00adtheses of the pro\u00adpos\u00adi\u00adtion are not sat\u00adis\u00adfied at all. In the con\u00adtext of the last para\u00adgraph, the pos\u00adsible val\u00adues of $$m$$ are 2, 3, 4, 5.\n\nFrom the ex\u00adamples we have looked at, it ap\u00adpears that the pgf of $$\\bigl\\lfloor\\frac 3kX\\bigr\\rfloor$$ \u201cusu\u00adally\u201d sat\u00adis\u00adfies $$P_3$$, and when it does not, it can be factored in\u00adto cu\u00adbics that have pos\u00adit\u00adive coef\u00adfi\u00adcients ex\u00adcept for one. That one can be treated us\u00ading the pro\u00adpos\u00adi\u00adtion. 
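The hypothesis check in Proposition 5 is mechanical: for a normalized cubic factor $$au^3+bu^2+cu+d$$ with $$a+b+c+d=1$$, one verifies $$b+c+d\geq 0$$, $$c+d\geq 0$$, $$d\geq 0$$ and reads off the slack $$E(V+3-U)=b+2c+3d$$. A sketch in plain Python, applied to the $$n=21$$ cubic listed above:

```python
# Hypothesis check for Proposition 5: the pgf factor a*u^3 + b*u^2 + c*u + d
# is normalized (a + b + c + d = 1) but may have a negative coefficient.

def proposition5_slack(a, b, c, d, tol=1e-12):
    """Return E(V + 3 - U) = b + 2c + 3d if the coupling hypotheses
    b+c+d >= 0, c+d >= 0, d >= 0 hold; otherwise None."""
    assert abs(a + b + c + d - 1) < tol, "factor must be normalized"
    if b + c + d >= -tol and c + d >= -tol and d >= -tol:
        return b + 2 * c + 3 * d
    return None

# The n = 21 cubic paired with w_1 in the text:
# .015 + .984u - .002u^2 + .003u^3, i.e. d=.015, c=.984, b=-.002, a=.003.
print(proposition5_slack(a=.003, b=-.002, c=.984, d=.015))  # about 2.011
```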
Typ\u00adic\u00adally, the way this arises is that $$\\langle z_i,w_i\\rangle$$ has pos\u00adit\u00adive coef\u00adfi\u00adcients for one range of $$i$$\u2019s, and $$\\langle z_i,w_{i+1}\\rangle$$ has pos\u00adit\u00adive coef\u00adfi\u00adcients for a com\u00adple\u00adment\u00adary range of $$i$$\u2019s. The ex\u00adcep\u00adtion\u00adal factor ap\u00adpears in a trans\u00adition from one range to the oth\u00ader.\n\nWrite $$z_i\\sim w_j$$ to mean that $$\\langle z_i,w_j\\rangle$$ has pos\u00adit\u00adive coef\u00adfi\u00adcients. Take $$X$$ to be $$B\\bigl(n,\\frac12\\bigr)$$.\n\nIf $$n=33$$, $$d=19$$, and $$\\bigl\\lfloor\\frac 35X\\bigr\\rfloor$$ sat\u00adis\u00adfies \\eqalign{ z_i&\\sim w_{i+1}&\\quad&\\text{for }1\\leq i\\leq 4, \\cr z_i&\\sim w_{i}&\\quad&\\text{for }3\\leq i\\leq 6. } If $$n=51$$, $$d=30$$, and $$\\bigl\\lfloor\\frac 35X\\bigr\\rfloor$$ sat\u00adis\u00adfies \\eqalign{ z_i&\\sim w_{i+1}&\\quad&\\text{for }1\\leq i\\leq 6, \\cr z_i&\\sim w_{i}&\\quad&\\text{for }4\\leq i\\leq 10. } In neither case is $$P_3$$ sat\u00adis\u00adfied.\n\nIf $$n=105$$, $$d=45$$, and $$\\bigl\\lfloor\\frac 37X\\bigr\\rfloor$$ sat\u00adis\u00adfies \\eqalign{ z_i&\\sim w_{i+1}&\\quad&\\text{for }i=14, \\cr z_i&\\sim w_{i}&\\quad&\\text{for }1\\leq i\\leq 14, \\cr z_i&\\sim w_{i-1}&\\quad&\\text{for }i=14,15, } so $$P_3$$ is sat\u00adis\u00adfied. We do not have a counter\u00adexample to $$P_3$$ for $$\\bigl\\lfloor\\frac 37X\\bigr\\rfloor$$.\n\nIf $$n=105$$, $$d=39$$, and $$\\bigl\\lfloor\\frac 38X\\bigr\\rfloor$$ sat\u00adis\u00adfies \\eqalign{ z_i&\\sim w_{i+1}&\\quad&\\text{for }1\\leq i\\leq 9, \\cr z_i&\\sim w_{i}&\\quad&\\text{for }5\\leq i\\leq 13, \\cr z_i&\\sim w_{i-1}&\\quad&\\text{for }i=13, } so $$P_3$$ is not sat\u00adis\u00adfied.\n\n#### 4. 
The sufficient conditions\n\nMo\u00adtiv\u00adated by the suf\u00adfi\u00adcient con\u00addi\u00adtions \\eqref{SSR} and \\eqref{SHB}, we con\u00adsider poly\u00adno\u00admi\u00adals $f(u)=\\sum_{i=0}^nc_iu^i$ sat\u00adis\u00adfy\u00ading12 $c_i^2=ac_{i-1}c_{i+1},\\quad c_0=c_n.$ Then for $$n=2$$ $f\\text{ is } \\begin{cases} P_1&\\text{if }a\\geq 4,\\\\ P_2&\\text{if }a < 4. \\end{cases}$ If $$n=3$$, $f(z)=c_0(1+az+az^2+z^3)=c_0(z+1)(z^2+(a-1)z+1),$ so $f\\text{ is } \\begin{cases} P_1&\\text{if }a\\geq 3,\\\\ P_2&\\text{if }1\\leq a < 3,\\\\ P_3&\\text{if }a < 1. \\end{cases}$ At the trans\u00adition cases, ex\u00adcept for the factor of $$c_0$$, $f(z)= \\begin{cases} (z+1)^3&\\text{if }a=3,\\\\ (z+1)(z^2+1)&\\text{if }a=1,\\\\ (z^3+1)&\\text{if }a=0. \\end{cases}$\n\nIf $$n=4$$, $f(z)=c_0(1+b^3z+b^4z^2+b^3z^3+z^4),$ where $$a=b^2$$, so $f\\text{ is } \\begin{cases} P_1&\\text{if }a > 3.236,\\\\ P_2&\\text{if }\\sqrt 2 < a < 3.236,\\\\ P_4&\\text{if }a < \\sqrt 2. \\end{cases}$ If $$a=3.236$$, $$f(z)$$ has double roots at $$-2.513$$ and $$-.398$$. If $$a=\\sqrt 2$$, $f(z)=c_0(1+z^2)(1+2^{3\/4}z+z^2).$\n\nIf $$n=5$$, \\eqalign{ f(z)&=c_0(1+a^2z+a^3z^2+a^3z^3+a^2z^4+z^5) \\cr& =c_0(z+1)[z^4+(a^2-1)z^3+(a^3-a^2+1)z^2+(a^2-1)z+1], } so $f\\text{ is } \\begin{cases} P_1&\\text{if }a > 3.234,\\\\ P_2&\\text{if }1.465 < a < 3.234,\\\\ P_3&\\text{if }1\\leq a < 1.465,\\\\ P_4&\\text{if } a < 1. 
\\end{cases}$\n\nIf $$a=1$$, $f(z)=c_0(1+z^3)(1+z+z^2).$\n\nIf $$n=6$$, $f(z)=c_0(1+b^5z+b^8z^2+b^9z^3+b^8z^4+b^5z^5+z^6),$ where $$a=b^2$$, so for various values of $$a$$ (read: $$f$$ is $$P_6$$ for $$0 < a < 1.019$$, $$P_4$$ for $$1.019 < a < 1.091$$, and so on) $0 < P_6 < 1.019 < P_4 < 1.091 < P_2 < 1.341 < P_1.$ This suggests the following general conjecture.\n\nConjecture\u00a06: If $$c_i^2\\geq\\sqrt 2 c_{i-1} c_{i+1}$$ (or $$c_{i-1}c_{i+1}\\geq ac_{i-2}c_{i+2}$$ for some $$a$$), then $$f$$ is $$P_3$$.\n\nGiven the polynomial $f(u)=\\sum_{i=0}^nc_iu^i$ and a $$b > 0$$, define a polynomial $g(u)=\\sum_{i=0}^nc_iu^{\\lfloor bi\\rfloor}=\\sum_{j=0}^{\\lfloor bn\\rfloor}d_ju^j,\\quad d_j=\\sum_{i:j\\leq bi < j+1}c_i.$\n\nIf $$f$$ is the pgf of $$X$$, then $$g$$ is the pgf of $$\\lfloor bX\\rfloor$$. Then $d_j^2-ad_{j-1}d_{j+1}=\\sum_m\\biggl[\\sum_{\\substack{ j\\leq bi < j+1\\\\ bm-j-1 < bi\\leq bm-j}}c_ic_{m-i}-a\\sum_{\\substack{ j-1\\leq bi < j\\\\ bm-j-2 < bi\\leq bm-j-1}}c_ic_{m-i}\\biggr].$ We will say that the coefficients $$c_i$$ satisfy the Newton inequalities if $$\\label{Newton} \\frac{c_i^2}{\\binom ni^2}\\geq \\frac{c_{i-1}}{\\binom n{i-1}}\\frac{c_{i+1}}{\\binom n{i+1}};$$ see [\u25ca], for example.\n\nIf $$b=\\frac1k$$, $d_j^2-ad_{j-1}d_{j+1}=\\sum_{i=0}^{k-1}\\sum_{l=0}^{k-1}[c_{kj+i}c_{kj+l}-ac_{kj+i-k}c_{kj+l+k}].$ As pointed out earlier, even if $$k=1$$ and $$c_i=\\binom ni$$, \\eqref{SSR} fails for large $$n$$.\n\n#### Acknowledgment\n\nThe editor thanks Subhro Ghosh for reviewing the paper, and for his various suggestions which improved the presentation of the paper.","date":"2021-07-28 06:17:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, 
\"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 10, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9822655916213989, \"perplexity\": 3176.782152327265}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046153531.10\/warc\/CC-MAIN-20210728060744-20210728090744-00164.warc.gz\"}"} | null | null |
create table llx_c_departements
(
rowid integer AUTO_INCREMENT PRIMARY KEY,
code_departement varchar(6) NOT NULL,
fk_region integer,
cheflieu varchar(7),
tncc integer,
ncc varchar(50),
nom varchar(50),
active tinyint DEFAULT 1 NOT NULL
) ENGINE=innodb;
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,163 |
{"url":"https:\/\/tex.stackexchange.com\/questions\/289851\/how-can-i-independently-number-figure-pages","text":"# How can I independently number figure pages?\n\nI have a document where all the figures are put on separate pages (sometimes more than one figure to a page, which is fine). This is achieved using the \\begin{figure}[p] option in the figure environment. This does exactly what I want and puts any figures always on their own pages.\n\nHowever, I need these pages to be numbered independently from the main body of text, with a counter for the total number of figure pages. For example, I need the document to progress: page 1, page 2, Figure page 1\/3, page 3, Figure page 2\/3, Figure page 3\/3, page 4, page 5.\n\nI don't mind if the main text page numbers continue counting through the figure pages, but I'd prefer it if they didn't as per my example above.\n\nThanks.\n\nThis isn't quite what you wanted, but you might warm up to it. It moves all the floats to the end of the document.\n\n\\documentclass{article}\n\\usepackage[nofiglist,nomarkers]{endfloat}\n\\usepackage{lastpage}\n\\usepackage{mwe}\n\n\\makeatletter\n\\def\\ps@float{\\ps@plain\n\\def\\@oddfoot{\\hfil Float \\thepage{ of }\\pageref{LastPage}\\hfil}%\n\\let\\@evenfoot=\\@oddfoot}\n\\makeatother\n\n\\AtBeginDelayedFloats{\\pagestyle{float}\\setcounter{page}{1}}\n\n\\begin{document}\n\n\\begin{figure}[p]\n\\centering\n\\includegraphics{example-image}\n\\caption{test}\n\\end{figure}\n\n\\lipsum[1-8]\n\n\\end{document}\n\n\nThe only thing I know of that will delay until the float is outputted is \\protected@write.\n\nI added two new counters and redefine \\thepage as \\thetextpage or \\thefloatpage as needed, so the \\tableofcontents, \\listoffigures and \\pageref should 
work.\n\n\\documentclass{article}\n\\usepackage{xcolor}\n\\usepackage{everypage}\n\\usepackage{mwe}\n\n\\newcounter{textpage}\n\\newcounter{floatpage}\n\\renewcommand{\\thefloatpage}{{\\color{red}\\arabic{floatpage}}}\n\n\\makeatletter\n\\newcommand{\\floatpage}{\\protected@write\\@auxout{\\let\\arabic\\relax}% expand at shipout\n{\\string\\newfloatpage{\\arabic{page}}}}\n\n\\newcommand{\\newfloatpage}[1]% #1 = \\arabic{page}\n{\\@ifundefined{floatpage#1}{\\expandafter\\gdef\\csname floatpage#1\\endcsname{\\relax}}{}}\n\n\\newcommand{\\mypagestyle}{\\@ifundefined{floatpage\\arabic{page}}%\n{\\stepcounter{textpage}\\global\\let\\thepage=\\thetextpage}%\n{\\stepcounter{floatpage}\\global\\let\\thepage=\\thefloatpage}}\n\\makeatother\n\n\\begin{document}\n\n\\listoffigures\n\\bigskip\n\n\\begin{figure}[p]\n\\floatpage% place in every figure[p]\n\\centering\n\\includegraphics[height=.9\\textheight,width=.9\\textwidth]{example-image}\n\\caption{test}\n\\end{figure}\n\n\\lipsum[1-8]\n\n\\end{document}","date":"2020-02-20 02:03:40","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.777449905872345, \"perplexity\": 1759.2334520257039}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, 
\"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875144498.68\/warc\/CC-MAIN-20200220005045-20200220035045-00023.warc.gz\"}"} | null | null |