Electoral district 195 is an electoral district in Cherkasy Oblast. In its current form it was created on 28 April 2012 by resolution No. 82 of the Central Election Commission (before that, a different system of electoral districts was in place). The district electoral commission is located in the Cherkasy City Palace of Youth at 170 Blahovisna Street, Cherkasy.
The district comprises part of the Sosnivskyi district of the city of Cherkasy (the area between Smilianska, Mykhailo Hrushevsky and Ivan Honta streets), as well as the Drabiv, Chyhyryn and Chornobai raions. Electoral district 195 borders district 151 to the northeast, district 148 to the east, district 150 to the southeast, district 102 to the south, districts 198 and 194 to the southwest, district 197 to the west, and district 98 to the northwest. Electoral district No. 195 consists of polling stations numbered 710036-710083, 710738-710819, 711043, 711045-711048, 711050-711067 and 711074-711079.
People's Deputies elected from the district
Election results
Parliamentary
2019
Majoritarian (single-mandate) candidates:
Арсенюк Олег Олексійович (Servant of the People)
Курбет Євгеній Олександрович (self-nominated)
Боєчко Владислав Федорович (Strength and Honour)
Друмашко Володимир Григорович (Holos)
Савенко Олександр Сергійович (European Solidarity)
Малкова Лариса Володимирівна (self-nominated)
Буданцев Роман Петрович (self-nominated)
Толмачов Олег Анатолійович (self-nominated)
Гриценко Володимир Миколайович (Party of Free Democrats)
Коломієць Богдан Іванович (Opposition Bloc)
Квітка Віктор Володимирович (self-nominated)
Пошиваник Микола Михайлович (self-nominated)
Хоменко Богдан Іванович (Shariy Party)
Рябошлик Олександр Володимирович (self-nominated)
Гур'янова Марина Ігорівна (self-nominated)
Семенов Олександр Миколайович (self-nominated)
Давидов Віктор Вячеславович (self-nominated)
Єфремова Марія Іванівна (self-nominated)
Польова Олена Володимирівна (self-nominated)
Легомінова Юлія Олегівна (self-nominated)
Франків Олена Анатоліївна (self-nominated)
Мельник Андрій Вікторович (self-nominated)
Ткач Костянтин Володимирович (self-nominated)
Прокопов Артем Станіславович (self-nominated)
Стопчак Ліна Анатоліївна (self-nominated)
Цуран Олександр Вікторович (self-nominated)
Федунь Андрій Володимирович (self-nominated)
2014
Majoritarian (single-mandate) candidates:
Зубик Володимир Володимирович (self-nominated)
Турченяк Олександр Васильович (People's Front)
Добровольський Микола Михайлович (Batkivshchyna)
Юрченко Василь Григорович (self-nominated)
Черевань Андрій Борисович (Right Sector)
Гетьман Олексій Юрійович (Radical Party)
Арсенюк Олексій Максимович (Opposition Bloc)
Гуляницький Артем Артурович (Green Planet)
Лазарєва Олена Миколаївна (self-nominated)
Радченко Семен Володимирович (Strong Ukraine)
Малий Олег Георгійович (self-nominated)
Чуприна Олександр Олександрович (self-nominated)
Марченко Юрій Іванович (self-nominated)
2012
Majoritarian (single-mandate) candidates:
Зубик Володимир Володимирович (self-nominated)
Гресь Володимир Анатолійович (Batkivshchyna)
Чуприна Олександр Олександрович (UDAR)
Малий Олег Георгійович (Communist Party of Ukraine)
Радуцький Олександр Романович (Socialist Party of Ukraine)
Островський Олег Анатолійович (self-nominated)
Кравченко Наталія Іванівна (self-nominated)
Згура Віталій Олександрович (self-nominated)
Гвоздь Микола Гаврилович (self-nominated)
Гришкін Олександр Петрович (self-nominated)
Пацьора Михайло Іванович (Ridna Vitchyzna)
Вчорашній Павло Юрійович (self-nominated)
Гавриленко Володимир Іванович (self-nominated)
Presidential
Turnout
Voter turnout in the district:
External links
District No. 195 — website of the State Register of Voters
Electoral districts, Cherkasy Oblast — website of the State Register of Voters
Single-mandate electoral district No. 195 — website of the Central Election Commission
References
Q: right align item next to semantic-ui's container
My problem is the following:
I want to put an item directly to the right of a semantic-ui container.
<div class="outer-container">
<div class="ui main container">...</div>
<div class="right-aligned">...</div>
</div>
My outer-container is a footer that is sticky to the page.
.outer-container {
  position: -webkit-sticky;
  position: sticky;
  bottom: 0;
  height: 35px;
}
Without any properties set on it, the right-aligned div is pushed to the far right, but I'd like it to sit directly next to the container.
I tried using flexbox but I did not succeed, and I suspect this is due to semantic-ui's container having the property margin-right:auto !important.
I'd be thankful for any pointers.
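One possible direction, sketched and untested (the flex override is a suggestion of mine, not a documented semantic-ui recipe): make the footer a flex row and neutralize the container's `auto` margins with an equally specific rule, so the `right-aligned` div sits immediately after the container instead of being pushed to the far edge:

```css
.outer-container {
  display: flex;
  justify-content: center; /* centers the container + right-aligned pair */
}

/* semantic-ui centers .ui.container with margin auto !important,
   which wins over plain flex layout, so override it explicitly */
.outer-container > .ui.main.container {
  margin-left: 0 !important;
  margin-right: 0 !important;
}
```

Note that the pair is then centered as a group, so the container sits slightly left of true center; if exact centering matters, an invisible spacer of the same width before the container is one workaround.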
According to Kristin
TV/Movie Reviews, Podcasts, Interviews, Music Reviews/Suggestions, Giveaways, Entertainment News & more!
The MARCUS KING BAND Release New Single 'Where I'm Headed'
September 25, 2018 by Kristin
THE MARCUS KING BAND
REVEAL NEW TRACK
'WHERE I'M HEADED'
CAROLINA CONFESSIONS
OUT OCTOBER 5th VIA SNAKEFARM RECORDS
PREORDER THE ALBUM HERE
HEADLINE UK TOUR DATES IN OCTOBER
The fast-rising Marcus King Band have revealed a new track, 'Where I'm Headed', taken from their upcoming third studio album Carolina Confessions, out October 5th via Snakefarm Records on CD, vinyl & DL.
'Where I'm Headed' is the latest song to be showcased from the album, following on from 'Homesick' & 'Welcome 'Round Here'. Blending emotive brass with soulful gospel backing vocals, the track is a driving and uplifting journey from laidback acoustic tones through to howling solos, spotlighting Marcus' virtuosic guitar playing and impassioned vocal talents.
"I wrote it about a very special lady," he says of the track. "It's to do with always being on the road and not really knowing where you're going next. A lot of times people ask where we're going and I don't really remember without looking at the tour book. So I'm saying to her that I don't know where I'm headed, but I know you'll be there."
Produced & mixed by Grammy award-winner Dave Cobb (Chris Stapleton / Sturgill Simpson) at Nashville's iconic RCA Studio A, Carolina Confessions sees 22-year-old Marcus and his band taking a major leap forwards in terms of song-writing, production, musical dexterity and emotional weight. The featured tracks, including lead UK single 'Welcome 'Round Here', a potent mix of blues-soaked guitars and intoxicating crescendos, find the group's trademark sound buoyed by a new narrative depth, as Marcus delves into heavy themes: absolution, guilt, leaving home, yearning, love and other affairs of the soul.
The album features 10 brand new songs, all penned by Marcus except for 'How Long', a co-write with The Black Keys' Dan Auerbach and veteran songwriter Pat McLaughlin. Throughout the record, Marcus – born and raised in Greenville, South Carolina – exhibits an almost Southern gothic sensibility in his songs, owning up to failed relationships, portraying the complex connection with his hometown, and generally displaying an expansive musical firmament in the process.
A Blue Ridge Mountain boy, Marcus has been writing songs and performing onstage for half of his lifetime, and fronting his own groups for nearly a decade. Since he was a teenager, he's been trading licks with famous fans and mentors Warren Haynes and Derek Trucks – indeed, so blown away was Haynes by the then-19-year-old that he signed him to his Evil Teen label, releasing the band's debut album, 'Soul Insight', in 2015 and producing the self-titled follow-up a year later.
Simply, Marcus and his five band-mates – drummer Jack Ryan, bass player Stephen Campbell, trumpeter / trombonist Justin Johnson, sax player Dean Mitchell and keyboard player DeShawn 'D-Vibes' Alexander – are on a mission to create a fresh musical experience effortlessly combining rock, blues and soul into a dynamic and atmospheric sound.
Watch the album teaser here.
CAROLINA CONFESSIONS TRACK-LIST (physical sequence)
1) Confessions
2) Where I'm Headed
3) Homesick
4) 8 a.m.
5) How Long
6) Remember
7) Side Door
8) Autumn Rains
9) Welcome 'Round Here
10) Goodbye Carolina
The UK dates in October come in the wake of a huge US run taking in shows on the Tedeschi Trucks Band's Wheels of Soul Tour, plus headline appearances in New York, Boston, Nashville, San Francisco and Los Angeles. Marcus also has his own 'Family Reunion' music festival in North Carolina.
OCTOBER 2018 UK TOUR DATES
25th – Bristol, The Fleece
26th – London, Islington Assembly Hall
27th – Manchester, Night & Day Café
28th – Glasgow, Stereo
\section{Introduction}
Several enumeration problems on planar graphs have been solved
recently. It has been shown~\cite{gn} that the number of labelled
planar graphs with $n$ vertices is asymptotically equal to
\begin{equation}\label{asympt-planar}
c \cdot n^{-7/2} \cdot \gamma^n n!,
\end{equation}
for suitable constants $c$ and $\gamma$. For series-parallel
graphs~\cite{SP}, the asymptotic estimate is of the form, again for
suitable constants $d$ and $\delta$,
\begin{equation}\label{asympt-sp}
d \cdot n^{-5/2} \cdot \delta^n n!.
\end{equation}
As can be seen from the proofs in \cite{SP,gn}, the difference in
the subexponential term comes from a different behaviour of the
counting generating functions near their dominant singularities.
Related families of labelled graphs have been studied, like
outerplanar graphs~\cite{SP}, graphs not containing $K_{3,3}$ as a
minor \cite{k33}, and, more generally, classes of graphs closed
under minors \cite{growth}. In all cases where asymptotic estimates
have been obtained, the subexponential term is systematically either
$n^{-7/2}$ or $n^{-5/2}$. The present chapter grew out of an attempt
to understand this dichotomy.
A~\emph{class} of graphs is a family of labelled graphs which is
closed under isomorphism. A~class~$\mathcal{G}$ is \emph{closed} if
the following condition holds: a graph is in $\mathcal{G}$ if and
only if its connected, 2-connected and 3-connected components are
in~$\mathcal{G}$. A~closed class is completely determined by its
3-connected members. The basic example is the class of planar
graphs, but there are others, especially minor-closed classes whose
excluded minors are 3-connected.
In this paper we present a general framework for enumerating
closed classes of graphs. Let $T(x,z)$ be the generating function
associated to the family of 3-connected graphs in a closed class
$\mathcal{G}$, where $x$ marks vertices and $z$ marks edges, and let
$g_n$ be the number of graphs in~$\mathcal{G}$ with~$n$ vertices.
Our first result shows that the asymptotic estimates for $g_n$ depend crucially
on the singular behaviour of $T(x,z)$. For a fixed value of $x$, let
$r(x)$ be the dominant singularity of $T(x,z)$. If $T(x,z)$ has an
expansion at $r(x)$ in powers of $Z =\sqrt{1-z/r(x)}$ with dominant
term $Z^{5}$, then the estimate for $g_n$ is as in
Equation~(\ref{asympt-planar}); if $T(x,z)$ is either analytic
everywhere or the dominant term is $Z^{3}$, then the pattern is
that of Equation~(\ref{asympt-sp}).
Our analysis gives a clear explanation of these facts in terms of the vanishing of
certain coefficients in singular expansions (Propositions
\ref{proposition:B's}, \ref{prop:singB}, and \ref{proposition:C's}).
There are also mixed cases, where
2-connected and connected graphs in $\mathcal{G}$ get different
exponents. And there are \emph{critical} cases too, due to the
confluence of two sources for the dominant singularity, where a
subexponential term $n^{-8/3}$ appears. This is the content of
Theorem~\ref{th:mainresult}, whose proof is based on a careful
analysis of singularities.
Section \ref{se:prelimi} presents technical preliminaries needed
in the paper, and Section \ref{se:asympt} contains the main results.
In Section~\ref{se:laws}, extending the analytic techniques
developed for asymptotic enumeration, we analyze random graphs from
closed classes of graphs. We show that several basic parameters
converge in distribution either to a normal law or to a Poisson
law. In particular, the number of edges, number of blocks and number
of cut vertices are asymptotically normal with linear mean and
variance. This is also the case for the number of special copies of
a fixed graph or a fixed block in the class. On the other hand, the
number of connected components converges to a discrete Poisson law.
In Section~\ref{se:bloc} we study a key extremal parameter: the size
of the largest block, or the largest 2-connected component. And in
this case we find a striking difference depending on the class of
graphs. For planar graphs there is asymptotically almost surely a
block of linear size, and the remaining blocks are of order
$O(n^{2/3})$. For series-parallel graphs there is no block of linear
size. This also applies more generally to the classes considered in
Theorem~\ref{th:mainresult}. A similar dichotomy occurs when
considering the size of the largest 3-connected component. This is
proved using the techniques developed by Banderier et al.
\cite{airy} for analyzing largest components in random maps. For
planar graphs we prove the following precise result in
Theorem~\ref{th:largest-block}. If ${X}_n$ is the size of the
largest block in random planar graphs with $n$ vertices, then
$$
\mathbf{P}\left({X}_n = \alpha n + x n^{2/3}\right) \sim n^{-2/3} c g(c x),
$$
where $\alpha \approx 0.95982$ and $ c \approx 128.35169$ are
well-defined analytic constants, and $g(x)$ is the so called Airy
distribution of the map type, which is a particular instance of a stable
law of index $3/2$. Moreover, the size of the second largest block
is $O(n^{2/3})$. The giant block is uniformly distributed among the
planar 2-connected graphs with the same number of vertices, hence
according to the results in \cite{bender} it has about $2.2629 \cdot
0.95982\,n = 2.172\, n$ edges, again with deviations of order
$O(n^{2/3})$ (the deviations for the normal law are of order $n^{1/2}$, but
the $n^{2/3}$ term coming from the Airy distribution dominates).
We remark that the size of the largest block has also been analyzed in \cite{kostas}
using different techniques. The main improvement with respect to
\cite{kostas} is that we are able to obtain a precise limit distribution.
With respect to the largest 3-connected component in a
random planar graph, we show that it follows an Airy distribution and has $\eta n$ vertices and
$\zeta n$ edges, where $\eta \approx 0.7346 $ and $\zeta \approx
1.7921 $ are again well-defined analytic constants. This is technically more involved, since we have to analyze the composition of two Airy laws and different probability distributions in 2-connected graphs.
The picture that emerges for random planar graphs is the
following. Start with a large 3-connected planar graph $M$ (or the
skeleton of a polytope in the space if one prefers a more geometric
view), and perform the following operations. First edges of $M$ are
substituted by small blocks with a distinguished oriented edge,
giving rise to the giant block
$L$; then small connected graphs are attached to some of the
vertices of $L$, which become cut vertices, giving rise to the
largest connected component~$C$. As we show later, $C$
has size $n-O(1)$. This model can be made
more precise and will be the subject of future research.
An interesting open question is whether there are other parameters
besides the size of the largest block (or largest 3-connected
component) for which planar graphs and series-parallel graphs differ
in a qualitative way. We remark that with respect to the largest
component there is no qualitative difference. This is also true for the degree
distribution \cite{degrees}. If $d_k$ is the probability that a
given vertex has degree $k>0$, then in both cases it can be shown
that the $d_k$ decay as $c\cdot n^\alpha q^k$, where $c,\alpha$ and
$q$ depend on the class under consideration \cite{degrees}.
In Section~\ref{se:examples} we apply the previous machinery to the
analysis of several classes of graphs closed under minors, including planar graphs and
series-parallel graphs. Whenever the generating function $T(x,z)$
can be computed explicitly, we obtain precise asymptotic estimates
for the number of graphs $g_n$, and limit laws for the main
parameters. In particular we determine the asymptotic probability of
a random graph being connected, the constant $\kappa$ such that
the expected number of edges is asymptotically $\kappa n$, and
other fundamental constants.
Our techniques also allow us to study graphs with a given density, or
average degree. To fix ideas, let $g_{n,\lfloor \mu n \rfloor}$ be
the number of planar graphs with $n$ vertices and $\lfloor \mu n
\rfloor$ edges: $\mu$~is the edge density and $2\mu$ is the average
degree. For $\mu \in (1,3)$, a precise estimate for $g_{n,\lfloor
\mu n \rfloor}$ can be obtained using a local limit
theorem~\cite{gn}. And parameters like the number of components or
the number of blocks can be analyzed too when the edge density
varies. It turns out that the family of planar graphs with density
$\mu \in (1,3)$ shares the main characteristics of planar graphs.
This is also the case for series-parallel graphs, where $\mu \in
(1,2)$ since maximal graphs in this class have only $2n-3$ edges.
In Section~\ref{se:critical} we show examples of \emph{critical phenomena} by
a suitable choice of the family $\mathcal{T}$ of 3-connected graphs.
In the associated closed class $\mathcal{G}$, graphs below a
critical density $\mu_0$ behave like series-parallel graphs, and
above $\mu_0$ they behave like planar graphs, or conversely. We even
have examples with more than one critical value.
We remark that graph classes with given 3-connected components are analyzed also in \cite{grammar} and \cite{gagarin}, where the emphasis is on combinatorial decompositions rather than asymptotic analysis.
\section{Preliminaries}\label{se:prelimi}
Generating functions are of the exponential type, unless we say
explicitly the contrary. The partial derivatives of $A(x,y)$ are
written $A_x(x,y)$ and $A_y(x,y)$. In some cases the derivative
with respect to $x$ is written $A'(x,y)$. The second derivatives are
written $A_{xx}(x,y)$, and so on. By a.a.s.\ we mean asymptotically
almost surely, which in our case means a property of random graphs
whose probability tends to $1$ as $n$ goes to infinity.
The decomposition of a graph into connected components, and of a
connected graph into blocks (2-connected components) are well known.
We also need the decomposition of a 2-connected graph
into 3-connected components \cite{Tutte}. A 2-connected graph is
built by series and parallel compositions and 3-connected graphs in
which each edge has been substituted by a block; see below the definition of networks.
A class of labelled graphs~$\mathcal{G}$ is \emph{closed} if a graph
$G$ is in $\mathcal{G}$ if and only if the connected, 2-connected
and 3-connected components of $G$ are in~$\mathcal{G}$. A~closed
class is completely determined by the family $\mathcal{T}$ of its
3-connected members. Let $g_{n}$ be the number of graphs in
$\mathcal{G}$ with $n$ vertices, and let $g_{n,k}$ be the number of
graphs with $n$ vertices and $k$ edges. We define similarly
$c_n,b_n,t_n$ for the number of connected, 2-connected and
3-connected graphs, respectively, as well as the corresponding
$c_{n,k},b_{n,k},t_{n,k}$. We introduce the EGFs
$$
G(x,y) = \sum_{n,k} g_{n,k} \, y^k {x^n \over n!},
$$
and similarly for $C(x,y)$ and $B(x,y)$. When $y=1$ we recover the
univariate EGFs
$$
B(x) = \sum b_n {x^n \over n!}, \qquad C(x) = \sum c_n {x^n \over n!},
\qquad G(x) = \sum g_n {x^n \over n!}.
$$
The following equations reflect the decomposition into connected
components and 2-connected components:
\begin{equation}\label{eq:GCB2}
G(x,y) = \exp(C(x,y)), \qquad x C'(x,y) =
x\exp\left(B'(x C'(x,y),y)
\right),
\end{equation}
%
In the first decomposition, one must notice that a general graph is
simply a \emph{set} of labelled connected graphs, hence the equation
$G(x,y) = \exp(C(x,y))$. The second decomposition is a bit more involved.
The EGF $x C'(x,y)$ is associated to the family of connected graphs
rooted at a vertex. Then, the second equation in~(\ref{eq:GCB2})
says that a connected graph with a rooted vertex is obtained from a
set of rooted $2$-connected graphs (where the root bears no label), in
which we substitute each vertex by a connected graph with a rooted vertex
(the roots allow us to recover the graph from its constituents). We also
define
$$
T(x,z) = \sum_{n,k} t_{n,k} \, z^k {x^n\over n!},
$$
where the only difference is that the variable for marking edges is
now~$z$. This convention is useful and will be maintained throughout
the paper.
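As a sanity check of the first identity in~(\ref{eq:GCB2}), one can take $\mathcal{G}$ to be the class of all labelled graphs, so that $g_n = 2^{\binom{n}{2}}$, and recover the numbers of connected labelled graphs from the formal logarithm of the EGF. The following sketch (exact rational arithmetic; the code and the standard recurrence for the formal logarithm are ours, not part of the paper) does this for $n \le 6$:

```python
from fractions import Fraction
from math import comb, factorial

N = 7  # compare coefficients up to x^6

# EGF coefficients a_n = g_n / n! for all labelled graphs, g_n = 2^binom(n,2)
a = [Fraction(2 ** comb(n, 2), factorial(n)) for n in range(N)]

# formal logarithm: if A = exp(L) with a_0 = 1, then n a_n = sum_k k l_k a_{n-k},
# so l_n = a_n - (1/n) sum_{k<n} k l_k a_{n-k}
l = [Fraction(0)] * N
for n in range(1, N):
    l[n] = a[n] - sum((Fraction(k) * l[k] * a[n - k] for k in range(1, n)),
                      Fraction(0)) / n

c = [int(l[n] * factorial(n)) for n in range(1, N)]
print(c)  # numbers of connected labelled graphs on 1..6 vertices
```

The recovered values $1, 1, 4, 38, 728, 26704$ match the known counts of connected labelled graphs.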
A~\emph{network} is a graph with two distinguished vertices, called
\emph{poles}, such that the graph obtained by adding an edge between
the two poles is 2-connected. Moreover, the two poles are not
labelled. Networks are the key technical device for encoding the
decomposition of 2-connected graphs into 3-connected components.
%
We distinguish three kinds of networks. A network is \textit{series} if it is obtained from a cycle $C$ with a distinguished edge $e$, whose endpoints become the poles, by replacing every edge different from $e$ with a network. Equivalently, a network is series if removing the root edge (when present) leaves a graph that is not 2-connected. A network
is \textit{parallel} if it is obtained by gluing two or more networks, none of them containing the root edge, along the common poles. Equivalently, a network is parallel if the two poles form a 2-cut of the network. Finally, an \textit{h-network} is obtained from a 3-connected graph $H$ rooted at an oriented edge by replacing every edge of $H$ (other than the root) with an arbitrary network. Trakhtenbrot \cite{trak} showed that a network is either series, parallel or an h-network, and Walsh \cite{walsh} translated this fact into generating functions, as we show next.
Let $D(x,y)$ be the GF associated to networks, where again $x$ and $y$
mark vertices and edges. Then $D=D(x,y)$ satisfies~(see
\cite{bender}, who draws on \cite{trak,walsh})
\begin{equation}\label{eq:D}
{2 \over x^2} T_z(x,D) - \log\left({1+D \over 1+y}\right) +
{xD^2 \over 1+xD} = 0,
\end{equation}
and $B(x,y)$ is related to $D(x,y)$ through
\begin{equation}\label{eq:BD}
B_y(x,y) = {x^2 \over 2} \left( {1+D(x,y) \over 1+y} \right).
\end{equation}
%
For future reference, we set
\begin{equation}\label{eq:phi}
\Phi(x,z)= \frac{2}{x^{2}}\,T_z(x,z)
-\log\left(\frac{1+z}{1+y}\right) +\frac{xz^{2}}{1+xz},
\end{equation}
so that Equation~(\ref{eq:D}) is written in the form $\Phi(x,D)=0$,
for a given value of $y$. By integrating (\ref{eq:BD}) using the
techniques developed in \cite{gn}, we obtain an explicit expression
for $B(x,y)$ in terms of $D(x,y)$ and $T(x,z)$ (see the first part
of the proof of Lemma~5 in \cite{gn}):
%
\begin{eqnarray}\label{eq:Bexplicit}
B(x,y)&=& T(x,D(x,y))
-\frac{1}{2}xD(x,y)+\frac{1}{2}\log(1+xD(x,y))+ \\
&& \frac{x^{2}}{2}\left(D(x,y)+\frac{1}{2}D(x,y)^2+
(1+D(x,y))\log\left(\frac{1+y}{1+D(x,y)}\right)\right) . \nonumber
\end{eqnarray}
This relation is valid for every closed class defined in terms of
3-connected graphs, and can be proved in a more combinatorial
way~\cite{grammar} (see also \cite{gagarin}).
We use singularity analysis for obtaining asymptotic estimates; the main reference here is \cite{FlajoletSedgewig:analytic-combinatorics}.
The singular expansions we encounter in this paper are always of the form
$$
f(x) = f_0 + f_2X^2 + f_4X^4 + \cdots + f_{2k}X^{2k} + f_{2k+1}X^{2k+1} + O(X^{2k+2}),
$$
where $X = \sqrt{1 - x/\rho}$. That is, $2k + 1$ is the smallest odd integer $i$ such that $f_i \ne 0$. The even powers of $X$ are analytic functions and do not contribute to the asymptotics of $[x^n]f(x)$.
The number $\alpha = (2k+1)/2$ is called the \textit{singular exponent},
and by the transfer theorem \cite{FlajoletSedgewig:analytic-combinatorics} we obtain the estimate
$$
[x^n]f(x) \sim c \cdot n^{-\alpha-1} \rho^{-n},
$$
where $c = f_{2k+1}/\Gamma(-\alpha)$.
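As a concrete illustration (ours, not from the text): for $f(x) = (1-x)^{3/2}$ we have $\rho = 1$, $k = 1$ and $f_3 = 1$, so the singular exponent is $\alpha = 3/2$ and the estimate predicts $[x^n]f(x) \sim n^{-5/2}/\Gamma(-3/2)$. The exact coefficients, generated by the binomial recurrence, confirm this numerically:

```python
import math

b = 1.5          # (1 - x)^b with b = 3/2, i.e. singular exponent alpha = 3/2
n_max = 2000

# [x^n](1-x)^b = binom(b, n) (-1)^n; build it from the ratio binom(b,n)/binom(b,n-1)
coeff = 1.0
for n in range(1, n_max + 1):
    coeff *= -(b - n + 1) / n   # the extra minus sign comes from (-x)^n

predicted = n_max ** (-b - 1) / math.gamma(-b)   # n^{-alpha-1} / Gamma(-alpha)
print(coeff / predicted)  # close to 1
```

The relative error is of order $1/n$, so at $n = 2000$ the exact coefficient and the transfer-theorem prediction agree to about a tenth of a percent.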
We assume that, for a fixed value of $x$, $T(x,z)$ has a unique
dominant singularity $r(x)$, and that there is a singular
expansion near $r(x)$ of the form
\begin{equation}\label{eq:singT}
T(x,z) = \sum_{n\ge n_0} T_n(x) \left(1- {z \over r(x)}\right)^{n/\kappa},
\end{equation}
where $n_0$ is an integer, possibly negative, and the functions
$T_n(x)$ and $r(x)$ are analytic. This is a rather general
assumption, as it includes singularities coming from algebraic and
meromorphic functions.
The case when $\mathcal{T}$ is empty (there are no 3-connected
graphs) gives rise to the class of series-parallel graphs.
It is shown in~\cite{SP} that, for a fixed value $y=y_0$, $D(x,y_0)$
has a unique dominant singularity $R(y_0)$. This is also true for
arbitrary $\mathcal{T}$, since adding 3-connected graphs can only
increase the number of networks.
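In the series-parallel case ($T = 0$), Equation~(\ref{eq:D}) reduces to the fixed-point form $D = (1+y)\exp\bigl(xD^2/(1+xD)\bigr) - 1$, which determines $D(x,y)$ coefficient by coefficient. The following sketch (our illustration; the truncation order and all implementation details are assumptions, not from the paper) computes the first coefficients at $y = 1$ and verifies that they satisfy the equation exactly:

```python
from fractions import Fraction

N = 8                     # truncate all power series at x^N
Y = Fraction(1)           # evaluate at y = 1

def mul(a, b):            # product of truncated series
    c = [Fraction(0)] * N
    for i in range(N):
        if a[i]:
            for j in range(N - i):
                c[i + j] += a[i] * b[j]
    return c

def expm(a):              # exp(a) for a[0] == 0, via (e^a)' = a' e^a
    e = [Fraction(0)] * N
    e[0] = Fraction(1)
    for n in range(1, N):
        e[n] = sum((Fraction(k) * a[k] * e[n - k] for k in range(1, n + 1)),
                   Fraction(0)) / n
    return e

def inv1p(a):             # 1/(1 + a) for a[0] == 0
    r = [Fraction(0)] * N
    r[0] = Fraction(1)
    for n in range(1, N):
        r[n] = -sum((a[k] * r[n - k] for k in range(1, n + 1)), Fraction(0))
    return r

def logp(a):              # log(1 + a) for a[0] == 0, via (1 + a) L' = a'
    l = [Fraction(0)] * N
    for n in range(1, N):
        l[n] = (Fraction(n) * a[n]
                - sum((Fraction(k) * l[k] * a[n - k] for k in range(1, n)),
                      Fraction(0))) / n
    return l

x = [Fraction(0)] * N
x[1] = Fraction(1)

def S(D):                 # x D^2 / (1 + x D)
    xD = mul(x, D)
    return mul(mul(xD, D), inv1p(xD))

D = [Fraction(0)] * N
D[0] = Y                  # D(0, y) = y: the single-edge network
for _ in range(N):        # each pass fixes one more coefficient
    D = [(1 + Y) * e for e in expm(S(D))]
    D[0] -= 1

# residual of Phi(x, D) = 0 with T = 0:  x D^2/(1+xD) - log((1+D)/(1+y))
u = [(D[n] - (Y if n == 0 else 0)) / (1 + Y) for n in range(N)]
residual = [s - t for s, t in zip(S(D), logp(u))]
assert all(r == 0 for r in residual)
print(D[:4])
```

The first coefficients, $D = 1 + 2x + 7x^2 + \cdots$, agree with a direct count for one internal vertex: the two-edge series network and its parallel composition with the root edge.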
\section{Asymptotic enumeration}\label{se:asympt}
Throughout the rest of the chapter we assume that $\mathcal{T}$ is a
family of 3-connected graphs whose GF $T(x,z)$ satisfies the
requirements described in Section~\ref{se:prelimi}. We assume that a
singular expansion like (\ref{eq:singT}) holds, and we let $r(x)$ be
the dominant singularity of $T(x,z)$, and $\alpha$ the singular
exponent.
Our main result gives precise asymptotic estimates for $g_n, c_n,
b_n$ depending on the singularities of $T(x,z)$. Cases (1) and (2)
in the next statement can be considered as generic, whereas (1) and
(2.1) are those encountered in `natural' classes of graphs. The two
situations in case~(3) come from critical conditions, when two
possible sources of singularities coincide. This is the reason for
the unusual exponent $-8/3$, which comes from a singularity of
cubic-root type instead of the familiar square-root type.
\begin{theorem}\label{th:mainresult}
Let $\mathcal{G}$ be a closed family of graphs, and let $T(x,z)$ be
the GF of the family of 3-connected graphs in $\mathcal{G}$. In all
cases $b,c,g,R,\rho$ are explicit positive constants and $\rho < R$.
\begin{itemize}
\item[(1)] If $T_z(x,z)$ is either analytic or has singular
exponent $\alpha<1$, then
$$
b_n\sim b\, n^{-5/2} R^{-n} n!, \qquad c_n\sim c\, n^{-5/2}
\rho^{-n} n!, \qquad g_n\sim g\, n^{-5/2} \rho^{-n} n!
$$
\item[(2)] If $T_z(x,z)$ has singular exponent $\alpha=3/2$, then one of
the following holds:
\smallskip
\begin{itemize}
\item[(2.1)]
$ \displaystyle b_n\sim b\, n^{-7/2} R^{-n} n!, \qquad c_n\sim c\,
n^{-7/2} \rho^{-n} n!, \qquad g_n\sim g\, n^{-7/2} \rho^{-n} n! $
\item[(2.2)]
$ \displaystyle b_n\sim b\, n^{-7/2} R^{-n} n!, \qquad c_n\sim c\,
n^{-5/2} \rho^{-n} n!, \qquad g_n\sim g\, n^{-5/2} \rho^{-n} n! $
\item[(2.3)]
$ \displaystyle b_n\sim b\, n^{-5/2} R^{-n} n!, \qquad c_n\sim c\,
n^{-5/2} \rho^{-n} n!, \qquad g_n\sim g\, n^{-5/2} \rho^{-n} n! $
\end{itemize}
\bigskip
\item[(3)]
If $T_z(x,z)$ has singular exponent $\alpha=3/2$, and in addition a
critical condition is satisfied, one of the following holds:
\smallskip
\begin{itemize}
\item[(3.1)]
$ \displaystyle b_n\sim b\, n^{-8/3} R^{-n} n!, \qquad c_n\sim c\,
n^{-5/2} \rho^{-n} n!, \qquad g_n\sim g\, n^{-5/2} \rho^{-n} n! $
\item[(3.2)]
$ \displaystyle b_n\sim b\, n^{-7/2} R^{-n} n!, \qquad c_n\sim c\,
n^{-8/3} \rho^{-n} n!, \qquad g_n\sim g\, n^{-8/3} \rho^{-n} n! $
\end{itemize}
\end{itemize}
\end{theorem}
Using the Transfer Theorems~\cite{FlajoletSedgewig:analytic-combinatorics}, the
previous theorem is a direct application of the following analytic
result for $y=1$. We prove it for arbitrary values of $y=y_0$, since
this has important consequences later on.
\begin{theorem}\label{th:mainresult2}
Let $\mathcal{G}$ be a closed family of graphs, and let $T(x,z)$ be
the GF of the family of 3-connected graphs in $\mathcal{G}$.
For a fixed value $y=y_0$, let $R=R(y_0)$ be the dominant singularity of
$D(x,y_0)$, and let $D_0 = D(R,y_0)$.
\begin{itemize}
\item[(1)] If~$T_z(x,z)$ is either analytic or has singular
exponent $\alpha<1$ at $(R,D_0)$, then $B(x,y_0)$, $C(x,y_0)$ and
$G(x,y_0)$ have singular exponent $3/2$.
\bigskip
\item[(2)] If $T_z(x,z)$ has singular exponent $\alpha=3/2$ at $(R,D_0)$,
then one of the following holds:
\smallskip
\begin{itemize}
\item[(2.1)]
$B(x,y_0), C(x,y_0)$ and $G(x,y_0)$ have singular exponent $5/2$.
\item[(2.2)]
$B(x,y_0)$ has singular exponent $5/2$, and $C(x,y_0), G(x,y_0)$
have singular exponent $3/2$.
\item[(2.3)]
$B(x,y_0), C(x,y_0)$ and $G(x,y_0)$ have singular exponent $3/2$.
\end{itemize}
\bigskip
\item[(3)]
If $T_z(x,z)$ has singular exponent $\alpha=3/2$ at $(R,D_0)$, and
in addition a critical condition is satisfied for the singularities
of either $B(x,y)$ or $C(x,y)$, then one of the following holds:
\smallskip
\begin{itemize}
\item[(3.1)]
$B(x,y_0)$ has singular exponent $5/3$, and $C(x,y_0), G(x,y_0)$
have singular exponent $3/2$.
\item[(3.2)]
$B(x,y_0)$ has singular exponent $5/2$, and $C(x,y_0), G(x,y_0)$
have singular exponent $5/3$.
\end{itemize}
\end{itemize}
\end{theorem}
The rest of the section is devoted to the proof of Theorem
\ref{th:mainresult2}, which implies Theorem~\ref{th:mainresult}.
First we study the singularities of $B(x,y)$, which is the most
technical part. Then we study the singularities of $C(x,y)$ and
$G(x,y)$, which are always of the same type since $G(x,y)= \exp
{(C(x,y))}$.
\subsection{Singularity analysis of $B(x,y)$}\label{subsec:singB}
From now on, we assume that $y=y_0$ is a fixed value, and let
$D(x)=D(x, y_0)$. Recall from Equation (\ref{eq:phi}) that $D(x)$
satisfies $\Phi(x,D(x))=0$, where
$$
\Phi(x,z)= \frac{2}{x^{2}} T_z(x,z)
-\log\left(\frac{1+z}{1+y_0}\right) +\frac{xz^{2}}{1+xz}.
$$
Since a 3-connected graph has at least four vertices, $T(x,z)$ is
$O(x^4)$. It follows that $D(0) = y_0$ and $\Phi_z(0,0) =
-1/(1+y_0)<0$. By the implicit function theorem,
$D(x)$ is analytic at $x=0$.
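Although not needed for the argument, the initial Taylor coefficients of $D(x)$ are easy to compute by undetermined coefficients in the simplest closed family, taking $T\equiv 0$ (series-parallel networks) and $y_0=1$; the following sympy sketch is purely illustrative:

```python
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')
y0 = 1

# ansatz D(x) = y0 + a1*x + a2*x^2 + ..., using D(0) = y0
D = y0 + a1*x + a2*x**2

# Phi with T = 0 (series-parallel networks)
Phi = -sp.log((1 + D)/(1 + y0)) + x*D**2/(1 + x*D)

# expand Phi(x, D(x)) = 0 and solve order by order in x
expr = sp.expand(sp.series(Phi, x, 0, 3).removeO())
a1_val = sp.solve(expr.coeff(x, 1), a1)[0]                    # order x
a2_val = sp.solve(expr.coeff(x, 2).subs(a1, a1_val), a2)[0]   # order x^2
```

Solving order by order gives $D(x)=1+2x+7x^2+\cdots$ for this toy case.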
The next result shows that $D(x)$ has a finite positive dominant singularity $R$ and that
$D(x)$ remains finite at $R$.
\begin{lemma} With the previous assumptions, $D(x)$ has a finite positive dominant singularity
$R=R(y_0)$, and $D(R)$ is finite.
\end{lemma}
\begin{proof} We first show that $D(x)$ has a finite singularity. Consider the
family of networks without 3-connected components, which corresponds
to series-parallel networks, and let $D_{\emptyset}(x,y)$ be the
associated GF. It is shown in~\cite{SP} that the radius of
convergence $R_{\emptyset}(y_0)$ of $D_{\emptyset}(x, y_0)$ is
finite for all $y_0>0$. Since the set of networks enumerated by
$D(x,y_0)$ contains the networks without 3-connected components,
it follows that $D_{\emptyset}(x,y_0) \le D(x,y_0)$ coefficient-wise, and $D(x)$ has a
finite singularity $R(y_0)\leq R_{\emptyset}(y_0)$.
Next we show that $D(x)$ is finite at its dominant singularity $R =
R(y_0)$. Since $R$ is the smallest singularity and $\Phi_z(0,0)<0$,
we have $\Phi_z(x, D(x)) < 0$ for $0\le x<R$. We also have
$\Phi_{zz}(x,z) > 0$ for $x,z>0$. Indeed, the first summand in
$\Phi$ is a series with positive coefficients, and all its
derivatives are positive; the other two terms have second
derivatives $1/(1+z)^2$ and $2x/(1+xz)^3$, which are also positive.
As a consequence, $\Phi_z(x,D(x))$ is an increasing function, and
$\lim_{x\to R^-} \Phi_z(x,D(x))$ exists and is at most $0$. If $D(x)$ tended
to infinity as $x\to R^-$, the two negative terms of $\Phi_z$ would vanish
in the limit and $\Phi_z(x,D(x))$ would become positive, a contradiction.
Hence $D(R)$ is finite, as claimed.
\end{proof}
Since $R$ is the smallest singularity of $D(x)$, $\Phi(x,z)$ is
analytic for all $x<R$ along the curve defined by $\Phi(x,D(x))=0$.
For $x,z>0$ it is clear that $\Phi$ is analytic at $(x,z)$ if and
only if $T(x,z)$ is also analytic. Thus $T(x,z)$ is also analytic
along the curve $\Phi(x,D(x))=0$ for $x<R$. As a consequence, the
singularity $R$ can only have two possible sources:
\begin{itemize}
\item[(a)] A branch-point $(R, D_0)$ when solving $\Phi(x,z)=0$,
that is, $\Phi$ and $\Phi_z$ vanish at $(R, D_0)$.
\item[(b)] $T(x,z)$ becomes singular at $(R, D_0)$, so that $\Phi(x,z)$
is also singular.
\end{itemize}
Case (a) corresponds to case (1) in Theorem~\ref{th:mainresult2}.
For case (b) we assume that the singular exponent of $T(x,z)$ at the
dominant singularity is $5/2$, which corresponds to families of
3-connected graphs coming from 3-connected planar maps, and related
families of graphs. We could allow more general singular exponents, but they do not appear in the main examples we have analyzed.
The typical situation is case (2.1) in
Theorem~\ref{th:mainresult2}, but (2.2) and (2.3) are also possible.
It is also possible to have a critical situation, where (a) and (b)
both hold, and this leads to case (3.1): this is treated at the end
of this subsection. Finally, a confluence of singularities may also
arise when solving the equation
%
$$
x C'(x,y)= x\exp\left(B'(x C'(x,y),y)\right).
$$
\medskip
\noindent
Indeed, the singularity may come from: (a) a branch point when solving the previous equation; or (b) $B(x,y)$ becoming singular at $\rho(y) C'(\rho(y),y)$, where $\rho(y)$ is the singularity of $C(x,y)$.
When the two sources (a) and (b) for the singularity coincide, we are in case (3.2).
This is treated at the end of
Section~\ref{subsec:singC}.
\subsubsection{$\Phi$ has a branch-point at $(R,D_{0})$}
We assume that $\Phi_z(R, D_0)=0$ and that $\Phi$ is analytic at
$(R, D_0)$. We have seen that $\Phi_{zz}(x,z) > 0$ for $x,z>0$.
Under these conditions, $D(x)$ admits a singular expansion near $R$
of the form
%
\begin{equation}\label{eq:singD}
D(x) = D_{0}+D_{1}X+D_{2}X^{2}+D_{3}X^{3}+O(X^{4}),
\end{equation}
where $X=\sqrt{1-x/R}$, and $D_1=-\sqrt{2R \Phi_x(R,
D_0)/\Phi_{zz}(R, D_0)}$
(see~\cite{FlajoletSedgewig:analytic-combinatorics}). We remark that
$R$ and the $D_{i}$'s depend implicitly on $y_{0}$.
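To see where the expression for $D_1$ comes from, recall the standard branch-point computation (cf.~\cite{FlajoletSedgewig:analytic-combinatorics}): since $\Phi(R,D_0)=\Phi_z(R,D_0)=0$, the local expansion of $\Phi$ gives
$$
0=\Phi(x,D(x))=\Phi_x(R,D_0)\,(x-R)+\tfrac{1}{2}\Phi_{zz}(R,D_0)\,(D(x)-D_0)^{2}+\cdots
$$
Substituting $x-R=-RX^2$ and $D(x)-D_0=D_1X+O(X^2)$ and extracting the coefficient of $X^2$ yields $D_1^{2}=2R\Phi_x(R,D_0)/\Phi_{zz}(R,D_0)$; the negative root is the relevant one, since $D(x)$ increases to $D_0$ as $x\to R^-$.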
In the next result we find an explicit expression for $D_1$, which is the dominant term in (\ref{eq:singD}).
This puts into perspective the result found in \cite{SP} for series-parallel graphs, where it was shown that $D_1 <0$ for that class.
\begin{prop}\label{proposition:D's}
Consider the singular expansion~(\ref{eq:singD}). Then $D_1 < 0$ is
given by
%
\begin{equation*}
D_1 = - \left(\frac{\displaystyle 2R T_{xz}
-4T_z+\frac{R^3D_{0}^2}{(1+RD_0)^2}}
{\displaystyle \frac{R^2}{2(1+D_0)^2}+\frac{R^{3}}{\left(1+R D_0\right)^3}+
T_{zzz}}
\right)^{1/2},
\end{equation*}
where the partial derivatives of $T$ are evaluated at $(R, D_0)$.
\end{prop}
\begin{proof} We plug the expansion (\ref{eq:singD}) inside (\ref{eq:phi}) and
work out the undetermined coefficients $D_i$. The expression
for $D_1$ follows from a direct computation of $\Phi_x$ and
$\Phi_{zz}$, and evaluating at $(R, D_0)$.
%
To show that $D_{1}$ does not vanish, notice that
$$
2R T_{x z} -4 T_z = R^3{\partial \over \partial x}
\left(\frac{2}{x^2}T_z(x,z)\right).
$$
This is positive since $2/x^2 T_z$ is a series with positive
coefficients. Since $R, D_0 >0 $, the remaining term in the
numerator inside the square root is clearly positive, and so is the
denominator. Hence $D_1 <0$.
\end{proof}
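Numerically, the branch-point system is easy to solve in the series-parallel case $T\equiv 0$, $y_0=1$. The following sympy sketch (an illustration, not part of the proof) locates $(R,D_0)$ and checks that the formula of Proposition~\ref{proposition:D's} agrees with the generic expression $D_1=-\sqrt{2R\,\Phi_x/\Phi_{zz}}$ and that $D_1<0$:

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)
y0 = 1
# Phi with T = 0 (series-parallel networks)
Phi = -sp.log((1 + z)/(1 + y0)) + x*z**2/(1 + x*z)

# branch point: Phi = Phi_z = 0 (initial guess near the known root)
R, D0 = sp.nsolve([Phi, sp.diff(Phi, z)], [x, z], [0.13, 1.9])

pt = {x: R, z: D0}
D1_generic = -sp.sqrt(2*R*sp.diff(Phi, x).subs(pt)/sp.diff(Phi, z, 2).subs(pt))

# formula of the proposition, with all derivatives of T equal to zero
num = R**3*D0**2/(1 + R*D0)**2
den = R**2/(2*(1 + D0)**2) + R**3/(1 + R*D0)**3
D1_prop = -sp.sqrt(num/den)
```

One finds $R\approx 0.1280$, in agreement with the value for series-parallel networks in~\cite{SP}.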
From the singular expansion of $D(x)$ and the explicit
expression~(\ref{eq:Bexplicit}) of $B(x,y_{0})$ in terms of
$D(x,y_0)$, it is clear that $B(x)=B(x,y_{0})$ also admits a
singular expansion at the same singularity $R$ of the form
\begin{equation}\label{eq:singB}
B(x) = B_{0}+B_{1}X+B_{2}X^{2}+B_{3}X^{3}+O(X^{4}).
\end{equation}
The next result shows that the singular exponent of $B(x)$ is $3/2$,
as claimed. Again, the fact that $B_1=0$ and $B_3 >0$ explains the results found in \cite{SP} for series-parallel graphs.
\begin{prop}\label{proposition:B's}
Consider the singular expansion~(\ref{eq:singB}). Then $B_{1}=0$
and $B_{3}>0$ is given by
\begin{equation}\label{eq:B3}
B_3 =
\frac{1}{3}\left(4T_z-2R T_{xz} -\frac{R^{3}D_{0}^2}{\left(1+RD_{0}\right)^2}\right)D_{1},
\end{equation}
where the partial derivatives of $T$ are evaluated at $(R,D_{0})$.
\end{prop}
\begin{proof} We plug the singular expansion~(\ref{eq:singD}) of $D(x)$ into
Equation~(\ref{eq:Bexplicit}) and work out the undetermined
coefficients $B_i$. One can check that $B_{1}=2R^{2}
\Phi(R,D_{0})D_{1}$, which vanishes because $\Phi(x,D(x))=0$.
When computing $B_3$, it turns out that the values $D_2$ and $D_3$
are irrelevant because they appear in a term which contains a factor
$\Phi_z$, which by definition vanishes at $(R,D_0)$. This
observation gives directly Equation~(\ref{eq:B3}). The fact $B_3\neq
0$ follows from applying the same argument as in the proof of
Proposition~\ref{proposition:D's}, that is, $4T_z - 2R T_{x z}<0$.
Then $B_3>0$ since it is the product of two negative numbers.
\end{proof}
\subsubsection{$\Phi$ is singular at $(R,
D_0)$}\label{sssec:Phi-singular}
In this case we assume that $T(x,z)$ is singular at $(R, D_0)$ and
that $\Phi_z(R, D_0)<0$. The situation where both $T(x,z)$ is
singular and $\Phi_z(R, D_0)=0$ is treated in the next subsection. We start with a technical lemma.
\begin{lemma}\label{lem:boundedT}
The function $T_{zz}$ is bounded at the singular point $(R, D_0)$.
\end{lemma}
\begin{proof} By differentiating Equation~(\ref{eq:phi}) with respect to $z$ we
obtain
$$
\Phi_z(x,z)= \frac{2}{x^{2}} T_{zz}(x,z)
-\frac{1}{1+z}-\frac{1}{(1+xz)^2}+1.
$$
Since $\Phi_z(R, D_0)<0$, we have
$$
{2\over R^2}T_{zz}(R,D_0) < {1\over 1+D_0}
+ {1\over (1+RD_0)^2}-1 < 1.
$$
Hence $T_{zz}(R,D_0) < R^2/2$.
\end{proof}
Let us consider now the singular expansions of $\Phi$ and $T$ in
terms of $Z=\sqrt{1-z/r(x)}$, where $z=r(x)$ is the dominant
singularity of $z\mapsto T(x,z)$. Note that, by Equation~(\ref{eq:phi}), $\Phi$ and $T_z$
have the same singular behaviour. By Lemma~\ref{lem:boundedT}, the
singular exponent $\alpha$ of the dominant singular term $Z^\alpha$
of $T_{zz}$ must be greater than $0$ and, consequently, the singular
exponent of $T_z$ and $\Phi$ is greater than $1$. As discussed
above, we only study the case where the singular exponent of
$T(x,z)$ is $5/2$ (equivalently, the singular exponent of
$\Phi(x,z)$ is $3/2$), which corresponds to several families of
three-connected graphs arising from maps. That is, we assume that
$T$ has a singular expansion of the
form
$$
T(x,z) = T_0(x) + T_2(x) Z^2 + T_4(x) Z^4 + T_{5}(x) Z^5 + O(Z^6),
$$
where $Z = \sqrt{1-z/r(x)}$, and the functions $r(x)$ and $T_i(x)$
are analytic in a neighborhood of $R$. Notice that $r(R)=D_0$. Since
we are assuming that the singular exponent is $5/2$, we have that
$T_5(R) \ne 0$.
We introduce now the Taylor expansion of the coefficients $T_i(x)$
at $R$. However, since we aim at computing the singular expansions
of $D(x)$ and $B(x)$ at $R$, we expand in even powers of
$X=\sqrt{1-x/R}$:
\begin{eqnarray}\label{eq-singT}
T(x,z) &= & T_{0,0}+ T_{0,2}X^2 + O(X^4) \\
&&+\left(T_{2,0}+ T_{2,2}X^2 + O(X^4)\right)\cdot Z^2 \nonumber \\
& &+\left(T_{4,0}+ T_{4,2}X^2 + O(X^4)\right)\cdot Z^4 \nonumber \\
&&+\left(T_{5,0}+ T_{5,2}X^2 + O(X^4)\right)\cdot Z^5 +O(Z^6). \nonumber
\end{eqnarray}
Notice that $T_{5,0} = T_5(R) \ne 0$.
Similarly, we also consider the expansion of $\Phi$ given by
\begin{eqnarray}\label{eq-singPhi}
\Phi(x,z) &= & \Phi_{0,0}+ \Phi_{0,2}X^2 + O(X^4) \\
& &+\left(\Phi_{2,0}+ \Phi_{2,2}X^2 + O(X^4)\right)\cdot Z^2 \nonumber \\
& &+\left(\Phi_{3,0}+ \Phi_{3,2}X^2 + O(X^4)\right)\cdot Z^3 +O(Z^4),\nonumber
\end{eqnarray}
where $\Phi_{2,0}\neq 0$ because $\Phi_z(R, D_0)<0$.
The next result shows that $D_1=0$ and $D_3>0$. This was proved in \cite{bender} for the class of planar graphs, but there was no obvious reason explaining this fact. Now we see that it follows directly from our general assumptions on $T(x,z)$, which are satisfied when $T(x,z)$ is the GF of 3-connected planar graphs.
\begin{prop}\label{pro:Dsing}
The function $D(x)$ admits the following singular expansion
$$
D(x) = D_0 + D_2 X^2 + D_3 X^3 + O(X^4),
$$
where $X = \sqrt{1-x/R}$. Moreover,
$$
D_2 = D_0 \, \frac{P}{Q}-Rr', \qquad
D_3 = -\frac{ 5 T_{5,0} (-P)^{3/2}}{R^2 \, Q^{5/2}} > 0,
$$
where $r'$ is the evaluation of the derivative $r'(x)$ at $x=R$, and
$P<0$ and $Q>0$ are given by
\begin{align*}
P =\, \Phi_{0,2} = & -\frac{4T_{2,0}+ 2T_{2,2}}{R^2D_0}
- \frac{2T_{2,0}r'}{RD_0^2}
+ \frac{Rr'}{1+D_0}
- \frac{RD_0(D_0+(2+RD_0)Rr')}{(1+RD_0)^2}, \\
Q =\, \Phi_{2,0} =& -\frac{4T_{4,0}}{R^2D_0}+\frac{D_0}{1+D_0}-\frac{2RD_0^2}{1+RD_0}
+ \frac{R^2D_0^3}{(1+RD_0)^2}.
\end{align*}
\end{prop}
\begin{proof} We consider Equation~(\ref{eq-singPhi}) as a power series
$\Phi(X,Z)$, where $X = \sqrt{1-x/R}$ and $Z = \sqrt{1-z/r(x)}$. We
look for a solution $Z(X)$ such that $\Phi(X, Z(X)) = 0$; we also
impose $Z(0)=0$, since $\Phi_{0,0}=\Phi(R,D_0)=0$. Define $D(x)$ as
$$
D(x) = r(x) (1-Z(X)^2),
$$
which satisfies $\Phi(x, D(x))=0$
and $D(R) = D_0$. By indeterminate coefficients we obtain
$$
Z(X) = \pm\, \sqrt{\frac{-\Phi_{0,2}}{\Phi_{2,0}}} X +
\frac{\Phi_{3,0}\,\Phi_{0,2}}{2\,{\Phi_{2,0}}^2} X^2 +
O(X^3),
$$
where the sign of the coefficient in $X$ is determined later. Now we
use this expression and the Taylor series of the analytic function
$r(x)$ at $x=R$ to obtain the following singular expansion for
$D(x)$:
$$
D(x) = D_0 + \left(D_0\frac{\Phi_{0,2}}{\Phi_{2,0}}-R r'\right)X^2
\pm
D_0\frac{(-\Phi_{0,2})^{3/2}\,\Phi_{3,0}}{{\Phi_{2,0}}^{5/2}}X^3+O(X^4).
$$
Observe in particular that the coefficient of $X$ vanishes. We
define $P=\Phi_{0,2}$ and $Q=\Phi_{2,0}$. The fact that $P<0$ and
$Q>0$ follows from the relations
$$
\Phi_z = \frac{-1}{D_0} \Phi_{2,0}, \qquad
\Phi_x = \frac{-1}{R} \Phi_{0,2} +
\frac{r'}{D_0^2}\Phi_{2,0},
$$
that are obtained by differentiating Equation~(\ref{eq-singPhi}). We
have $\Phi_z<0$ by assumption, and $\Phi_x>0$ following the proof
of Proposition~\ref{proposition:D's}.
The coefficient $D_3$ must have positive sign, since $D''(x)$ is a
positive function and its singular expansion is $D_{xx}(x) =
3D_3(4R^2)^{-1} X^{-1} +O(1)$. The coefficients $\Phi_{i,j}$ in
Equation~(\ref{eq-singPhi}) are easily expressed in term of the
$T_{i,j}$, and a simple computation gives the result as claimed.
\end{proof}
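The undetermined-coefficients step in the preceding proof can be reproduced symbolically. The sketch below (illustrative only; $\Phi_{0,2}$, $\Phi_{2,0}$, $\Phi_{3,0}$ are represented as plain symbols) keeps the terms of (\ref{eq-singPhi}) that matter at orders $X^2$ and $X^3$ and recovers the coefficients of $Z(X)$:

```python
import sympy as sp

X, c1, c2 = sp.symbols('X c1 c2')
P, Q, F3 = sp.symbols('Phi_02 Phi_20 Phi_30')   # Phi_{0,2}, Phi_{2,0}, Phi_{3,0}

Z = c1*X + c2*X**2                   # ansatz for Z(X)
Phi = P*X**2 + Q*Z**2 + F3*Z**3      # leading terms of the expansion of Phi
poly = sp.expand(Phi)

c1_sq = sp.solve(poly.coeff(X, 2), c1**2)[0]                 # -P/Q
c2_val = sp.solve(poly.coeff(X, 3), c2)[0].subs(c1**2, c1_sq)
# c1^2 = -Phi_{0,2}/Phi_{2,0} and c2 = Phi_{3,0}*Phi_{0,2}/(2*Phi_{2,0}^2)
```

The two coefficients agree with the expansion of $Z(X)$ displayed in the proof.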
\begin{prop}\label{prop:singB}
The function $B(x)$ admits the following singular expansion
$$
B(x) = B_0 + B_2 X^2 + B_4 X^4 + B_5 X^5 + O(X^6),
$$
where $X = \sqrt{1-x/R}$. Moreover,
\begin{eqnarray*}
B_0 &=& \frac{R^2}{2}\left(D_0+\frac{1}{2}D_0^2 \right) -\frac{1}{2}RD_0+\frac{1}{2}\log\left(1+RD_0\right)
-\frac{1}{2}(1+D_0)\frac{R^3D_{0}^2}{1+RD_{0}}\\
&&+T_{0,0}+\frac{1+D_{0}}{D_{0}}T_{2,0}, \\
B_2 &=& \frac{R^2 D_0(D_0^2 R-2)}{2(1+R D_0)}+T_{0,2}-\left(2\frac{1+D_{0}}{D_0}+\frac{R r'}{D_0}\right)T_{2,0}, \\
B_4 &=& \left(T_{0,4}+\frac{2R^3D_0^2-R^4D_0^4+2R^2D_0}{4(1+RD_0)^2} \right)+
\left(\frac{1+D_0+r''}{D_0}\right)T_{2,0}+\frac{P^2}{Q}\frac{R^2 D_0}{4}\\
&& +\left(\frac{2R}{D_0}T_{2,0}+\frac{R^4D_0^2}{2(1+RD_0)^2}\right)r'
+\frac{R^4}{4}\left(\frac{D_0}{1+D_0} -\frac{1}{(1+RD_0)^2}\right)(r')^2, \\
B_5 &=& T_{5,0}\left(-\frac{P}{Q}\right)^{5/2} < 0 ,
\end{eqnarray*}
where $P$ and $Q$ are as in Proposition~\ref{pro:Dsing}, and $r'$
and $r''$ are the derivatives of $r(x)$ evaluated at $x=R$.
\end{prop}
\begin{proof} Our starting point is Equation~(\ref{eq:Bexplicit}) relating
functions $D$, $B$ and $T$. We replace $T$ by the singular expansion
in Equation~(\ref{eq-singT}), $D$ by the singular expansion given in
Proposition~\ref{pro:Dsing}, and we set $x=R(1-X^2)$. The
expressions for $B_i$ follow by indeterminate coefficients.
When performing these computations we observe that the coefficients
$B_1$ and $B_3$ vanish identically, and that several simplifications
occur in the remaining expressions.
\end{proof}
\subsubsection{$\Phi$ has a branch-point and $T(x,z)$ is singular at
$(R, D_0)$}\label{se:criticalB}
This is the first critical situation, and corresponds to case (3.1)
in Theorem~\ref{th:mainresult}. To study this case we proceed
exactly as in the case where $\Phi$ is singular at $(R, D_0)$
(Section~\ref{sssec:Phi-singular}), except that now $\Phi_z(R,
D_0)=0$. It is easy to check that Lemma~\ref{lem:boundedT} still
applies (with the bound $T_{zz}(R, D_0) \leq R^2/2$). As done in the
previous section, we only take into consideration families of graphs
where the singular exponent of $T(x,z)$ is $5/2$ (equivalently, the
singular exponent of $\Phi(x,z)$ is $3/2$).
Equations~(\ref{eq-singT}) and~(\ref{eq-singPhi}) still hold, except
that now $\Phi_{2,0}=0$ because of the branch point at $(R, D_0)$.
This missing term is crucial, as we make clear in the following
analogue of Proposition~\ref{pro:Dsing}.
Notice that $\Phi_{3,0} \ne 0$ because of our assumptions on $T(x,z)$.
\begin{prop}\label{pro:Dsing-crit}
The function $D(x)$ admits the following singular expansion
$$
D(x) = D_0 + D_{4/3} X^{4/3} + O(X^{2}),
$$
where $X = \sqrt{1-x/R}$ and
$$
D_{4/3} = -D_0\left(\frac{-\Phi_{0,2}}{\Phi_{3,0}}\right)^{2/3}.
$$
\end{prop}
\begin{proof} As in the proof of Proposition~\ref{pro:Dsing}, we consider a
solution $Z(X)$ of the functional equation $\Phi(X, Z(X))=0$, and
define $D(x)$ as $r(x)(1-Z(X)^2)$. However, the singular development
of $\Phi(x,z)$ is now
\begin{eqnarray*}
\Phi(x,z) &= & \Phi_{0,2}X^2 + O(X^4) \\
&&+\left(\Phi_{2,2}X^2 + O(X^4)\right)\cdot Z^2 \nonumber \\
&&+\left(\Phi_{3,0}+ \Phi_{3,2}X^2 + O(X^4)\right)\cdot Z^3 +
O(Z^4).\nonumber
\end{eqnarray*}
Since $\Phi_{2,0} = 0$, the only way to obtain the necessary cancellations in $\Phi(X,Z(X))=0$ is that the expansion of $Z(X)$ starts with an $X^{2/3}$ term.
By indeterminate coefficients we get
$$
Z(X) =
\left(\frac{-\Phi_{0,2}}{\Phi_{3,0}}\right)^{1/3}X^{2/3}+O(X^{4/3}).
$$
To obtain the actual development of $D(x)$ we use the
equalities $D(x)=r(x)(1-Z(X)^2)$ and $r(R)=D_0$.
\end{proof}
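The cancellation behind the $X^{2/3}$ term can be checked directly: keeping only the leading terms $\Phi_{0,2}X^2$ and $\Phi_{3,0}Z^3$ (all remaining terms are of higher order), a one-line sympy verification reads:

```python
import sympy as sp

X = sp.symbols('X', positive=True)
P, F3 = sp.symbols('Phi_02 Phi_30')     # Phi_{0,2} and Phi_{3,0}

# leading Puiseux term of Z(X) in the critical case Phi_{2,0} = 0
Z_lead = (-P/F3)**sp.Rational(1, 3) * X**sp.Rational(2, 3)

# leading terms of Phi(X, Z(X)); they must cancel exactly
residual = sp.simplify(P*X**2 + F3*Z_lead**3)
```

The residual vanishes identically, confirming the exponent $2/3$.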
Since $X=\sqrt{1-x/R}$, the previous result implies
that the singular exponent of $D(x)$ is $2/3$. By using the explicit
integration of $B_y(x,y)$ of Equation~(\ref{eq:Bexplicit}), one can
check that the singular exponent of $B(x)$ is $5/3$ (the first
non-analytic term of $B(x)$ that does not vanish is $X^{10/3}$).
This implies that the subexponential term in the asymptotic of $b_n$
is $n^{-8/3}$, as claimed.
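The passage from singular exponents to subexponential growth used here (and repeatedly below) is the standard transfer principle of~\cite{FlajoletSedgewig:analytic-combinatorics}: for $\alpha\notin\{0,1,2,\ldots\}$,
$$
[x^n]\,(1-x/R)^{\alpha} \sim \frac{n^{-\alpha-1}}{\Gamma(-\alpha)}\,R^{-n},
$$
so a dominant singular term $X^{10/3}=(1-x/R)^{5/3}$ in $B(x)$ gives $[x^n]B(x)\sim c\,n^{-8/3}R^{-n}$ for a suitable constant $c>0$.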
\subsection{Singularity analysis of $C(x,y)$ and $G(x,y)$}\label{subsec:singC}
The results in this section follow the same lines as those in the previous section. They are technically simpler, since the analysis applies to functions of one variable, whereas the second variable $y$ behaves only as a parameter.
It generalizes the analysis in
Section~4 of \cite{gn} and Section~3 of \cite{SP}.
Let
$F(x)=xC'(x)$, which is the GF of rooted connected graphs. We know
that $F(x)=x\exp(B'(F(x)))$. Then $\psi(u)=u\exp(-B'(u))$ is the
functional inverse of $F(x)$. Denote by $\rho$ the dominant
singularity of $F$. As for 2-connected graphs, there are two
possible sources for the singularity:
\begin{itemize}
\item[(1)] There exists $\tau \in (0,R)$ (necessarily unique)
such that $\psi'(\tau)=0$. We have a branch point and by the inverse
function theorem $\psi$ ceases to be invertible at $\tau$. We have $\rho = \psi(\tau)$.
\item[(2)] We have $\psi'(u) \ne 0$ for all $u \in (0,R)$, and there
is no obstacle to the analyticity of the inverse function.
Then $\rho = \psi(R)$.
\end{itemize}
The critical case where both sources for singularity coincide is discussed at the end of this subsection. Notice that this happens precisely when $\psi'(R) = 0$.
Condition $\psi'(\tau)=0$ is equivalent to $B''(\tau) = 1/\tau$.
Since $B''(u)$ is increasing (the series $B(u)$ has positive
coefficients) and $1/u$ is decreasing, we are in case (1) if $B''(R)
> 1/R$, and in case (2) if $B''(R) < 1/R$. As we have already
discussed, series-parallel graphs correspond to case (1) and planar
graphs to case (2). In particular, if $B$ has singular exponent
$3/2$, like for series-parallel graphs, the function $B''(u)$ goes
to infinity when $u$ tends to $R$, so there is always a solution
$\tau < R$ satisfying $B''(\tau) = 1/\tau$. This explains why in
Theorem~\ref{th:mainresult} there is no case where $b_n$ has
sub-exponential growth $n^{-5/2}$ and $c_n$ has $n^{-7/2}$.
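The equivalence between $\psi'(\tau)=0$ and $B''(\tau)=1/\tau$ used above follows from a one-line differentiation, which can be confirmed symbolically:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
B = sp.Function('B')

psi = u*sp.exp(-sp.diff(B(u), u))     # psi(u) = u * exp(-B'(u))
dpsi = sp.diff(psi, u)

# psi'(u) = exp(-B'(u)) * (1 - u*B''(u)), hence psi' = 0  <=>  B''(u) = 1/u
factored = sp.exp(-sp.diff(B(u), u))*(1 - u*sp.diff(B(u), u, 2))
```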
\begin{prop}\label{proposition:C's}
The value $S=RB''(R)$ determines the singular exponent of $C(x)$
and~$G(x)$ as follows:
\begin{enumerate}
\item[(1)] If $S>1$, then $C(x)$ and $G(x)$ admit the singular
expansions
\begin{eqnarray*}
C(x) &=& C_0+C_2 X^2 + C_3 X^3 + O(X^4), \\
G(x) &=& G_0+G_2 X^2 + G_3 X^3 + O(X^4),
\end{eqnarray*}
where $X=\sqrt{1-x/\rho}$, $\rho=\psi(\tau)$, and $\tau$ is the
unique solution to $\tau B''(\tau)=1$. We have
\begin{align*}
C_0 &= \tau(1+\log \rho-\log \tau)+B(\tau), & C_2 &= -\tau, \\
C_3 &= \frac{3}{2}\sqrt{\frac{2\rho \exp{\left(B'(\rho)\right)}}{\tau B'''(\tau)-\tau B''(\tau)^2+2B''(\tau)}}, \\
G_0 &= e^{C_0}, \qquad G_2 = C_2e^{C_0}, & G_3 &=C_3e^{C_0}.
\end{align*}
\item[(2)] If $S<1$, then $C(x)$ and $G(x)$ admit the singular
expansions
\begin{align*}
C(x) &= C_0+C_2 X^2 + C_4 X^4 + C_5 X^5 + O(X^6), \\
G(x) &= G_0+G_2 X^2 + G_4 X^4 + G_5 X^5 + O(X^6),
\end{align*}
where $X=\sqrt{1-x/\rho}$, $\rho=\psi(R)$. We have
\begin{align*}
C_0 &= R(1+\log \rho-\log R)+B_0, & C_2 &= -R, \\
C_4 &= -\frac{RB_4}{2B_4-R}, & C_5 &= B_5 \left(1-\frac{2B_4}{R}\right)^{-5/2}, \\
G_0 &= e^{C_0}, & G_2 &= C_2 e^{C_0}, \\
G_4 &= \left(C_4+\frac{1}{2}{C_2}^2\right)e^{C_0}, & G_5 &= C_5 e^{C_0},
\end{align*}
where $B_0$, $B_4$ and $B_5$ are as in Proposition~\ref{prop:singB}.
\end{enumerate}
\end{prop}
\begin{proof} The two cases $S>1$ and $S<1$ arise from the previous discussion. In
case (1) we follow the proof of Theorem~3.6 from~\cite{SP}, and in
case (2) the proof of Theorem~1 from~\cite{gn}.
First, we obtain the singular expansion of $F(x)=xC'(x)$ near
$x=\rho$. This can be done by indeterminate coefficients in the
equality $ \psi( F(x) )= x = \rho(1-X^2)$, with $X=\sqrt{1-x/\rho}$.
The expansion of $\psi$ can be either at $\tau=F(\rho)$ where it is
analytic, or at $R=F(\rho)$ where it is singular.
From the singular expansion of $F(x)$ we obtain $C_2$ and $C_3$ in
case (1), and $C_2$, $C_4$ and $C_5$ in case (2) by direct
computation. To obtain $C_0$, however, it is necessary to compute
$$
C(x) = \int_0^x \frac{F(t)}{t}\, dt,
$$
and this is done using the integration techniques developed
in~\cite{SP} and~\cite{gn}.
Finally, the coefficients for $G(x)$ are obtained directly from the
general relation $G(x) = \exp(C(x))$.
\end{proof}
To conclude this section we consider the critical case where both
sources of the dominant singularity $\rho$ coincide, that is, when
$\psi'(R)=0$. In this case $\psi$ is singular at $R$ because $R$ is
the singularity of $B(x)$, and at the same time the inverse $F(x)$
is singular at $\rho = \psi(R)$ because of the inverse function
theorem.
As we have shown before, this can only happen if $B(x)$ has
singular exponent $5/2$.
The argument is now as in the proof of Proposition \ref{pro:Dsing-crit}.
The singular development of $\psi(z)$
in terms of $Z=\sqrt{1-z/R}$ must be of the form
$$\psi(z) = \psi_{0} + \psi_2 Z^2 + \psi_3 Z^3 + O(Z^4), $$
where in addition $\psi_2$ vanishes due to $\psi'(R)=0$. A
similar analysis as that in Proposition \ref{pro:Dsing-crit} shows that
the singular exponent of $C(x)$ is $5/3$. Indeed, since
$\psi(F(x))=x=\rho(1-X^2)$, we deduce that the development of $F(x)$
in terms of $X=\sqrt{1-x/\rho}$ is
$$ F(x) = R - R\left(\frac{-\rho}{\psi_3}\right)^{2/3}X^{4/3}
+ O(X^2).$$
Thus we obtain, by integration of $F(x)=xC'(x)$, that the singular
exponent of $C(x)$ is $5/3$, so that the subexponential term in the
asymptotic of $c_n$ is $n^{-8/3}$. Since $G(x) = \exp (C(x))$, the
same exponents hold for $G(x)$ and $g_n$.
\section{Limit laws}\label{se:laws}
In this section we discuss parameters of random graphs from a closed
family whose limit laws do not depend on the singular behaviour of
the GFs involved. As we are going to see, only the constants
associated to the first two moments depend on the singular
exponents.
The parameters we consider are asymptotically either normal or
Poisson distributed. The number of edges, number of blocks, number
of cut vertices, number of copies of a fixed block, and number of
special copies of a fixed subgraph are all normal. On the other
hand, the number of connected components is Poisson. The size of the
largest connected component (rather, the number of vertices not in
the largest component) also follows a discrete limit law. A
fundamental extremal parameter, the size of the largest block, is
treated in the next section, where it is shown that the asymptotic
limit law depends very strongly on the family under consideration.
As in the previous section, let $\mathcal{G}$ be a closed family of
graphs. For a fixed value of $y$, let $\rho(y)$ be the dominant
singularity of $C(x,y)$, and let $R(y)$ be that of $B(x,y)$. We
write $\rho = \rho(1)$ and $R=R(1)$. Recall that $B'(x,y)$ denotes
the derivative with respect to $x$.
When we speak of cases (1) and (2), we refer to the statement of
Proposition~\ref{proposition:C's}, which are exemplified,
respectively, by series-parallel and planar graphs. That is, in case
(1) the singular dominant term in $C(x)$ and $G(x)$ is
$(1-x/\rho)^{3/2}$, whereas in case (2) it is $(1-x/\rho)^{5/2}$.
Recall from the previous section that in case (1) we have $\rho(y) =
\tau(y)\exp{\left(-B'(\tau(y),y)\right)}$, where $\tau(y)
B''(\tau(y)) = 1$. In case (2) we have $\rho(y) =
R(y)\exp{\left(-B'(R(y),y)\right)}$.
\subsection{Number of edges}
The number of edges obeys a limit normal law, and the asymptotic
expression for the first two moments is always given in terms of the
function $\rho(y)$ for connected graphs, and in terms of $R(y)$ for
2-connected graphs.
\begin{theorem}\label{th:edges}
The number of edges in a random graph from $\mathcal{G}$ with $n$
vertices is asymptotically normal, and the mean $\mu_n$ and
variance $\sigma_n^2$ satisfy
\begin{equation*}
\mu_n \sim \kappa n, \qquad \sigma_n^2 \sim \lambda n,
\end{equation*}
where
$$
\kappa = -{\rho'(1) \over \rho(1)}, \qquad
\lambda = -{\rho''(1) \over \rho(1)} -{\rho'(1) \over \rho(1)}
+ \left( {\rho'(1) \over \rho(1)}\right)^2.
$$
The same is true, with the same constants, for \emph{connected}
random graphs.
The number of edges in a random 2-connected graph from $\mathcal{G}$
with $n$ vertices is asymptotically normal, and the mean $\mu_n$
and variance $\sigma_n^2$ satisfy
\begin{equation*}
\mu_n \sim \kappa_2 n, \qquad \sigma_n^2 \sim \lambda_2 n,
\end{equation*}
where
$$
\kappa_2 = -{R'(1) \over R(1)}, \qquad
\lambda_2 = -{R''(1) \over R(1)} -{R'(1) \over R(1)}
+ \left( {R'(1) \over R(1)}\right)^2.
$$
\end{theorem}
\begin{proof} The proof is as in \cite{gn} and \cite{SP}. In all cases the derivatives of
$\rho(y)$ and $R(y)$ are readily computed, and for a given family of
graphs we can compute the constants exactly.
\end{proof}
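For a concrete (purely illustrative) instance of these formulas, take the made-up singularity function $\rho(y)=e^{-y}/(1+y)$, which does not correspond to any particular graph family; the constants evaluate exactly to $\kappa=3/2$ and $\lambda=5/4$:

```python
import sympy as sp

y = sp.symbols('y', positive=True)
rho = sp.exp(-y)/(1 + y)       # hypothetical rho(y), for illustration only

r1 = sp.diff(rho, y)/rho       # rho'(y)/rho(y)
r2 = sp.diff(rho, y, 2)/rho    # rho''(y)/rho(y)

kappa = sp.simplify(-r1.subs(y, 1))                # mean constant: 3/2
lam = sp.simplify((-r2 - r1 + r1**2).subs(y, 1))   # variance constant: 5/4
```

In particular $\lambda>0$, as it must be for a variance.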
\subsection{Number of blocks and cut vertices}
Again we have normal limit laws but the asymptotic for the first two
moments depends on which case we are in. In the next statements we set
$\tau = \tau(1)$.
\begin{theorem}\label{th:blocs}
The number of blocks in a random connected graph from $\mathcal{G}$
with $n$ vertices is asymptotically normal, and the mean $\mu_n$ and
variance $\sigma_n^2$ are linear in $n$. In case (1) we have
\begin{equation*}
\mu_n \sim \log(\tau/ \rho)\, n, \qquad
\sigma_n^2 \sim \left(\log(\tau/ \rho) - {1 \over 1+\tau^2 B'''(\tau)}\right)\,
n.
\end{equation*}
In case (2) we have
\begin{equation*}
\mu_n \sim \log(R/\rho) \,n, \qquad \sigma_n^2 \sim \log(R/\rho) \,
n.
\end{equation*}
The same is true, with the same constants, for arbitrary random
graphs.
\end{theorem}
\begin{proof} The proof for case (2) is as in \cite{gn}, and is based on an application of the Quasi-Powers Theorem. If $C(x,u)$ is the
generating function of connected graphs where now $u$ marks blocks,
then we have
\begin{equation}\label{eq:blo}
xC'(x,u) = x \exp\left(u B'(xC'(x,u)) \right),
\end{equation}
where derivatives are as usual with respect to $x$. For fixed $u$,
$\psi(t) = t\exp(-uB'(t))$ is the functional inverse of $xC'(x,u)$.
We know that for $u=1$, $\psi'(t)$ does not vanish, and the same is
true for $u$ close to 1 by continuity. The dominant singularity of
$C(x,u)$ is at $\sigma(u) = \psi(R) = R \exp(-uB'(R))$, and it is
easy to compute the derivatives $\sigma'(1)$ and $\sigma''(1)$ (see
\cite{gn} for details).
In case (1), Equation (\ref{eq:blo}) holds as well, but now the
dominant singularity is at $\psi(\tau(u))$, where $\tau(u)$ is the branch point for the given value of $u$. A routine (but longer)
computation gives the constants as claimed.
\end{proof}
\begin{theorem}\label{th:cut}
The number of cut vertices in a random connected graph from
$\mathcal{G}$ with $n$ vertices is asymptotically normal, and the
mean $\mu_n$ and variance $\sigma_n^2$ are linear in $n$. In case
(1) we have
\begin{equation*}
\mu_n \sim \left(1- {\rho \over \tau}\right) n, \qquad \sigma_n^2
\sim
\left(\frac{(\tau-\rho)(\tau-2\tau\rho^2-\tau\rho-\rho+2\rho^3)}{\tau^2
\rho^2(1+\tau^2B'''(\tau))}-\left(\frac{\rho}{\tau}\right)^2
\right)n.
\end{equation*}
In case (2) we have
\begin{equation*}
\mu_n \sim \left(1 - {\rho \over R} \right) n, \qquad \sigma_n^2
\sim {\rho \over R} \left(1 - {\rho \over R} \right) n.
\end{equation*}
The same is true, with the same constants, for arbitrary random
graphs.
\end{theorem}
\begin{proof} If $u$ marks cut vertices in $C(x,u)$, then we have
$$xC'(x,u)=xu(\exp\left(B'(xC'(x,u))\right)-1)+x.$$
It follows that, for given $u$,
$$\psi(t) = \frac{t}{u(\exp{(B'(t))}-1)+1}$$
is the inverse function of $xC'(x,u)$. In case (2) the dominant singularity $\sigma(u)$ is at $\psi(R)$.
Taking into account that $\rho = R\exp(B'(R))$, the derivatives of
$\sigma$ are easily computed. In case (1) the singularity is at
$\psi(\tau(u))$, where $\tau(u)$ is given by $\psi'(\tau(u))=0$. In
order to compute derivatives,
we differentiate $\psi'(\tau(u))=0$ with respect to $u$ and solve for
$\tau'(u)$, and once more to get $\tau''(u)$. After several computations and simplifications using \texttt{Maple}, we get the values as claimed.
\end{proof}
\subsection{Number of copies of a subgraph}
Let $H$ be a fixed rooted graph from the class $\mathcal{G}$, with
vertex set $\{1,\ldots,h\}$ and root $r$. Following \cite{MSW}, we
say that $H$ \emph{appears} in $G$ at $W \subset V(G)$ if (a) there
is an increasing bijection from $\{1,\ldots,h\}$ to $W$ giving an
isomorphism between $H$ and the induced subgraph $G[W]$ of $G$; and
(b) there is exactly one edge in $G$ between $W$ and the rest of
$G$, and this edge is incident with the root $r$.
Thus an appearance of $H$ gives a copy of $H$ in $G$ of a very
particular type, since the copy is joined to the rest of the graph
through a unique pendant edge. We do not know how to count the
number of subgraphs isomorphic to $H$ in a random graph, but we can
count very precisely the number of appearances.
\begin{theorem}\label{th:appear}
Let $H$ be a fixed rooted connected graph in $\mathcal{G}$ with $h$
vertices. Let ${X}_n$ denote the number of appearances of $H$ in a
random rooted connected graph from $\mathcal{G}$ with $n$ vertices.
Then ${X}_n$ is asymptotically normal and the mean $\mu_n$ and
variance
$\sigma_n^2$ satisfy
\begin{equation*}
\mu_n \sim {\rho^h \over h!}\,n, \qquad \sigma_n^2 \sim \rho\, n.
\end{equation*}
\end{theorem}
\begin{proof} The proof is as in \cite{gn}, and is based on the Quasi-Powers Theorem. If $f(x,u)$ is the generating
function of rooted connected graphs and $u$ counts appearances of
$H$ then, up to a simple term that does not affect the asymptotic
estimates, we have
\begin{equation*}
f(x,u) = x \exp\left(B'(f(x,u)) + (u-1) {x^h\over h!} \right).
\end{equation*}
The dominant singularity is computed through a change of variable,
and the rest of the computation is standard; see the proof of
Theorem 5 in \cite{gn} for details. For this parameter there is no
difference between cases (1) and (2).
\end{proof}
Now we study appearances of a fixed 2-connected subgraph $L$ from
$\mathcal{G}$ in rooted connected graphs. An appearance of $L$ in
this case corresponds to a block with a labelling order isomorphic
to $L$. Notice in this case that an appearance can be anywhere in
the tree of blocks, not only as a terminal block.
\begin{theorem}\label{th:appearBlock}
Let $L$ be a fixed rooted 2-connected graph in $\mathcal{G}$ with
$\ell+1$ vertices. Let ${X}_n$ denote the number of appearances of
$L$ in a random connected graph from $\mathcal{G}$ with $n$
vertices. Then ${X}_n$ is asymptotically normal and the mean $\mu_n$
and variance
$\sigma_n^2$ satisfy
\begin{equation*}
\mu_n \sim {R^\ell \over \ell!}\,n, \qquad \sigma_n^2 \sim {R^\ell
\over \ell!} \, n.
\end{equation*}
\end{theorem}
\begin{proof} If $f(x,u)$ is the generating function of rooted connected graphs
and $u$ counts appearances of $L$, then we have
\begin{equation*}
f(x,u) = x \exp\left(B'(f(x,u)) + (u-1) {f(x,u)^\ell\over \ell!} \right).
\end{equation*}
The reason is that each occurrence of $L$ is singled out by
multiplying by $u$. Notice that $L$ has $\ell+1$ vertices but the
root bears no label. It follows that the inverse of $f(x,u)$
(for a given value of $u$) is
$$
\phi(t) = t \exp\left( -B'(t) - (u-1)t^\ell/\ell!\right).
$$
The singularity of $\phi(t)$ is equal to $R$, independently of $u$.
Since for $u=1$ we know that $\phi'(t)$ does not vanish, the same is
true for $u$ close to~1. Then the dominant singularity of $f(x,u)$
is given by
$$
\sigma(u) = \phi(R)= \rho\cdot \exp(-(u-1)R^\ell/\ell!),
$$
since $\rho = R \exp(-B'(R))$. A simple calculation gives
$$\sigma'(1) = -\rho {R^\ell \over \ell!}, \qquad \sigma''(1) = \rho
{R^{2\ell}\over \ell!^2}
$$
and the result follows easily as in the proof of
Theorem~\ref{th:edges}. Again, for this parameter there is no difference
between cases (1) and (2).
\end{proof}
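As a quick numerical sanity check, the two derivatives of $\sigma(u)=\rho\exp(-(u-1)R^\ell/\ell!)$ can be compared against central finite differences; the values of $\rho$, $R$ and $\ell$ below are arbitrary placeholders, not the constants of any particular class.

```python
import math

# sigma(u) = rho * exp(-(u-1) * R^l / l!), as in the proof above.
# rho, R, l are arbitrary placeholder values (no particular graph class).
rho, R, l = 0.5, 1.0, 2
A = R ** l / math.factorial(l)   # here A = R^l/l! = 1/2

def sigma(u):
    return rho * math.exp(-(u - 1.0) * A)

h = 1e-4                         # central finite differences at u = 1
d1 = (sigma(1 + h) - sigma(1 - h)) / (2 * h)
d2 = (sigma(1 + h) - 2 * sigma(1) + sigma(1 - h)) / h ** 2
# closed forms from the proof:
#   sigma'(1)  = -rho * R^l / l!
#   sigma''(1) =  rho * (R^l / l!)^2 = rho * R^(2l) / (l!)^2
```

Both differences agree with $\sigma'(1)=-\rho R^\ell/\ell!$ and $\sigma''(1)=\rho R^{2\ell}/\ell!^2$ up to the accuracy of the finite-difference approximation.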
\subsection{Number of connected components}
Our next parameter, as opposed to the previous one, follows a
discrete limit law.
\begin{theorem}\label{th:components}
Let ${X}_n$ denote the number of connected components in a random
graph from $\mathcal{G}$ with $n$ vertices. Then ${X}_n-1$ is distributed
asymptotically as a Poisson law of parameter $\nu$, where $\nu =
C(\rho)$.
As a consequence, the probability that a random graph from $\mathcal{G}$
is connected is asymptotically equal to $e^{-\nu}$.
\end{theorem}
\begin{proof} The proof is as in \cite{gn}. The generating function of graphs
with exactly $k$ connected components is $C(x)^k/k!$. Taking the
$k$-th power of the singular expansion of $C(x)$, we have $ [x^n]
C(x)^k \sim k \nu^{k-1} [x^n] C(x)$. Hence the probability that a
random graph has exactly $k$ components is asymptotically
$$
{ [x^n] C(x)^k / k! \over [x^n]G(x)} \sim {k \nu^{k-1} \over k!} \,
e^{-\nu} = {\nu^{k-1} \over (k-1)!} \, e^{-\nu}
$$
as was to be proved.
\end{proof}
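Theorem~\ref{th:components} gives a completely explicit limit law, so it is straightforward to tabulate. The sketch below uses an illustrative value of $\nu$ (not the constant of any class treated here) and checks that the shifted Poisson probabilities sum to $1$, that the expected number of components tends to $1+\nu$, and that the probability of connectivity is $e^{-\nu}$.

```python
import math

nu = 0.05   # illustrative value of C(rho); not a constant from the text

# P(k components) -> nu^(k-1) e^(-nu) / (k-1)!  for k >= 1,
# i.e. X_n - 1 is asymptotically Poisson(nu).
ks = range(1, 40)
pmf = [nu ** (k - 1) * math.exp(-nu) / math.factorial(k - 1) for k in ks]

total = sum(pmf)                                        # ~ 1
mean_components = sum(k * p for k, p in zip(ks, pmf))   # ~ 1 + nu
p_connected = pmf[0]                                    # ~ e^(-nu)
```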
\subsection{Size of the largest connected component}
Extremal parameters are treated in the next two sections. However,
the size of the largest component is easy to analyze and we include
it here. The notation ${M}_n$ in the next statement,
suggesting vertices \emph{missed} by the largest component, is
borrowed from \cite{surfaces}. Recall that $g_n,c_n$ are the numbers
of graphs and connected graphs, respectively, $R$ is the radius of
convergence of $B(x)$, and $C_i$ are the singular coefficients of
$C(x)$.
\begin{theorem}\label{th:largest-component}
Let ${L}_n$ denote the size of the largest connected
component in a random graph from $\mathcal{G}$ with $n$ vertices, and
let ${M}_n = n-{L}_n$. Then
$$
\mathbf{P}\left({M}_n = k\right) \sim p_k = p\cdot g_k {\rho^k \over k!} ,
$$
where $p$ is the probability of a random graph being connected.
Asymptotically, either $p_k \sim c\, k^{-5/2}$ or $p_k \sim c\,
k^{-7/2}$ as $k \to \infty$, depending on the subexponential term
in the estimate of $g_k$.
In addition, we have $\sum p_k = 1$ and
$\mathbb{E}\left[{M}_n\right] \sim \tau$ in case $(1)$ and
$\mathbb{E}\left[{M}_n\right] \sim R$ in case $(2)$. In case (1)
the variance $\sigma^2({M}_n)$ does not exist and in case (2)
we have $\sigma^2({M}_n) \sim R + 2C_4 $.
\end{theorem}
\begin{proof} The proof is essentially the same as in \cite{surfaces}. For fixed
$k$, the probability that ${M}_n = k$ is equal to
$$
\binom{n}{k}\frac{c_{n-k} g_k}{g_n},
$$
since there are ${n \choose k}$ ways of choosing the labels of the
vertices not in the largest component, $c_{n-k}$ ways of choosing
the largest component, and $g_k$ ways of choosing the complement. In
case $(1)$, given the estimates
$$
g_n \sim g \cdot n^{-5/2} \rho^{-n} n!, \qquad
c_n \sim c \cdot n^{-5/2} \rho^{-n} n!,
$$
the estimate for $p_k$ follows at once (we argue similarly in each
subcase of $(2)$). Observe that $p = \lim c_n/g_n = c/g$.
For the second part of the statement notice that
$$
\sum p_k = p \sum g_k {\rho^k \over k!} = p\, G(\rho) = 1,
$$
since from Theorem~\ref{th:components} it follows that $p =
e^{-C(\rho)} = 1/G(\rho)$. To compute the moments notice that the
probability GF is $f(u) = \sum p_k u^k = p G(\rho u)$. Then the
expectation is estimated as
$$
f'(1) = p\, \rho G'(\rho) = p\, G(\rho) \rho C'(\rho) ,
$$
which corresponds to $\tau$ in case $(1)$ and to $R$ in case $(2)$,
since $G(x) = \exp C(x)$. For the variance we compute
$$
f''(1) + f'(1) - f'(1)^2 = \rho C'(\rho) + \rho^2 C''(\rho).
$$
In case (1) $\lim _{x\to \rho} C''(x) = \infty$, so that the
variance does not exist. In case (2) we have $\rho C'(\rho) = R$ and
$\rho^2 C''(\rho) = 2C_4$.
\end{proof}
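The identities in the proof can be checked numerically in a toy case where everything is explicit (used here purely as an illustration, not a class studied in this paper). Take the class whose only $2$-connected graphs are single edges: connected graphs are then labelled trees ($c_n=n^{n-2}$), arbitrary graphs are forests, and $B(x)=x^2/2$, so that $\rho=e^{-1}$, $C(\rho)=1/2$ and $\tau=1$. This class falls under case (1), and the partial sums of $\sum p_k$ and $\sum k\,p_k$ approach $1$ and $\tau=1$ as predicted.

```python
import math

# Toy class: blocks are single edges, connected graphs are trees.
# c_n = n^(n-2) (Cayley's formula), rho = 1/e, C(rho) = 1/2, tau = 1.
K = 1500

# b[n] = c_n rho^n / n!, computed in log space to avoid overflow.
b = [0.0] * (K + 1)
for n in range(1, K + 1):
    b[n] = math.exp((n - 2) * math.log(n) - n - math.lgamma(n + 1))

# a[n] = g_n rho^n / n!  via G = exp(C): exponential of a power series.
a = [0.0] * (K + 1)
a[0] = 1.0
for n in range(1, K + 1):
    a[n] = sum(j * b[j] * a[n - j] for j in range(1, n + 1)) / n

p = math.exp(-0.5)            # probability of connectivity, e^{-C(rho)}
pk = [p * an for an in a]     # p_k = p g_k rho^k / k!

total = sum(pk)                              # tends to 1 as K grows
mean = sum(k * q for k, q in enumerate(pk))  # tends to tau = 1 (slowly)
```

The mean converges at speed $K^{-1/2}$, in agreement with the $k^{-5/2}$ tail of $p_k$, while $\sum p_k$ converges at speed $K^{-3/2}$.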
\section{Largest block and 2-connected core}\label{se:bloc}
The problem of estimating the largest block in random maps has been
well studied. We recall that a map is a connected planar graph
together with a specific embedding in the plane. Moreover, one edge
is oriented and marked as the root edge. Gao and
Wormald~\cite{GW99} proved that the largest block in a random map
with $n$ edges has almost surely $n/3$ edges, with deviations of
order~$n^{2/3}$. More precisely, if ${X}_n$ is the size of the
largest block, then
$$
\mathbf{P}\left(|{X}_n - n/3| < \lambda(n) n^{2/3}\right) \to 1, \qquad \hbox{as $n
\to \infty$},
$$
where $\lambda(n)$ is any function going to infinity with $n$. The
picture was further clarified by Banderier et al.~\cite{airy}. They
found that the largest block in random maps obeys a continuous
limit law, which is called by the authors the `Airy distribution of
the map type', and is closely related to a stable law of index
$3/2$. As we will see shortly, the Airy distribution also appears in
random planar graphs.
A useful technical device is to work with the 2-connected core,
which in the case of maps is the (unique) block containing the root
edge. For graphs it is a bit more delicate. Consider a connected
graph $G$ rooted at a vertex $v$. We would like to say that the core
of $G$ is the block containing the root, but if $v$ is a cut vertex
then there are several blocks containing $v$ and there is no clear
way to single out one of them. Another possibility is to define
the $2$-connected core as the union of the blocks containing the
root, but then the core is not in general a $2$-connected graph.
The definition we adopt is the following. If the root is not a cut
vertex, then the \emph{core} is the unique block containing the
root. Otherwise, we say that the rooted graph is \emph{coreless}.
Let $C^{\bullet}(x,u)$ be the generating function of rooted
connected graphs, where the root bears no label, and $u$ marks the
size of the $2$-connected core. Then we have
\begin{equation*}
C^{\bullet}(x,u)= B'(uxC'(x))+ \exp\left(B'(xC'(x))\right)-B'(xC'(x)),
\end{equation*}
where $C(x)$ and $B(x)$ are the GFs for connected and 2-connected
graphs, respectively. The first summand corresponds to graphs which
have a core, whose size is recorded through variable $u$, and the
second one to coreless graphs. We rewrite the former equation as
$$
C^{\bullet}(x,u)= Q(uH(x)) + Q_L(x),
$$
where
$$
H(x)=xC'(x), \quad Q(x)=B'(x), \quad Q_L(x) =
\exp\left(B'(xC'(x))\right)-B'(xC'(x)).
$$
With this notation, $Q_L(x)$ enumerates coreless graphs, and
$Q(uH(x))$ enumerates graphs with core. The asymptotic probability
that a graph is coreless is
$$
p_L= \lim_{n\rightarrow \infty} \frac{[x^{n}]Q_{L}(x)}
{[x^{n}]C'(x)} = 1-\lim_{n\rightarrow \infty} \frac{[x^{n}]Q(H(x))}
{[x^{n}]C'(x)}.
$$
The key point is that graphs with core fit into a composition scheme
$$
Q(uH(x)).
$$
This has to be understood as follows. A rooted connected graph whose
root is not a cut vertex is obtained from a 2-connected graph (the
core), replacing each vertex of the core by a rooted connected
graph. It is shown in \cite{airy} that such a composition scheme
leads either to a discrete law or to a continuous law, depending on
the nature of the singularities of $Q(x)$ and $H(x)$.
Our analysis for a closed class $\mathcal{G}$ is divided into two
cases. If we are in case (1) of Proposition~\ref{proposition:C's},
we say that the class $\mathcal{G}$ is \emph{series-parallel-like};
in this situation the size of the core follows invariably a discrete
law which can be determined precisely in terms of $Q(x)$ and $H(x)$.
If we are in case (2) we say that the class $\mathcal{G}$ is
\emph{planar-like}. In this situation the size of the core has two
modes, a discrete law when the core is small, and a continuous Airy
distribution when the core has linear size. Moreover, for
planar-like classes, the size of the largest block follows the same
Airy distribution and is concentrated around $\alpha n$ for a
computable constant~$\alpha$. The critical case, discussed at the
end of Section~\ref{se:asympt}, is not treated here.
\subsection{Core of series-parallel-like classes}
Recall that in case (1) of Proposition~\ref{proposition:C's} we have
$H(\rho)=\rho C'(\rho)=\tau $, where $\tau$ is the solution to the
equation $\tau B''(\tau)=1$. Since $RB''(R)>1$ and $uB''(u)$ is an
increasing function, we conclude that $H(\rho)<R$. This gives rise
to the so-called \emph{subcritical} composition scheme. We refer to
the exposition in section IX.3 of
\cite{FlajoletSedgewig:analytic-combinatorics}.
%
The main result we use is Proposition IX.1 from
\cite{FlajoletSedgewig:analytic-combinatorics}, which is the
following.
\begin{prop}\label{prop:subcritical-case}
Consider the composition scheme $Q(uH(x))$. Let $R,\rho$ be the
radii of convergence of $Q$ and $H$, respectively. Assume that $Q$
and $H$ satisfy the subcritical condition $\tau = H(\rho) < R$,
and that $H(x)$ has a unique singularity at $\rho$ on its disk of
convergence with a singular expansion
$$H(x)=\tau -c_{\lambda}(1-x/\rho)^{\lambda}+o((1-x/\rho)^{\lambda}),$$
where $\tau, c_{\lambda}>0$ and $0<\lambda<1$.
%
Then the size of the $Q$-core follows a discrete limit law,
$$\lim_{n\rightarrow \infty}\frac{[x^nu^k]Q(uH(x))}{[x^n]Q(H(x))}=q_{k}.$$
The probability generating function $q(u) = \sum q_ku^k$ of the
limit distribution is
$$
q(u)={uQ'(\tau u) \over Q'(\tau)}.
$$
\end{prop}
The previous result applies to our composition scheme $Q(uH(x))$,
that is, to the family of rooted connected graphs that have a core.
\begin{theorem}\label{th:coreSP}
Let $\mathcal{G}$ be a series-parallel-like class, and let ${Y}_n$
be the size of the 2-connected core in a random rooted connected
graph from $\mathcal{G}$ with a core and $n$ vertices.
Then $\mathbf{P}\left({Y}_n= k\right)$ tends to a limit $q_k$ as $n$ goes to infinity. The
probability generating function $q(u) = \sum q_k u^k$ is given by
$$q(u)=\tau u B''(u\tau).$$
The estimates of $q_k$ for large $k$ depend on the singular
behaviour of $B(x)$ near $R$ as follows, where $X=\sqrt{1-x/R}$:
\begin{itemize}
\item[(a)]If $B(x)= B_{0}+B_2 X^2+B_{3}X^3+O(X^4)$, then
$q_{k}\sim
\displaystyle\frac{3B_{3}}{4R\sqrt\pi}k^{-1/2}\left(\frac{\tau}{R}\right)^{k}$.
\\
\item [(b)] If $B(x)= B_{0}+B_2 X^2+B_{4}X^4 +
B_5X^5+O(X^6)$, then $q_{k}\sim
-\displaystyle\frac{15B_{5}}{8R\sqrt\pi}k^{-3/2}\left(\frac{\tau}{R}\right)^{k}$.
\end{itemize}
Finally, the probability of a graph being coreless is asymptotically
equal to $1- \rho/\tau$.
\end{theorem}
\begin{proof} We apply Proposition \ref{prop:subcritical-case} with $Q(x)=B'(x)$
and $H(x)=xC'(x)$. Since $\tau B''(\tau)=1$, we have
$$q(u)=\frac{uB''(u\tau)}{B''(\tau)}=\tau u B''(u\tau),
$$
as claimed. The dominant singularity of $q(u)$ is at $u=R/\tau$. The
asymptotics for the tail of the distribution follow from the
corresponding singular expansions. In case (a) we have
$$B''(x)= {3B_3 \over 4R^2} X^{-1} + O(1).$$
In case (b) we have
$$B''(x)= {2B_4 \over R^2} + {15B_5 \over 4R^2} X + O(X^2).$$
By applying singularity analysis to $q(u)$, the result follows. We
remark that $B_3>0$ and $B_5 <0$, so that the multiplicative
constants are in each case positive.
\end{proof}
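Proposition~\ref{prop:subcritical-case} can also be checked empirically on a toy subcritical scheme (a placeholder, not one arising from a graph class in this paper): take $H(x)=T(x)$, the Cayley tree function, with $\rho=1/e$ and $\tau=H(\rho)=1$, and $Q(y)=1/(1-y/2)$, whose radius of convergence is $2>\tau$. The limit law predicted by the proposition is $q(u)=uQ'(\tau u)/Q'(\tau)$, that is $q_k=k/2^{k+1}$, and the finite-$n$ ratios $[x^nu^k]Q(uH(x))/[x^n]Q(H(x))$ are already close to it for moderate $n$.

```python
import math

N = 400      # extract coefficients at x^N
KMAX = 12    # core sizes k = 1..KMAX

# t[n] = n^{n-1}/n!, the coefficients of the Cayley tree function T(x).
t = [0.0] * (N + 1)
for n in range(1, N + 1):
    t[n] = math.exp((n - 1) * math.log(n) - math.lgamma(n + 1))

def convolve(f, g):
    """Product of two power series, truncated at degree N."""
    h = [0.0] * (N + 1)
    for i in range(N + 1):
        if f[i]:
            for j in range(N + 1 - i):
                h[i + j] += f[i] * g[j]
    return h

# Since Q(y) = sum_k (y/2)^k, the weight of core size k at total size n
# is [x^n] H(x)^k / 2^k.
w = []
Hk = t[:]
for k in range(1, KMAX + 1):
    w.append(Hk[N] / 2.0 ** k)
    if k < KMAX:
        Hk = convolve(Hk, t)

probs = [wk / sum(w) for wk in w]                           # empirical, n = N
limit = [k / 2.0 ** (k + 1) for k in range(1, KMAX + 1)]    # q_k = k/2^{k+1}
```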
It is shown in \cite{kostas} that the largest block in series-parallel classes
is of order $O(\log n)$. This is to be expected given the exponential tails of the distributions in the previous theorem.
\subsection{Largest block of planar-like
classes}\label{se:largestbloc-planar}
In order to state our main result, we need to introduce the Airy
distribution. Its density is given by
\begin{equation}\label{eq:airy}
g(x) = 2 e^{-2x^3/3} (x {\rm Ai}(x^2) - {\rm Ai}'(x^2)),
\end{equation}
where ${\rm Ai}(x)$ is the Airy function, a particular solution of
the differential equation $y''-x y = 0$. An explicit series
expansion is (see equation (2) in \cite{airy})
$$
g(x) = {1 \over \pi x} \sum_{n\ge1} (-3^{2/3} x)^n {\Gamma(1+2n/3)
\over n!} \, \sin(-2n\pi/3).
$$
A plot of $g(x)$ is shown in Figure~\ref{fig:airy}. We remark that
the left tail (as $x \to -\infty$) decays polynomially while the
right tail (as $x \to +\infty$) decays exponentially.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.89\textwidth]{airy.eps}}
\bigskip \caption{The Airy distribution.} \label{fig:airy}
\end{figure}
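The density (\ref{eq:airy}) can be evaluated directly with standard special-function routines; the sketch below assumes \texttt{numpy} and \texttt{scipy} are available. The integration range $[-9.5,6]$ is chosen so that both factors in (\ref{eq:airy}) stay within double-precision range; the mass lost in the truncated polynomial left tail is of order $10^{-3}$.

```python
import numpy as np
from scipy.special import airy

def g(x):
    """Map-Airy density g(x) = 2 exp(-2x^3/3) (x Ai(x^2) - Ai'(x^2))."""
    ai, aip, _, _ = airy(x * x)
    return 2.0 * np.exp(-2.0 * x ** 3 / 3.0) * (x * ai - aip)

xs = np.linspace(-9.5, 6.0, 200001)
ys = g(xs)
dx = xs[1] - xs[0]
mass = float(np.sum(ys[1:] + ys[:-1]) * dx / 2.0)   # trapezoidal rule, ~ 1

g0 = float(g(0.0))                           # equals -2 Ai'(0) = 0.51763...
left, right = float(g(-8.0)), float(g(8.0))  # polynomial vs exponential tail
```

One checks that $g(0)=-2\,{\rm Ai}'(0)\approx 0.51764$, that the total mass is close to $1$, and that the left tail dominates the right tail, as stated above.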
We are in case (2) of Proposition~\ref{proposition:C's}. In this
situation we have $\rho = \psi(R)$ and $H(\rho) = R$, which is a
\emph{critical} composition scheme. We need Theorem $5$ of
\cite{airy} and the discussion preceding it, which we rephrase in
the following proposition.
\begin{prop}\label{prop:critical-case}
Consider the composition scheme $Q(uH(x))$. Let $R,\rho$ be the
radii of convergence of $Q$ and $H$, respectively. Assume that $Q$
and $H$ satisfy the critical condition $H(\rho)=R$, and that $H(x)$
and $Q(z)$ have a unique singularity at $\rho$ and $R$ in their
respective discs of convergence. Moreover, the singularities of
$H(x)$ and $Q(z)$ are of type $3/2$, that is,
\begin{eqnarray*}
H(x)&=& H_0 + H_2 X^2 + H_3 X^3 + O(X^4),\\
Q(z)&=&Q_0+Q_{2}Z^2+Q_3Z^3 + O(Z^4),
\end{eqnarray*}
where $X=\sqrt{1-x/\rho}$, $Z=\sqrt{1-z/R}$. Let $\alpha_0$ and
$M_3$ be
$$
\alpha_0 = -\frac{H_0}{H_2}, \qquad M_3 = -\frac{Q_2
H_3}{R}+Q_3\alpha_0^{-3/2}.
$$
Then the asymptotic distribution of the size of the $Q$-core in
$Q(uH(x))$ has two different modes. With probability $p_s = -Q_2
H_3/(R M_3)$ the core has size $O(1)$, and with probability $1-p_s$
the core follows a continuous limit Airy distribution
concentrated at $\alpha_0 n$. More precisely, let ${Y}_n$ be the
size of the $Q$-core of a random element of size $n$ of $Q(uH(x))$.
\begin{enumerate}
\item[(a)] For fixed $k$,
$$ \mathbf{P}\left({Y}_n = k\right) \sim \frac{H_3}{M_3} k R^{k-1} [z^k]Q(z).$$
\item[(b)] For $k = \alpha_0 n + x n^{2/3}$ with $x = O(1)$,
$$ n^{2/3} \mathbf{P}\left({Y}_n = k\right) \sim \frac{ Q_3 \alpha_0^{-3/2}}{M_3} c g(cx),
\quad c=\frac{1}{\alpha_0}\left( \frac{-H_2}{3 H_3}\right)^{2/3},$$
where $c g(cx)$ is the Airy distribution of parameter $c$.
\end{enumerate}
\end{prop}
In particular, we have $\mathbb{E}\left[{Y}_n\right] \sim (1-p_s)\alpha_0 n$. The
parameter $c$ quantifies in some sense the dispersion of the
distribution (not the variance, since the second moment does not
exist). Note that the asymptotic probability that the core has size
$O(1)$ is
$$ p_s = \sum^{\infty}_{k=0} \mathbf{P}\left({Y}_n = k\right) \sim
\frac{H_3}{M_3} \sum^{\infty}_{k=0} k R^{k-1} [z^k]Q(z) =
\frac{H_3}{M_3} Q'(R) = \frac{H_3}{M_3} \left(\frac{-Q_2}{R}\right),
$$
and that the asymptotic probability that the core has size
$\Theta(n)$ is
$$ {Q_3 \alpha_0^{-3/2} \over M_3} = 1-p_s.$$
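The evaluation $\sum_{k\ge0} kR^{k-1}[z^k]Q(z) = Q'(R) = -Q_2/R$ used in the computation of $p_s$ can be checked on a toy function with a singularity of type $3/2$ (a placeholder, not a generating function arising in this paper):

```python
# Toy Q with a singularity of type 3/2 at R = 1:
#   Q(z) = 2 - (1-z) - (1-z)^{3/2},
# so Q2 = -1 and Q'(R) = -Q2/R = 1.  We check that
#   sum_k k R^{k-1} [z^k]Q(z)  ->  Q'(R).
K = 5000
R = 1.0

# coefficients of (1-z)^{3/2} via the binomial recurrence
c = [1.0]
for k in range(1, K + 1):
    c.append(c[-1] * (k - 1 - 1.5) / k)

# [z^k]Q: the term -(1-z) contributes +1 at k = 1; all k >= 1 get -c[k]
qcoef = [0.0] + [-ck for ck in c[1:]]
qcoef[1] += 1.0

s = sum(k * R ** (k - 1) * qcoef[k] for k in range(1, K + 1))  # ~ Q'(R) = 1
```

The truncation error is of order $K^{-1/2}$, in line with the $k^{-5/2}$ decay of $[z^k]Q(z)$.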
Now we state the main result in this section. Recall that for a
planar-like class of graphs we have
$$
B(x)= B_{0}+B_2 X^2+B_{4}X^4 + B_5X^5+O(X^6),
$$
where $R$ is the dominant singularity of $B(x)$ and
$X=\sqrt{1-x/R}$.
\begin{theorem}\label{th:largest-block}
Let $\mathcal{G}$ be a planar-like class, and let $X_n$ be the size
of the largest block in a random connected graph from $\mathcal{G}$ with
$n$ vertices. Then
$$
\mathbf{P}\left({X}_n = \alpha n + x n^{2/3}\right) \sim n^{-2/3} c g(c x),
$$
where $$ \alpha = {R-2B_4 \over R},
\qquad c = \left({-2R \over 15B_5}\right)^{2/3},$$ and $g(x)$ is as in~(\ref{eq:airy}).
Moreover, the size of the second largest block is $O(n^{2/3})$. In
particular, for the class of planar graphs we have
$\alpha \approx 0.95982$ and $c \approx
128.35169$.
\end{theorem}
\begin{proof} The composition scheme in our case is $B'(uxC'(x))$. In the notation
of the previous proposition, we have $Q(x)=B'(x)$ and $H(x) =
xC'(x)$.
The size of the core is obtained as a direct application of
Proposition \ref{prop:critical-case}. The exact values for planar
graphs have been computed using the known singular expansions for
$B(x)$ and $C(x)$ given in the appendix of \cite{gn}.
For the size of the largest block, one can adapt an argument from
\cite{airy}, implying that the probability that the core has linear
size while not being the largest block tends to 0 exponentially
fast. It follows that the distribution of the size of the largest
block is exactly the same as the distribution of the core in the
linear range.
\end{proof}
The main conclusion is that for planar-like classes of graphs (and
in particular for planar graphs) there exists a unique largest block
of linear size, whose expected value is asymptotically $\alpha n$
for some computable constant $\alpha$. The remaining blocks are of
size $O(n^{2/3})$. This is in complete contrast with series-parallel
graphs, where we have seen that there are only blocks of sublinear
size.
\subsection*{Remark.}
An observation that we need later is that if the largest block $L$
has $N$ vertices, then it is uniformly distributed among all the
2-connected graphs in the class with $N$ vertices. This is because the number of
graphs of given size whose largest block is $L$ depends only on the
number of vertices of $L$, and not on its isomorphism type.
\medskip
We can also analyze the size of the largest block for graphs with a
given edge density, or average degree.
We state a precise result for planar graphs, which is probably the most interesting one.
\begin{theorem}
For $\mu \in (1,3)$, the largest block in random planar graphs with $n$ vertices and $\lfloor \mu n\rfloor$ edges follows asymptotically an Airy law with computable parameters $\alpha(\mu)$ and $c(\mu)$.
\end{theorem}
\begin{proof}
As discussed in \cite{gn}, we
choose a value $y_0>0$ depending on $\mu$ such that, if we give
weight $y_0^k$ to a graph with $k$ edges, then only graphs with $n$
vertices and $\mu n$ edges have non-negligible weight. If $\rho(y)$
is the radius of convergence of $C(x,y)$ as usual, the right choice
is the unique positive solution $y_0$ of
\begin{equation}\label{eq:mu-y}
-y \rho'(y)/\rho(y) = \mu.
\end{equation}
Then we work with the generating function $xC(x,y_0)$ instead of
$xC'(x)$. Again we have a critical composition scheme, and as in the proof of Theorem \ref{th:largest-block}, the size of the largest block follows asymptotically an Airy law.
\end{proof}
Figure \ref{fig:plot2conn} shows a plot of the main
parameter $\alpha(\mu)$ for planar graphs and $\mu \in (1,3)$. When $\mu \to 3^-$
we see that $\alpha(\mu)$ approaches 1; the explanation is that a planar triangulation is 3-connected and hence has a unique block. When $\mu \to 1^+$, $\alpha(\mu)$ tends to 0, in this case because the largest block in a tree is just an edge.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.8\textwidth]{planar_alphan}}\bigskip
\caption{Size of largest block for planar graphs
with $\mu n$ edges, $\mu \in (1,3)$. The ordinate gives the value
$\alpha(\mu)$ such that the largest block has size $\sim
\alpha(\mu) n$. The value at $\mu=\kappa$ is $0.9598$
as in Theorem~\ref{th:largest-block}. } \label{fig:plot2conn}
\end{figure}
\section{Largest $3$-connected component}
Let us recall that, given a 2-connected graph $G$, the 3-connected
components of $G$ are those 3-connected graphs that are the support
of h-networks in the network decomposition of $G$.
We have seen in Theorem~\ref{th:largest-block} that the largest
block in a random graph from a planar-like class is almost surely of
linear size, and it is unique. In this section we prove a similar
result for the largest 3-connected component in random connected
graphs with $n$ vertices. Again we obtain a limit Airy law, but the
proof is more involved.
There are three main technical issues we
need to address:
\begin{enumerate}
\item We start with a connected graph $G$. We know from Theorem~\ref{th:largest-block}
that the largest block $L$ of $G$ is distributed according to an Airy law. We show that the largest
3-connected component $T$ of $L$ is again Airy distributed. Thus we
have to \emph{concatenate} two Airy laws, and we show that we obtain
another Airy law with computable parameters. Our proof is based on
the fact that the sum of two independent Airy laws (recall that the
Airy law corresponds to a particular stable law of index $3/2$) is
again an Airy law with computable parameters.
%
In order to illustrate this step, we prove a result of independent
interest: given a random planar map with $n$ edges, the size of the
largest 3-connected component is Airy distributed with expected
value asymptotically $n/9$.
\item We also need to analyze the number of edges in the largest block $L$
of a connected graph. The number of vertices of $L$ is Airy
distributed with known parameters (Theorem~\ref{th:largest-block}). On the
other hand, the number of edges in 2-connected graphs with $N$
vertices is asymptotically normally distributed with expected value
$\kappa_2 N$ (see Theorem~\ref{th:edges}). Thus we have to study a
parameter normally distributed (number of edges) within the largest
block, whose size (number of vertices) follows an Airy law. We show
that the composition of these two limit laws gives rise to an Airy
law for the number of edges in the largest block, again with
computable parameters.
\item The analysis of the largest block in random connected graphs is
in terms of the number of vertices, but the analysis for the largest
3-connected component of a 2-connected graph is necessarily in terms
of the number of edges. Thus we need a way to relate both models.
This is done through a technical lemma that shows that two
probability distributions on 2-connected graphs with $m$ edges are
asymptotically equivalent. This is the content of
Lemma~\ref{the:asympt-equal}.
\end{enumerate}
Our main result is the following. We state it for planar graphs,
since this is the most interesting case and we can give explicitly
the parameters, but it holds more generally for planar-like classes
of graphs.
\begin{theorem}\label{th:3conn-main}
Let ${X}_n$ be the number of vertices in the largest 3-connected
component of a random connected planar graph with $n$ vertices. Then
$$
\mathbf{P}\left({X}_n = \alpha_2n + xn^{2/3}\right) \sim n^{-2/3} c_2g(c_2x),
$$
where $\alpha_2 \approx 0.7346$ and $c_2\approx 3.14596$ are
computable constants. Additionally, the number of edges in the
largest 3-connected component of a random connected planar graph
with $n$ vertices also follows asymptotically an Airy law with
parameters $\alpha_3 \approx 1.7921$ and $c_3 \approx 1.28956$.
\end{theorem}
The rest of the section is devoted to the proof of the theorem. The
next three subsections address the technical points discussed above.
We remark that for series-parallel-like classes there is no 3-connected component of linear size, just as there is none for 2-connected components.
\subsection{Largest 3-connected component in random planar maps}
Recall that a planar map (we say just a map) is a connected planar
graph together with a specific embedding in the plane. The size of
the largest $k$-components in several families of maps was thoroughly
studied in~\cite{airy}. Denote by $M(z)$, $B(z)$ and $C(z)$ the
ordinary GFs associated to maps, $2$-connected maps and
$3$-connected maps, respectively; in all cases, $z$ marks edges. Let
${L}_n$ be the random variable, defined over the set of maps with
$n$ edges, equal to the size of the largest 2-connected component.
Let ${T}_m$ be the random variable, defined over the set of
2-connected maps with $m$ edges, equal to the size of the largest
3-connected component.
The following result is shown in \cite{airy}:
\begin{theorem}\label{thm:maps-airy}
The distribution of both ${L}_n$ and ${T}_m$ follows
asymptotically an Airy law, namely
\begin{eqnarray}\label{eq:Airy-maps}
\mathbf{P}\left({L}_n = a_1 n + xn^{2/3}\right) &\sim& n^{-2/3}c_1g(c_1x), \\
\mathbf{P}\left({T}_m = a_2 m + ym^{2/3}\right) &\sim&
m^{-2/3} c_2g(c_2 y),\nonumber
\end{eqnarray}
where $g(z)$ is the map Airy distribution, $a_1=1/3$,
$c_1=3/4^{2/3}$, $a_2=1/3$, and $c_2 = 3^{4/3}/4$.
\end{theorem}
\begin{proof}
Here is a sketch of the proof. In both cases, the distribution
arises from a critical composition scheme of the form $\frac{3}{2}
\,\circ \,\frac{3}{2}$. The distribution of ${L}_n$ is given by the
scheme $B\left(z(1+M(z))^2\right)$, which reflects the fact that a
map is obtained by gluing a map at each corner of a $2$-connected
map. In the second case, the result is obtained from the composition
scheme $C\left(B(z)/z-2\right)$, which reflects the fact that a
2-connected map is obtained by replacing each edge of a 3-connected
map by a non-trivial 2-connected map (to complete the picture one
must also take into account series and parallel compositions, but
these play no role in the analysis of the largest 3-connected
component, see \cite{census2} for details).
\end{proof}
Let ${X}_n$ be the random variable equal to the size of the largest
$3$-connected component in maps with $n$ edges. In order to get a
limit law for ${X}_n$, we need a more detailed study of stable laws.
In particular, Airy laws are special cases of stable laws of
index $3/2$. Our main reference is the forthcoming
book~\cite{Nolan}. The result we need is Proposition $1.17$, which
appears in \cite[Section 1.6]{Nolan}. We rephrase it here in a form
convenient for us.
\begin{prop}\label{prop:nolan}
Let ${Y}_1$ and ${Y}_2$ be independent Airy-distributed random
variables, with probability density functions $c_1g(c_1x)$ and
$c_2g(c_2x)$. Then ${Y}_1+{Y}_2$ follows an Airy distribution with
probability density function $c g(cx)$, with
$c=\left(c_1^{-3/2}+c_2^{-3/2}\right)^{-2/3}$.
\end{prop}
\begin{proof} We use the notation of~\cite{Nolan}. A stable
law is characterized by its \emph{index of stability} $\alpha\in
(0,2]$, its \emph{skewness} $\beta\in [-1,1]$, its \emph{scale
factor} $\gamma>0$, and its \emph{location parameter} $\delta\in
\mathbb{R}$. A stable random variable with these parameters is
written in the form $S(\alpha,\beta,\gamma, \delta;1)$ (the constant
$1$ refers to the type of the parametrization; we only deal with
this type). Proposition $1.17$ in~\cite{Nolan} states that if
$S_1=S(\alpha,\beta_1,\gamma_1, \delta_1;1)$ and
$S_2=S(\alpha,\beta_2,\gamma_2, \delta_2;1)$ are independent random
variables, then $S_1+S_2=S(\alpha,\beta,\gamma, \delta;1)$, with
\begin{equation}\label{eq:stable-parameters}
\beta=\frac{\beta_1\gamma_1^{\alpha}+\beta_2\gamma_2^\alpha}
{\gamma_1^{\alpha}+\gamma_2^\alpha},\,\,\gamma^{\alpha}=\gamma_1^{\alpha}+\gamma_2^\alpha,\,\,
\delta=\delta_1+\delta_2.
\end{equation}
Let us identify the Airy distribution with probability density
function $c g(cx)$ within the family of stable laws just defined. By
definition, the index of stability is equal to $3/2$. Additionally,
$\beta=-1$: this is the unique value for which the right tail of the
stable law decays exponentially fast (see Section $1.5$ of~\cite{Nolan}).
The value of the location parameter $\delta$ coincides with the
expectation of the random variable, hence $\delta=0$ (see
Proposition $1.13$). Finally, the scale factor can be written in the
form $\gamma_0/c$, for a suitable value of $\gamma_0$, namely the one which
corresponds to the normalized Airy distribution with density
$g(x)$. Since ${Y}_1=S(3/2,-1,\gamma_0/c_1,0;1)$ and
${Y}_2=S(3/2,-1,\gamma_0/c_2,0;1)$, the result follows from
(\ref{eq:stable-parameters}).
\end{proof}
\begin{theorem}\label{th:maps}
The size ${X}_n$ of the largest $3$-connected component in a random
map with $n$ edges follows asymptotically an Airy law of the form
$$
\mathbf{P}\left({X}_n = a n + zn^{2/3}\right) \sim n^{-2/3}
c g(c z),
$$
where $g(z)$ is the Airy distribution and
\begin{equation}
a=a_1 a_2=1/9, \qquad
c=\left(\left(\frac{c_1}{a_2}\right)^{-3/2}+c_2^{-3/2}a_1\right)^{-2/3}\approx
1.71707.
\end{equation}
\end{theorem}
\begin{proof}
Let us estimate $n^{2/3}\mathbf{P}\left({X}_n = a n +
zn^{2/3}\right)$ for large $n$. Considering the possible
values of the size of the largest $2$-connected component, we obtain
\begin{equation*}
n^{2/3}\mathbf{P}\left({X}_n = a n +
zn^{2/3}\right)=n^{2/3}\sum_{m=1}^{\infty}
\mathbf{P}\left({L}_n=m\right)\mathbf{P}\left({T}_m=a
n+zn^{2/3}\right).
\end{equation*}
In the previous equation we have used the fact that the largest
$2$-connected component is distributed uniformly among all
$2$-connected maps with the same number of edges; this is because
the number of ways a 2-connected map $M$ can be completed to a map
of given size depends only on the size of $M$.
Notice that ${X}_n$ and ${T}_m$ are \emph{integer} random
variables, hence the previous equation should be written in fact as
\begin{equation*}
n^{2/3}\mathbf{P}\left({X}_n = \lfloor a n + z
n^{2/3}\rfloor \right)=n^{2/3}\sum_{m=1}^{\infty}
\mathbf{P}\left({L}_n=m\right)\mathbf{P}\left({T}_m=\lfloor
a n+zn^{2/3}\rfloor\right).
\end{equation*}
Let us write $m=a_1 n+x n^{2/3}$. Then $an+zn^{2/3}=a_2 m+y
m^{2/3}+o\left(m^{2/3}\right)$, where $y=a_1^{-2/3}(z-a_2 x)$.
Observe that when we vary $m$ in one unit, we vary $x$ in $n^{-2/3}$
units. Let $x_0 = (1-a_1 n)n^{-2/3}$, so that $a_1 n + x_0 n^{2/3} =
1$ is the initial term in the sum. The previous sum can be written
in the form
\begin{equation*}
n^{2/3}\sum_{x=x_0+\ell n^{-2/3}}
\mathbf{P}\left({L}_n=a_1
n+xn^{2/3}\right)\mathbf{P}\left({T}_m=a_2
m+a_1^{-2/3}(z-a_2 x)m^{2/3}\right),
\end{equation*}
where the sum is over all values $\ell \ge 0$. From
Theorem~\ref{thm:maps-airy} it follows that
\begin{eqnarray*}
&&n^{2/3}\sum_{x=x_0 + \ell n^{-2/3}}
\mathbf{P}\left({L}_n=a_1
n+xn^{2/3}\right)\mathbf{P}\left({T}_m=a_2 m+a_1^{-2/3}(z-a_2 x)m^{2/3}\right)\\
&\sim& n^{2/3}\sum_{x=x_0 + \ell n^{-2/3}}
n^{-2/3}c_1g(c_1 x)\,\,m^{-2/3} c_2 g\left(c_2 a_1^{-2/3}(z-a_2
x)\right)\\
&\sim& \frac{1}{n^{2/3}}\sum_{x=x_0 + \ell n^{-2/3}}
c_1g(c_1 x) \,\,c_2 a_1^{-2/3} g\left(c_2 a_1^{-2/3}(z-a_2
x)\right).
\end{eqnarray*}
In the last step we have used that $m^{-2/3}=(a_1
n)^{-2/3}(1+o(1))$. Now we approximate the sum by an integral:
\begin{eqnarray*}
&&n^{-2/3}\sum_{x=x_0 + \ell n^{-2/3}}
c_1g(c_1 x) \,\, c_2 a_1^{-2/3} g\left(c_2 a_1^{-2/3}(z-a_2
x)\right)\\
&\sim& \int_{-\infty}^\infty c_1 g(c_1 x) \,\,c_2a_1^{-2/3}
g\left(c_2a_1^{-2/3}(z-a_2 x)\right) dx.
\end{eqnarray*}
The previous estimate holds uniformly for $x$ in a bounded interval.
Now we set $a_2 x=u$, and with this change of variables we get
$$\int_{-\infty}^\infty \frac{c_1}{a_2} g\left(\frac{c_1}{a_2} u\right)\,\, c_2a_1^{-2/3}
g\left(c_2a_1^{-2/3}(z-u)\right) du.$$
This convolution can be interpreted as a sum of stable laws with
parameter $3/2$ in the following way. Let ${Y}_1$ and ${Y}_2$ be
independent random variables with densities $\frac{c_1}{a_2}
g\left(\frac{c_1}{a_2} u\right)$ and $c_2a_1^{-2/3}
g\left(c_2a_1^{-2/3}u\right)$, respectively. Then the previous
integral is precisely the density of ${Y}_1+{Y}_2$ evaluated at $z$,
and the result follows from Proposition~\ref{prop:nolan}.
\end{proof}
\end{proof}
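The constant $c$ of Theorem~\ref{th:maps} can be recomputed in a couple of lines from the constants of Theorem~\ref{thm:maps-airy} and the combination rule of Proposition~\ref{prop:nolan}:

```python
# Constants from the theorem on maps: a1 = a2 = 1/3,
# c1 = 3/4^(2/3), c2 = 3^(4/3)/4.
a1, a2 = 1.0 / 3.0, 1.0 / 3.0
c1 = 3.0 / 4.0 ** (2.0 / 3.0)
c2 = 3.0 ** (4.0 / 3.0) / 4.0

a = a1 * a2     # expected value constant, 1/9

# Combination rule c = ((c1/a2)^(-3/2) + c2^(-3/2) a1)^(-2/3); the two
# inner terms simplify to 4/27 and 8/27, so c = (9/4)^(2/3) = 1.71707...
c = ((c1 / a2) ** (-1.5) + c2 ** (-1.5) * a1) ** (-2.0 / 3.0)
```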
\subsection*{Remark.}
The previous theorem can be obtained, alternatively, using the
machinery developed in~\cite{airy}. The two composition schemes
$B\left(z(1+M(z))^2\right)$ and $C(B(z)/z-2)$ can be composed algebraically into a
single composition scheme $C\left(B(z(1+M(z))^2)/z-2\right)$. This is again a
critical scheme with exponents $3/2$, and an Airy law follows from
the general scheme in~\cite{airy}. The parameters can be computed
using the singular expansions of $M(z)$, $B(z)$, $C(z)$ at their
dominant singularities which are, respectively, equal to $1/12$,
$4/27$ and $1/4$. We have performed the corresponding computations
in complete agreement with the values obtained in Theorem~\ref{th:maps}.
We have chosen the present proof since the same ideas are used later
in the case of graphs, where no algebraic composition seems
available.
\subsection{Number of edges in the largest block of a connected graph}
As discussed above, the number ${X}_n$ of vertices in the largest
block $L$ of a random connected planar graph follows a limit Airy
law. In order to analyze the largest 3-connected component of $L$,
we need to express ${X}_n$ in terms of the number of edges. This
amounts to combining the limit Airy law with a normal limit law,
leading to a slightly modified Airy law. The precise result is the
following.
\begin{theorem}\label{th:edges-largestbloc}
Let ${Z}_n$ be the number of \emph{edges} in the largest block of a
random connected planar graph with $n$ vertices. Then
$$
\mathbf{P}\left({Z}_n = \kappa_2\alpha n + zn^{2/3}\right) \sim n^{-2/3}{c\over\kappa_2} g\left({c\over\kappa_2}z\right),
$$
where $\alpha$ and $c$ are as in Theorem~\ref{th:largest-block}, and
$\kappa_2\approx 2.26288$ is the constant for the expected number of
edges in random 2-connected planar graphs, as in
Theorem~\ref{th:edges}.
\end{theorem}
\begin{proof}
Let ${X}_n$ be, as in Theorem~\ref{th:largest-block}, the number of
vertices in the largest block. In addition, let ${Y}_N$ be the
number of edges in a random 2-connected planar graph with $N$
vertices. Then
\begin{equation}\label{suma}
\mathbf{P}\left({Z}_n = \kappa_2\alpha n + zn^{2/3}\right)
= \sum_{x=x_0+\ell n^{-2/3}} \mathbf{P}\left({X}_n = \alpha n +
xn^{2/3}\right) \mathbf{P}\left({Y}_{\alpha n + xn^{2/3}} =
\kappa_2 \alpha n + zn^{2/3}\right),
\end{equation}
with the same convention for the index of summation as in the
previous section.
Since ${Y}_N$ is asymptotically normal (Theorem~\ref{th:edges})
$$
\mathbf{P}\left({Y}_N=\kappa_2 N + y N^{1/2}\right) \sim
N^{-1/2} h(y),
$$
where $h(y)$ is the density of a suitably scaled normal law. If we take
$N = \alpha n + xn^{2/3}$, then $$\kappa_2 N = \kappa_2 \alpha n +
\kappa_2xn^{2/3}.$$As a consequence, the significant terms in the
sum in (\ref{suma}) are concentrated around $\alpha n +
(z/\kappa_2)n^{2/3}$, within a window of size $N^{1/2} =
\Theta(n^{1/2})$. Thus we can conclude that
$$
\mathbf{P}\left({Z}_n =\kappa_2\alpha n + zn^{2/3} \right)
\sim {1 \over \kappa_2} \mathbf{P}\left({X}_n = \lfloor \alpha n +
(z/\kappa_2)n^{2/3} \rfloor\right) \sim n^{-2/3}
{c\over\kappa_2} g\left({c\over\kappa_2}z\right),
$$
where $c$ is the constant in Theorem~\ref{th:largest-block}. The
factor $1 \over \kappa_2$ in the middle arises since in $\lfloor
\kappa_2\alpha n + zn^{2/3} \rfloor$ we have steps of length
$n^{-2/3}$, whereas in $\lfloor \alpha n + (z/\kappa_2) n^{2/3}
\rfloor$ they are of length $n^{-2/3}/\kappa_2$.
\end{proof}
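As a numerical sanity check of the scaling argument in this proof, one can evaluate the lattice convolution (\ref{suma}) with stand-in densities. In the sketch below a standard Gaussian replaces both the Airy density $g$ and the normal density $h$, the value of $\alpha$ is purely illustrative (only $\kappa_2$ is the constant from Theorem~\ref{th:edges}), and $c=1$; the sum is then compared with the predicted local limit $n^{-2/3}(1/\kappa_2)\,g(z/\kappa_2)$.

```python
import math

# Stand-in density: a standard Gaussian replaces both the Airy density g
# and the scaled normal density h; only the scaling argument matters here.
def dens(t):
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

n = 200_000
alpha = 0.95          # illustrative value, not the planar-graph constant
kappa2 = 2.26288      # constant kappa_2 from the theorem on edge counts
z = 0.7

m = round(kappa2 * alpha * n + z * n ** (2 / 3))  # target number of edges
k_star = m / kappa2                               # peak of the Y-factor

# Lattice convolution over k, the number of vertices of the largest block:
#   P(X_n = k) ~ n^{-2/3} g((k - alpha n) n^{-2/3}),
#   P(Y_k = m) ~ k^{-1/2} h((m - kappa2 k) k^{-1/2}).
total = 0.0
for k in range(int(k_star) - 2000, int(k_star) + 2000):
    p_x = n ** (-2 / 3) * dens((k - alpha * n) * n ** (-2 / 3))
    p_y = k ** (-0.5) * dens((m - kappa2 * k) / math.sqrt(k))
    total += p_x * p_y

# Predicted local limit (with c = 1): n^{-2/3} (1/kappa2) g(z/kappa2).
predicted = n ** (-2 / 3) / kappa2 * dens(z / kappa2)
print(total / predicted)  # ratio close to 1
```

The summation window $k^*\pm 2000$ covers many standard deviations of the inner normal factor, whose width is $\Theta(\sqrt{k})$, while the Airy-type factor is essentially constant on that window; this is exactly the concentration used in the proof.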
\subsection{Probability distributions for 2-connected graphs}
In this section we study several probability distributions defined
on the set of 2-connected graphs of $m$ edges. The first
distribution ${X}^m_n$ (in fact, a family of probability
distributions, one for each $n$) models the appearance of largest
blocks with $m$ edges in random connected graphs with $n$ vertices.
The second one is a weighted distribution where each 2-connected
graph with $m$ edges receives a weight according to the number of
vertices, and for which it is easy to obtain an Airy law for the
size of the largest 3-connected component. We show that these two
distributions are asymptotically equivalent in a suitable range. In
particular, it follows that the Airy law of the latter distribution
also occurs in the former one. We start by defining precisely both
distributions. We use capital letters like ${X}^m_n$ and ${Y}_m$ to
denote random variables whose output is a graph in the corresponding
universe of graphs, so that our distributions are associated to
these random variables. We find this convention more transparent
than defining the associated probability measures.
Let $n,m$ be fixed numbers, and let $\mathcal{C}^m_n$ denote the set
of connected graphs on $n$ vertices such that their largest block
$L$ has $m$ edges. The first probability distribution ${X}^m_n$ is
the distribution of $L$ in a graph of $\mathcal{C}^m_n$ chosen
uniformly at random. That is, if $B$ is a 2-connected graph with $m$
edges, and $\mathcal{C}^B_n\subseteq \mathcal{C}^m_n$ denotes the
set of connected graphs that have $L=B$ as the largest block, then
\[
\mathbf{P}\left({X}^m_n = B\right) =
\frac{|\mathcal{C}^B_n|}{|\mathcal{C}^m_n|}.
\]
Let $m$ be a fixed number. The second probability distribution
${Y}_m$ assigns to a 2-connected graph $B$ of $m$ edges and $k$
vertices the probability
\begin{equation}\label{eq:Ym}
\mathbf{P}\left({Y}_m = B\right) = \frac{R^{k}}{k!}
\frac{1}{[y^m]B(R,y)},
\end{equation}
where $R$ is the radius of convergence of the exponential generating
function $B(x)=B(x,1)$ enumerating 2-connected graphs.
It is clear that $[y^m]B(R,y)$
is the right normalization factor.
Now we state precisely what we mean when we say that these two
distributions are asymptotically equivalent in a suitable range. In
what follows, $\alpha$ and $\kappa_2$ are the multiplicative
constants of the expected size of the largest block and the expected
number of edges in a random connected graph (see Theorems
\ref{th:largest-block} and \ref{th:edges}). We denote by $V(G)$ and
$E(G)$ the set of vertices and edges of a graph $G$.
\begin{lemma}\label{the:asympt-equal}
Fix positive values $\bar{y},\bar{z}\in\mathbb{R}^+$. For fixed $m$,
let $I_k$ and $I_n$ denote the intervals
\begin{eqnarray*}
I_k &=& \left[\frac{1}{\kappa_2} m -\bar{z} m^{1/2},
\frac{1}{\kappa_2} m + \bar{z} m^{1/2}\right],\\
I_n &=& \left[\frac{1}{\alpha\kappa_2} m -\bar{y} m^{2/3},
\frac{1}{\alpha\kappa_2} m + \bar{y} m^{2/3}\right].
\end{eqnarray*}
%
Then the probability distributions ${Y}_m$ and ${X}^m_n$, for $n\in
I_n$, are asymptotically equal on graphs with $k\in I_k$ vertices,
with uniform convergence for both $k$ and $n$. That is, there exists
a function $\epsilon(m)$ with $\lim_{m\to \infty} \epsilon(m)=0$
such that, for every 2-connected graph $B$ with $m$ edges and $k\in
I_k$ vertices, and for every $n\in I_n$, it holds that
\[
\left|\frac{\mathbf{P}\left({X}^m_n = B\right)}{\mathbf{P}\left({Y}_m = B\right)} - 1\right| < \epsilon(m).
\]
\end{lemma}
\begin{proof}
Fix $y\in[-\bar{y}, \bar{y}]$, and let $n = \lfloor
\frac{1}{\alpha\kappa_2} m + y m^{2/3}\rfloor \in I_n$. First, we
prove that ${X}^m_n$ and ${Y}_m$ are concentrated on graphs with
$k=\frac{1}{\kappa_2} m+O(m^{1/2})$ vertices, that is,
\[\mathbf{P}\left(\frac{1}{\kappa_2}m-z m^{1/2} \leq |V({X}^m_n)|
\leq \frac{1}{\kappa_2}m+z m^{1/2}\right) \]
goes to $1$ when $z, m\to \infty$, and the same is true for ${Y}_m$.
Then, we show that ${X}^m_n$ and ${Y}_m$ are asymptotically
\emph{proportional} for graphs on $k\in I_k$ vertices. A direct
consequence of both facts is that ${X}^m_n$ and ${Y}_m$ are
asymptotically \emph{equal} in $I_k$, since the previous results are
valid for arbitrarily large $\bar{z}$.
We start by considering the probability distribution ${Y}_m$.
If we add (\ref{eq:Ym}) over all the $b_{k,m}$ 2-connected graphs
with $k$ vertices and $m$ edges, we get
\[ \mathbf{P}\left(|V({Y}_m)|=k\right) = b_{k,m} \frac{R^{k}}{k!} \frac{1}{[y^m]B(R,y)}.\]
On the one hand, the value $[y^m]B(R,y)$ is a constant that does not
depend on $k$. On the other hand, since the numbers $b_{k,m}$
satisfy a local limit theorem (the proof is the same as
in~\cite{gn}), it follows that the numbers $b_{k,m} R^k/k!$ follow a
normal distribution concentrated at $k=m/\kappa_2$ on a scale
$m^{1/2}$, as desired.
We show the same result for ${X}^m_n$. Let $B$ be a 2-connected
graph with $k$ vertices and $m$ edges. We write the probability that
a graph drawn according to ${X}^m_n$ is $B$ as a conditional
probability on the largest block $L_n$ of a random connected graph
of $n$ vertices. In what follows, $v\left(L_n\right)$ and
$e\left(L_n\right)$ denote, respectively, the number of vertices
and edges of $L_n$.
\begin{eqnarray*}
\mathbf{P}\left({X}^m_n=B\right) &= &\mathbf{P}\left(L_n=B
~\mid~ e(L_n)=m\right)\\&=&
\frac{\mathbf{P}\left(L_n=B,\,\,
e(L_n)=m\right)}{\mathbf{P}\left(e(L_n)=m\right)}=
\frac{\mathbf{P}\left(L_n=B\right)}{\mathbf{P}\left(e(L_n)=m\right)}.
\end{eqnarray*}
Note that in the last equality we drop the condition
$e\left(L_n\right)=m$ because it is subsumed by $L_n=B$.
The probability that the largest block $L_n$ is $B$ is the same
for all 2-connected graphs on $k$ vertices. Hence, if $b_{k}$
denotes the number of 2-connected graphs on $k$ vertices, we have
\[ \mathbf{P}\left({X}^m_n=B\right) = \frac{1}{b_k}\frac{\mathbf{P}\left(v(L_n)=k\right)}{\mathbf{P}\left(e(L_n)=m\right)}.\]
If we sum over all the $b_{k,m}$ 2-connected graphs $B$ with $k$
vertices and $m$ edges, we finally get the probability of ${X}^m_n$
having $k$ vertices,
\[ \mathbf{P}\left(|V({X}^m_n)|=k\right) = \frac{b_{k,m}}{b_k}\frac{\mathbf{P}\left(v(L_n)=k\right)}{\mathbf{P}\left(e(L_n)=m\right)}.
\]
For fixed $n, m$, the numbers
$\mathbf{P}\left(v(L_n)=k\right)$ follow an Airy
distribution of scale $n^{2/3}$ concentrated at $k_1=\alpha n$ (see
Section~\ref{se:largestbloc-planar}), and the numbers $b_{k,m}/b_k$
are normally distributed around $k_2=m/\kappa_2$ on a scale
$m^{1/2}$. The choice of $n$ makes $k_1$ and $k_2$ coincide up to
a lower-order term $O(m^{2/3})$; hence, it follows that
$\mathbf{P}\left( |V({X}^m_n)|=k\right)$ is concentrated at
$k_2=m/\kappa_2$ on a scale $m^{1/2}$, as desired.
Now that we have established concentration for both probability
distributions, we just need to show that they are asymptotically
proportional in the range $k=m/\kappa_2+O(m^{1/2})$. This is easy to
establish by considering asymptotic estimates. Indeed, we have
\[
\mathbf{P}\left({X}^m_n=B\right) =
\frac{1}{b_k}\frac{\mathbf{P}\left(v(L_n)=k\right)}{\mathbf{P}\left(e(L_n)=m\right)},
\]
and since $\mathbf{P}\left(v(L_n)=k\right)$ is Airy
distributed in the range $k=m/\kappa_2+O(m^{2/3})$ and $b_k\sim
b\cdot k^{-7/2} R^{-k} k!$, it follows that
\[
\mathbf{P}\left({X}^m_n=B\right) \sim b^{-1} k^{7/2}
\frac{R^{k}}{k!}
\frac{g(x)}{\mathbf{P}\left(e(L_n)=m\right)},
\]
where $x$ is defined as $(k-\alpha n) n^{-2/3}$ and $g(x)$ is the
Airy distribution of the appropriate scale factor. Let us compare it
with the exact expression for the probability distribution ${Y}_m$,
that is,
\[
\mathbf{P}\left({Y}_m=B\right) = \frac{R^{k}}{k!}
\frac{1}{[y^m]B(R,y)}.
\]
Clearly, both expressions coincide in the high order terms $R^k$ and
$1/k!$. The remaining terms are either constants like $b$,
$\mathbf{P}\left(e(L_n)=m\right)$ and $[y^m]B(R,y)$, or
expressions that are asymptotically constant in the range of
interest. This is the case for $k^{7/2}$, which is asymptotically
equal to $((1/\kappa_2)m)^{7/2}$, and also for $g(x)$, which is
asymptotically equal to $g(y(\alpha \kappa_2)^{2/3})$, since
$x=(k-\alpha n)n^{-2/3}$ and $n=\frac{1}{\alpha \kappa_2}m + ym^{2/3}$
implies that
\begin{align*}
x &= \left( \frac{1}{\kappa_2}m + O(m^{1/2}) - \frac{\alpha}{\alpha \kappa_2}m + ym^{2/3} \right)n^{-2/3} \\
&=\left( ym^{2/3} \right)\left(\frac{m}{\alpha \kappa_2}\right)^{-2/3} + o(1) \\
&= y(\alpha \kappa_2)^{2/3} + o(1)
\end{align*}
in the given range. Hence, both distributions are asymptotically
proportional in the given range.
Thus, we have shown the result when $n$ is linked to $m$ by
$n=\frac{1}{\alpha\kappa_2}m + ym^{2/3}$, for any $y$. Clearly,
uniformity holds when $y$ is restricted to a compact set of
$\mathbb{R}$, like $[-\bar{y}, \bar{y}]$.
\end{proof}
\subsection{Proof of the main result}
In order to prove Theorem~\ref{th:3conn-main}, we have to
concatenate two Airy laws. The first one is the number of edges in
the largest block, given by Theorem~\ref{th:edges-largestbloc}. The
second is the number of edges in the largest 3-connected component
of a random 2-connected planar graph with a given number of edges.
This is again an Airy law produced by the composition scheme
$T_z(x,D(x,y))$, which encodes the combinatorial operation of
substituting each edge of a 3-connected graph by a network (which is
essentially a 2-connected graph rooted at an edge). However, this
scheme is relative to the variable $y$ marking edges. In order to
have a legal composition scheme we need to take a fixed value of
$x$. The right value is $x=R$, as shown by
Lemma~\ref{the:asympt-equal}. Indeed, taking $x=R$ amounts to weighting
a 2-connected graph $G$ with $m$ edges by $R^k/k!$, where $k$ is
the number of vertices in $G$. Thus the relevant composition scheme
is precisely $T_z(R,uD(R,y))$, where $u$ marks the size of the
3-connected core. Formally, we can write it as the scheme
$$
C(uH(y)), \qquad H(y) = D(R,y), \qquad C(y) = T_z(R,y).
$$
The composition scheme $T_z(R,D(R,y))$ is critical with exponents
$3/2$, and an Airy law appears. In order to compute the parameters
we need the expansion of $D(R,y)$ at the dominant singularity $y=1$,
which is of the form
\begin{equation}\label{eq:Di1}
D(R,y) = \widetilde{D}_0 + \widetilde{D}_2 Y^2 + \widetilde{D}_3
Y^3+ O(Y^4),
\end{equation}
where $Y =\sqrt{1-y}$. The different $\widetilde{D}_i$ can be obtained
in the same way as in Proposition \ref{pro:Dsing}.
\begin{prop}\label{prop:edges-core-3con}
Let ${W}_m$ be the number of edges in the largest 3-connected
component of a 2-connected planar graph with $m$ edges, weighted
with $R^k/k!$, where $k$ is the number of vertices. Then
$$
\mathbf{P}\left({W}_m = \beta m + zm^{2/3}\right) \sim m^{-2/3}c_2 g\left(c_2z\right),
$$
where $\beta = -\widetilde{D}_0/\widetilde{D}_2 \approx 0.82513$ and
$c_2 = {-\widetilde{D}_2 / \widetilde{D}_0} \left({-\widetilde{D}_2
/ 3\widetilde{D}_3} \right)^{2/3} \approx 2.16648$, and the
$\widetilde{D}_i$ are as in Equation~(\ref{eq:Di1}).
\end{prop}
\begin{proof}The proof is a direct application of the methods in Theorem~\ref{th:largest-block}.
\end{proof}
\paragraph{Proof of Theorem \ref{th:3conn-main}.}
Recall that ${X}_n$ is the number of vertices in the largest
3-connected component of a random connected planar graph with $n$
vertices. This variable arises as the composition of two random
variables we have already studied. First we consider ${Z}_n$ as in
Theorem~\ref{th:edges-largestbloc}, which is the number of edges in
the largest block, and then ${W}_m$ as in
Proposition~\ref{prop:edges-core-3con}.
The main parameter turns out to be $\alpha_2 = \mu \beta (\kappa_2
\alpha)$, where
\begin{enumerate}
\item $\alpha$ is for the expected number of vertices in the largest block;
\item $\kappa_2$ is for the expected number of edges in 2-connected graphs;
\item $\beta$ is for the expected number of edges in the largest 3-connected component;
\item $\mu$ is for the expected number of vertices in 3-connected graphs weighted according to $R^k/k!$, where $k$ is the number of vertices.
\end{enumerate}
The constants in 1 and 3 correspond to Airy laws, and the constants
in 2 and 4 to normal laws.
Let ${Y}_n$ be the number of edges in the largest 3-connected
component of a random connected planar graph with $n$ vertices
(observe that our main random variable ${X}_n$ is linked directly to
${Y}_n$ after extracting a parameter normally distributed like the
number of vertices). Then
$$
\mathbf{P}\left({Y}_n = \beta \kappa_2 \alpha n + z
n^{2/3}\right)=\sum_{m=1}^{\infty}
\mathbf{P}\left({Z}_n=m\right)\mathbf{P}\left({W}_m= \beta
\kappa_2 \alpha n+zn^{2/3}\right).
$$
This convolution can be analyzed in exactly the same way as in the
proof of Theorem~\ref{th:maps}, giving rise to a limit Airy law with
the parameters as claimed.
Finally, in order to go from ${Y}_n$ to ${X}_n$ we need only to
multiply the main parameter by $\mu$ and adjust the scale factor. To
compute $\mu$ we need the dominant singularity $\tau(x)$ of the
generating function $T(x,z)$ of 3-connected planar graphs, for a
given value of $x$ (see Section 6 in \cite{degrees}). Then
$$
\mu = -R \tau'(R) / \tau(R).
$$
Given that the inverse function $r(z)$ is explicit (see Equation
(25)~in~\cite{degrees}), the computation is straightforward.
\section{Minor-closed classes}\label{se:examples}
In this section we apply the machinery developed so far to analyse
families of graphs closed under minors. A class of graphs
$\mathcal{G}$ is minor-closed if whenever a graph is in
$\mathcal{G}$ all its minors are also in $\mathcal{G}$. Given a
minor-closed class $\mathcal{G}$, a graph $H$ is an excluded minor
for $\mathcal{G}$ if $H$ is not in $\mathcal{G}$ but every proper
minor is in~$\mathcal{G}$. It is an easy fact that a graph is in
$\mathcal{G}$ if and only if it does not contain as a minor any of
the excluded minors from $\mathcal{G}$. According to the fundamental
theorem of Robertson and Seymour, for every minor-closed class the
number of excluded minors is finite~\cite{RobSey}. We use the
notation $\mathcal{G} =\ensuremath{\mathrm{Ex}}(H_1, \cdots, H_k)$ if $H_1,\dots,H_k$ are
the excluded minors of $\mathcal{G}$. If all the $H_i$ are
3-connected, then $\ensuremath{\mathrm{Ex}}(H_1, \cdots, H_k)$ is a closed family. This
is because if none of the 3-connected components of a graph $G$
contains a forbidden minor, then the same is true for $G$ itself.
In order to apply our results we must know which connected graphs
are in the set $\ensuremath{\mathrm{Ex}}(H_1, \cdots, H_k)$. There are several results in
the literature of this kind. The easiest one is $\ensuremath{\mathrm{Ex}}(K_4)$, which is
the class of series-parallel graphs. Since a graph in this class
always contains a vertex of degree at most two, there are no
3-connected graphs. Table~\ref{tab:excludedminor} contains several
such results, due to Wagner, Halin and others (see Chapter X
in~\cite{diestel2}). The proofs make systematic use of Tutte's
wheels theorem (consult Chapter $3$ of~\cite{Diestel}, for
instance): a 3-connected graph can be reduced to a wheel by a
sequence of deletions and contractions of edges, while keeping it
3-connected.
{\renewcommand{\arraystretch}{1.1}
\begin{table}[htb]
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Ex. minors & $3$-connected graphs & Generating function $T(x,z)$ \\\hline
$\displaystyle K_{4}$ & $\emptyset$ & $0$ \\
$W_{4}$ & $K_{4}$ & $z^{6}x^4/4!$\\
$K_{5}-e$ & $K_{3},K_{3,3},K_{3}\times K_{2},\{ W_{n}\}_{n\geq 3}$&
$70z^9x^6/6!-\frac{1}{2}x(\log(1-z^2x)+2z^2x+z^4x^2)$ \\
$K_5,K_{3,3}$ & Planar 3-connected & $T_p(x,z)$ \\
$K_{3,3}$ & Planar 3-connected, $K_5$ & $T_p(x,z) + z^{10}x^5/5!$ \\
$K_{3,3}^+$ & Planar 3-connected, $K_5, K_{3,3}$ & $T_p(x,z) + z^{10}x^5/5! + 10z^9x^6/6!$ \\
\hline
\end{tabular}\bigskip
\caption{Classes of graphs defined from one excluded minor.
$T_p(x,z)$ is the GF of planar 3-connected graphs.}
\label{tab:excludedminor}
\end{center}
\end{table}
}
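The polynomial terms in Table~\ref{tab:excludedminor} encode labelled counts: $z^6x^4/4!$ records the single labelled copy of $K_4$ ($6$ edges, $4$ vertices), $z^{10}x^5/5!$ the single copy of $K_5$ ($10$ edges, $5$ vertices), and $10z^9x^6/6!$ the ten labelled copies of $K_{3,3}$ ($9$ edges, $6$ vertices). These small counts are easy to verify:

```python
from math import comb

# K_4 and K_5 are complete graphs: exactly one labelled copy on a fixed
# vertex set, with C(4,2) = 6 and C(5,2) = 10 edges respectively.
assert comb(4, 2) == 6
assert comb(5, 2) == 10

# A labelled K_{3,3} on 6 vertices is determined by an unordered
# bipartition into two parts of size 3: C(6,3)/2 = 10 copies,
# each with 3 * 3 = 9 edges.
assert comb(6, 3) // 2 == 10
assert 3 * 3 == 9
print("counts match the table")
```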
The 3-connected graphs when excluding $W_5$ and the triangular
prism take longer to describe. For $K_3 \times K_2$ they are: $K_5,
K_5^-,\{ W_{n}\}_{n\geq 3}$, and the family $G_\Delta$ of graphs
obtained from $K_{3,n}$ by adding any number of edges to the part of
the bipartition having 3 vertices. For $W_5$ they are: $K_4,K_5$,
the family $G_\Delta$, the graphs of the octahedron and the cube
$Q$, the graph obtained from $Q$ by contracting one edge, the graph
$L$ obtained from $K_{3,3}$ by adding two edges in one of the parts
of the bipartition, plus all the 3-connected subgraphs of the former
list. Care is needed here for checking that all 3-connected graphs
are included and for counting how many labellings each graph has.
Once we have the full collection of 3-connected graphs, we have
$T(x,z)$ at our disposal. For the family of wheels we have a
logarithmic term (see the previous table) and for the family
$G_\Delta$ it is a simple expression involving $\exp(z^3 x)$. We can
then apply the machinery developed in this paper and compute the
generating functions $B(x,y)$ and $C(x,y)$. For the last three
entries in Table~\ref{tab:excludedminor}, the main problem is
computing $B(x,y)$ and this was done in \cite{gn} and \cite{k33};
these correspond to the planar-like case. In the remaining cases
$T(x,z)$ is either analytic or has a simple singularity coming from
the term $\log(1-xz^2)$, and they correspond to the
series-parallel-like case.
In Table~\ref{tab:constants} we present the fundamental constants
for the classes under study. For a given class $\mathcal{G}$ they
are: the growth constant $\rho^{-1}$ of graphs in $\mathcal{G}$;
the growth constant $R^{-1}$ of 2-connected graphs in
$\mathcal{G}$; the asymptotic probability $p$ that a random graph in
$\mathcal{G}$ is connected; the constant $\kappa$ such that $\kappa
n$ is the asymptotic expected number of edges for graphs in
$\mathcal{G}$ with $n$ vertices; the analogous constant $\kappa_2$
for 2-connected graphs in $\mathcal{G}$; the constant $\beta$ such
that $\beta n$ is the asymptotic expected number of blocks for
graphs in $\mathcal{G}$ with $n$ vertices; and the constant $\delta$
such that $\delta n$ is the asymptotic expected number of cut
vertices for graphs in $\ensuremath{\mathcal{G}}$ with $n$ vertices.
%
The values in Table~\ref{tab:constants} have been computed with
\texttt{Maple} using the results in sections \ref{se:asympt}
and~\ref{se:laws}.
{\renewcommand{\arraystretch}{1.2}
\begin{table}[htb]
\small
$$
\renewcommand{\arraystretch}{1.2}
\begin{array}{|l|c|c|c|c|c|c|c|}
\hline
\hbox{Class} & \rho^{-1} & R^{-1} & \kappa & \kappa_2 & \beta & \delta & p \\
\hline
\ensuremath{\mathrm{Ex}}(K_4) & 9.0733& 7.8123& 1.61673& 1.71891 & 0.149374& 0.138753& 0.88904\\
\ensuremath{\mathrm{Ex}}(W_4) & 11.5437& 10.3712& 1.76427& 1.85432 & 0.107065& 0.101533& 0.91305\\
\ensuremath{\mathrm{Ex}}(W_5) & 14.6667& 13.5508& 1.90239& 1.97981 & 0.0791307& 0.0760808& 0.93167\\
\ensuremath{\mathrm{Ex}}(K_5^-) & 15.6471& 14.5275& 1.88351& 1.95360 & 0.0742327& 0.0715444& 0.93597\\
\ensuremath{\mathrm{Ex}}(K_3 \times K_2) & 16.2404& 15.1284&1.92832& 1.9989 & 0.0709204 & 0.0684639 & 0.93832\\
\hbox{Planar} & 27.2269& 26.1841& 2.21327& 2.2629 & 0.0390518&
0.0382991& 0.96325 \\
\ensuremath{\mathrm{Ex}}(K_{3,3}) & 27.2293& 26.1866& 2.21338& 2.26299 & 0.0390483& 0.0382957& 0.963262\\
\ensuremath{\mathrm{Ex}}(K_{3,3}^+) & 27.2295& 26.1867& 2.21337& 2.26298 & 0.0390481&
0.0382956& 0.963263 \\
\hline
\end{array}
$$\bigskip
\caption{Constants for a given class of graphs: $\rho$ and $R$ are
the radii of convergence of $G(x)$ and $B(x)$, respectively;
constants $\kappa,\kappa_2,\beta,\delta$ give, respectively, the
first moment of the number of: edges, edges in 2-connected graphs,
blocks and cut vertices; $p$ is the probability of connectedness. }
\label{tab:constants}
\end{table}
}
\section{Critical phenomena} \label{se:critical}
We have seen that the estimates for the number of planar graphs with
$\mu n$ edges have the same shape for all values $\mu \in (1,3)$.
This is also the case for series-parallel graphs, where $\mu \in
(1,2)$ since maximal graphs in this class have only $2n-3$ edges. It
is natural to ask if there are classes in which there is a
\textit{critical phenomenon}, that is, a different behaviour
depending on the
edge density. We have not found such a phenomenon for `natural'
classes of graphs, in particular those defined in terms of forbidden
minors. But we have been able to construct examples of critical
phenomena by a suitable choice of the family $\mathcal{T}$ of
3-connected graphs, as we now explain.
Let $\mathcal{T}$ be a family of 3-connected graphs whose generating function
$T(x,z)$ has a singularity in $z$ of exponent $5/2$. We have seen that the
singular exponents of the associated functions $B(x)$, $C(x)$ and $G(x)$ depend
on the existence of branch-points before $T$ becomes singular. We
have obtained families of graphs for which the singular exponents of
$B(x)$, $C(x)$ and $G(x)$ depend on the particular value of $y_0$.
Now we have two sources for the main singularity of $B(x,y)$ for a
given value of $y$: either (a) it comes from the singularities of
$T(x,z)$; or (b) it comes from a branch point of the equation
defining $D(x,y)$. For planar graphs the singularity always comes
from case~(a), and for series-parallel graphs always from case~(b).
If there is a value $y_0$ for which the two sources coalesce, then
we get a different singular exponent depending on whether $y<y_0$ or
$y>y_0$. The most important consequence in this situation is that
there is a critical edge density $\mu_0$, such that below $\mu_0$
the largest block has linear size, and above $\mu_0$ it has
sublinear size, or conversely.
Here are some examples.
\begin{itemize}
\item If $\mathcal{T}$ is the family of \emph{3-connected cubic planar
graphs}, then $B(x,y)$ has singular exponent $5/2$ when
$y<y_0\approx 0.07422$, and $3/2$ when $y>y_0$. The corresponding
critical value for the number of edges is $\mu_0 \approx 1.3172$.
\item If $\mathcal{T}$ is the family of \emph{planar triangulations
(maximal planar graphs)}, then $B(x,y)$ has singular exponent $3/2$ when
$y<y_0\approx 0.4468 $, and $5/2$ when $y>y_0$. The corresponding
critical value for the number of edges is $\mu_0 \approx 1.8755$.
\item This example shows that more than one critical value may
occur. This is done by adding a single dense graph to the family
$\mathcal{T}$ in the last example. Let $\mathcal{T}$ be the family
of \emph{triangulations plus the exceptional graph $K_6$}. Then
there are two critical values $y_0 \approx 0.4469$ and $y_1 \approx
108.88$, and the corresponding critical edge densities are $\mu_0
\approx 1.8756$ and $\mu_1 \approx 3.4921$. This last value is close
to $7/2$; this is the maximal edge density, which is approached by
taking many copies of $K_6$ glued along a common edge. It turns out
that $B(x,y)$ has exponent $3/2$ when $y<y_0$, $5/2$ when
$y_0<y<y_1$, and again $3/2$ for $y_1 < y$.
\end{itemize}
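The maximal edge density $7/2$ mentioned in the last example can be checked by direct counting: gluing $j$ copies of $K_6$ along a common edge yields $2+4j$ vertices and $1+14j$ edges, since each copy contributes $4$ new vertices and $14$ of its $15$ edges (the remaining edge is the shared one). A short sketch:

```python
def glued_k6(j):
    """Vertex and edge counts of j copies of K_6 glued along one common edge."""
    vertices = 2 + 4 * j   # the two shared endpoints plus 4 new vertices per copy
    edges = 1 + 14 * j     # the shared edge plus 14 new edges per copy
    return vertices, edges

assert glued_k6(1) == (6, 15)   # a single K_6
v, e = glued_k6(10**6)
print(e / v)                    # approaches 7/2 = 3.5 as j grows
```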
\section{Introduction}
Over the last few years, throughout the field of high energy physics (HEP), we have witnessed
an enormous effort committed to migrating many popular procedural Monte Carlo Generators into
their C++ equivalents designed using object-oriented methodologies.
Well-known examples are the
GEANT \cite{Agostinelli:2002hh},
HERWIG \cite{Bahr:2008pv} and
PYTHIA \cite{Sjostrand:2007gs} Monte Carlo Generators.
This reflects a radical change in our approach to scientific computing.
Along with the eternal requirement that the modeled physics be correct and extensively
validated with external data, the evolving nature of computing in HEP has introduced
new requirements.
These requirements relate to the way large HEP software systems are developed and maintained,
by wide geographically-spread collaborations over a typical time-span of $\sim$25 years
during which they will undergo many (initially unforeseen) extensions and modifications to
accommodate new needs. This puts a stress on software qualities such as re-usability, maintainability,
robustness, modularity and extensibility. Software engineering provides many well proven techniques
to address these requirements and thereby improves the quality and lifetime of HEP software.
In neutrino physics, the requirements for a new physics generator are more challenging
for three reasons: the lack of a `canonical' procedural generator, theoretical and phenomenological
challenges in modeling few-GeV neutrino interactions, and the rapidly evolving experimental and
theoretical landscape.
Neutrinos come from many sources and a variety of experiments have
been mounted to measure their properties. These experiments have
complicated detectors composed of many elements and the neutrinos
have many flavors and a wide energy spectrum (from $\sim$ 1 MeV
to $\sim$ 1 PeV). Our long-term goal
is for GENIE to become a `canonical' neutrino event generator
with wide applicability.
The origins of the code come from the Soudan experiment \cite{Gallagher:1998qg} and recent
application has been primarily to MINOS \cite{Michael:2006rx}. Thus, emphasis has been given
to the few-GeV energy range, the challenging boundary between the
non-perturbative and perturbative regimes. These are relevant for
current and near future long-baseline precision neutrino oscillation
experiments using accelerator-made beams, one of the focuses
of high energy physics. GENIE development over the next five years will be driven
by the upcoming generation of accelerator experiments including T2K \cite{Wark:2008zz},
NoVA \cite{Ayres:2004js}, Minerva \cite{Schulte:2006db},
MicroBooNE \cite{Soderberg:2009rz} and ArgoNEUT \cite{Soderberg:2009qt}.
These developments are well underway and the code is being used successfully in each of these
experiments. The present version provides comprehensive neutrino
interaction modelling in the energy range from $\sim$100 MeV to a few hundred GeV.
Results can be obtained and will be qualitatively correct for any nuclear target.
GENIE\footnote {
\em GENIE \em stands for
\em\underline{G}\em enerates
\em\underline{E}\em vents for
\em\underline{N}\em eutrino
\em\underline{I}\em nteraction
\em\underline{E}\em xperiments
} is a ROOT-based \cite{Brun:1997pa} Neutrino MC Generator.
It was designed using object-oriented methodologies and developed entirely in C++ over a
period of more than three years, from 2004 to 2007.
Its first official physics release (v2.0.0) was made available in August 2007 and, at the time
of writing this article, the latest available version was v2.4.4. This article also describes
v2.6.0, which will be released shortly.
GENIE has already been adopted by the majority of neutrino experiments, including
those using the JPARC and NuMI neutrino beamlines, and will be an important physics tool
for the exploitation of the world accelerator neutrino program.
The project is supported by a group of physicists from all major neutrino experiments
operating in this energy range, establishing GENIE as a major HEP event generator collaboration.
Many members of the GENIE collaboration have extensive experience in developing
and maintaining the legacy Monte Carlo Generators that GENIE strives to replace,
which guarantees knowledge exchange and continuation. The default set of physics models in GENIE has
adiabatically evolved from those in the NEUGEN \cite{Gallagher:2002sf} package,
which was used as the event generator by numerous experiments over the past decade.
This article will discuss the paradigm shift brought about by GENIE in neutrino physics simulations.
In Sec. \ref{sec:bridge} we describe the unique challenges facing neutrino simulations in more detail.
Section \ref{sec:physics}
gives a brief overview of the physics models available in GENIE. Section \ref{future} gives
a brief discussion of upgrades in progress.
Section \ref{sec:design} describes
the object-oriented design of GENIE.
Section \ref{sec:apps} describes the GENIE applications and utilities available
for simulation and analysis tasks. Section \ref{sec:collab} describes the structure of the GENIE
collaboration and
Section \ref{sec:avail} describes code availability,
distribution, supported platforms, external dependencies, releases, and license.
\section{Neutrino Interaction Simulation: Challenges and Significance}
\label{sec:bridge}
Neutrinos have played an important role in particle physics since their discovery
half a century ago. They have been used to elucidate the structure of the electroweak
symmetry groups, to illuminate the quark nature of hadrons, and to confirm our models of
astrophysical phenomena. With the discovery of neutrino oscillations
using atmospheric, solar, accelerator, and reactor neutrinos, these elusive particles
now take center stage as the objects of study themselves. Precision measurements of the lepton mixing matrix,
the search for lepton charge-parity (CP) violation, and the determination of the neutrino masses and
hierarchy will be major efforts in HEP for several decades.
The cost of this next generation of experiments will be
significant, typically tens to hundreds of millions of dollars.
A comprehensive, thoroughly tested neutrino event generator package
plays an important role in the design and execution of these experiments, since this tool is used
to evaluate the feasibility of proposed projects and estimate their physics impact, make
decisions about detector design and optimization, analyze the collected data samples, and
evaluate systematic errors. With the advent of high-intensity neutrino beams
from proton accelerators, we have entered a new era of
high-statistics, precision neutrino experiments which
will require a new level of accuracy
in our knowledge, and simulation, of neutrino interaction physics \cite{Harris:2004iq}.
While object-oriented physics generators in other fields of high energy physics were
evolved from well established legacy systems, in neutrino physics no such `canonical' MC exists.
Until quite recently, most neutrino experiments developed their own neutrino event generators.
This was due partly to the wide variety of energies, nuclear targets, detectors, and physics topics being
simulated.
Without doubt these generators,
the most commonly used of which have been GENEVE \cite{Cavanna:2002se}, NEUT \cite{Hayato:2002sd},
NeuGEN \cite{Gallagher:2002sf}, NUANCE \cite{Casper:2002sd} and NUX \cite{Rubbia:2001},
played an important role in the design and exploration of the previous and current generation
of accelerator neutrino experiments.
Tuning on the basis of unpublished data from each group's own experiment has not been unusual,
making it virtually impossible to perform a global, independent evaluation of the state of the art
in neutrino interaction physics simulations.
Moreover, limited manpower and the fragility of the overextended software architectures
meant that many of these legacy physics generators were not keeping
up with the latest theoretical ideas and experimental measurements.
A more recent development in the field has been the direct involvement of theory groups in the
development of neutrino event generators, such as the NuWRO \cite{Juszczak:2005zs}
and GiBUU \cite{Leitner:2006ww} packages, and the inclusion of neutrino scattering in
the long-established FLUKA hadron scattering package \cite{Fasso:2003xz}.
Simulating neutrino interactions in the energy range of interest to current and near-future experiments poses significant challenges.
This broad energy range bridges the perturbative
and non-perturbative pictures of the nucleon and a variety of scattering mechanisms are important.
In many areas, including elementary cross sections, hadronization models, and nuclear physics, one is required
to piece together models with different ranges of validity in order to generate events over all of the available phase space.
This inevitably introduces challenges in merging and tuning models, making sure that double counting and discontinuities
are avoided. In addition there are kinematic regimes which are outside the stated range of validity of all available models,
in which case we are left with the challenge of developing our own models or deciding which model best extrapolates into
this region.
An additional fundamental problem in this energy range is a lack of data. Most simulations have been tuned to bubble chamber
data taken in the 1970s and 1980s. Because of the limited size of the data samples (important exclusive channels
might only contain a handful of events), and the limited coverage
in the space of ($\nu/\overline{\nu}, E_\nu$, A), substantial uncertainties exist in numerous aspects of
the simulations.
The use cases for GENIE are also informed by the experiences of the developers and users
of the previous generation of procedural codes.
Dealing with these substantial model uncertainties has been an important analysis challenge for many recent experiments.
The impact of these uncertainties on physics analyses has been evaluated in detailed systematics studies, and in some
cases the models have been fit directly to experimental data to reduce systematics. These `downstream' simulation-related
studies can often be among the most challenging and time-consuming in an analysis.
To see the difficulties facing the current generation of neutrino experiments, one can
look no further than the K2K and MiniBooNE experiments. Both of these experiments have
measured a substantially different Q$^2$ distribution for their quasielastic-like events
when compared with their simulations, which employ a standard Fermi gas nuclear model
\cite{Ishida:2002xd,miniBoone:2007ru}.
The disagreement between nominal Monte Carlo and data is quite large - in the lowest Q$^2$
bin of MiniBooNE the deficit in the data is around 30\% \cite{miniBoone:2007ru}. It seems likely that the discrepancies seen by both experiments have a common origin.
However, the two experiments have been able to obtain internal consistency with very different
model changes - the K2K experiment does this by eliminating the charged current (CC) coherent contribution in the
Monte Carlo \cite{Sanchez:2006hp} and the MiniBooNE experiment does this by modifying certain parameters in their
Fermi Gas model \cite{miniBoone:2007ru}. Another example of the rapidly evolving nature of this field
is the recently reported excess of low energy electron-like events by the MiniBooNE collaboration \cite{AguilarArevalo:2008rc}.
These discrepancies have generated significant new theoretical
work on these topics over the past several years \cite{Paschos:2005km,
Singh:2006bm,AlvarezRuso:2007tt,Bodek:2007wb,Harvey:2007rd,Buss:2007ar,Benhar:2005dj,Amaro:2006if}.
The situation is bound to become even more interesting, and complicated, in the coming decade,
as new high-statistics experiments begin taking data in this energy range.
Designing a software system that can be responsive to this rapidly
evolving experimental and theoretical landscape is a major challenge.
In this paper we will describe the ways in which the GENIE neutrino event generator addresses these
challenges. These solutions rely heavily on the power of modern software engineering, particularly
the extensibility, modularity, and flexibility of object oriented design, as well as the combined
expertise and experience of the collaboration with previous procedural codes.
\section{Neutrino Interaction Physics models in GENIE}
\label{sec:physics}
The set of physics models used in GENIE incorporates the dominant scattering mechanisms
from several MeV to several hundred GeV and is appropriate for any neutrino
flavor and target type. Over this energy range, many different physical
processes are important.
These physics models can be
broadly categorized into nuclear physics models, cross section models, and hadronization models.
The neutrino-nucleus interaction involves a large variety of processes, all of which
must be modeled to get an accurate description of the experimental signature
of any detector and its many components. Since most theoretical models
describe a small subset of these processes, GENIE must include many models.
The broad energy range and the many nuclei to be covered force choosing
models that have very broad applicability.
The particle in the nucleus with which the neutrino interacts depends strongly on
the energy. At high energies ($E_\nu >$10 GeV) neutrinos interact with a single quark
inside a nucleon (neutron or
proton); the code must model this interaction and the distribution of the residual quarks.
At lower energies, the struck particles are neutrons and protons. The neutrino tends to
strike a single nucleon (impulse approximation) which is affected by the nuclear medium
in which it resides.
In the high energy regime, the large body of neutrino-nucleon data is sufficient for
development of a full model. At lower energies, neutrino-nucleon data has been
used for the basic process and nuclear models developed for other probes (especially
electrons) are adopted.
The two most recent major developments have been in the transition region
and in the final-state interactions (FSI) model.
In physics model development for GENIE we have been forced
to pay particular attention to this `transition region', as for few-GeV
experiments it dominates the event rate.
In particular the boundaries between regions of validity of
different models need to be treated with care in order to
avoid theoretical inconsistencies,
discontinuities in generated distributions, and double-counting.
Treatment of FSI involves many aspects of nuclear physics and the strong interaction.
In many events the final-state interactions alter the topology and kinematics of the
hadrons produced by the neutrino. There are many effects to include
and some dispute about the right techniques to use. FSI
treatment is therefore one of the largest differences among models of the neutrino-nucleus
interaction.
In this brief section we will describe the models available in GENIE
and the ways in which we combine models to cover
regions of phase space where clear theoretical or empirical guidance is lacking.
\subsection{Nuclear Physics Model}
The importance of the nuclear model depends strongly on the kinematics of the reaction.
Nuclear physics plays a large role in every aspect of neutrino scattering simulations
at few-GeV energies and introduces coupling between several aspects of the simulation.
The relativistic Fermi gas (RFG) nuclear model is used for all processes.
GENIE uses the version of Bodek and Ritchie which has been modified
to incorporate short-range nucleon-nucleon correlations \cite{Bodek:1981wr}. This is simple,
yet applicable across a broad range of target atoms and neutrino energies. The best
tests of the RFG model come from electron scattering
experiments \cite{Moniz:1971mt}. At high energies, the nuclear model need only account for
broad features such as shadowing and similar effects.
At the lower end of the GENIE energy range, the impulse
approximation works very well and the RFG is often successful.
The nuclear medium gives the struck nucleon a momentum and average binding energy
which have been determined in electron scattering experiments.
Mass densities are taken from review articles \cite{DeJager:1987qc}. For $A<$20, the
modified Gaussian density parameterization is used. For heavier nuclei, the
two-parameter Woods-Saxon density function is used. Thus, the model can be used
for {\em all} nuclei. Presently, fit parameters for selected nuclei are used
with interpolations for other nuclei. All isotopes of a given element are assumed
to have the same density.
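As a small illustration of the heavier-nuclei parameterization, the sketch below (illustrative Python, not GENIE code; the ${}^{56}$Fe radius and diffuseness values are assumed, typical textbook numbers) evaluates a two-parameter Woods-Saxon density:

```python
import math

# Toy sketch of the two-parameter Woods-Saxon density,
#   rho(r) = rho0 / (1 + exp((r - c)/a)),
# used for heavier nuclei. The radius c and surface diffuseness a
# below are assumed, illustrative values for 56Fe (in fm).

def woods_saxon(r, c=4.1, a=0.55, rho0=1.0):
    """Relative nuclear density at radius r (fm)."""
    return rho0 / (1.0 + math.exp((r - c) / a))
```

At $r=c$ the density is exactly half its central value, which is the defining property of this parameterization.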
It is well known that scattering kinematics for nucleons in a nuclear environment are
different from those obtained in scattering from free nucleons.
For quasi-elastic and elastic scattering, Pauli blocking is applied as described in
Sec. \ref{sec:xsec}.
For nuclear targets a nuclear modification factor is included to account for
observed differences between nuclear and free nucleon structure functions
which include shadowing, anti-shadowing, and the EMC effect \cite{Bodek:2002ps}.
Nuclear reinteractions of produced hadrons are simulated using a cascade Monte Carlo
which will be described in more detail in a following section. The struck nucleus
is generally left in a highly excited state and will typically de-excite by emitting
nuclear fragments, nucleons, and photons.
At present de-excitation photon emission is simulated only for
oxygen \cite{Ejiri:1993rh, Kobayashi:2005ut} due to the significance of these
3-10 MeV photons in energy reconstruction at water Cherenkov detectors.
Future versions of the generator will handle de-excitation photon emission
from additional nuclear targets.
\subsection{Cross section model}
\label{sec:xsec}
The cross section model provides the calculation of the differential and total cross sections.
During event generation the total cross section is used together with the flux
to determine the energies of interacting neutrinos. The cross sections for specific
processes are then used to determine which interaction type occurs, and the
differential distributions for that interaction
model are used to determine the event kinematics.
While the differential distributions must be calculated event-by-event, the total
cross sections can be pre-calculated and stored for use by many jobs sharing the same
physics models. Over this energy range neutrinos can scatter off a variety of
different `targets' including the nucleus (via coherent scattering), individual nucleons,
quarks within the nucleons, and atomic electrons.
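The selection logic described above, in which the flux weighted by the pre-computed total cross section picks the energy of the interacting neutrino and the per-process cross sections then pick the interaction type, can be sketched as follows (illustrative Python with invented flux and cross-section numbers, not the GENIE implementation):

```python
import random

# Toy event selection sketch (not GENIE code): all numbers invented.
energies = [1.0, 2.0, 3.0]              # neutrino energy bins (GeV)
flux     = [5.0, 3.0, 1.0]              # toy flux, arbitrary units
sigma    = {"QEL": [0.8, 0.5, 0.3],     # toy per-process total
            "RES": [0.3, 0.6, 0.5],     # cross sections per bin
            "DIS": [0.1, 0.7, 1.2]}

def interaction_weights():
    """P(interacting nu has energy E_i) ~ flux(E_i) * sigma_tot(E_i)."""
    sigma_tot = [sum(s[i] for s in sigma.values())
                 for i in range(len(energies))]
    return [f * st for f, st in zip(flux, sigma_tot)]

def pick_event(rng):
    """Select the interacting neutrino energy, then the process."""
    i = rng.choices(range(len(energies)), weights=interaction_weights())[0]
    proc = rng.choices(list(sigma), weights=[sigma[p][i] for p in sigma])[0]
    return energies[i], proc
```

The pre-computation of the total cross sections is what allows the first weighting step to be shared across many jobs using the same physics models.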
{\bf Quasi-Elastic Scattering:}
Quasi-elastic scattering (e.g. $\nu_\mu + n \rightarrow \mu^- + p$)
is modeled using an implementation of the Llewellyn-Smith
model \cite{LlewellynSmith:1972zm}. In this model the hadronic weak current
is expressed in terms of the most general Lorentz-invariant form factors.
Two are set to zero as they violate G-parity. Two vector form factors can be
related via CVC to electromagnetic form factors which are measured over a
broad range of kinematics in electron elastic scattering experiments.
Several different parametrizations of these electromagnetic form factors including
Sachs \cite{Sachs:1962zzc}, BBA2003 \cite{Budd:2003wb} and BBBA2005 \cite{Bradford:2006yz}
models are available with BBBA2005 being the default.
Two form factors, the pseudo-scalar and the axial vector, remain. The pseudo-scalar
form factor is assumed to have the form suggested by the partially conserved axial current
(PCAC) hypothesis \cite{LlewellynSmith:1972zm},
which leaves the axial
form factor F$_A$(Q$^2$) as the sole remaining unknown quantity.
F$_A(0)$ is well known from
measurements of neutron beta decay. The Q$^2$ dependence of this form factor can
only be determined in neutrino experiments and has been the focus of a large amount
of experimental work over several decades. In GENIE a dipole form is assumed, with the
axial vector mass m$_A$ remaining as the sole free parameter, with a default value of 0.99 GeV/c$^2$.
For nuclear targets, a suppression factor is included, obtained from an analytic calculation of
the rejection factor in the Fermi gas model, based on the simple requirement that the momentum
of the outgoing nucleon exceed the Fermi momentum $k_F$ for the nucleus in question.
Typical values of $k_F$ are 0.221 GeV/c for nucleons in ${}^{12}$C,
0.251 GeV/c for protons in ${}^{56}$Fe, and
0.256 GeV/c for neutrons in ${}^{56}$Fe.
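GENIE uses an analytic calculation of this rejection factor; the same requirement can also be checked numerically. The sketch below (illustrative Python, not GENIE code) samples the initial nucleon momentum uniformly inside the Fermi sphere and counts the events in which the outgoing nucleon momentum exceeds $k_F$:

```python
import math
import random

# Toy Monte Carlo estimate of Pauli suppression (not GENIE's
# analytic calculation): accept an event only if the outgoing
# nucleon momentum |p + q| exceeds the Fermi momentum kF.

def pauli_suppression(q, kF=0.221, n=20000, seed=7):
    """Surviving fraction for momentum transfer q along +z (GeV/c)."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        # sample a uniform point inside the Fermi sphere
        while True:
            p = [rng.uniform(-kF, kF) for _ in range(3)]
            if math.dist(p, (0.0, 0.0, 0.0)) <= kF:
                break
        p[2] += q                                 # add momentum transfer
        if math.dist(p, (0.0, 0.0, 0.0)) > kF:    # Pauli condition
            passed += 1
    return passed / n
```

For $q \gg 2k_F$ no events are blocked, while at $q=0$ every event is blocked, reproducing the expected limits of the suppression factor.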
{\bf Elastic Neutral Current Scattering:}
Elastic neutral current processes are computed according to the
model described by Ahrens et al. \cite{Ahrens:1986xe}, where the axial form factor is given by:
\begin{equation}
G_A(Q^2) = \frac{1}{2} \frac{G_A(0)}{(1+Q^2/M_A^2)^2}(1+\eta).
\end{equation}
The adjustable parameter $\eta$ includes possible isoscalar contributions to the
axial current, and the GENIE default value is $\eta=0.12$.
For nuclear targets the same reduction factor described above is used.
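The axial form factors described above can be written down directly. The sketch below (illustrative Python, not GENIE code) implements the CC dipole ansatz and the Ahrens et al. NC form factor with the default parameter values given in the text; the normalization F$_A$(0) is an assumed textbook value:

```python
# Sketch of the axial form factors described above (illustrative
# Python, not GENIE code). m_A = 0.99 GeV/c^2 and eta = 0.12 are the
# defaults quoted in the text; F_A(0) = -1.267 is an assumed value.

FA0 = -1.267

def fa_dipole(Q2, mA=0.99):
    """CC quasi-elastic axial form factor, dipole ansatz (Q2 in GeV^2)."""
    return FA0 / (1.0 + Q2 / mA ** 2) ** 2

def ga_nc(Q2, mA=0.99, eta=0.12):
    """NC elastic axial form factor of the Ahrens et al. model."""
    return 0.5 * fa_dipole(Q2, mA) * (1.0 + eta)
```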
{\bf Baryon Resonance Production:}
The production of baryon resonances in neutral and charged current channels is
included with the Rein-Sehgal model \cite{Rein:1981wg}.
This model employs the Feynman-Kislinger-Ravndal \cite{Feynman:1971wr} model of baryon resonances, which
gives wavefunctions for the resonances as
excited states of a 3-quark system in a relativistic harmonic oscillator potential with spin-flavor symmetry.
In the Rein-Sehgal paper the helicity amplitudes for the FKR model are computed and used to construct
the cross sections for neutrino-production of the baryon resonances.
Of the 18 resonances of the original paper we include the 16 that are listed
as unambiguous in the latest PDG baryon tables, and all resonance parameters have
been updated.
In our implementation of the Rein-Sehgal model interference between neighboring resonances has been
ignored. Lepton mass terms are not included in the calculation of the differential cross section, but the
effect of the lepton mass on the phase space boundaries is taken into account.
For tau neutrino charged current interactions an overall correction factor to the
total cross section is
applied to account for neglected elements (pseudoscalar form factors and lepton mass terms) in the original model.
The default value for the resonance axial vector mass m$_A$ is 1.12 GeV/c$^2$, as determined in the global fits
carried out in Reference \cite{Kuzmin:2006dh}.
{\bf Coherent Neutrino-Nucleus Scattering:}
Coherent scattering results in the production of forward going pions in both charged current
($\nu_\mu + A \rightarrow \mu^- + \pi^+ + A$) and neutral current
($\nu_\mu + A \rightarrow \nu_\mu + \pi^0 + A$) channels.
Coherent neutrino-nucleus interactions are modeled according to the Rein-Sehgal model \cite{Rein:1983pf}.
Since the coherence condition requires a small momentum transfer to the target nucleus,
it is a low-Q$^2$ process which is related via PCAC to the pion field.
The Rein-Sehgal model begins from the PCAC form
at Q$^2$=0, assumes a dipole dependence for non-zero Q$^2$, with $m_A=1.00$ GeV/c$^2$,
and then calculates the relevant pion-nucleus
cross section from measured data on total and inelastic pion scattering from protons and deuterium \cite{Yao:2006px}.
The GENIE implementation uses the modified PCAC formula described in a recent revision
of the Rein-Sehgal model \cite{Rein:2006di} that includes lepton mass terms.
{\bf Non-Resonance Inelastic Scattering:}
Deep (and not-so-deep) inelastic scattering (DIS) is calculated in an effective leading order
model using the modifications
suggested by Bodek and Yang \cite{Bodek:2002ps} to describe scattering at low Q$^2$.
In this model higher twist and target mass corrections are
accounted for through the use of a new scaling variable and modifications to the low Q$^2$
parton distributions.
The cross sections are computed at a fully partonic level (the ${\nu}q{\rightarrow}lq'$
cross sections are computed for all relevant sea and valence quarks).
The longitudinal structure function is taken into account using the Whitlow R
($R=F_{L}/2xF_{1}$) parameterization \cite{Whitlow:1990gk}.
The default parameter values are those given in \cite{Bodek:2002ps}, which are determined
based on the GRV98 LO parton distributions \cite{Gluck:1998xa}.
An overall scale factor of 1.032 is applied to the predictions of the Bodek-Yang model
to achieve agreement with the measured value of the neutrino cross section at high energy (100 GeV).
This factor is necessary since the Bodek-Yang model treats axial and vector form modifications
identically and would therefore not be expected to reproduce neutrino data perfectly. This overall
DIS scale factor needs to be recalculated when elements of the cross section model are changed.
The same model can be extended to low energies; it is the model used for the nonresonant
processes that compete with resonances in the few-GeV region.
{\bf Quasi-Elastic Charm Production:}
QEL charm production is modeled according to the Kovalenko local duality inspired model \cite{Kovalenko:1990zi}
tuned by the GENIE authors to recent NOMAD data \cite{Bischofberger:2005ur}.
{\bf Deep-Inelastic Charm Production:}
DIS charm production is modeled according to the Aivazis, Olness and Tung model \cite{Aivazis:1993kh}.
Charm-production fractions for neutrino interactions are taken from \cite{DeLellis:2002pr}, and utilize both
Peterson \cite{Peterson:1982ak}
and Collins-Spiller \cite{Collins:1984ms} fragmentation functions, with Peterson fragmentation functions being the default.
The charm mass is adjustable and is set to 1.43 GeV/c$^2$ by default.
{\bf Inclusive Inverse Muon Decay:}
The inclusive Inverse Muon Decay cross section is computed using an implementation of the Bardin and Dokuchaeva model \cite{Bardin:1986dk},
which takes into account all one-loop radiative corrections.
{\bf Neutrino-Electron Elastic Scattering:}
The cross sections for all ${\nu}e^{-}$ scattering channels other than Inverse Muon Decay are computed according to \cite{Marciano:2003eq}.
Inverse Tau decay is neglected.
\subsubsection{Modeling the transition region}
\label{sec:transition}
As discussed, for example, by Kuzmin, Lyubushkin and Naumov \cite{Kuzmin:2005bm} one typically
considers the total ${\nu}N$ CC scattering cross section as
\begin{center}
\(\sigma^{tot} = \sigma^{QEL} \oplus
\sigma^{1\pi} \oplus \sigma^{2\pi} \oplus ...
\oplus \sigma^{1K} \oplus ... \oplus \sigma^{DIS} \)
\end{center}
In the absence of a model for exclusive inelastic multi-particle neutrinoproduction,
the above is usually approximated as
\begin{center}
\(\sigma^{tot} = \sigma^{QEL} \oplus \sigma^{RES} \oplus \sigma^{DIS} \)
\end{center}
assuming that all exclusive low multiplicity inelastic reactions proceed primarily
through resonance neutrinoproduction.
For the sake of simplicity, small contributions to the
total cross section in the few GeV energy range, such as
coherent and elastic ${\nu}e^{-}$ scattering, were omitted from the expression above.
In this picture, one must be careful to avoid
double counting the low multiplicity inelastic reaction cross sections.
In GENIE the total cross section is constructed along the same lines,
adopting the procedure developed in NeuGEN \cite{Gallagher:2002sf} to avoid double counting.
The total inelastic differential cross section is computed as
\begin{center}
\(\frac{\displaystyle d^{2}\sigma^{inel}}{\displaystyle dQ^{2}dW} =
\frac{\displaystyle d^{2}\sigma^{RES}}{\displaystyle dQ^{2}dW} +
\frac{\displaystyle d^{2}\sigma^{DIS}}{\displaystyle dQ^{2}dW}\)
\end{center}
The term $d^{2}\sigma^{RES}/dQ^{2}dW$ represents the contribution from all
low multiplicity inelastic channels proceeding via resonance production. This term
is computed as
\begin{center}
\(\frac{\displaystyle d^{2}\sigma^{RES}}{\displaystyle dQ^{2}dW} =
{\sum\limits_k} \bigl( \frac{\displaystyle d^{2}\sigma^{RS}}{\displaystyle dQ^{2}dW} \bigr)_{k}
\cdot {\Theta}(W_{cut}-W)\)
\end{center}
where the index $k$ runs over all baryon resonances taken into account,
$W_{cut}$ is a configurable parameter and $(d^{2}\sigma^{RS}/dQ^{2}dW)_{k}$ is
the Rein-Sehgal model prediction for the $k^{th}$ resonance cross section.
The DIS term of the inelastic differential cross section is expressed in terms
of the differential cross section predicted by the Bodek-Yang model appropriately
modulated in the ``resonance-dominance'' region $W<W_{cut}$ so that the RES/DIS
mixture in this region agrees with inclusive cross section data
\cite{MacFarlane:1983ax,Berge:1987zw, Ciampolillo:1979wp, Colley:1979rt,
Bosetti:1981ip, Mukhin:1979bd, Baranov:1979sx, Barish:1978pj, Baker:1982ty, Eichten:1973cs}
and exclusive
1-pion \cite{Lerche:1978cp, Ammosov:1988xb, Grabosch:1988gw, Bell:1978qu, Kitagaki:1986ct,
Allen:1980ti, Allen:1985ti, Allasia:1990uy, Campbell:1973wg, Barish:1978pj,Radecky:1981fn} and
2-pion \cite{Day:1984nf, Kitagaki:1986ct}
cross section data:
\begin{center}
\begin{eqnarray*}
\frac{d^{2}\sigma^{DIS}}{dQ^{2}dW} &=&
\frac{d^{2}\sigma^{DIS,BY}}{dQ^{2}dW} \cdot {\Theta}(W-W_{cut}) + \\
&+& \frac{d^{2}\sigma^{DIS,BY}}{dQ^{2}dW} \cdot {\Theta}(W_{cut}-W) \cdot {\sum\limits_m}f_{m}
\end{eqnarray*}
\end{center}
In the above expression, $m$ refers to the multiplicity of the hadronic system
and, therefore, the factor $f_{m}$ relates the total calculated DIS cross section
to the DIS contribution to this particular multiplicity channel.
These factors are computed as $f_{m} = R_{m} {\cdot} P^{had}_{m}$ where $R_{m}$ is
a tunable parameter and $P^{had}_{m}$ is the probability, taken from the
hadronization model,
that the DIS final state hadronic system multiplicity would be equal to $m$.
The approach described above couples the GENIE cross section and hadronic multiparticle
production model \cite{Yang:2007zzt}.
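A schematic of this RES/DIS combination, with invented toy values for $W_{cut}$, $R_{m}$ and $P^{had}_{m}$ (illustrative Python, not the GENIE model code):

```python
# Schematic sketch of the RES/DIS combination described above
# (illustrative Python with invented toy numbers, not GENIE code).
# Below W_cut the resonance sum enters in full while the Bodek-Yang
# DIS contribution is scaled per multiplicity by f_m = R_m * P_m.

W_CUT = 1.7                      # toy transition value (GeV/c^2)
R_M   = {2: 0.1, 3: 1.0}         # tunable parameters (invented)
P_HAD = {2: 0.6, 3: 0.4}         # toy multiplicity probabilities

def d2sigma_inel(d2s_res, d2s_dis_by, W):
    """Toy inelastic differential cross section at invariant mass W,
    given the RES sum and the Bodek-Yang DIS value at that point."""
    if W >= W_CUT:
        return d2s_dis_by                      # Theta(W - W_cut) term
    f = sum(R_M[m] * P_HAD[m] for m in R_M)    # sum over f_m
    return d2s_res + d2s_dis_by * f            # Theta(W_cut - W) term
```

Tuning the $R_{m}$ factors is what allows the low-$W$ RES/DIS mixture to be matched to the inclusive and exclusive pion production data cited above.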
\subsection{Neutrino-induced hadronic multiparticle production modeling}
Neutrino-induced hadronic shower modeling is an important aspect of
simulations for intermediate-energy neutrino experiments, as non-resonant inelastic
scattering becomes the dominant interaction channel for neutrino energies
as low as 1.5 GeV.
Experiments are sensitive to the details of hadronic system modeling in many
different ways.
Experiments making calorimetric measurements of neutrino energy in charged
current reactions are typically calibrated using single particle test beams, which introduces a model dependence in
determining the conversion between detector activity and the energy of neutrino-produced hadronic systems.
Physics analysis can also depend on the prediction of the hadron shower characteristics,
such as shower shapes, energy profile and particle content, primarily for event identification.
A characteristic example is a $\nu_{\mu} \rightarrow \nu_{e}$ appearance analysis, where
the evaluation of backgrounds coming from neutral current (NC) events is quite sensitive to the details
of the NC shower simulation and specifically to the $\pi^{0}$ content of the shower.
It is therefore imperative that the state-of-the-art in shower modeling be included in our
neutrino interaction simulations.
GENIE uses the AGKY hadronization model which was developed for the MINOS experiment \cite{Yang:2007zzt}.
This model integrates an empirical low-invariant mass model with PYTHIA-6 at higher
invariant masses. The transition between these two models takes place over an adjustable
window with the default values of 2.3 GeV/c$^2$ to 3.0 GeV/c$^2$, so as to ensure continuity of all simulated
observables as a function of the invariant mass. For the hadronization of low-mass states the
model proceeds in two phases, first
determining the particle content of the hadronic shower, and secondly determining the 4-momenta of
the produced particles in the hadronic center of mass.
The AGKY low-mass hadronization model generates hadronic systems that typically
consist of exactly one baryon ($p$ or $n$) and any number of $\pi^{+}$, $\pi^{-}$, $\pi^{0}$,
$K^{+}$, $K^{-}$, $K^{0}$, ${\bar{K^{0}}}$ mesons that are kinematically possible and allowed by
charge conservation.
For a fixed hadronic invariant mass and initial state (neutrino and hit nucleon), the
algorithm for generating the hadron shower particles generally proceeds as follows:
\begin{itemize}
\item Compute the average charged hadron multiplicity $<n_{ch}>$ for a given invariant mass (W)
using empirical expressions of the form
$<n_{ch}> = a_{ch} + b_{ch} \ln W^{2}$.
The coefficients $a_{ch}$, $b_{ch}$, which depend on the initial state,
have been determined by bubble chamber experiments \cite{Zieminska:1983bs}.
\item Compute the average total hadron multiplicity $<n_{tot}>$ as $<n_{tot}>=1.5<n_{ch}>$.
\item Using the calculated average hadron multiplicity, generate the actual
hadron multiplicity taking into account that the multiplicity dispersion is
described by the KNO scaling law, ($<n>P(n)=f(n/<n>)$) \cite{Koba:1972ng}.
P(n) is the probability of having $n$ hadrons in the
final state given an expected average of $<n>$, and $f(n/<n>)$ is a universal scaling function.
The KNO scaling is parametrized employing the Levy
\footnote{The Levy function $Levy(z;c) = 2e^{-c}c^{cz+1}/\Gamma(cz+1)$}
function with an input parameter $c_{ch}$ that depends on the initial state and is
treated as a tuning parameter.
\item Generate hadrons up to the generated hadron multiplicity taking into account the hadron
shower charge conservation and the kinematical constraints.
Protons and neutrons are produced in the ratio
2:1 for $\nu p$ interactions, 1:1 for $\nu n$ and $\bar{\nu}p$, and 1:2 for $\bar{\nu}n$ interactions.
Charged mesons are then created in order to balance charge, and the remaining mesons are generated in pairs.
The pair probabilities are 31.33\% ($\pi^{0},\pi^{0}$), 62.67\% ($\pi^{+},\pi^{-}$)
and 6\% strange meson pairs.
The probability of producing a strange baryon via associated production is determined from a
fit to $\Lambda$ production data
\cite{Jones:1992bm,Baker:1986xx,DeProspo:1994ac,Bosetti:1982vk}:
\begin{equation}
P_{hyperon}= a_{hyperon}+ b_{hyperon}\ \ln W^{2}
\end{equation}
\end{itemize}
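The multiplicity steps above can be sketched as follows (illustrative Python; the coefficients $a_{ch}$, $b_{ch}$ and the Levy parameter $c$ are assumed values, not the tuned GENIE parameters):

```python
import math
import random

# Simplified sketch of the AGKY multiplicity steps (illustrative
# Python, not GENIE code). Coefficients are assumed example values.

def levy(z, c):
    """Levy(z;c) = 2 e^{-c} c^{cz+1} / Gamma(cz+1)."""
    return 2.0 * math.exp(-c) * c ** (c * z + 1.0) / math.gamma(c * z + 1.0)

def sample_multiplicity(W, a=0.40, b=1.42, c=7.93, nmax=30, rng=None):
    """Draw a total hadron multiplicity for invariant mass W (GeV)."""
    rng = rng or random.Random(0)
    n_ch  = a + b * math.log(W ** 2)   # average charged multiplicity
    n_tot = 1.5 * n_ch                 # average total multiplicity
    # KNO scaling: <n> P(n) = f(n/<n>), with f the Levy function
    weights = [levy(n / n_tot, c) / n_tot for n in range(2, nmax)]
    return rng.choices(range(2, nmax), weights=weights)[0]
```

The minimum multiplicity of 2 reflects the fact that the hadronic system contains at least the final-state baryon and one meson.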
Fig. \ref{fig:ChMult} shows the
data/model comparisons of the average charged hadron multiplicity $<n_{ch}>$
as a function of the squared hadronic invariant mass for $\nu p$ and $\nu n$ interactions.
Fig. \ref{fig:ChD} shows the
data/model comparisons of the negatively charged hadron multiplicity dispersion $D_{-}$
as a function of the average charged hadron multiplicity $<n_{-}>$
and the reduced dispersion $D_{-}/<n_{-}>$ as a function of the squared hadronic
invariant mass.
The main dynamical feature observed in the study of hadronic systems is that the baryon
tends to go backwards in the hadronic center of mass and
that the produced hadrons have small transverse momentum relative to the direction of momentum transfer.
These features are naturally described in the quark model where the baryon is formed from the diquark
remnant, which goes backwards in the center-of-mass, and transverse momentum is generated solely
through intrinsic parton motion and gluon radiation. At low invariant masses energy-momentum constraints on the
available phase space play a particularly important role.
The most pronounced kinematical feature in this region is that one of the
produced particles (the proton or neutron) is much heavier than the rest (pions and kaons) and
exhibits a strong directional anticorrelation with the momentum transfer.
Our strategy, therefore, is to correctly reproduce the final state nucleon momentum,
and then perform a phase space decay on the remnant system employing, in addition,
a $p_T$-based rejection scheme designed to reproduce the expected meson transverse momentum distribution.
The nucleon momentum is generated using input $p_{T}^{2}$ and $x_{F}=(p_L^*/p_{L max}^*)$ PDFs which are parametrized
based on experimental data \cite{Derrick:1977zi,CooperSarkar:1982ay}. Once the baryon momentum is selected the remaining particles
undergo a phase space decay.
The phase space decay employs a rejection method suggested in \cite{Clegg:1981js},
with a rejection factor $e^{-A p_{T}}$ for each meson, where $p_{T}$ is the momentum
component perpendicular to the current direction. This causes the transverse momentum
distribution of the generated mesons to fall exponentially with increasing $p_{T}$, at a rate
controlled by the adjustable parameter $A$, which has a default value of 3.5 GeV$^{-1}$.
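This rejection factor can be sketched as a simple accept/reject test (illustrative Python, not the GENIE phase-space decay; the current direction is taken as the $z$ axis):

```python
import math
import random

# Sketch of the pT-based rejection described above (illustrative
# Python, not GENIE code). The current direction is taken as the
# z axis, so pT is built from the x and y momentum components.

A = 3.5   # GeV^-1, adjustable suppression parameter (text default)

def accept_pt(px, py, rng):
    """Accept/reject a candidate meson with probability exp(-A * pT)."""
    pT = math.hypot(px, py)
    return rng.random() < math.exp(-A * pT)
```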
One of the remaining challenges in this model, which will be addressed in the future, is to better describe forward and
backward hemisphere multiplicity distributions. The forward/backward multiplicity distributions yield an unphysically rapid
transition to the PYTHIA values, a feature not seen in other recent hadronization models \cite{Nowak:2006xv}.
Fig. \ref{fig:ZFragm} shows the data/model comparisons of the fragmentation function for positively and negatively
charged hadrons.
Two-body hadronic systems are a special case:
their 4-momenta are generated by a simple unweighted phase space decay.
\begin{figure}[ht]
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=17pc]{./chmul.eps}
\caption{
Data/model comparisons of the average charged hadron multiplicity $<n_{ch}>$
shown as a function of the squared hadronic invariant mass for
$\nu p$ and $\nu n$ interactions.
Data are from Refs. \cite{Zieminska:1983bs, Allen:1981vh}.
}
\label{fig:ChMult}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=17pc]{./chD.eps}
\caption{
Data/model comparisons of the negatively charged hadron multiplicity
dispersion $D_{-}$ as a function of the average charged hadron multiplicity $<n_{-}>$
(up) and the reduced dispersion $D_{-}/<n_{-}>$ as a function of the squared hadronic
invariant mass (down).
Data are from Ref. \cite{Zieminska:1983bs}.
}
\label{fig:ChD}
\end{minipage}
\end{figure}
\begin{figure*}[htb]
\includegraphics[width=35pc]{./z.eps}
\caption{
Data/model comparisons of the fragmentation function for
positively and negatively charged hadrons.
Applied cuts: squared hadronic invariant mass $W^{2}$ above 5 GeV$^{2}$/c$^{4}$
and squared 4-momentum transfer $Q^{2}$ above 1 GeV$^{2}$/c$^{2}$.
Data are from Ref. \cite{Allasia:1984ua}.
}
\label{fig:ZFragm}
\end{figure*}
\subsubsection{Intranuclear rescattering}
The hadronization model describes particle production from free targets and is
tuned primarily to bubble chamber data on hydrogen and deuterium
targets \cite{Zieminska:1983bs, Derrick:1977zi, Allen:1981vh,
Ivanilov:1984gh, Grassler:1983ks, Allasia:1984ua, Berge:1978fr, Ammosov:1978vt}.
Hadrons produced in the nuclear environment may rescatter on their way out of the
nucleus, and these reinteractions significantly modify the observable distributions
in most detectors.
It is also well established that hadrons
produced in the nuclear environment do not immediately reinteract with their
full cross section. The basic picture
is that during the time it takes for quarks to materialize as
hadrons, they propagate through the nucleus with a dramatically
reduced interaction probability.
This was implemented in
GENIE as a simple `free step' at the start of the
intranuclear cascade during which no interactions
can occur. The `free step' comes from a formation time of 0.523
fm/c according to the SKAT model \cite{Baranov:1984rv}.
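As an illustration of the formation-zone `free step', the step length can be computed from the SKAT formation time. Boosting the proper formation time into the lab frame by the factor $p/m$ is an assumption of this sketch, not a statement of GENIE's exact implementation:

```python
# Sketch: formation-zone "free step" from a SKAT-style formation time.
# The boost factor p/m is an assumption of this illustration; the
# constant 0.523 fm/c is the SKAT formation time quoted in the text.

TAU0_FM = 0.523  # formation time in fm/c (SKAT)

def free_step_length(p_gev, m_gev):
    """Distance (fm) travelled before interactions switch on.

    Uses L = tau0 * beta * gamma = tau0 * p / m, i.e. the proper
    formation time dilated into the lab frame.
    """
    return TAU0_FM * p_gev / m_gev

# Example: a 1 GeV/c pion (m = 0.1396 GeV) travels a few fm
# before its full interaction cross section is switched on.
step = free_step_length(1.0, 0.1396)
```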
Intranuclear hadron transport in GENIE is handled by a subpackage called INTRANUKE.
INTRANUKE is an intranuclear cascade simulation and has gone through numerous
revisions \cite{dytman-ladek} since the original version was developed for use by the Soudan 2
Collaboration \cite{Merenyi:1992gf}. The sensitivity of a particular experiment
to intranuclear rescattering depends strongly on the detector technology,
the energy range of the neutrinos, and the physics measurement being made.
INTRANUKE simulates rescattering of pions and nucleons in the nucleus.
In principle one would like to have a fully realistic nuclear model which
accurately describes the full range of processes to model particle production with
energies as low as 1 MeV to ensure
that the simulations are suitable for any conceivable experiment. Nuclear
simulations of this type are themselves highly complex packages and the integration
of these packages with GENIE is an area of active work. An alternative approach
is to develop a simpler nuclear model, in the context of a particular experiment,
and ensure that the relevant physics for that experiment is correctly described.
This approach has the advantage of yielding a far simpler code, which is understood
by the experimenters. This has particular advantages for the study of systematic
errors and the development of ancillary code like reweighting packages.
The current version was optimized for use by the
MINOS experiment. For this experiment the task of developing an intranuclear
rescattering model is simplified because the detector is composed largely
of a single element, iron,
and the detector is designed to make a calorimetric
energy measurement rather than track individual particles.
For the oscillation measurement of MINOS \cite{Michael:2006rx} the primary goal is
ensuring that the `missing energy' lost in the nuclear environment is being reliably
simulated. The model has applicability to almost all nuclei and a
wide range of energies.
To handle a wide range of neutrino energies, GENIE defines all the relevant cross sections
for hadrons up to 1.8 GeV kinetic energy.
For higher hadron energy, the underlying cross sections are assumed to be constant
at the 1.8 GeV value. This is a good approximation to the actual values.
The code then has a description of all hadrons coming from neutrinos at all relevant energies.
The simulation tracks particles through the nucleus in steps of 0.05 fm. For each
particle only one reinteraction is allowed, and the simulation consists of the
following steps:
\begin{enumerate}
\item
{\bf Mean Free Path:} In order to determine if the particle interacts in a
particular step the mean free path ($\lambda$) is calculated based on the local density
of nucleons ($\rho(r)$) \cite{DeJager:1987qc} and a partial wave analysis of the large body of
hadron-nucleon cross sections ($\sigma_{hN}$) \cite{Arndt:2003if}:
\begin{equation}
\lambda(r,E_h)=\frac{1}{\sigma_{hN}(E_h)\rho(r)}.
\end{equation}
Here, $\sigma_{hN}$ is the isospin averaged cross section for the
propagating hadron (with energy $E_h$) interacting with a nucleon and $\rho(r)$ is the
matter density of nucleons at the position of the propagating hadron.
We use charge densities, which are well measured and known to be very similar
to the matter densities. The code presently tracks pions and nucleons.
Isospin relations are used to estimate $\pi^0-$nucleon reactions.
All nuclei heavier than oxygen are modeled with a Woods-Saxon
density distribution and lighter nuclei are modeled with a
modified Gaussian distribution.
One difficulty in this approach is that our treatment is using a semiclassical
model to describe a quantum mechanical process. This poses particular difficulty
in describing elastic scattering which dominates the total cross section at
low energy. This wave/particle distinction depends on energy,
with lower-energy hadron-nucleus scattering being more wave-like. To account for this
we increase the size of the nuclear density distribution through which the
particle is tracked by an amount
\begin{equation}f\frac{hc}{p},\end{equation}
where $f$ is an adjustable dimensionless parameter set to 0.5 for pions and
1.1 for nucleons in the current default.
We use the isospin-averaged total cross sections for pions and nucleons.
\item
{\bf Determining the Interaction Type:}
Hadron-nucleus interactions occur through several processes, each with an associated
cross section. For all hadrons these include $\sigma_{elas}$ for elastic scattering
(residual nucleus in the ground state), $\sigma_{inel}$ for inelastic scattering
(residual nucleus in an excited state, typically with single nucleon emission), and
$\sigma_{cex}$ for single charge exchange (the outgoing hadron changes charge,
typically with single nucleon emission). For pions, emission of 2 or more nucleons with no pion in
the final state is called absorption ($\sigma_{abs}$); for nucleons, a final state
with 3 or more nucleons is called multi-nucleon knockout ($\sigma_{ko}$).
For low energy nucleons, the knockout mechanism dominates.
At higher energies (above about 400 MeV for pions and 600 MeV for nucleons),
the probability of pion production ($\sigma_{piprod}$) becomes important.
The total cross section ($\sigma_{tot}$) is the sum of all component cross sections
and the reaction cross section ($\sigma_{reac}$) is the sum of cross section for
all inelastic reactions,
\begin{equation}
\sigma_{reac}=\sigma_{cex}+\sigma_{inel}+\sigma_{abs}+\sigma_{piprod}=\sigma_{tot}-\sigma_{elas}.
\end{equation}
This equation is specific to pions; $\sigma_{abs}$ is replaced by $\sigma_{ko}$ for nucleons.
Once it has been determined that a hadron reinteracts in the nucleus, the type of the
interaction is determined based on the measured cross sections for the above listed
processes. Cross sections for kinetic energy less than 1 GeV are used and they
are assumed constant above 1 GeV. Where data is sparse, cross section estimates are taken
from calculations of the CEM03 group \cite{Mashnik:2005ay}. Since only one reinteraction
is allowed, the effect of additional reactions with the rest of the nucleus must be
included here. The distribution of final states is optimized for iron, but valid
for all targets. Figure~\ref{fig:pife} shows the default INTRANUKE
compared with $\pi^+$ data for $\sigma_{tot}$ and $\sigma_{reac}$ for iron and carbon.
These are examples of many successful comparisons.
\item
{\bf Final State Products:}
Once the interaction type has been determined, the four-vectors of final
state particles need to be generated. Where possible these distributions
are parametrized from data or from the output of more sophisticated nuclear
models \cite{Mashnik:2002mw}. Simple models are used for elastic scattering angular
distributions. The quasielastic reaction mechanism is known
to dominate the final state for inelastic or charge exchange processes.
Fermi motion and binding energy are used to get a good description
of the kinematics. Whenever 3 or more particles are emitted,
their momenta are distributed according to phase space.
For MINOS, the most important issue is missing energy generated by
inelastic and absorption processes. Very low energy
hadrons and nuclear recoils are not seen, so simplifications can be made.
All states where more than 5 nucleons are emitted are treated
as though 5 nucleons (3 protons and 2 neutrons) were emitted.
These restrictions are relaxed in the most recent code versions to
better match what is seen in data.
\end{enumerate}
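The stepping and interaction-selection logic of items 1 and 2 can be sketched in a few lines. The Woods-Saxon parameters, the partial cross-section values, and the use of $\hbar c = 0.1973$ GeV fm in the radius-inflation term are illustrative assumptions of this sketch, not GENIE's tuned inputs:

```python
# Sketch of one INTRANUKE-style tracking step (illustrative values only).
import math
import random

def woods_saxon(r, rho0=0.17, R=4.1, a=0.54):
    """Nucleon density (fm^-3); R and a are rough iron-like values."""
    return rho0 / (1.0 + math.exp((r - R) / a))

def mean_free_path(sigma_fm2, rho):
    """lambda = 1 / (sigma * rho); sigma in fm^2, rho in fm^-3."""
    return 1.0 / (sigma_fm2 * rho)

def radius_inflation(f, p_gev):
    """Wave-like correction f * (hbar c) / p in fm, taking hbar c =
    0.1973 GeV fm (an assumption of this sketch)."""
    return f * 0.1973 / p_gev

def interacts(step_fm, lam_fm, rng=random):
    """Decide whether the hadron reinteracts during one 0.05 fm step."""
    p_int = 1.0 - math.exp(-step_fm / lam_fm)
    return rng.random() < p_int

def choose_channel(xsecs, rng=random):
    """Pick a process in proportion to its partial cross section."""
    total = sum(xsecs.values())
    x = rng.random() * total
    for name, sigma in xsecs.items():
        x -= sigma
        if x <= 0.0:
            return name
    return name

# Placeholder partial cross sections (arbitrary units) for a pion:
channels = {"cex": 1.0, "inel": 3.0, "abs": 2.0, "piprod": 0.5}
```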
\begin{figure*}[htb]
\includegraphics[width=17pc]{./sigtot-pip-fe-ha.eps}
\includegraphics[width=17pc]{./sigtot-pip-c-ha.eps}
\caption{$\pi^+$ total and reaction cross sections for iron (left) and
carbon (right). Data are from Refs. \cite{Ashery:1981tq,Allardyce:1973ce,Carroll:1976hj,Clough:1974qt}. }
\label{fig:pife}
\end{figure*}
\subsection{Physics Model Tuning}
\label{sec:tuning}
The full range of models involves more than one hundred adjustable parameters, the complete set of
which are given in the Physics and User Manual \cite{genie-doc-753}.
Only the most important in the construction
of the physics models will be discussed here.
Electroweak parameters and
CKM matrix elements use the values of the Particle Data Group \cite{Yao:2006px}.
As mentioned previously, the quasi-elastic, resonance production, and DIS models employ
form factors, axial vector masses, and other parameters which have been determined by
others in their global fits \cite{Bradford:2006yz,Kuzmin:2006dh}. In order to check
the overall consistency of our model, and to verify that we have correctly implemented
the DIS model, predictions are compared to electron scattering inclusive data \cite{Gallagher:2004nq,e49a10}
and neutrino structure function data \cite{Bhattacharya:2009zz}.
The current default values for transition
region parameters are $W_{cut}$=1.7 GeV/c$^2$, $R_2(\nu p)=R_2(\overline{\nu}n)$=0.1,
$R_2(\nu n)=R_2(\overline{\nu}p)$=0.3,
and $R_m$=1.0 for all $m>2$ reactions. These are determined from fits to inclusive and exclusive
(one and two-pion) production neutrino interaction channels. For these comparisons we rely heavily
on online compilations of neutrino data \cite{Whalley:2004sz} and related fitting tools \cite{Andreopoulos:2005vd}
that allow one to include some correlated systematic errors (such as those arising from flux uncertainties).
The GENIE default cross section for $\nu_{\mu}$ charged current scattering
from an isoscalar target,
together with the estimated uncertainty on the total cross section, as evaluated in \cite{Adamson:2007gu} are shown
in Fig. \ref{fig:XSecErrEnvelope}.
\begin{figure}[htb]
\center
\includegraphics[width=30pc]{./sig_genie.eps}
\caption{GENIE default cross section for $\nu_{\mu}$ charged current scattering
from an isoscalar target. The shaded band indicates the estimated
uncertainty on the free nucleon cross section.
Data are from
\cite{MacFarlane:1983ax} (CCFRR),
\cite{Berge:1987zw} (CDHSW),
\cite{Ciampolillo:1979wp} (GGM-SPS),
\cite{Colley:1979rt, Bosetti:1981ip} (BEBC),
\cite{Mukhin:1979bd} (ITEP),
\cite{Baranov:1979sx} (CRS, SKAT),
\cite{Barish:1978pj} (ANL),
\cite{Baker:1982ty} (BNL) and
\cite{Eichten:1973cs} (GGM-PS).
}
\label{fig:XSecErrEnvelope}
\end{figure}
The tuning of the hadronization model is accomplished using data from the BEBC \cite{Allen:1981vh}, FNAL \cite{Derrick:1981br},
and SKAT \cite{Baranov:1983st} bubble chamber experiments, and is described in more detail elsewhere \cite{Yang:2009zx}.
Multiplicity measurements include
averaged charged and neutral particle ($\pi^{0}$) multiplicities,
forward and backward hemisphere average multiplicities and correlations,
topological cross sections of charged particles,
and neutral-charged pion multiplicity correlations.
Hadronic system measurements include fragmentation functions ($z$ distributions),
$x_{F}$ distributions,
$p^{2}_{T}$ (transverse momentum squared) distributions, and
$x_{F} - \langle p_{T}^{2} \rangle$ correlations (``seagull'' plots) \cite{Schmitz:1981sg}.
Averaged charged particle multiplicity and
dispersion parameters are taken from published values \cite{Zieminska:1983bs},
as well as our own fits \cite{Yang:2009zx}. Baryon 4-momentum distributions
are determined from fits to experimental data \cite{Derrick:1977zi,CooperSarkar:1982ay}.
The settings for PYTHIA parameters are taken as the non-default
values tuned for the NUX \cite{Rubbia:2001} generator, a high energy generator used by
the NOMAD \cite{Astier:2003gs} experiment.
The intranuclear rescattering model has been tested and tuned based on
comparisons to hadron-nucleus data. Hadron-nucleus cross sections are
calculated by `MC experiments' in which a nucleus is illuminated by
a uniform hadron beam with transverse radius larger than the nucleus size.
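This `MC experiment' procedure can be sketched as follows: throw projectiles uniformly over a disk wider than the nucleus, count interactions, and scale the interacting fraction by the beam area. The toy black-disk `interacted' function below is a stand-in for the actual cascade:

```python
# Sketch: extracting a hadron-nucleus cross section from an MC experiment.
# sigma = (area of beam disk) * (fraction of projectiles that interact).
import math
import random

def mc_cross_section(n_events, r_beam_fm, interacted, rng=random):
    """interacted(b) -> bool decides, per impact parameter b (fm),
    whether the projectile reacted; a stand-in for the full cascade."""
    hits = 0
    for _ in range(n_events):
        # Uniform over the disk: b = r_beam * sqrt(u), u uniform in [0,1)
        b = r_beam_fm * math.sqrt(rng.random())
        if interacted(b):
            hits += 1
    area = math.pi * r_beam_fm ** 2  # fm^2; 1 fm^2 = 10 mb
    return area * hits / n_events

# Toy "black disk" nucleus of radius 4 fm inside an 8 fm beam:
random.seed(1)
sigma = mc_cross_section(200000, 8.0, lambda b: b < 4.0)
# For a black disk the answer should approach pi * 4^2 ~ 50.3 fm^2.
```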
Figure \ref{fig:pife} shows the
comparison between INTRANUKE and data for $\pi^+-$C and $\pi^+-$Fe total and reaction
cross sections. Extrapolations to higher energy are required in many cases,
as often only $\sigma_{reac}$ data are available. CEM03 \cite{Mashnik:2002mw}
results, rescaled to match data at lower energies, are often used.
Although the model is tuned to hadron scattering on iron, the simplicity
of the Fermi Gas model and the $A^{2/3}$ scaling of the cross sections
allow the model to be applied to nearly all nuclei encountered in the simulation
as well.
Validation of the intranuclear rescattering model
using neutrino data has also been performed. This revisits the analysis
of Reference \cite{Merenyi:1992gf} which compares ANL neutrino scattering data on deuterium \cite{Radecky:1981fn}
to BEBC neutrino-neon data \cite{Angelini:1986gx}, where each have been rescaled to an atmospheric
neutrino flux. By comparing neon and deuterium final state topology fractions the
rates for pion absorption and charge exchange in neutrino-neon interactions can be determined.
INTRANUKE reproduces the measured final state topology fractions with an overall $\chi^2$
of 16.0 for 12 degrees of freedom. The rates of pion absorption and charge exchange produced by
INTRANUKE in this comparison are $18.3\pm 0.5$\% and $2.9\pm0.2$\% respectively, in good agreement with the measured
values of $22\pm5$\% and $10\pm8$\%.
In evaluating the uncertainty in the intranuclear rescattering model, several sources of
uncertainty were taken into account for their effect on the MINOS determination of the
hadronic energy scale \cite{Dytman:2008st}. These include the experimental uncertainty on the
external data that serves as the input to the model, as well as on some of the key theoretical
assumptions in the model, in particular the modeling of pion absorption reactions and the
treatment of low energy pion scattering \cite{Dytman:2008st}.
\section{Recent Developments}
\label{future}
Recent focus on development for GENIE has been on the nuclear structure
and final state interaction codes. The purpose is to make the code
well-tuned to the needs of
the upcoming generation of accelerator experiments including T2K \cite{Wark:2008zz},
NO$\nu$A \cite{Ayres:2004js}, MINER$\nu$A \cite{Schulte:2006db},
MicroBooNE \cite{Soderberg:2009rz} and ArgoNEUT \cite{Soderberg:2009qt}.
We are also working to extend the validity range of GENIE down to the MeV scale,
making it applicable to the study of neutrinos from reactors, supernovae, and the
SNS \cite{Scholberg:2007zz}.
The Fermi Gas nuclear model has been shown to be wrong in detail through
interpretation of electron scattering experiments \cite{Frois:1987hk}.
Nucleon-nucleon correlations are important in kinematic regions where
the impulse approximation is unlikely to apply. The spectral function
\cite{Benhar:2006wy,Ankowski:2007uy} has become a useful model to represent the
effects of a many-body model. Developmental versions of GENIE now
contain spectral functions of Benhar for carbon, iron, and lead.
Code for calculating $(e,e^\prime)$ differential cross sections is
now in place.
A true intranuclear cascade model has also been in development \cite{dytman-ladek}.
It tracks pions and nucleons through multiple reactions in the
same nucleus in which the neutrino was absorbed. Free hadron-nucleon
cross sections are used, with the struck nucleon having momentum and binding
energy given by the Fermi Gas model. Interactions for protons, neutrons, and pions
are presently modeled. The hadrons are
on-shell between scatterings; the reactions are governed by
the same mean free path and Fermi gas models as for the existing
model. Thus, there is no limit on the number of particles that
can be tracked. A simple model for compound nuclear processes
is included to properly account for effects in hadron-nucleus data.
\section{Software Design Overview}
\label{sec:design}
In this section we will describe the software design of the GENIE package. We begin by discussing
the software requirements and use cases. The software framework is presented together with key classes
such as \em particles, events, \em and \em interactions\em. The hierarchical delegation of responsibility during event
generation to \em driver, thread, \em and \em module \em classes is described.
\subsection {Requirements}
The process of requirements capture and software design was carried out over a period
of several months in 2004 and 2005.
During this phase there was extensive discussion within the MINOS collaboration as well as in
conjunction with the NuINT\footnote{Neutrino Interactions in the Few-GeV Energy Range} Conference series.
Through these meetings we received input from many users of these packages, as well as from several
experts who had designed, developed, and supported previous (Fortran-based) procedural codes.
Through these discussions the need for a package of this type for future neutrino experiments became very clear.
The procedural programs had often been in use for several decades and had expanded greatly beyond their initial scope.
They had often reached a critical mass where further modifications were deemed extremely difficult because of the
overall fragility of the architecture. In addition there were often strong couplings between aspects of the simulation
that made incremental improvements in a particular area difficult. Another commonly voiced concern was the lack
of documentation, particularly regarding the ways in which the models and their parameters were tuned to data.
These discussions served to illuminate several typical use cases for neutrino event generators and related tools:
\begin{itemize}
\item
For event generation in conjunction with a full detector simulation.
\item
For event generation in fast simulations, either 4-vector only or using a parametrized model of the detector response.
\item
To provide a library of cross section values for interaction rate calculations.
\item
As a source of information about the underlying models and their uncertainties.
\item
As a primary tool in the evaluation of systematic uncertainties.
\end{itemize}
These use case evaluations and discussions led to the establishment of a set of requirements
for the overall architecture as well as requirements for the documentation and ancillary tools:
\begin{itemize}
\item
Decouple physics models as much as possible from the framework code.
\item
Lower the barrier to entry for physics model developers (particularly theorists).
\item
Provide a well-tested, well-defined default configuration which provides a benchmark
for all users who are primarily interested in using the package in black-box mode.
\item
Incorporate up-to-date theoretical and experimental work and provide a flexible
framework so that it can be maintained.
\item
Incorporate state-of-the-art software engineering methodologies to support
these goals through an object-oriented design.
\item
Leverage the developments in other areas of HEP software development
(in particular ROOT class libraries).
\item
Provide the external data used to tune the package as part of the overall distribution.
\item
Provide clear documentation about how the models are tested, tuned, and validated.
\item
Provide a set of tools to facilitate the tuning of model parameters and for an independent
evaluation of how well models describe existing data.
\end{itemize}
These requirements were met in the August 2007 first release of the GENIE package.
The implementation of the physics models was cross-checked through an exhaustive
set of comparisons \cite{genie-doc-755} with one of the existing procedural codes \cite{Gallagher:2002sf}.
GENIE can be configured to be identical in its physics
content to the final version (v3.5.5) of the NEUGEN3 package, one of the legacy procedural codes
which had been used by numerous experiments for more than a decade.
\subsection{Core Framework}
\label{sec:core}
The key requirement of the GENIE Core Framework is to transparently decouple
the high-level code focusing on physics simulations from the low-level structures
involved primarily with memory management and configuration.
The framework was developed and reviewed primarily within the MINOS experiment and,
inevitably, has been influenced by the MINOS offline software design \cite{MINOSOffline:PrivCom}.
In developing the GENIE framework we recycled, adapted and extended key
features of the MINOS offline framework. We drew heavily from the accumulated software
engineering experience encapsulated within popular software design patterns,
including the Visitor, Chain of Responsibility, Factory, Strategy and Singleton \cite{DesignPatterns:GangOfFour}.
The Core Framework is not specific to the subject matter domain of GENIE and could
be adapted and reused in other scientific computing applications.
The GENIE Event Generation Framework, to be discussed later,
is a subject matter-specific layer built on top of the Core Framework.
The framework concerns itself mainly with the properties, instantiation and memory
management of software abstractions called \em Algorithms\em,
and specifies the interfaces that underpin the interactions between the numerous
concrete \em Algorithm \em realizations.
The \em Algorithm \em is a key Core Framework abstraction. The notion of an `algorithm'
in an object-oriented system requires further clarification as it does not correspond to
its more familiar notion in the context of procedural software systems.
In the Core Framework the \em Algorithm \em encapsulates the common
behavior of all algorithmic objects. It is an abstract base class which defines exactly
how algorithmic objects are to be initialized and configured,
how they are to look up their configuration,
how they are to be identified, and how they report their status.
These are common, largely operational features that characterize a very heterogeneous
collection of algorithms such as
cross section models, hadronization models, particle decayers, form factor and
structure function models, event generation modules and threads and other types
of algorithms that can be found within GENIE.
The fact that such a common behavior is imposed upon all algorithmic objects
allows us to build a central, external, XML-based \cite{XML:Spec} algorithm configuration
system that contributes significantly to the flexibility and extensibility of GENIE.
The kind of computation to be performed, the usual identifying feature of an algorithm
in a procedural system, is a secondary characteristic at this level of abstraction.
At the next level up from the \em Algorithm \em root of an algorithm inheritance tree,
we find a standardized interface which defines how to invoke each specialized type of
calculation and retrieve its results.
Numerous such specialized algorithm interfaces exist within GENIE.
Examples include the \em GFluxI \em interface for concrete flux drivers,
the \em XSecAlgorithmI \em interface for concrete cross section models, and
the \em EventRecordVisitorI \em interface for concrete event generation steps.
Invoking all algorithms through such standardized interfaces guarantees
scalability and ensures the seamless integration of new concrete implementations.
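In Python terms (GENIE itself is C++), the two-level algorithm hierarchy described above can be sketched with abstract base classes. The class and method names below are illustrative stand-ins, not GENIE's actual API:

```python
# Sketch of the algorithm hierarchy: a common Algorithm base defining
# identification and configuration look-up, and a specialized interface
# defining how to invoke one kind of computation. Names are invented.
from abc import ABC, abstractmethod

class Algorithm(ABC):
    """Common behaviour of all algorithmic objects, independent of
    what they compute."""

    def __init__(self, name, config_name="Default"):
        self.name = name
        self.config_name = config_name
        self._config = {}

    def configure(self, registry):
        """Attach an externally supplied configuration set."""
        self._config = dict(registry)

    def key(self):
        """Unique identifier: algorithm name plus configuration name."""
        return f"{self.name}/{self.config_name}"

class XSecAlgorithm(Algorithm):
    """Specialized interface: how to invoke a cross-section
    calculation, whatever the concrete model is."""

    @abstractmethod
    def xsec(self, interaction):
        ...

class ConstXSec(XSecAlgorithm):
    """Trivial concrete model returning a configured constant."""

    def xsec(self, interaction):
        return self._config["value"]

model = ConstXSec("genie::ConstXSec")
model.configure({"value": 1.5})
```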
The algorithmic objects are stateless and their behavior is fully externally configured.
The algorithm configurations are stored in XML files. Typically, there is a single XML
configuration file per algorithm. Each file may contain multiple configuration sets
for that algorithm with each configuration set being uniquely identified by a name.
The algorithm configuration variables can be of many different types
(including booleans, integers, real numbers, strings, ROOT 1-D or 2-D histograms, ROOT n-tuples/trees
or other GENIE algorithms with their own configurations).
Each configuration variable, in a given set, is uniquely identified by a name.
During the initialization phase, all XML configuration files are parsed and each named
configuration set is stored in a type-safe `name' $\rightarrow$ `value' associative container called the
\em Registry\em.
All \em Registry \em objects instantiated in initialization phase are stored in a shared pool
called the {\em AlgConfigPool}. A unique name is used to identify each {\em Registry} in that pool.
The name is constructed from the namespace the algorithm lives in, the algorithm name,
and the name of the configuration set, as `namespace::algorithm-name/configuration-name'.
At run-time each algorithmic object can look up its configuration set by accessing the corresponding \em Registry \em object.
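The key scheme for the shared configuration pool can be sketched with a plain dictionary; the algorithm and parameter names in the example are invented for illustration:

```python
# Sketch: constructing the Registry key used to look up a named
# configuration set in an AlgConfigPool-like shared pool. The format
# `namespace::algorithm-name/configuration-name' follows the text;
# the example names and values are invented.

def registry_key(namespace, algorithm, config_set):
    return f"{namespace}::{algorithm}/{config_set}"

pool = {}  # stand-in for the shared AlgConfigPool
pool[registry_key("genie", "MyXSecModel", "Default")] = {"Ma": 0.99}
```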
One feature of the GENIE configuration system is especially worth noting.
Algorithm configuration sets may include other algorithms (with their own configurations, which
in turn may contain more algorithms). GENIE's extensibility and flexibility is largely due
to this feature in conjunction with the standardization of the algorithm interfaces.
In the actual GENIE code one only needs to define a call sequence between abstract algorithm-types
such as, for example, that an algorithm-type specialized in generating scattering kinematics,
invokes another algorithm-type specialized in cross section calculations which, in its turn,
should invoke another algorithm type specialized in form factor calculations. Once that call
sequence has been defined in the code, many concrete realizations may come into being purely at
the configuration level
by specifying the names of the concrete algorithms and the names of their configuration sets.
Typically, pre-configured instances of GENIE algorithms are accessed through an algorithm
factory \cite{DesignPatterns:GangOfFour} which is responsible for instantiating each
algorithm (upon request) and allowing it to look up its configuration. The factory
typically owns and manages the list of all instantiated concrete algorithms.
Since algorithms are stateless objects, further requests for an instantiated
concrete algorithm results in the previously instantiated algorithm being returned
rather than a new one being created.
By default all instantiated concrete algorithms and configurations are stored within shared pools
designed as singletons \cite{DesignPatterns:GangOfFour}.
As these are shared pools, modifications have global effects. For example, modifying
a low-level algorithm configuration modifies all call sequences that include that algorithm.
This is desirable in most contexts, such as for example for the consistent propagation of physics
parameter changes throughout GENIE.
There are certain situations, however, such as fitting or event reweighting applications, where this may
not be desirable. The GENIE Core Framework allows algorithms to clone and assume ownership
of the entire sequence of sub-algorithms they depend upon, along with each sub-algorithm's configuration
registries. That cloned call-sequence of algorithmic objects is stored in a local rather than a shared pool.
In this way, concrete top-level algorithms behave as self-contained capsules and can be re-configured
in isolation without affecting other GENIE components.
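The factory-with-cache behaviour described above can be sketched as a singleton that hands out one stateless instance per (algorithm, configuration) pair; the names are illustrative, not GENIE's actual classes:

```python
# Sketch of an AlgFactory-like singleton cache: repeated requests for
# the same (algorithm, configuration) pair return the same stateless
# instance rather than creating a new one. Names are invented.

class AlgFactory:
    _instance = None  # one shared factory per process

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._algs = {}
        return cls._instance

    def get(self, alg_class, config_name="Default"):
        key = (alg_class.__name__, config_name)
        if key not in self._algs:
            self._algs[key] = alg_class()  # instantiate on first request
        return self._algs[key]

class DummyAlg:
    """Placeholder for any stateless concrete algorithm."""

factory = AlgFactory()
a = factory.get(DummyAlg)
b = factory.get(DummyAlg)          # same cached instance as `a`
c = factory.get(DummyAlg, "Alt")   # different configuration, new instance
```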
\subsection{Event generation framework: Particles, Events and Interactions}
In this section three key framework classes, the \em GHepParticle\em,
\em GHepRecord\em, and the \em Interaction \em classes, are described.
GENIE uses the natural system of units $(\hbar=c=1)$ so almost every simulated quantity is expressed
in powers of [GeV]. Exceptions are the event vertex in the detector coordinate system (in SI units) and
particle positions in the hit nucleus coordinate system (in fm).
Different units may be employed when native GENIE event descriptions are converted to
experiment-specific formats in accordance with the format specification.
\subsubsection{Particles}
The basic output unit of the event generation process is a `particle'.
This is a term used to describe both particles and nuclei
appearing in the initial, intermediate or final state,
as well as generator-specific pseudo-particles used for facilitating book-keeping
of the generator actions.
Each such `particle' generated by GENIE is an instance of the \em GHepParticle \em class.
These objects contain information with particle-scope including:
particle ID and status codes, PDG mass, charge, name,
indices of mother and daughter particles marking possible associations with other particles in the same event,
4-momentum, 4-position in the target nucleus coordinate system,
polarization vector,
and other properties.
The \em GHepParticle \em class includes methods for setting and querying these properties.
GENIE has adopted the standard PDG particle codes \cite{Yao:2006px}.
For ions it has adopted a PDG extension, using the 10-digit code 10LZZZAAAI,
where
L is the number of strange quarks,
ZZZ is the total charge,
AAA is the total baryon number,
and I is the isomer number
(I=0 corresponds to the ground state).
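The 10LZZZAAAI scheme can be made concrete with a short encoder/decoder; this is a sketch of the numbering convention just described, not GENIE code:

```python
# Sketch: encoding/decoding the PDG 10LZZZAAAI ion code described above.

def ion_pdg(Z, A, L=0, I=0):
    """Build the 10-digit PDG code 10LZZZAAAI for a nucleus."""
    return 1000000000 + L * 100000000 + Z * 10000 + A * 10 + I

def decode_ion(pdg):
    """Return (L, Z, A, I) from a 10LZZZAAAI code."""
    body = pdg - 1000000000
    L, body = divmod(body, 100000000)
    Z, body = divmod(body, 10000)
    A, I = divmod(body, 10)
    return L, Z, A, I

# Example: iron-56 (Z=26, A=56) encodes to 1000260560.
fe56 = ion_pdg(26, 56)
```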
GENIE-specific pseudo-particles have PDG code $\geq$ 2000000000 and can convey important
information about the underlying physics model.
Pseudo-particles generated by other specialized programs that may be called by GENIE
(such as PYTHIA-6) are allowed to retain the codes specified by that program.
GENIE obtains particle data (including particle names, codes, masses, widths, decay
channels and more) using ROOT's {\it TDatabasePDG}. This particle database manager
object is initialized with the constants used in PYTHIA-6. The database has been augmented
by the GENIE authors to include baryon resonances, nuclei and GENIE-specific pseudo-particles.
Details are given in Ref. \cite{genie-doc-753}.
GENIE marks each particle with a status code.
This signifies the position of a particle in a time-ordering
of the event and helps navigation within the event record.
Most generated particles are marked as one of the following:
\begin{itemize}
\item `initial state', typically the first two particles of the event record corresponding to the incoming neutrino and the nuclear target.
\item `nucleon target', corresponding to the hit nucleon (if any) within the nuclear target.
\item `intermediate state', typically referring to the remnant nucleus, fragmentation intermediates such as quarks, diquarks,
or intermediate pseudo-particles.
\item `hadron in the nucleus', referring to a particle of the primary hadronic system,
defined as the particles emerging from the primary interaction vertex before any possible
re-interactions in the nucleus.
\item `decayed state', such as for example unstable particles that have been decayed.
\item `stable final state' for the relatively long-lived particles emerging from the nuclear targets.
\end{itemize}
All particles generated by GENIE during the simulation of a single neutrino interaction
are stored in a dynamic container representing an `event'.
\subsubsection{Events}
Events generated by GENIE are stored in a custom, STDHEP-like event record called a \em GHEP \em record.
Each \em GHEP \em event record, an instance of the \em GHepRecord \em class,
is a ROOT TClonesArray container of \em GHepParticle \em objects representing individual particles.
Other than being a container for the generated particles,
the event record holds additional information with event-, rather than particle-, scope
such as the cross section for the selected event,
the differential cross section for the selected event kinematics,
the event weight,
a series of customizable event flags,
and interaction summary information (described in the next section).
Additionally, the event record includes a host of methods for querying / setting event properties
including many methods that allow querying for specific particles within the event.
Examples include methods to return the target nucleus, the final state primary lepton,
or a list of all stable descendants of any intermediate particle.
The event record supports `spontaneous re-arrangement',
which maintains the compactness of the daughter lists at any given time.
This is necessary for the correct interpretation of the stored particle associations as
the daughter indices correspond to a contiguous range.
The particle mother and daughter indices for all particles in the event record
are automatically updated as a result of any such spontaneous particle rearrangement.
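The bookkeeping involved can be sketched in a few lines of C++; the structure below is a deliberately simplified, hypothetical stand-in for the real \em GHepRecord \em (whose actual implementation differs):

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch only: a minimal particle entry with a contiguous
// daughter-index range, and a compaction step that renumbers stored
// associations after an erasure, in the spirit of GHEP's `spontaneous
// re-arrangement'. Names and layout are hypothetical, not GENIE code.
struct Particle {
  int pdg;
  int first_daughter;  // index into the record, -1 if none
  int last_daughter;   // inclusive; daughters must stay contiguous
};

// Remove the entry at `pos` and shift every stored index above it down
// by one, so that daughter ranges remain contiguous and valid.
inline void EraseAndCompact(std::vector<Particle>& rec, int pos) {
  rec.erase(rec.begin() + pos);
  for (Particle& p : rec) {
    if (p.first_daughter > pos) --p.first_daughter;
    if (p.last_daughter  > pos) --p.last_daughter;
  }
}
```

The same renumbering applies to mother indices in the full record; only the daughter side is shown here for brevity.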
The event generation itself is built around the \em GHEP \em event record using the
Visitor design pattern \cite{DesignPatterns:GangOfFour}.
The interaction between the \em GHEP \em event record and the event generation
code will be outlined in the following sections.
The \em GHEP \em structure is highly compatible with the event structures used in most HEP generators.
That allows us to call other generators, such as PYTHIA-6,
as part of an event generation chain and convert / append their output into the current \em GHEP \em event.
Additionally the \em GHEP \em events can be converted to many other formats for facilitating
the GENIE interface with experiment-specific offline software systems and cross-generator comparisons.
\subsubsection{Interactions}
The \em GHEP \em record represents the most complete description of a generated event.
Certain external heavy-weight applications such as
specialized event reweighting schemes or
realistic detector simulation chains using the generator as the physics front-end
require all of the detailed particle-level information.
However, many of the actual physics models employed by the generator,
such as cross section, form factor, or structure function models, require a much smaller subset of
information about the event.
An event description based on simple summary information,
typically including a description of the initial state, the process type and the scattering kinematics,
is sufficient for driving the algorithmic objects implementing these physics models.
In the interest of decoupling the physics models
from event generation and the particle-level event description,
GENIE uses an \em Interaction \em object to store summary event information.
Whenever possible, algorithmic objects implementing physics models
accept a single \em Interaction \em object as their sole source of information about an event.
This enables the use of these models
both within the event generation framework
but also within a host of external applications such as
model validation frameworks, event re-weighting tools and user physics analysis code.
An \em Interaction \em object is an aggregate, hierarchical structure,
containing many specialized objects holding
information for the initial state (\em InitialState \em object),
the event kinematics (\em Kinematics \em object),
the process type (\em ProcessInfo \em object) and
potential additional information for tagging exclusive channels (\em XclsTag \em object).
Instantiating \em Interaction \em objects for driving physics models
is streamlined using the `named constructor' C++ idiom.
They can be serialized into unique string codes which,
within the GENIE framework, play the role of the `reaction codes' of the old procedural systems.
These string codes are used extensively for mapping information to interaction types.
Two examples include mapping interaction types to pre-computed cross section splines
or mapping interaction types to specialized event generation code.
Each generated event has an \em Interaction \em summary object already attached to it.
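The named-constructor idiom and the string codes can be illustrated with a minimal sketch; the class layout and method names below are hypothetical, not GENIE's actual API:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative sketch of an aggregate interaction summary with `named
// constructors' and serialization to a unique string code. All names
// here are stand-ins for the real Interaction class hierarchy.
class Interaction {
 public:
  // Named constructors: one per process type, for readable call sites.
  static Interaction QELCC(int nu_pdg, int tgt_pdg) {
    return Interaction(nu_pdg, tgt_pdg, "QEL", "CC");
  }
  static Interaction DISCC(int nu_pdg, int tgt_pdg) {
    return Interaction(nu_pdg, tgt_pdg, "DIS", "CC");
  }
  // Serialize to a string code playing the role of the old procedural
  // systems' `reaction codes'.
  std::string AsString() const {
    std::ostringstream s;
    s << "nu:" << fNuPdg << ";tgt:" << fTgtPdg
      << ";proc:" << fProc << "," << fCurrent;
    return s.str();
  }
 private:
  Interaction(int nu, int tgt, std::string proc, std::string cur)
    : fNuPdg(nu), fTgtPdg(tgt), fProc(proc), fCurrent(cur) {}
  int fNuPdg, fTgtPdg;
  std::string fProc, fCurrent;
};
```

Because such a code uniquely identifies the initial state and process, it can serve directly as a key into maps of pre-computed splines or event generation code.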
\subsection{Event generation processing units: Modules, Threads and Drivers}
On an operational level the responsibility for generating events is shared
between event generation \em drivers, threads \em and \em modules. \em Tasks are delegated
from event generation drivers to threads, and from threads to modules.
Event generation drivers can include multiple threads, and threads can include multiple modules.
Event generation drivers are responsible for generating events for a particular user-defined
situation. These range from simple cases, such as monoenergetic neutrinos interacting off a single target,
to complex situations involving the output of realistic beam-line simulations and full
detector geometry descriptions.
Threads are responsible for generating the physics of particular classes of events, for instance
charged-current quasielastic. Modules carry out a single step in that generation process.
\subsubsection{Event generation modules}
An event generation module is a key event generation abstraction.
Each event generation module encapsulates a well defined event generator
operation which, in physics terms, can be any of a very diverse set of actions.
Examples include
selecting the scattering kinematics, generating the final state primary lepton
or the primary hadronic system,
transporting hadrons within the target nucleus, and decaying unstable particles.
Operationally, event generation can be seen as a series of well-defined processing
steps operating on the \em GHEP \em event record.
The act of operating on the
event record defines an interface that is encapsulated by the \em EventRecordVisitorI \em
abstract class.
As indicated by the interface name,
the Visitor design pattern is employed \cite{DesignPatterns:GangOfFour}.
Concrete event generation modules,
implementing the \em EventRecordVisitorI \em interface,
`visit' the event record.
The event record then invokes each attached module and is modified as a result.
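A minimal sketch of this arrangement, using stand-in types rather than GENIE's real \em GHepRecord \em and module classes, might look as follows:

```cpp
#include <cassert>
#include <vector>

// Sketch of the Visitor arrangement described above: modules implement
// a common `visit' interface and each one modifies the event record in
// place. EventRecord is a toy stand-in, not the real GHepRecord.
struct EventRecord { std::vector<int> particles; };

class EventRecordVisitorI {
 public:
  virtual ~EventRecordVisitorI() {}
  virtual void ProcessEventRecord(EventRecord* ev) const = 0;
};

// A hypothetical concrete module: appends a final-state primary lepton
// (PDG 13, a mu-) to the record it visits.
class PrimaryLeptonGenerator : public EventRecordVisitorI {
 public:
  void ProcessEventRecord(EventRecord* ev) const override {
    ev->particles.push_back(13);
  }
};
```

Any number of such modules can then be run in sequence over the same record, which is what makes them interchangeable.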
Due to the diversity of the event processing operations that must be considered by GENIE,
we formed the event generation module abstraction focusing on the common operational
aspect of (potentially) modifying the event record.
This represents the most generic way
of thinking about event generation and guarantees that any future physics addition,
especially ones not envisioned at this stage of the generator's evolution,
can be trivially embedded into the existing framework.
Treating the event generator modules uniformly and standardizing on the event generation module interface
allows us to build a flexible and extensible system where modules can be dynamically
plugged in/out of the event generation or interchanged.
Examples can further clarify the utility of this abstraction:
a module handling a set of particle decays can be unplugged
to inhibit those decays, or a module handling intra-nuclear hadron transport may be swapped with another module
performing the same operation using a different physics strategy.
Whenever possible event generation modules are written in a generic way,
containing code implementing just the neutrino event generation mechanics.
The actual physics model itself is specified in the generation module configuration.
This decoupling of mechanics from models greatly improves code development, transparency,
and physics validation, simplifying the overall structure and
reducing the amount of code that needs to be actively developed and scrutinized between successive releases.
An example will clarify this factorization:
The module selecting the kinematics for deep-inelastic neutrino-nucleon interactions
does not contain the actual code for the deep-inelastic differential cross section. It merely contains code to
calculate the allowed kinematical phase space for the process, select a point in that phase space using a
Monte Carlo acceptance / rejection method, and update the \em GHEP \em record accordingly.
The actual differential cross section model used during the Monte Carlo selection
is an external physics model invoked by the event generation module. The module itself can
be recycled many times by instructing it to call a different cross section model each time.
As a result of that factorization, multiple call sequences can be defined purely at the configuration level
without code duplication.
\subsubsection{Event generation threads}
An \em event generation thread \em is an ordered sequence of
processing steps, encapsulated by event generation modules, that can be applied to an
empty \em GHEP \em event record to completely generate some class of physics events.
This process defines an
interface that is encapsulated by the \em EventGeneratorI \em abstract class.
Within the GENIE event generation framework the structures containing a comprehensive set of
instructions for generating a class of physics events are concrete \em EventGeneratorI \em objects.
GENIE defines a comprehensive set of event generation threads responsible for generating
event types at the level of fundamental interactions.
The complete set of these event generation threads comprises GENIE's
full `physics content' for event generation.
As an event generation thread can generate a single class of events only, there are
usually multiple threads in use.
The class of physics events generated by a thread
can have an arbitrary granularity, from a single interaction
corresponding to a particular process type with a given final state
to very broad event categories.
Each thread contains an \em InteractionList \em object, a container
holding the \em Interaction \em objects the thread can generate.
The \em InteractionList \em plays a crucial role in
identifying the responsibilities of each thread within the GENIE framework.
Once an event type to be generated has been selected, a corresponding \em Interaction \em object
is instantiated. Following the Chain of Responsibility design pattern \cite{DesignPatterns:GangOfFour},
GENIE attempts to match the
\em Interaction \em object with an element of the \em InteractionList \em containers for all active threads.
The first thread found that is able to handle that event type is handed the responsibility
to generate the event.
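The lookup described above can be sketched as follows; the types and the string-based matching are illustrative simplifications of the real \em InteractionList \em machinery:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the Chain of Responsibility lookup: each thread advertises
// the interaction codes it can generate, and the first thread whose
// list contains the requested code is handed the event. All names are
// hypothetical, not GENIE's actual classes.
struct Thread {
  std::string name;
  std::vector<std::string> interaction_list;  // codes it can generate
  bool CanHandle(const std::string& code) const {
    for (const std::string& c : interaction_list)
      if (c == code) return true;
    return false;
  }
};

inline const Thread* FindResponsibleThread(
    const std::vector<Thread>& threads, const std::string& code) {
  for (const Thread& t : threads)
    if (t.CanHandle(code)) return &t;  // first match wins
  return nullptr;                      // no thread can generate this type
}
```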
Additionally, event generation threads include an instance of the cross section algorithm
that can be used for selecting the event kinematics or for computing the probability for
a particular neutrino to interact.
This is another example of separating mechanics from models
and serves to greatly simplify the dynamic mapping between event types and cross section
models.
Once a list of threads has been loaded into the generator, many high-level event generation
operations become trivial. Compiling the list of all event types that can be generated
by GENIE in its current configuration simply involves looping over the active threads
and adding the corresponding \em InteractionList \em objects.
Selecting an event type to be generated from that master list involves looping over
its \em Interaction \em objects and, for each element, identifying the responsible thread,
requesting its corresponding cross section model and invoking it by
passing the \em Interaction \em object as argument.
Once an event type has been selected, generating the event simply involves looking up
the responsible thread and delegating responsibility to it.
During event generation an invoked thread maintains a modification history of the event record.
If an attempted
event generation path leads to a dead end, the current event generation module throws an exception and aborts.
The event generation thread catches that exception and, depending on the information it carries, may rerun the event
using a snapshot of the event record taken N steps back, in the hope of taking an
alternative path and avoiding the encountered dead end.
If a configurable maximum number of exceptions is caught, or if any thrown exception specifies explicitly that
generation of the current event is to be aborted altogether, the thread sets the appropriate error flags and makes
sure that the remaining processing steps are skipped.
The user, via options set in the event generation driver, may choose to keep certain types of these events so as
to examine their type and frequency, though the default behavior of GENIE is
to discard these events and only write out physical, fully formed events.
Error handling within each active thread greatly adds to the robustness and fault-tolerance
of the package, which is especially valued in large-scale, CPU-intensive, experiment-specific Monte Carlo
production runs involving hundreds of CPU cores over many weeks.
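The retry logic can be sketched schematically; the snapshot-and-rollback structure below is a toy illustration, with a simulated first-attempt failure standing in for a real dead-end generation path:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Toy sketch of rollback-and-retry: snapshot the record before the
// risky steps, catch exceptions thrown on dead-end paths, restore the
// snapshot and retry, up to a configurable number of exceptions.
// All names and the failure trigger are illustrative only.
struct EvRec { std::vector<int> particles; };

inline bool GenerateWithRetry(EvRec& ev, int max_exceptions) {
  int caught = 0;
  while (caught <= max_exceptions) {
    EvRec snapshot = ev;  // snapshot taken before the risky steps
    try {
      // ... processing steps would run here; a dead-end path throws.
      // Simulate a failure on the first attempt only:
      if (caught == 0) throw std::runtime_error("dead-end kinematics");
      ev.particles.push_back(13);  // stand-in for a completed event
      return true;                 // event fully formed
    } catch (const std::exception&) {
      ++caught;
      ev = snapshot;  // roll back and try an alternative path
    }
  }
  return false;  // error flags set; remaining steps are skipped
}
```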
Advanced users can modify the default event generation threads by removing / adding event generation modules,
or they can define their own uniquely named threads for handling new processes or handling existing processes in new ways.
\subsubsection{Event generation driver classes}
GENIE provides two event generation driver classes.
These drivers collect the user inputs, instantiate and configure
all required event generation components,
and oversee communications
between these components, the computing environment, and
the user.
The two driver classes support two different types of functionality:
\begin{itemize}
\item
Instances of the \em GEVGDriver \em class can handle event generation for a
given initial state corresponding to an arbitrary neutrino / target pair.
\item
Instances of the \em GMCJDriver \em class can be used for more complicated
simulations involving arbitrarily complex, realistic beam flux simulations
and detector geometry descriptions.
This driver object concerns itself mostly with driving the flux and detector
geometry navigation drivers and integrating those with the GENIE event generation framework.
It represents a significantly more complex and CPU-intensive event generation case
but relies entirely on a host of \em GEVGDriver \em instantiations, one for each
possible initial state in that Monte Carlo job,
in order to obtain neutrino interaction physics modeling capabilities and
generate event kinematics.
\end{itemize}
\begin{figure*}[htb]
\center
\includegraphics[width=32pc]{./gfmw.eps}
\caption{A UML diagram depicting the GENIE event generation framework. See text for details.}
\label{fig:fmw}
\end{figure*}
\section{GENIE Event Generation Applications and Utilities}
\label{sec:apps}
GENIE is being used by a host of precision-era neutrino experiments and
provides off-the-shelf components for generating neutrino interactions under the
most realistic assumptions.
The event generation driver classes described in Section \ref{sec:design}
are encapsulated within driver
applications which present the user with a command-line or graphical interface,
instantiate and configure those driver classes, call the event generation methods to
generate the requested number of events, and push those events through a persistency
manager.
In experiment-specific GENIE-based event generation drivers utilizing the
\em GMCJDriver \em one can integrate
the GENIE neutrino interaction modeling with detailed flux and detector geometry descriptions.
This is a non-trivial operational capability that older procedural neutrino generators
typically lacked, requiring significant development effort from experiments.
The flux descriptions are typically derived from experiment-specific beam-line simulations
while the detector geometry descriptions are typically derived from engineering drawings
mapped into the GEANT4 \cite{Agostinelli:2002hh}, ROOT \cite{Brun:1997pa} or
GDML \cite{Chytracek:2006be} geometry description languages.
Obviously, flux and detector geometry descriptions can take many forms, driven by
experiment-specific choices.
GENIE standardizes the geometry and flux driver interfaces by defining the operations
that GENIE needs to perform on the geometry and flux descriptions and the essential flux and
geometry information needed for the generation of events.
Concrete implementations of the interfaces allow experiments to extend GENIE's event
generation capabilities and make it possible to seamlessly integrate new geometry
descriptions and beam fluxes into user applications.
In this section we will describe in some detail the flux and geometry interfaces.
We will briefly describe applications built from these drivers as well as
GENIE utilities to evaluate and display cross section information, make comparisons
to external data, and facilitate model tuning.
\subsection{Neutrino flux drivers}
In GENIE every concrete flux driver implements the \em GFluxI \em interface.
The interface defines what neutrino flux information is needed by the event
generation drivers and how that information is to be obtained.
Each concrete flux driver includes methods to:
\begin{itemize}
\item Declare the neutrino flavors that can generate events.
This information is used for initialization purposes, in order to construct a list
of all possible initial states in a given event generation run.
\item Declare the maximum energy.
Again this information is used for initialization purposes, in order to calculate
the maximum possible interaction probability in a given event generation run.
Since neutrino interaction probabilities are tiny,
GENIE scales all interaction probabilities
in a particular event generation run so that the maximum possible interaction probability is 1.
That maximum interaction probability corresponds to the total interaction probability (summed
over nuclear targets and process types) for a maximum energy neutrino following a trajectory
that maximizes the density-weighted path-lengths for each nuclear target in the geometry.
GENIE adjusts the MC run normalization accordingly to account for this probability renormalization.
\item Generate a flux neutrino and specify its pdg code, its weight (if any),
its 4-momentum and 4-position.
The 4-position is given in the detector coordinate system (as specified by the input geometry).
Each such flux neutrino is propagated towards the detector geometry but is not required to
cross any detector volume. GENIE will take that neutrino through the geometry,
calculate density-weighted path-lengths for all nuclear targets in the geometry,
calculate the corresponding probability for scattering off each nuclear target and
decide whether that flux neutrino should interact. If it interacts, an appropriate
\em GEVGDriver \em will be invoked to generate the event.
\item Notify that no more flux neutrinos can be thrown.
Flux drivers that use the output of beam-line simulations, so-called `flux files',
are configured to recycle these flux files multiple times in a given run since
most neutrino flux entries do not produce an interaction.
The flag allows GENIE to
properly terminate the event generation run once this limit is reached, irrespective
of the accumulated number of events, protons on target, or other metric of exposure.
\end{itemize}
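A toy illustration of this contract, assuming hypothetical method names (the real \em GFluxI \em interface differs in detail), is:

```cpp
#include <cassert>
#include <vector>

// Sketch of the flux-driver contract listed above: a concrete driver
// declares its flavours and maximum energy, throws flux neutrinos one
// at a time, and flags exhaustion. The member names and the toy
// monoenergetic driver below are illustrative assumptions.
struct FluxNeutrino { int pdg; double weight; double E; };  // 4-position omitted

class GFluxI {
 public:
  virtual ~GFluxI() {}
  virtual std::vector<int> FluxParticles() const = 0;  // PDG codes
  virtual double MaxEnergy() const = 0;                // GeV
  virtual bool GenerateNext(FluxNeutrino& nu) = 0;     // false when exhausted
};

// Toy driver throwing a fixed number of monoenergetic nu_mu.
class MonoFlux : public GFluxI {
 public:
  MonoFlux(double E, int n) : fE(E), fLeft(n) {}
  std::vector<int> FluxParticles() const override { return {14}; }
  double MaxEnergy() const override { return fE; }
  bool GenerateNext(FluxNeutrino& nu) override {
    if (fLeft == 0) return false;  // no more flux neutrinos to throw
    --fLeft;
    nu = FluxNeutrino{14, 1.0, fE};
    return true;
  }
 private:
  double fE;
  int fLeft;
};
```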
The above correspond to the common set of operations/information that GENIE expects to
be able to perform/extract from all concrete flux drivers.
Specialized drivers may define additional information that can be utilized in experiment-specific
applications. One typical example of this is to `pass-through'
information about the flux neutrino parents placed in the flux files by the beamline simulation,
such as the parent meson PDG code, its 4-momentum, and its 4-position at the production and decay points.
At the time of writing this article, GENIE already includes a host of concrete
flux drivers allowing GENIE to be used in many realistic, experiment-specific situations.
More specifically, it includes an interface to the JPARC neutrino beam simulation \cite{JNUBEAM:PrivCom}
used by Super-Kamiokande \cite{Fukuda:2002uc}, nd280 \cite{Wark:2008zz}, and INGRID \cite{Wark:2008zz}
and an interface to the NuMI beam simulation \cite{Anderson:1998zz}
used by MINOS \cite{Michael:2008bc}, NOvA \cite{Ayres:2004js}, MINERvA \cite{Schulte:2006db},
MicroBooNE \cite{Soderberg:2009rz} and ArgoNEUT \cite{Soderberg:2009qt}.
It includes drivers for the BGLRS \cite{Barr:2004br} and the FLUKA \cite{Battistoni:2001sw} atmospheric fluxes.
Moreover, it includes a generic flux driver,
describing a cylindrical neutrino flux of arbitrary 3-D direction and radius,
with a configurable radial dependence,
which can be used for describing a flux containing a number of different neutrino species whose
(relatively normalized) energy spectra are described as 1-D histograms.
GENIE also includes the trivial case of a monoenergetic flux typically employed
in physics benchmarking calculations.
To conclude this section,
it is worth re-emphasizing that new concrete flux drivers (describing the neutrino flux
from other beam-lines) can be easily developed and seamlessly
integrated within the GENIE event generation framework.
\subsection{Geometry drivers}
In GENIE every concrete geometry driver implements the \em GeomAnalyzerI \em interface.
The interface specifies what information about the input geometry is relevant to the
event generation and how that information is to be obtained.
Each concrete geometry driver implements methods to:
\begin{itemize}
\item Declare the list of target nuclei that can be found in the geometry.
This information is used for initialization purposes, in order to construct a list
of all possible initial states in a given event generation run.
\item Compute the maximum density-weighted path-lengths for each nuclear target in the geometry.
Again, this is information used for initialization purposes. The computed `worst-case'
trajectory is used to calculate the maximum possible interaction probability in a particular
event generation run. This maximum interaction probability is used internally to normalize
all computed interaction probabilities in that run.
\item Compute density-weighted path-lengths for all nuclear targets, for a trajectory of given 4-momentum and starting 4-position.
This allows GENIE to calculate probabilities for each flux neutrino to be scattered off every nuclear target
along its path through the detector geometry.
\item Generate a vertex along a trajectory of given 4-momentum and starting 4-position on a volume containing a given nuclear target.
This allows GENIE to place a neutrino interaction vertex within the detector geometry once an interaction of a flux
neutrino off a selected nuclear target has been generated.
\end{itemize}
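The density-weighted path-length computation at the heart of this interface can be illustrated with a toy slab geometry (the real driver navigates an arbitrary ROOT geometry; the structures and names below are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the density-weighted path-length computation performed for
// each flux neutrino: summing density * thickness over the traversed
// volumes containing a given target nucleus. The straight-line slab
// model here is an illustrative simplification.
struct Slab {
  int target_pdg;    // nuclear target PDG code
  double density;    // g/cm^3
  double thickness;  // cm crossed by the trajectory
};

// Density-weighted path length (g/cm^2) for one nuclear target along a
// trajectory crossing the listed slabs.
inline double PathLength(const std::vector<Slab>& slabs, int target_pdg) {
  double sum = 0.0;
  for (const Slab& s : slabs)
    if (s.target_pdg == target_pdg) sum += s.density * s.thickness;
  return sum;
}
```

The per-target sums computed this way are what the interaction probability calculation is driven by.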
GENIE currently contains a concrete geometry driver able to handle the ROOT-based detector geometry descriptions
typically used by most neutrino experiments. Detector geometry descriptions based on GEANT or GDML can be
converted into ROOT-based descriptions and used by the same GENIE geometry driver as well.
GENIE also includes a driver for more trivial geometry descriptions corresponding to a single nuclear
target or a target mix (a set of nuclear targets, each with its corresponding weight fraction) at a fixed
position. This simpler geometry driver may be used in simulating fixed initial states for benchmarking calculations
or in experimental situations where a relatively uniform detector is being illuminated by a spatially
uniform neutrino beam. An example of the latter would be a detector placed far enough
from the beam-line instrumentation so as to see a point-like neutrino source.
Again it is worth re-emphasizing that any new detector geometry description can be seamlessly
integrated with the GENIE event generation framework by means of developing an appropriate GENIE geometry driver.
\subsection{Event generation outputs}
The generated events are stored in the ROOT file format. The typical output of an event generation run is a single
ROOT file which contains an event tree with a single branch and a single leaf per event containing the generated
\em GHEP \em record.
User-defined branches to write out experiment-specific information
may be added to that tree with the user-defined information `linked' to the corresponding generated neutrino event.
The output ROOT file contains directories storing all GENIE configuration information for the MC run and a snapshot
of the running environment for later reference.
GENIE provides a persistency manager object which can be employed within the event generation driver applications
to write out the event tree.
\subsection{Event generation applications}
GENIE is distributed with many event generation applications and utilities,
many of which are straightforward wrappers for the
components described above.
Users interact with these applications through simple command-line interfaces,
user-created XML configuration files, and environmental variables.
The \em gevgen \em application can be used in simulating given initial states
for benchmarking calculations or simple
experimental setups for which histogram-based flux descriptions and simple geometry descriptions in terms
of a target mix are adequate. Experiment-specific event generation applications,
such as \em gT2Kevgen \em and \em gNuMIevgen \em,
employ the detailed JPARC and NuMI beam-line simulations and the ROOT-based detector geometry descriptions of
the corresponding experiments and are used by a large fraction of the GENIE user base.
On a MacBook Pro running Mac OS X 10.5.6, with a 2.16 GHz Intel Core 2 Duo and
1 GB DDR2 SDRAM at 667 MHz, and with all event generation threads enabled,
GENIE simulates around 70 events/sec for a $\nu_{\mu} + {}^{56}\mathrm{Fe}$ initial state at $E_{\nu} = 1$~GeV.
The speed is 5 events/sec with the detailed nd280 ROOT-based detector geometry description
(40 nuclear targets) and the detailed JNUBEAM-based JPARC neutrino beam simulation
(4 neutrino species).
\subsection{Utilities}
Event generation for a realistic experimental setup typically requires of the order of $10^{8}$--$10^{9}$ differential
cross section evaluations just in order to select an interaction to be generated.
It is therefore very practical to perform the numerical
differential cross section integrations at a different stage and save the data for building cross section splines.
The event generation components can recycle these splines for performing fast numerical interpolations,
greatly improving the event generation efficiency.
GENIE provides a utility, \em gmkspl\em, to generate all required cross section splines for the intended set of neutrino
flavors and nuclear targets over the required energy range and write the data in the XML format expected by the event
generation components.
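The pay-off of the spline approach can be illustrated with a simple sketch; a linear interpolation over a uniform energy grid stands in here for GENIE's actual spline machinery and XML storage:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the spline idea: pre-compute the (expensive) cross section
// on an energy grid once, then answer the very many subsequent queries
// by fast interpolation. Class name and layout are illustrative only.
class XSecSpline {
 public:
  // Tabulate f(E) at n+1 uniform knots on [Emin, Emax].
  template <class F>
  XSecSpline(F f, double Emin, double Emax, int n)
      : fEmin(Emin), fStep((Emax - Emin) / n) {
    for (int i = 0; i <= n; ++i) fY.push_back(f(Emin + i * fStep));
  }
  // Linear interpolation between the two bracketing knots.
  double Evaluate(double E) const {
    int i = static_cast<int>((E - fEmin) / fStep);
    if (i < 0) i = 0;
    if (i + 1 >= static_cast<int>(fY.size()))
      i = static_cast<int>(fY.size()) - 2;
    double t = (E - fEmin - i * fStep) / fStep;
    return (1 - t) * fY[i] + t * fY[i + 1];
  }
 private:
  double fEmin, fStep;
  std::vector<double> fY;
};
```

The expensive integration is paid once per (flavour, target, process) at tabulation time; each event generation query then costs only a table lookup.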
For many use-cases it is convenient to analyze the output \em GHEP \em ROOT event tree and either write out simpler, flat
ntuples containing summary information or convert it to a format expected by an experiment-specific application, such as
a detector-level simulation that doesn't use the GENIE I/O.
GENIE provides an event tree converter, \em gntpc \em, which writes out a host of alternative plain text, XML or bare-ROOT
formats currently in use by GENIE-based applications in client experiments.
Several tools exist for the purposes of validating and tuning the physics models in GENIE.
The ultimate source of data for many of these comparisons is the DURHAM Neutrino Scattering
Data Resource \cite{Whalley:2004sz}, an online resource with large compilations of neutrino data from
many different experiments.
Access to this data in the GENIE package is done through the NuValidator \cite{Andreopoulos:2005vd}
program, a GENIE add-on that can
be optionally installed. Distributed with the NuValidator in XML format are
data from the DURHAM database, electron
scattering data from the Jefferson Lab database \cite{e49a10}, and publicly available
lepton scattering structure function data \cite{Gehrmann:1999xn}.
Electron scattering data and lepton structure function
data are important in evaluating any scheme for handling the perturbative / non-perturbative
transition region described in Sec. \ref{sec:transition} \cite{Gallagher:2004nq}.
This data is then available for a variety of applications. The NuValidator includes a simple GUI
allowing one to select data to display together with the GENIE prediction, with full ability to set model
parameters to new values.
The data can also be accessed for physics model parameter fits,
possibly including new experimental data as well as external, historical data.
In addition to event generation and validation the flexibility of the GENIE framework
also simplifies many downstream analysis tasks. The evaluation of generator-related
systematic errors can be greatly simplified through event reweighting
\cite{dobson-ladek}.
As mentioned in Sec. \ref{sec:core},
GENIE's ability to allow algorithm configuration using local pools, redundancy of
information between \em GHEP \em event records and \em Interaction \em objects,
and ability to identify algorithms/configurations in generated output
all support the development of experiment-specific event reweighting programs.
Details on the application described above and on a host of other utilities
may be found in the GENIE Physics and User Manual \cite{genie-doc-753}.
\section{Collaboration Organization}
\label{sec:collab}
A significant organizational challenge for the GENIE project is defining the
working relationship with experimental collaborations.
Having many experiments use the same neutrino
event generator is a new development in this field and the exact nature of these
working relationships will evolve over time.
However some realities and goals are clear.
Good two-way communication is essential both for immediate issues like
bug reporting and support and for longer term issues like planning of upcoming
physics releases. Experiments often set target dates to begin production
of Monte Carlo samples based on publication plans and expectations for data-taking.
Meeting these deadlines is often a high priority for these collaborations. Since the
time-scale for production of large Monte Carlo samples is roughly similar to the
timescale for production of new GENIE physics releases, it will be desirable to
synchronize, or at least be cognizant of, upcoming experimental deadlines to
as large a degree as possible. Discussions with collaborations will also focus
on priorities for physics model improvements.
One conduit for these collaborations will be through experimental liaisons who serve
as the main contact within an experiment on GENIE issues.
This person can then report on the experiment's experiences and deadlines,
and can help present the experiment's priorities for model improvements to the
GENIE collaboration for evaluation. When effort within the GENIE collaboration to
incorporate new model work is not available, these liaisons can assist in providing
and organizing effort from their collaborations.
Physics model development is partitioned into subtasks including the cross section model,
the hadronization model, and the intranuclear rescattering model. These components
are all relatively self-contained and have validation procedures independent of the others.
Overall tuning and validation of a production release is the responsibility of a separate
working group of the collaboration. This task is undertaken on a roughly yearly timescale.
This exercise finalizes the model set and determines values of all parameters based on
fits to external data.
This process also determines parameter errors which can then be used by experiments in the
evaluation of generator-related systematic errors.
A default tune (a self-consistent set of physics models and parameter values)
will be specified by the collaboration for every major release, along with
information about the uncertainties on model parameters and possible correlations.
It is possible for experiments, using the validation and parameter fitting tools
that are provided as part of the GENIE package,
to have their own `tuning' of the generator. This would be desirable from
an experiment's perspective, as it may have access to new data that, due to the aforementioned
uncertainties, will probably not be in complete agreement with the default tuning.
It is hoped that the development of new models by specific collaborations as part of
ongoing analyses will be speedily adopted
into GENIE for the benefit of other users, though the decision
of when to make their new models public will be left to the discretion of the
user collaborations. It will, however,
be important that model improvements be merged back
into the default tuning on a regular basis so as to prevent the fragmentation of the
base set of physics models into many different experiment-tuned versions which then
evolve independently over many years.
The collaboration organizes its effort internally through occasional meetings
of physics model and tuning/validation working groups, phone meetings, blogs,
and web-based document databases.
The GENIE web site \cite{GENIE} is the central repository
for all information related to the package.
An extensive Physics and User Manual \cite{genie-doc-753}, a web-based source
code Reference Manual \cite{GENIE:Doxygen} as well as a support mailing list
are available to GENIE users.
Hands-on tutorials on GENIE have been given on several occasions at
meetings in the U.S., Europe, and Japan, and material from these workshops, with
introductory and advanced tutorial examples, is available on the GENIE web site.
\section{Code Availability}
\label{sec:avail}
\subsection{Version control and distribution}
GENIE is available from its CVS repository hosted at STFC's Rutherford Appleton Laboratory,
from which one can access the development version and a series of `frozen' production-quality releases.
The repository is physically located on AFS space and, in read-only mode,
can be accessed anonymously. Read-write mode access to the code repository
is provided to the GENIE collaborators via SSH and public key authentication.
A transition to a Subversion repository is planned for the near future.
Up-to-date details on how to access the source code are given on the
GENIE web site \cite{GENIE} and in the Physics and User Manual \cite{genie-doc-753}.
\subsection{Supported platforms and external dependencies}
GENIE is known to build on many platforms, including all popular Linux
distributions and Mac OS X,
and has no proprietary OS dependencies.
As of GENIE v2.5.1, external dependencies for a minimal installation that can
be used for physics MC production include the
\emph{ROOT Class Libraries} \cite{Brun:1997pa},
the \emph{LHAPDF parton density function library} \cite{Whalley:2005nh},
the \emph{PYTHIA-6} LUND MC \cite{Sjostrand:2006za}
and two fairly common utilities:
the \emph{libxml2} XML parser and the \emph{log4cpp} error logger.
\subsection{Versioning scheme and release lifetime}
GENIE versions are numbered as `$i.j.k$', where $i$, $j$ and $k$ are the
major, minor and revision indices respectively. The corresponding CVS tag is `$R-i\_j\_k$'.
When a number of significant functionality improvements or additions have been made,
the major index is incremented.
The minor index is incremented in case of significant fixes or minor feature additions.
The revision number is incremented for minor bug fixes and updates.
Versions with even minor number correspond to validated, physics production releases.
Versions with odd minor number correspond to the development version of `candidate'
releases tagged during the validation stage preceding a physics production release.
Tagged versions always have an even revision number. Odd revision numbers correspond to
the CVS head.
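The numbering convention above can be captured in a short illustrative sketch (not part of the GENIE distribution; the function name is hypothetical):

```python
def classify_genie_version(version):
    """Classify a GENIE version string 'i.j.k' per the scheme described above.

    Even minor index -> validated physics production release;
    odd minor index  -> candidate/development release.
    Even revision    -> tagged version (CVS tag 'R-i_j_k');
    odd revision     -> CVS head.
    """
    major, minor, revision = (int(x) for x in version.split("."))
    kind = "production" if minor % 2 == 0 else "candidate"
    tagged = revision % 2 == 0
    cvs_tag = "R-%d_%d_%d" % (major, minor, revision) if tagged else None
    return kind, tagged, cvs_tag

print(classify_genie_version("2.4.0"))  # ('production', True, 'R-2_4_0')
print(classify_genie_version("2.5.1"))  # ('candidate', False, None)
```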
Because of the effort invested by the experimental communities in generating large
samples and understanding the impact of simulation changes on their physics results,
physics production releases are nominally supported for approximately 18--24 months.
\subsection{License}
GENIE is now distributed under the GPLv3 license agreement \cite{GPLv3}.
\section{Conclusions}
GENIE provides a modern and versatile platform for
a universal, `canonical' Neutrino Interaction Physics Monte Carlo
whose validity will extend to all nuclear targets and neutrino flavors over a
wide range of energies from MeV to PeV scales.
Currently, it includes state-of-the-art neutrino interaction physics modeling in the few-GeV
energy range which is relevant for the current and near future
long-baseline precision neutrino experiments using accelerator-made beams.
The software was designed using object-oriented methodologies and developed entirely in
C++ over a period of more than three years, from 2004 to 2007.
The design of the package decouples the mechanics of event generation from
the physics models, providing modularity, extensibility, and flexibility. The package supports
the full life-cycle of generator-related activities,
from event generation using detailed detector geometries and flux information,
to final analysis tasks such as reweighting and systematic error evaluation.
The data, programs, and procedures used to validate and tune the package are all distributed with
the package itself, allowing users to easily extend the package and evaluate new
models.
The project is supported by a group of physicists from all major experiments
operating in this energy range, establishing GENIE as a major HEP event generator collaboration.
GENIE has already been adopted by many neutrino experiments, including
those using the JPARC and NuMI neutrino beamlines, and will be an important physics tool
for the worldwide accelerator neutrino program.
\section*{Acknowledgements}
This work was supported by the UK Science and Technology Facilities Council / Rutherford Appleton Laboratory,
the US Department of Energy, the US National Science Foundation, and the Tufts University Summer Scholars Program.
We would like to express our gratitude to
G.Irwin (Stanford),
B.Viren (Brookhaven Lab),
S.Kasahara (Minnesota) and
N.West (Oxford)
for contributing to the early stages of the evolution of GENIE
through example, by developing the MINOS offline framework, and
through their inputs and criticisms during the GENIE design reviews.
We would also like to thank our MINOS collaborators, in particular
R.Gran (UMD),
K.Hofmann (Tufts),
M.Kim (Pittsburgh),
M.Kordosky (UCL),
W.A.Mann (Tufts),
J.Morfin (Fermilab) and
S.Wojcicki (Stanford),
for their contributions in the development, tuning and validation of
the default set of physics models in GENIE.
We would also like to thank
E. Paschos (Dortmund),
S. Mashnik (Los Alamos),
A.Bodek (Rochester),
O.Lalakulich (Dortmund, Giessen) and
T.Leitner (Giessen)
for providing important models and results.
We also express our gratitude and recognition to
C.Reed (NIKHEF),
K.Scholberg, C.Little, R.Wendell (Duke),
A.Habig, R.Schmidt (UMN Duluth) and
D.Markoff (NCCU)
for their ongoing effort
to extend the GENIE validity range down to the MeV energy scale.
Also we would like to thank
C.Backhouse (Oxford),
S.Boyd (Warwick),
J.J. Gomez Cadenas (IFIC Valencia),
Y.Hayato (ICRR),
J.Holeczek (Silesia),
Z.Krahn (Minnesota),
H.Lee (Rochester),
J.Lagoda (L'Aquila),
S.Manly (Rochester),
B.Morgan (Warwick),
D.Orme (Imperial),
G.Perdue (Fermilab),
T.Raufer (RAL),
D.Schmitz (Fermilab),
E.Schulte (Rutgers),
J.Sobczyk (Wroclaw),
A.Sousa (Oxford),
J.Spitz (Yale),
P.Stamoulis (Athens),
R.Tacik (Regina),
H.Tanaka (UBC),
I.Taylor (Imperial),
R.Terri (QMUL),
V.Tvaskis (Victoria),
Y.Uchida (Imperial),
S.Wood (JLAB),
S.Zeller (LANL),
L.Zhu (Hampton)
and many others who have used early versions of GENIE
for their help in improving the build system, fixing bugs, and
contributing comments and tools.
\bibliographystyle{elsart-num}
\hyphenation{Post-Script Sprin-ger}
An intra-articular injury is an injury within the joint cavity. The fracture line lies either partially or completely inside the joint. With such an injury, not only the bone but the entire joint becomes inflamed.
Signs of an intra-articular injury include swelling, very severe pain, and impaired function of the affected joint. In most cases, displaced fragments are not detected immediately. If the intra-articular injury involves one of the large joints, bleeding into the joint may occur.
Treatment should take place in a trauma department, where doctors attempt to restore the fragments as fully as possible. The appropriate treatment depends on the specific intra-articular injury. If the fragments are displaced, doctors use either skeletal traction or surgical treatment; in most cases, treatment is surgical.
Insufficient or incomplete treatment can lead to improper bone healing and even more serious problems. In more than 90% of cases treatment proceeds without complications, and with proper care almost no trace of the injury remains within 3 months.
References
Medicine
Surgery
Traumatology
Injuries
\section{Introduction}
\label{Introduction}
Over the last several years, the cryptocurrency market has attracted substantial interest from both institutional and retail investors. The market has experienced significant growth in total assets, accompanied by commensurate levels of volatility. Following the COVID-19 and BitMEX market crash in early 2020, cryptocurrency prices staged a strong rally and then a subsequent market-wide decline. Given the differing views on their long-term viability, understanding the changing nature of cryptocurrencies' returns and volatility as the market grows in size is a timely priority for investors and policymakers alike. Our paper meets this purpose by analysing the collective dynamics of cryptocurrencies over time with regard to returns, volatility, and market size.
Our paper builds on a long literature studying the dynamics of financial markets.
Across several fields, researchers have been interested in the time-varying nature of financial market behaviours for some time, particularly correlations \cite{Fenn2011,Mnnix2012,Vicente2006,Wang2013,GangJinWang2012,Mikiewicz2021,Pan2007}. The application of classical statistical techniques, such as ARCH and GARCH models \cite{Lamoureux1990,Chu2017,Kumar2019} and other parametric models, can encounter difficulties due to the non-stationary nature of financial markets. Certain models for descriptive analysis or portfolio selection that perform well during extended bull market periods can suffer during sudden market crises. The literature has split into two approaches to this problem. Statistical researchers have introduced explicit methodologies to model non-stationarity \cite{Dahlhaus1997}, while nonlinear dynamics researchers have taken a more descriptive approach to the time-varying dynamics.
In the nonlinear dynamics community, such market dynamics have been studied with a range of methodologies, including chaotic systems \cite{Cai2012,Tacha2018,Szumiski2018}, clustering \cite{Heckens2020,Jamesfincovid}, sample entropy \cite{Wu2021,Chen2021}, and principal components analysis \cite{Laloux1999,Kim2005,JamescryptoEq}. Various asset classes have attracted interest, including equities \cite{Wilcox2007}, fixed income \cite{Driessen2003}, and foreign exchange \cite{Ausloos2000}. Researchers have also explored extreme behaviours \cite{Qi2019} and the propagation of structural breaks \cite{Telli2020,James2021_crypto} in price and volatility time series. The evolutionary nature of volatility, often studied via the framework of volatility clustering and regimes, has been of interest for many years \cite{Shah2019,Kirchler2007,Baillie2009,Hamilton1989,Lavielle,arjun,Lamoureux1990,Guidolin2007,Yang2018JEDC,deZeeuw2012,Kumar2019}. Overall, such financial research uses many of the same techniques from time series analysis that are used in other domains \cite{Hethcote2000,James2021_virulence,Vazquez2006,Mendes2018,Mendes2019,Rizzi2010,Shang2020,jamescovideu,Machado2020,James2021_geodesicWasserstein}.
In recent years, substantial research has focused specifically on the unique dynamics of cryptocurrencies. Various topics of interest include Bitcoin and other cryptocurrencies' price dynamics \cite{Chu2015,Lahmiri2018,Kondor2014,Bariviera2017,AlvarezRamirez2018}, fractal patterns, \cite{Stosic2019,Stosic2019_2,Manavi2020,Ferreira2020}, cross-correlation and scaling effects \cite{Drod2018,Drod2019,Drod2020,Gbarowski2019,Drod2021_entropy,Wtorek2021_entropy}. A great deal of this work has addressed how these dynamics have changed over time, in particular during market crises such as the COVID-19 market crash \cite{Wtorek2020,Corbet2020,Conlon2020,Conlon2020_2,Ji2020,Lahmiri2020,Zhang2020finance,He2020,Zaremba2020,Akhtaruzzaman2020,Okorie2020,Naeem2021,Curto2021,james2021_mobility}.
This paper prioritises understanding the changing nature of time-varying parameters that describe the market dynamics of cryptocurrencies. We are particularly interested in a descriptive analysis of these non-stationary dynamics and an identification of different patterns of behaviour, particularly around crises. Our primary parameters of interest are cryptocurrency returns $R(t)$, volatility $\Sigma(t)$, which have been well-studied and known to be highly non-stationary, as well as market size $M(t)$. This latter quantity is particularly relevant to a nascent market such as cryptocurrencies to determine if market dynamics change meaningfully as the market develops.
This work aims to study the changing dynamics of the cryptocurrency market, incorporating and building on much of the aforementioned literature. We take an interest both across our entire 2.5-year window of analysis as well as in several distinct periods, which are outlined in Section \ref{data}. In Section \ref{Market_correlations}, we begin by building on the rich literature studying cryptocurrencies' significant correlations, proposing a new framework to analyse the correlation structure of the market. In Section \ref{Collective_dynamics_fragmentation}, we investigate the relationship between the collective dynamics of the market and the time-varying total market capitalisation of all cryptocurrencies. From there, in Section \ref{return_vol_size_inconsistency}, we study the relationship between market size and returns and volatility, demonstrating greater quantitative consistency between market size and volatility than between size and returns. Finally, Section \ref{Volatility_persistence_regime_identification} introduces a new technique for understanding the changing spread of volatility across the cryptocurrency market as a whole, terming this \emph{volatility dispersion}. There we introduce a new quantity $\text{Var}(\mathbf{p}(t))$ to describe the spread of volatility across the market, which we also observe to be highly non-stationary. Overall, we propose a new methodology to understand the evolution of correlation structures, reveal new insights about the relationship between market size and attributes such as collective dynamics, returns and volatilities, and propose a new way of understanding the spread of volatility over time. Our insights are summarised in Section \ref{Conclusion}.
\section{Data}
\label{data}
In the proceeding sections, we analyse cryptocurrency data between 01-01-2019 and 30-06-2021. We study 52 cryptocurrencies that possess sufficient histories. In Section \ref{Market_correlations}, we partition our analysis into five discrete periods to explore correlation behaviours at varying times. These periods are defined as follows:
\begin{enumerate}
\item Pre-COVID: 01-01-2019 to 28-02-2020;
\item Peak COVID: 01-03-2020 to 30-05-2020;
\item Post-COVID: 31-05-2020 to 31-08-2020;
\item Bull: 01-09-2020 to 14-04-2021;
\item Bear: 15-04-2021 to 30-06-2021.
\end{enumerate}
Cryptocurrency data are sourced from \url{https://coinmarketcap.com/}. A full list of cryptocurrencies studied in this paper is available in Appendix \ref{appendix:mathematical_objects}.
\section{Temporal evolution of market correlation}
\label{Market_correlations}
Like many asset classes in financial markets, cryptocurrency returns are characterised by distributions that exhibit significant tail risk. In particular, their relative infancy and polarising views of the asset's long-term viability make their price behaviour more susceptible to extreme volatility and erratic behaviours \cite{James2021_crypto}. Figure \ref{fig:market_log_returns} displays the general volatility in log returns, and Figure \ref{fig:market_log_returns_distribution} highlights the negative skew in the returns distribution throughout our analysis window.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Market_log_returns.png}
\caption{}
\label{fig:market_log_returns}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Market_log_returns_distribution.png}
\caption{}
\label{fig:market_log_returns_distribution}
\end{subfigure}
\caption{Cryptocurrency market log returns, depicted (a) as a function of time and (b) as a distribution.}
\label{fig:Market_returns_distribution}
\end{figure*}
Prior research has demonstrated that correlations are most strongly positive during market crises, where many investors engage in the systematic sale of assets \cite{Sandoval2012,james2021_MJW}. This may be due to the growth of quantitatively driven asset managers and their use of algorithms that may induce simultaneous indiscriminate selling. That is, market systematic behaviours, particularly during bear markets, appear to be more pronounced due to an \emph{algorithmic herd mentality}. Behaviour has been particularly volatile since 2020, where after the COVID-19 and BitMEX crash, the cryptocurrency market experienced an unprecedented rise (and subsequent fall) in total market capitalisation. With all this in mind, our objective is to study the temporal evolution of the market's correlation structure and contrast the market's collective similarity between different periods.
\subsection{Analysis of collective correlation over time}
\label{sec:evolutionarycorrelation}
Let our period of analysis 01-01-2019 to 30-06-2021 be indexed $t=0,1,...,T$, where $T=911$. Let $c_i(t), i=1,...,N, t=0,...,T$ be the multivariate time series of cryptocurrency daily closing prices. We first generate a multivariate time series of log returns, $R_i(t), t=1,...,T$, as follows:
\begin{align}
\label{eq:logreturns}
R_{i}{(t)} &= \log \left(\frac{c_i{(t)}}{c_i{(t-1)}}\right).
\end{align}
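As a concrete sketch of this transformation (using synthetic prices, not the dataset analysed in the paper), the log-return computation amounts to a single vectorised operation:

```python
import numpy as np

# Synthetic daily closing prices c_i(t) for N = 3 assets over T + 1 = 6 days.
prices = np.array([
    [100.0, 102.0, 101.0, 105.0, 104.0, 108.0],
    [ 10.0,   9.5,   9.8,  10.2,  10.1,  10.4],
    [  1.0,   1.1,  1.05,   1.2,  1.15,   1.3],
])

# R_i(t) = log(c_i(t) / c_i(t-1)): differencing the log prices along time
# yields the N x T matrix of daily log returns.
log_returns = np.diff(np.log(prices), axis=1)

print(log_returns.shape)  # (3, 5)
```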
Our primary objects of study in this section are correlation matrices of log returns data across specified periods. These periods may roll forward with time or remain static. Let $a \leq t \leq b$ be such a period of analysis, an interval of $S=b-a + 1$ days. We standardise the cryptocurrency log returns over such a period by defining $\tilde{R}_i(t) = [R_i(t) - \langle R_i \rangle] / \sigma(R_i), a \leq t \leq b$, where $\langle . \rangle $ is an average and $\sigma(.)$ is a standard deviation operator, each computed over the same interval. The correlation matrix $\Psi$ is then defined as follows: let $\tilde{R}$ be a $N \times S$ matrix defined by $\tilde{R}_{it}=\tilde{R}_i(t), i=1,...,N, t=a,...,b$ and define
\begin{align}
\label{eq:corrmatrix}
\Psi = \frac{1}{S} \tilde{R} \tilde{R}^T.
\end{align}
Explicitly, individual entries are defined by
\begin{align}
\label{eq:rhodefn}
\Psi_{ij}=\frac{\sum_{t=a}^b (R_i(t) - \langle R_i \rangle)(R_j(t) - \langle R_j \rangle)}{\left(\sum_{t=a}^b (R_i(t) - \langle R_i \rangle)^2 \sum_{t=a}^b (R_j(t) - \langle R_j \rangle)^2\right)^{1/2}},
\end{align}
for $1\leq i, j\leq N$. All entries $\Psi_{ij}$ lie in $[-1,1]$. If we wish to explicitly note the interval over which these are defined, we may denote this matrix $\Psi^{[a:b]}.$ To quantify the total strength in correlation behaviours across the market, we compute an appropriately normalised $L^1$ norm of the matrix $\Psi$. That is, let
\begin{align}
\label{eq:L1norm}
\|\Psi\|_1 = \frac{1}{N^2} \sum_{i,j=1}^N | \Psi_{ij}|.
\end{align}
This gives the average absolute correlation of all cryptocurrencies over the interval $a \leq t \leq b$. To explore the temporal evolution of collective strength in correlation behaviours, we examine the changes in matrix $\Psi$ as our interval $[a,b]$ rolls forward. Specifically, we set $S=90$ and compute the time-varying $L^1$ norm of a $90$-day rolling window, $\nu^{\Psi}(t) = \|\Psi^{[t-S+1:t]}\|_1$. We also apply a \emph{Savitzky-Golay} filter to produce a smoothed function $\nu_s^{\Psi}(t)$. We then apply a recently introduced turning point algorithm \cite{james2020covidusa}, detailed in Appendix \ref{appendix:TPA}, to generate a set of non-trivial local maxima and minima in the total correlation behaviours. While some previous work explores the evolution of the first eigenvalue \cite{JamescryptoEq}, our methodology is the first we know of to study the $L^1$ norm of the correlation matrix and apply a bespoke turning point algorithm to study non-trivial peak and trough propagation. The norm function $\nu^{\Psi}(t)$, its smoothed analogue $\nu_s^{\Psi}(t)$, and the detected turning points are all displayed in Figure \ref{fig:Correlation_matrix_norm}.
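The rolling-norm construction above can be sketched as follows. The synthetic returns, and the Savitzky-Golay window length and polynomial order, are assumptions chosen for illustration; the paper does not specify the filter settings:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
N, T, S = 5, 300, 90                      # assets, days, window length
R = rng.normal(size=(N, T))               # synthetic log returns

def l1_corr_norm(window):
    """Normalised L^1 norm of the correlation matrix Psi of an N x S window."""
    R_tilde = (window - window.mean(axis=1, keepdims=True)) \
        / window.std(axis=1, keepdims=True)
    Psi = R_tilde @ R_tilde.T / window.shape[1]
    return np.abs(Psi).sum() / Psi.shape[0] ** 2

# nu(t) = ||Psi^{[t-S+1:t]}||_1 for each 90-day rolling window.
nu = np.array([l1_corr_norm(R[:, t - S + 1 : t + 1]) for t in range(S - 1, T)])

# Smooth with a Savitzky-Golay filter (window 31, order 3: illustrative choices).
nu_smooth = savgol_filter(nu, window_length=31, polyorder=3)
```

Since the diagonal of $\Psi$ consists of ones, the normalised $L^1$ norm is bounded below by $1/N$ and above by 1, which provides a quick sanity check on the output.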
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Rolling_crypto_correlation_norm.png}
\caption{
Time-varying correlation matrix norms, $\nu^{\Psi}(t)$ and its smoothed counterpart $\nu_{s}^{\Psi}(t)$, as defined in Section \ref{sec:evolutionarycorrelation}. Six non-trivial local maxima and minima are annotated, detected at 03-02-2020, 08-05-2020, 13-08-2020, 29-11-2020, 13-04-2021 and 30-06-2021.
}
\label{fig:Correlation_matrix_norm}
\end{figure*}
Examining Figure \ref{fig:Correlation_matrix_norm} reveals six non-trivial local maxima and minima in the overall magnitude of correlations. Of particular note are the local maximum identified on 08-05-2020 and the minimum on 13-08-2020. These dates reflect the COVID-19 market crash and the subsequent recovery in the cryptocurrency market, respectively. Clearly, correlations are most significant during the crash and weakest during the subsequent growth in the market. The local maximum on 30-06-2021 reflects a significant increase in correlation behaviours during the latter part of our analysis window. This is most likely indicative of the aggressive sell-off in cryptocurrency assets from approximately mid-April 2021 until the present. Again, we see a striking pattern where the general growth of the cryptocurrency market is inversely related to the collective strength of correlations.
This is broadly consistent with general economic intuition. There is a substantial amount of literature in the quantitative finance and applied mathematics communities that highlights that an increase in correlation among financial securities is observed during times of crisis \cite{JamescryptoEq}. This corresponds to spikes in the first eigenvalue in the correlation matrix - indicating increases in collective market behaviour. When money flows out of the cryptocurrency market, through losses and sale of assets, this would lead to increased correlations and a spike in the first eigenvalue. Thus, the inverse relationship we identify is consistent with what one would expect based on prior research \cite{Fenn2011}.
However, our findings go further and may have fruitful implications for investment managers. Identifying such peaks and troughs in correlation behaviours may provide a decision support tool as to whether they are in a market environment where security selection is of greater or less importance. In market environments where correlations are strongly positive, it may be more difficult for managers to produce return streams that exhibit lower market beta. The approximate periodicity we observe in local maxima and minima may encourage cryptocurrency investors to engage in cyclical patterns of more bullish and bearish investing to avoid exposure in riskier periods.
\subsection{Comparison of particular periods}
\label{sec:intervalscorrelation}
To further elucidate the findings of the previous section, we partition our analysis window into five periods and analyse the correlations separately within each of these windows. Let $\Psi^{\text{Pre}}$, $\Psi^{\text{Peak}}$, $\Psi^{\text{Post}}$, $\Psi^{\text{Bull}}$ and $\Psi^{\text{Bear}}$ be the correlation matrices obtained across the entire time intervals specified in Section \ref{data}. In the notation of Section \ref{sec:evolutionarycorrelation}, $\Psi^{\text{Pre}}=\Psi^{[1:423]}$, and so on.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Correlation_distributions.png}
\caption{
Kernel density estimates of correlation matrix entries corresponding to our five time partitions: Pre-COVID, Peak COVID, Post-COVID, Bull and Bear. The Peak COVID correlation matrix has the highest average correlation, indicating extraordinarily high correlations during this period.
}
\label{fig:Correlation_matrix_distributions}
\end{figure*}
Figure \ref{fig:Correlation_matrix_distributions} displays kernel density estimates of the entries of our 5 correlation matrices. The findings are generally consistent with Figure \ref{fig:Correlation_matrix_norm}. The distribution of elements $\Psi_{ij}$ in the four matrices $\Psi^{\text{Pre}}$, $\Psi^{\text{Post}}$, $\Psi^{\text{Bull}}$ and $\Psi^{\text{Bear}}$ are highly similar. All four distributions have means in the range $0.38$ to $0.46$ and exhibit similar variability around their means. By contrast, $\Psi^{\text{Peak}}$ has a mean value of $0.78$ - highlighting the spike in correlations during the COVID-19 crisis. One finding of slight surprise is the Bear partition exhibiting correlations more like the Pre-COVID, Post-COVID and Bull windows than the Peak COVID window. It is quite possible that if the dates of the Bear period were brought closer to the present day, including beyond 30-06-2021, this distribution would yield a significantly higher average correlation score. The means and standard deviations of the entries of all five matrices are recorded in Table \ref{tab:table_correlation_matrix}.
The observed increase in correlations during market crises is consistent with prior published research. However, one new observation is the stark contrast in correlation behaviours in periods directly adjacent to an acute period of crisis. One could imagine that the distribution of cryptocurrency correlations transitions gradually downward following a market crisis. Instead, periods directly after such crises exhibit correlations that are more similar to those of relatively stable market conditions.
\begin{table}[ht]
\centering
\begin{tabular}{ |p{2cm}|p{2.8cm}|p{2.9cm}|}
\hline
Period & Mean of entries $\Psi_{ij}$ & Standard deviation \\
\hline
Pre-COVID & 0.456 & 0.164 \\
Peak COVID & 0.784 & 0.166 \\
Post-COVID & 0.421 & 0.182 \\
Bull & 0.383 & 0.135 \\
Bear & 0.421 & 0.182 \\
\hline
\end{tabular}
\caption{Mean and standard deviation of correlation matrix entries across the five periods, as analysed in Section \ref{sec:intervalscorrelation}.}
\label{tab:table_correlation_matrix}
\end{table}
\section{Collective dynamics and market size}
\label{Collective_dynamics_fragmentation}
In the previous section, we observed weaker correlation behaviours while the market grew and stronger correlation behaviours when the market was in crisis or contracting. In this section, we examine more closely the underlying market effect of collective dynamics and how this relates to the current market size.
First, we apply principal components analysis (PCA) to our time-varying correlation matrices. This procedure learns the linear map $\Omega$ such that our standardised returns matrix $\tilde{R}$ is transformed into a matrix of uncorrelated variables $Z$, that is, $Z = \Omega \tilde{R}$. The rows of $Z$ represent the principal components (PCs) of the matrix $\tilde{R}$, while the rows of $\Omega$ consist of the principal component coefficients. The matrix is ordered such that the first row is along the axis of most variation in the data. Its corresponding eigenvalue $\lambda_1$ is thus of substantial practical importance, quantifying the greatest extent of variance in the data. It has been referred to as representative of the collective strength of the market \cite{Fenn2011}. All subsequent PCs, subject to the constraint that they are mutually orthogonal, maximise the variance along their respective axes. Continuing this procedure effectively diagonalises the correlation matrix, $\Psi = E D E^{T}$, where $D$ is a diagonal matrix of eigenvalues and $E$ is an orthogonal matrix. By (\ref{eq:rhodefn}), $\Psi$ is a symmetric positive semi-definite matrix with all eigenvalues real and non-negative, so we may order them $\lambda_1 \geq ... \geq \lambda_N \geq 0$. Each $\lambda_i$ quantifies the extent of variance along the $i$th principal component axis. Thus, we may normalise the eigenvalues by defining $\tilde{\lambda}_i = \frac{\lambda_i}{\sum^{N}_{j=1} \lambda_j}$ to determine the proportion of all variance accounted for by each step in the PCA.
This quantity is related to the norm of the correlation matrix defined in Section \ref{sec:evolutionarycorrelation}. Indeed, by the spectral theorem, the largest (absolute value) eigenvalue of a symmetric matrix coincides with the matrix's \emph{operator norm} \cite{RudinFA}. That is,
\begin{align}
|\lambda_1|=\|\Psi\|_{op}=\max_{x \in \mathbb{R}^N - \{0\}} \frac{\|\Psi x\|}{\|x\|}.
\end{align}
Next, every diagonal entry of $\Psi$ is equal to 1, so the trace of $\Psi$ is equal to $N$, and thus $\sum^{N}_{j=1} \lambda_j=N$. Hence $\tilde{\lambda}_1 = \frac{1}{N} \|\Psi\|_{op}$. Thus both $\tilde{\lambda}_1$ and $\|\Psi\|_1$, as defined in (\ref{eq:L1norm}), are appropriately normalised norms, with values in $[0,1].$ Succinctly put, $\tilde{\lambda}_1$ is a normalised operator norm while $\|\Psi\|_1$ is a normalised $L^1$ norm. For the remainder of this section, our central object of study is the changing value of $\tilde{\lambda}_1$, which represents both the first proportion of explanatory variance, as well as a normalised operator norm of the matrix.
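The identity $\tilde{\lambda}_1 = \frac{1}{N} \|\Psi\|_{op}$ can be verified numerically with a minimal sketch, assuming synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, S = 6, 90
R = rng.normal(size=(N, S))
R_tilde = (R - R.mean(axis=1, keepdims=True)) / R.std(axis=1, keepdims=True)
Psi = R_tilde @ R_tilde.T / S             # correlation matrix, diagonal = 1

eigvals = np.linalg.eigvalsh(Psi)         # ascending; real and non-negative (PSD)
lam1_normalised = eigvals[-1] / eigvals.sum()

# trace(Psi) = N, so the eigenvalues sum to N and the normalised first
# eigenvalue equals lambda_1 / N, i.e. the normalised operator norm.
assert np.isclose(eigvals.sum(), N)
assert np.isclose(lam1_normalised, eigvals[-1] / N)
```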
Just as in Section \ref{sec:evolutionarycorrelation}, we set $S=90$ and analyse a 90-day rolling window. Let $\tilde{\lambda}_1(t)$ be the normalised first eigenvalue of the matrix $\Psi^{[t-S+1:t]}$. We plot this over our analysis window in Figure \ref{fig:Collective_strength_vs_market_size}. There are two primary findings of interest. First, consistent with earlier experiments, there is a spike in $\tilde{\lambda}_1(t)$ during early 2020. This reflects the highly correlated behaviours of cryptocurrencies and indiscriminate selling during the COVID-19 pandemic. Subsequently, $\tilde{\lambda}_1(t)$ declines until May 2021. This pronounced decline corresponds to the cryptocurrency bull market, where total cryptocurrency assets grew by several orders of magnitude. Next, there is a significant rise in $\tilde{\lambda}_1(t)$ towards the end of our analysis window, corresponding to higher collective strength in the market. This increase occurred contemporaneously with the aggressive sell-off in cryptocurrency assets, suggesting that there may be a relationship between the size of the market and the strength of the underlying collective dynamics. Broadly, the trajectory of $\tilde{\lambda}_1(t)$ is highly similar to that of $\nu^\Psi(t)$ as depicted in Figure \ref{fig:Correlation_matrix_norm}.
To investigate further, we quantitatively incorporate the changing size of the cryptocurrency market over time. Let $M_i(t), i=1,...,N, t=0,...,T$ be the multivariate time series of cryptocurrency market sizes over our analysis window. Due to the significant volatility exhibited by the market, we compute a rolling average of the entire market, defined by \\ $\tilde{M}(t) = \frac{1}{S} \sum^t_{k=t-S+1} \sum^N_{i=1} M_i(k), t= S,...,T$. We include the plot of this varying over time in the same figure, Figure \ref{fig:Collective_strength_vs_market_size}.
While previous work \cite{JamescryptoEq} has studied the first eigenvalue in isolation and compared its properties between cryptocurrency and equity markets, this work is the first we know of to examine its relationship with market size over time. In particular, we reveal an inverse relationship between the size of the market and the first eigenvalue of the correlation matrix $\tilde{\lambda}_1(t)$. To quantify this observation, we compute the correlation between the rolling size of the cryptocurrency market, $\tilde{M}(t)$, and the collective strength of the market, $\tilde{\lambda}_1(t)$. The correlation between these two series is computed to be $\rho^{\tilde{M}, \tilde{\lambda}_1} = -0.122$. While this cannot show causation, it suggests a possibility that as the market grows in size, the strength of collective dynamics may decline.
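A minimal sketch of the rolling market-size series $\tilde{M}(t)$ and its Pearson correlation with the eigenvalue series (synthetic inputs standing in for the real data; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, S = 5, 300, 90
M = rng.uniform(1e8, 1e10, size=(N, T + 1))   # synthetic market caps M_i(t)
lam1 = rng.uniform(0.3, 0.9, size=T + 1 - S)  # stand-in for lambda_1-tilde(t)

total = M.sum(axis=0)                         # total market cap on each day
# M-tilde(t) = (1/S) * sum_{k=t-S+1}^{t} total(k), for t = S,...,T:
# a moving average, computed here via a valid-mode convolution.
M_tilde = np.convolve(total, np.ones(S) / S, mode="valid")[1:]

rho = np.corrcoef(M_tilde, lam1)[0, 1]        # Pearson correlation coefficient
```

With the real series, `rho` would reproduce the reported value $\rho^{\tilde{M}, \tilde{\lambda}_1} = -0.122$; with the random stand-ins above it is simply some value in $[-1, 1]$.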
This would be a noteworthy finding with important implications for the future of the cryptocurrency market, especially given the divided views on the market's future viability. Suppose one of two contrived scenarios exists: a ``bull case'' where cryptocurrency prices recover and cryptocurrency becomes a systemically important asset class, and a ``bear case'' where prices continue to decline and cryptocurrencies lose the interest of institutional investors. Our findings indicate that in the bull case, behaviours may become increasingly fragmented and heterogeneous, and there may be opportunities for skilful security selection to generate portfolio alpha. In the bear case, where prices decline and the size of the market decreases, behaviours may become more homogeneous. This could mean fewer opportunities for alpha generation through security selection, as correlations would be strongly positive.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{MarketSize_vs_CollectiveStrength.png}
\caption{$\tilde{\lambda}_1(t)$, which represents the collective strength of the market and $\tilde{M}(t)$, which represents the size of the cryptocurrency market, as defined in Section \ref{Collective_dynamics_fragmentation}. An inverse relationship is observed with correlation $\rho^{\tilde{M}, \tilde{\lambda}_1} = -0.122$.}
\label{fig:Collective_strength_vs_market_size}
\end{figure*}
\section{Inconsistency analysis between market size, returns and volatility}
\label{return_vol_size_inconsistency}
We now extend our study of cryptocurrency market sizes to incorporate their relationship with returns and volatility behaviours individually. Specifically, we investigate the consistency among cryptocurrencies between the attributes of market size, returns and volatility, and how this changes over time. For this purpose, we define three distance matrices and appropriately normalise them.
Let our full period of analysis 01-01-2019 to 30-06-2021 be indexed $t=0,1,...,T$, where $T=911$. Let $M_i(t)$, $t=0,...,T$, be the multivariate time series of market size on each day and let $R_i(t), t=1,...,T$, be the multivariate time series of log returns, as defined by (\ref{eq:logreturns}). Let $S=90$ days and let $\sigma_i(t), t=S,...,T$ be the multivariate time series of 90-day rolling volatility. At each $t$, this is defined as the standard deviation of the log returns of the prior 90 days. Now, we may construct distance matrices for each $t=S,...,T$ as follows:
\begin{align}
\label{eq:marketdiff}
D^{M}_{ij}(t) = \frac{1}{S} \left| \sum^t_{k=t-S+1} [M_i(k) - M_j(k)] \right|; \\
\label{eq:returndiff}
D^{R}_{ij}(t) = \left|\sum^t_{k=t-S+1} [R_i(k) - R_j(k)] \right| ; \\
D^{\Sigma}_{ij}(t) = \left| \sigma_i(t) - \sigma_j(t) \right|.
\end{align}
Thus, $D^M(t), D^R(t)$ and $D^\Sigma (t)$, respectively, measure discrepancy between cryptocurrencies with respect to average market size, total returns, and rolling volatility, each over the 90-day period concluding on day $t$, just like our study of correlation norms and market size in Sections \ref{sec:evolutionarycorrelation} and \ref{Collective_dynamics_fragmentation}. We now convert these three distance matrices into affinity matrices whose elements lie in $[0,1]$:
\begin{align}
A^{M}_{ij}(t) = 1 - \frac{D_{ij}^{M}(t)}{\max_{kl}\{ D^{M}_{kl}(t) \}}; \\
A^{R}_{ij}(t) = 1 - \frac{D_{ij}^{R}(t)}{\max_{kl}\{ D^{R}_{kl}(t) \}}; \\
A^{\Sigma}_{ij}(t) = 1 - \frac{D_{ij}^{\Sigma}(t)}{\max_{kl}\{ D^{\Sigma}_{kl}(t) \}}.
\end{align}
These affinity matrices are appropriately normalised and can be compared directly to study the consistency between cryptocurrency market size and returns or volatility. We generate two \emph{inconsistency matrices} as follows:
\begin{align}
\text{INC}^{M,R}(t) =A^{M}(t) - A^{R}(t); \\
\text{INC}^{M,\Sigma}(t) =A^{M}(t) - A^{\Sigma}(t).
\end{align}
Larger absolute values of the entries of $\text{INC}^{M,R}$ indicate that the relationship between two cryptocurrencies regarding market size and returns is quite different, and analogously for $\text{INC}^{M,\Sigma}$. To study the degree of consistency between these attributes in totality across our collection, we compute the $L^1$ norm of the resulting inconsistency matrices and study how these norms evolve over time. That is, for $t=S,...,T$, we compute an analogous quantity as defined in (\ref{eq:L1norm}):
\begin{align}
\nu_{M,R}^{INC}(t) = \| \text{INC}^{M,R}(t) \|; \\
\nu_{M,\Sigma}^{INC}(t) = \| \text{INC}^{M,\Sigma}(t) \|.
\end{align}
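The full pipeline from distance matrices to inconsistency norms can be sketched in a few lines. This is a hedged illustration rather than the exact code used here: it assumes daily market sizes and log returns are arrays with one column per cryptocurrency, that the rolling volatilities have been precomputed, and that the $L^1$ norm of (\ref{eq:L1norm}) is the entrywise sum of absolute values.

```python
import numpy as np

def affinity(distance):
    """Normalise a non-negative distance matrix (with some non-zero entry)
    into an affinity matrix with entries in [0, 1]."""
    return 1.0 - distance / distance.max()

def inconsistency_norms(M, R, sigma, t, S=90):
    """L1 norms of INC^{M,R}(t) and INC^{M,Sigma}(t) for the window ending at day t."""
    # D^M(t): absolute difference of window-averaged market sizes
    m_avg = M[t - S + 1:t + 1].mean(axis=0)
    D_M = np.abs(m_avg[:, None] - m_avg[None, :])
    # D^R(t): absolute difference of cumulative log returns over the window
    r_tot = R[t - S + 1:t + 1].sum(axis=0)
    D_R = np.abs(r_tot[:, None] - r_tot[None, :])
    # D^Sigma(t): absolute difference of rolling volatilities on day t
    D_S = np.abs(sigma[t][:, None] - sigma[t][None, :])
    inc_MR = affinity(D_M) - affinity(D_R)
    inc_MS = affinity(D_M) - affinity(D_S)
    return np.abs(inc_MR).sum(), np.abs(inc_MS).sum()
```

Evaluating these two norms for each $t=S,...,T$ yields the pair of time series whose evolution is displayed in Figure \ref{fig:CCA_time}.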
Figure \ref{fig:CCA_time} displays the time-varying inconsistency norms. It is clear that throughout our period of analysis, there is greater consistency between volatility and size than between returns and size, as indicated by the smaller values of $\nu_{M,\Sigma}^{INC}(t)$. This is what one would expect in a more established asset class such as equities. For instance, large-cap equities typically exhibit lower volatility than small-cap equities, creating some consistency between market size and volatility. By contrast, returns and size are shown to be significantly more inconsistent, highlighting that the relationship between size and returns is less clear than that between size and volatility. Furthermore, it suggests that a cryptocurrency's size is by no means a good indicator of its expected future returns.
Previous work \cite{James2021_crypto} has used inconsistency matrices to study the extent of inconsistency between returns and volatility. There, we identified the most anomalous individual cryptocurrencies in these attributes. In this work, we take quite a different direction. First, our inconsistency matrices are time-varying. Next, rather than identifying individual cryptocurrencies, we study temporal trends in the collective extent of inconsistency by examining the inconsistency matrices' norms as a function of time. Further, we compare the inconsistency between returns and market size with that between volatility and market size, incorporating market size as a new attribute. Our finding is also new and of interest: the size-returns relationship is more inconsistent than size-volatility. Such a finding could be of great interest to risk managers looking to identify the factors or exposures their portfolios are most in need of diversifying away from.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Inconsistency_matrix_norms.png}
\caption{Time-varying inconsistency matrix norms between market size and returns, $\nu_{M,R}^{INC}$, and size and volatility, $\nu_{M,\Sigma}^{INC}$, as defined in Section \ref{return_vol_size_inconsistency}. The size-volatility inconsistency is essentially always lower, indicating more consistency in relationships between cryptocurrencies regarding their size and volatility than their size and returns.}
\label{fig:CCA_time}
\end{figure*}
\section{Temporal changes and dispersion of volatility}
\label{Volatility_persistence_regime_identification}
Having identified greater consistency in volatility and market size behaviours, we now more closely examine the structure of collective volatility over time. The behaviour of volatility and the general identification of regimes in financial markets is a topic of great interest. Many parametric statistical models, such as regime-switching models, assume a fixed number of volatility regimes for a candidate modelling problem. Often the selection of this number is quite arbitrary. As with all parametric models, if the assumptions are misspecified, the resulting estimates can be highly inaccurate.
We take a different approach to the analysis of collective volatility behaviours and detect a new phenomenon that we term \emph{volatility dispersion}. To do so, we set our window length to $S=90$ days. For each $t=S,...,T$, we consider the 90-day rolling volatilities $\sigma_i(t)$, as discussed in the previous section. For a fixed $t$, we normalise the vector $(\sigma_1(t),...,\sigma_N(t)) \in \mathbb{R}^N$ by its total sum to produce a probability vector $\mathbf{p}(t)$ that measures the concentration of volatility across the collection of cryptocurrencies. That is, let
\begin{align}
p_i(t) = \frac{\sigma_i(t)}{\sum_{j=1}^N \sigma_j(t)}.
\end{align}
For example, if $\mathbf{p}(t)=\frac{1}{N}(1,1,...,1) \in \mathbb{R}^N$, this indicates that the 52 cryptocurrencies have identical volatility measured over the past 90 days, while a value of $(1,0,...,0)\in \mathbb{R}^N$ indicates that all volatility is observed in the first currency, with none in any of the others. Thus, $p_i(t)$ is a measure not of the absolute size of volatility but of the proportional contribution of each cryptocurrency to the total volatility of the collection.
With these probability vectors $\mathbf{p}(t),t=S,...,T$, we define a $(T-S +1) \times (T-S +1)$ distance matrix using the Wasserstein distance between these distributions at all points in time. That is, let $d^W$ be the $L^1$-Wasserstein metric \cite{DelBarrio}, and let
\begin{align}
D^{vol}(s,t) = d^{W} (\mathbf{p}(s), \mathbf{p}(t)) \quad \forall s,t \in [S,...,T].
\end{align}
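Because each $\mathbf{p}(t)$ places equal weight $1/N$ on its entries, the $L^1$-Wasserstein distance between two such vectors reduces to the mean absolute difference of their sorted entries, which is automatically invariant to the ordering of the cryptocurrencies. The following sketch (an illustration under that assumption, not the exact implementation) builds the distance matrix $D^{vol}$:

```python
import numpy as np

def volatility_distribution(sigma_t):
    """Normalise a vector of rolling volatilities into a probability vector p(t)."""
    return sigma_t / sigma_t.sum()

def w1(p, q):
    """L1-Wasserstein distance between two equal-weight empirical distributions.

    Treating the entries of p and q as N atoms of weight 1/N each, the distance
    reduces to the mean absolute difference of the sorted entries."""
    return np.abs(np.sort(p) - np.sort(q)).mean()

def distance_matrix(P):
    """D^vol(s, t) for a (T - S + 1, N) array whose rows are the vectors p(t)."""
    n = P.shape[0]
    D = np.zeros((n, n))
    for s in range(n):
        for t in range(s + 1, n):
            D[s, t] = D[t, s] = w1(P[s], P[t])
    return D
```

Hierarchical clustering is then applied to the resulting symmetric matrix with zero diagonal.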
We then apply hierarchical clustering to our distance matrix, $D^{vol}(s,t)$, and study the resulting dendrogram in Figure \ref{fig:Volatility_dendrogram}. It is worth noting that the Wasserstein distance, unlike the $L^1$ norm between vectors, does not distinguish probability vectors based on their order; it is essentially a distance between vectors as sets (possibly with repetition). We make this choice because we are not interested in distinguishing periods where a particular cryptocurrency is highly volatile, but in whether the volatility is, broadly speaking, spread out or concentrated among the collection as a whole.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.91\textwidth}
\includegraphics[width=\textwidth]{_vol__wasserstein_Dendrogram.png}
\caption{}
\label{fig:Volatility_dendrogram}
\end{subfigure}
\begin{subfigure}[b]{0.91\textwidth}
\includegraphics[width=\textwidth]{Volatility_distribution_variance.png}
\caption{}
\label{fig:Volatility_variance}
\end{subfigure}
\caption{In (a), we perform hierarchical clustering on the matrix $D^{vol}$ between normalised vectors of rolling volatility. The dendrogram groups dates $s,t \in [S,...,T]$ according to similarity between their corresponding vectors $\mathbf{p}(s)$ and $\mathbf{p}(t)$. Two clusters are observed, with the secondary cluster associated to times $t$ observed during COVID-19. In (b), we elucidate this finding more closely, plotting the variance of the probability vector $\mathbf{p}(t)$ over time. The variance of this vector is lower during COVID-19, indicating vectors where all volatilities are closer together. That is, the total volatility of the market is spread out more over all the constituent cryptocurrencies. We term this increased volatility dispersion.}
\label{fig:Volatility_Wasserstein}
\end{figure*}
Figure \ref{fig:Volatility_dendrogram} highlights several interesting findings. First, two volatility clusters are identified: one dominant cluster of volatility behaviours (with two subclusters) and a smaller cluster that is highly different from the rest of the collection. The latter cluster consists of probability vectors $\mathbf{p}(t)$ generated at times within the COVID-19 market crash. Interestingly, this cluster does not display overwhelming self-similarity; instead, it is relatively diffuse but exhibits significant differences with the concentrated subcluster of the majority cluster. The anomalous behaviours of the COVID-19 market crash are further demonstrated by such profound distances to other (themselves highly variable) periods in the market over the past several years.
To further investigate the nature of this split, we perform a closer analysis of the probability vectors $\mathbf{p}(t)$ over time. Considering $\mathbf{p}(t)$ as a distribution over $[0,1]$, we compute the time-varying \emph{intra-volatility variance} $\text{Var}(\mathbf{p}(t))$. We display this as a function of time in Figure \ref{fig:Volatility_variance}. This supports the separation of behaviours seen in Figure \ref{fig:Volatility_dendrogram}. The COVID-19 market crash exhibits markedly lower variance among the individual rolling volatilities $p_i(t)$. Interestingly, this is also observed towards the end of our analysis window. A lower value of the variance of $\mathbf{p}(t)$ means that each individual cryptocurrency's contribution to the total volatility is more uniform across the collection. That is, the market's total volatility is more spread out among all the cryptocurrencies during the COVID-19 market crisis. While it is predictable that absolute volatility would spike during a crisis, it is an unexpected finding that there would be less deviation between the volatilities of individual cryptocurrencies: essentially, everything is similarly volatile together.
In Appendix \ref{sec:proofs}, we present two theoretical results on the distances $D^{vol}(s,t)$ between normalised volatility vectors and intra-volatility variance $\text{Var}(\mathbf{p}(t))$. These propositions identify the \emph{uniform distribution of volatility} $\mathbf{p}_0=\frac{1}{N}(1,1,...,1) \in \mathbb{R}^N$ and the \emph{one-shot distributions of volatility} $\mathbf{q}_k=(0,...,0,1,0,...,0)$ as the two extremal possible spreads of volatility. The uniform has the lowest intra-volatility variance, a one-shot has the greatest intra-volatility variance, and the greatest possible value of the Wasserstein distance is between the uniform and a one-shot distribution. These propositions demonstrate that, in a precise sense, our study of volatility dispersion investigates the extent that rolling volatility vectors sit between two extremes: the case where all the volatility of the market is uniformly distributed across every asset, and a case where all volatility is concentrated in a single asset.
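These extremal properties can be illustrated numerically. The sketch below assumes the sorted-entry form of the $L^1$-Wasserstein distance between equal-weight probability vectors and uses randomly drawn (Dirichlet-distributed) probability vectors; it is a sanity check of the propositions, not a proof.

```python
import numpy as np

N = 10
uniform = np.full(N, 1.0 / N)   # p_0: volatility spread evenly across all assets
one_shot = np.zeros(N)          # q_k: all volatility concentrated in one asset
one_shot[0] = 1.0

def w1(p, q):
    # L1-Wasserstein distance between two equal-weight empirical distributions
    return np.abs(np.sort(p) - np.sort(q)).mean()

# Closed-form values at the two extremes
assert np.isclose(np.var(uniform), 0.0)                      # minimal variance
assert np.isclose(np.var(one_shot), (N - 1) / N**2)          # maximal variance
assert np.isclose(w1(uniform, one_shot), 2 * (N - 1) / N**2) # extremal W1 pair

# Random probability vectors always sit between the two extremes
rng = np.random.default_rng(0)
for _ in range(1000):
    p, q = rng.dirichlet(np.ones(N), size=2)
    assert np.var(uniform) <= np.var(p) <= np.var(one_shot) + 1e-12
    assert w1(p, q) <= w1(uniform, one_shot) + 1e-12
```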
Fund managers are often warned of the potential issues when over-fitting models to study parameters such as volatility. The volatility dispersion framework we introduce in this section supports the ``Occam's Razor'' principle familiar to many in statistical learning: rather than trying to capture the full complexity of the volatility process, using just two regimes to capture low and high volatility periods may work best. However, the interpretations of volatility dispersion are new and different from those gained from traditional volatility clustering, indicating periods where volatility is relatively uniform and less avoidable regardless of asset selection.
\section{Conclusion}
\label{Conclusion}
In Section \ref{Market_correlations}, we study the time-varying evolution of correlations among our collection of cryptocurrencies, and explicitly compare the distribution of correlation coefficients over five discrete time windows. The most notable findings were the spike in correlations during the COVID-19 market crash and the drop in correlations during the subsequent bull market run in late 2020 to early 2021. Broadly, both experiments in this section allude to a clear association between strong collective correlation among cryptocurrencies and periods of declining value in the market.
In Section \ref{Collective_dynamics_fragmentation}, we investigate the aforementioned association more closely. First, we explore the time-varying explanatory variance of the correlation matrix's first eigenvalue $\tilde{\lambda}_1(t)$, where a noticeable spike is seen during the COVID-19 pandemic. Next, we directly compare $\tilde{\lambda}_1(t)$ with the rolling size of the cryptocurrency market, and show a negative correlation between them. This suggests that as cryptocurrency assets rise, the strength of collective dynamics may weaken. Thus, in a scenario where cryptocurrency assets rise significantly and the asset class gains further prominence, the strength of collective dynamics may decline, leading to more heterogeneous behaviours. This would place greater importance on high-quality security selection when investing in cryptocurrencies.
In Section \ref{return_vol_size_inconsistency}, our experiments reveal greater consistency in volatility and market size behaviours, wherein cryptocurrencies similar in market size are more likely to exhibit commensurate levels of volatility. By contrast, there is less consistency between cryptocurrency size and returns, suggesting that a cryptocurrency's size does not provide a good indication of its expected future returns.
Finally, in Section \ref{Volatility_persistence_regime_identification}, we study the structure of volatility behaviours over time, applying hierarchical clustering to the distances between distributions of rolling volatility at all points in time. Our technique suggests that there are two volatility patterns: times where the total volatility of the market is more dispersed across the entire collection, and times where it is more concentrated in a few particular cryptocurrencies. We reveal that the COVID-19 market crash not only features higher volatilities in general, but that the total volatility is more evenly spread across all individual cryptocurrencies. This technique could be used as an accompanying tool to estimate the number of regimes in more traditional, parametric regime-switching models in the econometric and statistical modelling literature. Volatility dispersion also provides independent confirmation of, and a new approach to studying, increased homogeneity of market dynamics during crises, complementing our study of collective correlations. It is effectively a more direct measure of assets being uniformly volatile together than the first eigenvalue of the correlation matrix.
Many possibilities exist for future work building upon the techniques and findings of this paper. First, one could study the associations between market size and the strength of collective dynamics in the cryptocurrency market with alternative methodologies. For example, suitable distances between normalised trajectories could replace the use of correlations. Second, one could compare the relationships studied here between the size of the cryptocurrency market and its underlying dynamics with those of more traditional asset classes. Third, given the cryptocurrency market's relative infancy, our findings may turn out to be transient; if the market continues to grow, perhaps the inverse relationship between total market size and collective dynamics will not hold in future crises.
Several of our new methodologies and findings may have particular promise in future research and applications. Our time-varying analysis of the total extent of inconsistency between parameters could reveal suitable predictors to incorporate in trading strategies either aimed at maximising returns or minimising volatility. Our volatility dispersion analysis is also promising. Unlike typical methods of volatility clustering or regime-switching models, we compare similarity between all windows of time, not just adjacent periods. In this paper, our analysis has been entirely descriptive, but future work could employ it in a predictive context where we expect volatility to be rather uniform across the market. Such times could prompt an entire withdrawal of funds to safe-haven assets such as gold or cash, as a uniform spread of volatility could mean any investment in the cryptocurrency market would carry significant risk. Finally, volatility dispersion could be applied to other financial and economic securities beyond cryptocurrencies. For instance, one could identify clusters of macroeconomic behaviour using data such as interest rates, GDP, inflation, unemployment, and others. One could explore the dispersion of these factors individually, or identify clusters of economic behaviour with a higher dimensional distance measure where a variety of metrics are incorporated.
Overall, this paper reveals several key relationships between cryptocurrencies' collective dynamics, market size, returns, and volatility and analyses these behaviours over time. During COVID-19 and towards June 30, 2021, correlation behaviours are stronger, and volatility is more uniformly spread across the entire market. Individual cryptocurrencies' market sizes are shown to be more consistent with volatility than returns, while the total market size is inversely associated with the quantifiable strength of collective dynamics. Both the lack of consistency between market size and returns and the high correlations across the market during crises present significant challenges to investors aiming to select optimal portfolios of cryptocurrencies on either a long or short-term basis.
\begin{acknowledgements}
The authors would like to thank Georg Gottwald for some helpful discussions.
\end{acknowledgements}
\section*{Declarations}
\textbf{Funding}: no specific funding was received for this manuscript.
\noindent \textbf{Conflicts of interest}: the authors have no conflicts of interest to report.
\noindent \textbf{Availability of data and material}: all data are publicly available at \url{https://coinmarketcap.com/}
\noindent \textbf{Authors' contributions}: each author is an equal first author, playing an equal role in every aspect of the manuscript.
Q: How to overload new/delete operators in VxWorks 7 with Gnu compiler I'm trying to create a VxWorks7 Image Project (VIP) that includes my application, which overloads new and delete. When I build the VIP and application separately, with the app as a Downloadable Kernel Module (DKM), it builds and runs fine by booting the VIP on the target and downloading the App DKM separately with Workbench4. However, if I try to build the VIP and the DKM together as a single bootable VIP, I get multiple definition errors for the new and delete operators from Workbench during the build, as follows:
C:/BW/Vehicle/builds/cx20X0Up32BitDebugVsb/krnl/gnu_standard\libgnucplus.a(_x_gnu_delaop.o): In function `operator delete[](void*)':
(.text+0x0): multiple definition of `operator delete[](void*)'
C:/BW/Vehicle/builds/Vehicle/cx20X0Up32BitDebugVsb_SANDYBRIDGEgnu/Vehicle_partialImage/Debug/Vehicle_partialImage.o:C:/BW/Alcatraz/Vehicle/src/IRL/Util/heap.cpp:886: first defined here
C:/BW/Vehicle/builds/cx20X0Up32BitDebugVsb/krnl/gnu_standard\libgnucplus.a(_x_gnu_delop.o): In function `operator delete(void*)':
(.text+0x0): multiple definition of `operator delete(void*)'
C:/BW/Vehicle/builds/Vehicle/cx20X0Up32BitDebugVsb_SANDYBRIDGEgnu/Vehicle_partialImage/Debug/Vehicle_partialImage.o:C:/BW/Alcatraz/Vehicle/src/IRL/Util/heap.cpp:841: first defined here
C:/BW/Vehicle/builds/cx20X0Up32BitDebugVsb/krnl/gnu_standard\libgnucplus.a(_x_gnu_newaop.o): In function `operator new[](unsigned int)':
(.text+0x0): multiple definition of `operator new[](unsigned int)'
C:/BW/Vehicle/builds/Vehicle/cx20X0Up32BitDebugVsb_SANDYBRIDGEgnu/Vehicle_partialImage/Debug/Vehicle_partialImage.o:C:/BW/Alcatraz/Vehicle/src/IRL/Util/heap.cpp:813: first defined here
C:/BW/Vehicle/builds/cx20X0Up32BitDebugVsb/krnl/gnu_standard\libgnucplus.a(_x_gnu_newop.o): In function `operator new(unsigned int)':
(.text+0x0): multiple definition of `operator new(unsigned int)'
C:/BW/Alcatraz/Vehicle/builds/Vehicle/cx20X0Up32BitDebugVsb_SANDYBRIDGEgnu/Vehicle_partialImage/Debug/Vehicle_partialImage.o:C:/BW/Alcatraz/Vehicle/src/IRL/Util/heap.cpp:808: first defined here
collect2.exe: error: ld returned 1 exit status
WindRiver support offered the solution of making the following declarations in the source file where the new and delete operators are overloaded. This is supposed to signal the compiler/linker to omit the library version of the new/delete operators.
int ___x_gnu_newaop_o = 1;
int ___x_gnu_newop_o = 1;
int ___x_gnu_delaop_o = 1;
int ___x_gnu_delop_o = 1;
Doing this I still get the same multiply defined errors as above and WindRiver support hasn't had any viable suggestions. Has anyone had experience trying to overload global ::new and ::delete in VxWorks7 using Gnu compiler?
Here is link to the issue on WindRiver Support 66370. Not sure if it has public access.
A: I've run into a similar situation when redefining malloc/free functions for debug purposes. Perhaps my solution is crude, but it's simple and efficient: I just renamed the standard functions to "malloc_original" and "free_original". Thus all calls to malloc and free were linked to the new implementation only, while the new versions of malloc and free called the original functionality if necessary. Here is how:
*Locate the library with the original functionality. In your case it's libgnucplus.a
*A library is just an archive of objects. Extract them with ar -x libgnucplus.a
*List the symbols in the objects the linker was complaining about (_x_gnu_delaop.o, _x_gnu_delop.o etc.) using nm objectName.o. Find the operators' names; they will have some name mangling
*If the objects export nothing except the undesired operators and you don't want to keep the original implementation, it should be OK to recreate libgnucplus.a from all object files except these, so you can skip the remaining steps
*Otherwise run objcopy --redefine-sym operatorName__WithMangling=operatorNameOriginal__WithMangling objFile.o. I did it on pure C functions, so there was no mangling, but I'm sure mangling won't be a great obstacle.
*Put the modified object files back into the lib: ar rvs libgnucplus.a objFile1.o objFile2.o ...
*Have fun
I don't deny that the approach is quite dirty and has some drawbacks. For example, a modified toolchain implies that upgrading it will require re-doing all the same steps; another is that a developer not aware of the situation (which isn't a rare situation in long-lasting projects) will have a really hard time figuring out the details. In my case, it was used for temporary debugging of memory troubles, so no moral aspects involved :)
A: It turns out the multiple definition errors after trying the Wind River proposed workaround were due to libraries with circular references, and also to the workaround specifying all overloads when only some were used. I am now able to build without issues using the following, and without resorting to a modified standard library, which is what we used previously with VxWorks 6.x:
// ======== SPECIAL CASE NEW/DELETE OPERATOR OVERLOAD FOR GNU ========
// The following ___x_gnu_????.o global variable definitions are special
// case indicators to Gnu compiler when building the application into an
// integrated VIP (VxWorks Image Project). They indicate which new and
// delete operators are being overloaded. Doing this avoids a multiple
// definition build error for new/delete operators. This multiple
// definition error is only an issue when building application as an
// integrated VIP and not when app is downloaded separate from VIP as a
// Downloadable Kernel Module (DKM). It is important to only include
// ___x_gnu_????_o variables for the specific operators being
// overloaded. Defining a ___x_gnu_????_o variable for an operator that
// is not actually overloaded will cause a multiple define error also.
// This solution to overloading new/delete was obtained directly from
// Wind River support and is described in case #66370 and as of this
// date is not described anywhere in Wind River documentation.
// link to case #66370 below. -- 2017Jan18jdn
//
// https://windriver.force.com/support/apex/CaseReadOnly?id=5001600000xKkTYAA0
int ___x_gnu_newaop_o = 1; // Indicates overload of new [] operator
int ___x_gnu_newop_o = 1; // Indicates overload of new operator
int ___x_gnu_delaop_o = 1; // Indicates overload of delete [] operator
int ___x_gnu_delop_o = 1; // Indicates overload of delete operator
<?php
include_once(dirname(__FILE__).'/../../vision.php');
include '../zahlavi.php';
panel($jazyk['admin_121']);
?>
<form method="POST" action="addscript.php" >
<?php echo $jazyk['admin_122']; ?><br /><input class="textbox" type="text" name="nazev2" size="20"><br /><br />
<?php echo $jazyk['admin_123']; ?><br />
<?php
// Initialise the editor with empty content when adding a new entry
$myvalue = '';
$FCKeditor = new FCKeditor('novinka');
$FCKeditor->BasePath = 'fckeditor/';
$FCKeditor->Value = $myvalue;
$FCKeditor->Create();
?><br /><br />
<input type="submit" value="<?php echo $jazyk['admin_064']; ?>" class="button">
</form>
<?php
include '../zapati.php';
?>
package org.kie.workbench.common.stunner.bpmn.project.backend.query;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.kie.workbench.common.services.refactoring.model.query.RefactoringPageRow;
import org.mockito.Mock;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.junit.MockitoJUnitRunner;
import org.mockito.stubbing.Answer;
import org.slf4j.Logger;
import org.uberfire.ext.metadata.model.KObject;
import org.uberfire.ext.metadata.model.KProperty;
import org.uberfire.io.IOService;
import org.uberfire.java.nio.file.FileSystem;
import org.uberfire.java.nio.file.FileSystemNotFoundException;
import org.uberfire.java.nio.file.Path;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@RunWith(MockitoJUnitRunner.class)
public class BPMNProcessIdsResponseBuilder {
@Mock
private KObject kObject1;
@Mock
private KObject kObject2;
@Mock
private KProperty kProperty1;
@Mock
private KProperty kProperty2;
@Mock
private IOService ioService;
@Mock
private Path validPath;
@Mock
private FileSystem fileSystem;
@Mock
private Logger logger;
@Mock
private FileSystemNotFoundException exception;
@Test
public void testJBPM8828() {
final FindBpmnProcessIdsQuery findBpmnProcessIdsQuery = new FindBpmnProcessIdsQuery();
final List<KObject> kObjects = new ArrayList<>();
kObjects.add(kObject1);
kObjects.add(kObject2);
final List<KProperty<?>> kProperties1 = new ArrayList<>();
kProperties1.add(kProperty1);
final List<KProperty<?>> kProperties2 = new ArrayList<>();
kProperties2.add(kProperty2);
final String invalidFile = "ProcessA1";
final String validFile = "ProcessB1";
final URI validURI = URI.create(validFile);
when(kObject1.getProperties()).thenReturn(kProperties1);
when(kObject1.getKey()).thenReturn(invalidFile);
when(kObject2.getProperties()).thenReturn(kProperties2);
when(kObject2.getKey()).thenReturn(validFile);
when(kProperty1.getName()).thenReturn(findBpmnProcessIdsQuery.getProcessIdResourceType().toString());
when(kProperty1.getValue()).thenReturn(invalidFile);
when(kProperty2.getName()).thenReturn(findBpmnProcessIdsQuery.getProcessIdResourceType().toString());
when(kProperty2.getValue()).thenReturn(validFile);
when(validPath.getFileName()).thenReturn(validPath);
when(validPath.toUri()).thenReturn(validURI);
when(validPath.getFileSystem()).thenReturn(fileSystem);
String exceptionString = "FILE_SYSTEM_NOT_FOUND_EXCEPTION";
when(exception.toString()).thenReturn(exceptionString);
Set<String> attribViews = new HashSet<>();
when(fileSystem.supportedFileAttributeViews()).thenReturn(attribViews);
when(ioService.get(any(URI.class))).thenAnswer(
new Answer<Path>() {
public Path answer(InvocationOnMock invocation) throws FileSystemNotFoundException {
Object[] args = invocation.getArguments();
URI uri = (URI) args[0];
if (uri.getPath().compareTo(invalidFile) == 0) {
throw exception;
}
return validPath;
}
}
);
AbstractFindIdsQuery.BpmnProcessIdsResponseBuilder responseBuilder =
new AbstractFindIdsQuery.BpmnProcessIdsResponseBuilder(ioService,
findBpmnProcessIdsQuery.getProcessIdResourceType());
responseBuilder.LOGGER = logger;
List<RefactoringPageRow> response = responseBuilder.buildResponse(kObjects);
RefactoringPageRow pageRow = response.get(0);
Map<String, Path> pageRowValue = (Map<String, Path>) pageRow.getValue();
verify(logger, times(1)).error(exceptionString);
assertEquals(1, response.size());
assertEquals(1, pageRowValue.size());
assertTrue(validPath.compareTo(pageRowValue.get(validFile)) == 0);
}
}
Private tour of Rome 4 hours with Mercedes Maybach S 580 4 matic
See Rome with the eyes of those born there... each tour is modeled on the client's tastes, like a tailored suit. I have been doing these tours for 30 years and they are never the same: we go from the wonders of the ancient Roman Empire to the Baroque Rome of the Popes, mixing in the customs and culture of modern Romans. A unique experience.
Entry/Admission - Piazza Navona
Entry/Admission - Trevi Fountain
Entry/Admission - Piazza del Popolo
Entry/Admission - Piazza Farnese
Entry/Admission - Piazza di Spagna
Entry/Admission - Piazza Venezia
Stop At: Colosseum, Piazza del Colosseo, 00184 Rome Italy
The Roman gladiator circus ...
Stop At: Roman Forum, Largo della Salara Vecchia 5/6, 00186 Rome Italy
The Roman Forum (in Latin Forum Romanum, although the Romans referred to it more often as Forum Magnum or simply Forum) is an archaeological area of Rome enclosed between the Palatine Hill, the Capitol, Via dei Fori Imperiali and the Colosseum, consisting of the stratification of the remains of those buildings and monuments of heterogeneous eras that for most of Rome's ancient history represented the political, legal, religious and economic center of the city of Rome, as well as the nerve center of the entire Roman civilization
Stop At: Trevi Fountain, Piazza di Trevi, 00187 Rome Italy
The Trevi Fountain is the largest and one of the most famous fountains in Rome. It was built on the facade of Palazzo Poli, by Nicola Salvi
Stop At: Pantheon, Piazza della Rotonda, 00186 Rome Italy
The Pantheon, in classical Latin Pantheum, is a building of ancient Rome located in the Pigna district in the historic center, built as a temple dedicated to all past, present and future deities. It was founded in 27 BC by Marcus Vipsanius Agrippa, a native of Arpinum and son-in-law of Augustus
Stop At: Piazza di Spagna, Rome Italy
Piazza di Spagna, with the Spanish Steps, is one of the most famous squares in Rome. It owes its name to the palace of Spain, seat of the Iberian state embassy to the Holy See
Stop At: Piazza del Popolo, 00187 Rome Italy
The square and its door are an excellent example of architectural "stratification", a phenomenon that has occurred due to the continuous alternations of popes that involved modifications and reworkings of building and road works. Three churches overlook the square
Stop At: Piazza Navona, 00186 Rome Italy
Piazza Navona is one of the most famous monumental squares in Rome, built in the monumental style by the Pamphili family at the behest of Pope Innocent X with the typical shape of an ancient stadium
Stop At: Piazza Farnese, 00186 Rome Italy
Palazzo Farnese dominates the famous Piazza Farnese, embellished by two twin fountains by Girolamo Rainaldi and in which the Swedish national church of S. Brigida stands out. Piazza Farnese is a small oasis in the tourist chaos of Rome, a corner where you can stop for a few minutes to rest enjoying the calm and harmony of the square
Stop At: Piazza Venezia, 00187 Rome Italy
Piazza Venezia is a famous square in Rome. It is located at the foot of the Capitol, where five of the most important streets of the capital cross: via dei Fori Imperiali, via del Corso, the axis via C. Battisti-via Nazionale, the axis via del Plebiscito-corso Vittorio and via of the Teatro di Marcello
In collaboration with the International Film Festival of Oostende 2015 (FFO15), the SBC will bring postproduction people and international cinematographers around the table to engage in a conversation regarding postproduction problems and solutions.
The Masterclass is intended for DOPs, film students and producers. It will be held in Flemish, but French and English questions will be answered too.
This will create a unique opportunity to share our knowledge about color grading, VFX and other postproduction matters.
Data wrangler, DIT, post-production digital lab: who's responsible for what?
Managing color spaces on set.
2K, 4K, 8K: what's next?
Collaboration between postproduction companies, like grading and VFX.
The emotions of a lens: Cinematographers talking.
February 16, 2014, leslie charreau
Q: Is showing almost sure convergence equivalent to lim sup = lim inf on a set with probability 1? I know there are a lot of questions and answers concerning a.s. convergence on StackExchange, but I didn't find any addressing this in particular. What I am wondering is: if you are given a problem of the variety that defines some sequence of random variables, say $X_n$, and you are asked to show that it converges almost surely, is this equivalent to showing that $P( \{ \omega \in \Omega: \limsup X_n(\omega) = \liminf X_n(\omega) \}) = 1$? I.e., we are not given some $X$ and asked to show $X_n$ converges to $X$ almost surely, so is the best approach to show that the limit exists on a set with probability 1?
A: As commenters said, the equality of $\liminf$ and $\limsup$ indeed implies the existence of limit, with the caveat about infinite limits: having $\lim=+\infty$ is usually considered diverging to $\infty$.
Is this the best approach when $X$ is not known? Depends. If you are using tools that naturally fit the language of $\liminf$ and $\limsup$ (Fatou's lemma), then probably yes. But often, what you are really doing is showing that $\{X_n(\omega)\}$ is a Cauchy sequence for a.e. $\omega$. Then the talk of upper/lower limits is not necessary, and would be a distraction.
Also consider that when $X$ is vector-valued, we don't have upper and lower limits at all.
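A minimal sketch of the equivalence the answer relies on, stated per fixed $\omega$ and with the infinite-limit caveat made explicit:

```latex
% For each fixed omega, with values allowed in the extended reals:
\limsup_{n\to\infty} X_n(\omega) \;=\; \liminf_{n\to\infty} X_n(\omega) \;=\; L \in [-\infty,+\infty]
\iff
\lim_{n\to\infty} X_n(\omega) \text{ exists in } [-\infty,+\infty] \text{ and equals } L.

% Hence "X_n converges a.s. (to a finite limit)" is exactly the statement
P\left(\left\{\omega \in \Omega :
\limsup_{n\to\infty} X_n(\omega) = \liminf_{n\to\infty} X_n(\omega) \in \mathbb{R}\right\}\right) = 1 .
```

Note the $\in \mathbb{R}$ in the second display: it excludes the case $\lim = \pm\infty$, which is usually counted as divergence.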
Q: OkHttp: A simple GET request: response.body().string() returns unreadable escaped unicode symbols inside json can't convert to gson When sending a request in Postman, I get this output:
{
"valid": false,
"reason": "taken",
"msg": "Username has already been taken",
"desc": "That username has been taken. Please choose another."
}
However when doing it using okhttp, I get encoding problems and can't convert the resulting json string to a Java object using gson.
I have this code:
public static void main(String[] args) throws Exception {
TwitterChecker checker = new TwitterChecker();
TwitterJson twitterJson = checker.checkUsername("dogster");
System.out.println(twitterJson.getValid()); //NPE
System.out.println(twitterJson.getReason());
System.out.println("Done");
}
public TwitterJson checkUsername(String username) throws Exception {
HttpUrl.Builder urlBuilder = HttpUrl.parse("https://twitter.com/users/username_available").newBuilder();
urlBuilder.addQueryParameter("username", username);
String url = urlBuilder.build().toString();
Request request = new Request.Builder()
.url(url)
.addHeader("Content-Type", "application/json; charset=utf-8")
.build();
OkHttpClient client = new OkHttpClient();
Call call = client.newCall(request);
Response response = call.execute();
System.out.println(response.body().string());
Gson gson = new Gson();
return gson.fromJson(
response.body().string(), new TypeToken<TwitterJson>() {
}.getType());
}
Which prints this:
{"valid":false,"reason":"taken","msg":"\u0414\u0430\u043d\u043d\u043e\u0435 \u0438\u043c\u044f \u0443\u0436\u0435 \u0437\u0430\u043d\u044f\u0442\u043e","desc":"\u0414\u0430\u043d\u043d\u043e\u0435 \u0438\u043c\u044f \u0443\u0436\u0435 \u0437\u0430\u043d\u044f\u0442\u043e. \u041f\u043e\u0436\u0430\u043b\u0443\u0439\u0441\u0442\u0430, \u0432\u044b\u0431\u0435\u0440\u0438\u0442\u0435 \u0434\u0440\u0443\u0433\u043e\u0435."}
and then throws a NullPointerException when trying to access a twitterJson. Debugger shows that object as being null.
TwitterJson:
@Generated("net.hexar.json2pojo")
@SuppressWarnings("unused")
public class TwitterJson {
@Expose
private String desc;
@Expose
private String msg;
@Expose
private String reason;
@Expose
private Boolean valid;
public String getDesc() {
return desc;
}
public String getMsg() {
return msg;
}
public String getReason() {
return reason;
}
public Boolean getValid() {
return valid;
}
...
How can I fix the encoding issues with okhttp?
A: It is because the response object can be consumed only once; OkHttp says so in its documentation. After execute is invoked, you are calling response.body().string() twice. Store the result of response.body().string() in a variable and then do the conversion into GSON.
If I were to use a hello world example...
private void testOkHttpClient() {
OkHttpClient httpClient = new OkHttpClient();
try {
Request request = new Request.Builder()
.url("https://www.google.com")
.build();
Call call = httpClient.newCall(request);
Response response = call.execute();
System.out.println("First time " + response.body().string()); // I get the response
System.out.println("Second time " + response.body().string()); // This will be empty
} catch (IOException e) {
e.printStackTrace();
}
}
The reason it is empty the second time is that the response object can be consumed only once. So you either:
1. Return the response as it is. Do not do a sysOut:
System.out.println(response.body().string()); // Instead of doing a sysOut, return the value.
Or
2. Store the value of the response in a variable, then convert it to GSON and return the value.
EDIT: Concerning the Unicode characters: it turned out that since my location is not in an English-speaking country, the JSON I was receiving was not in English either. I added this header:
.addHeader("Accept-Language", Locale.US.getLanguage())
to the request to fix that.
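To make the fix concrete for checkUsername above: call response.body().string() exactly once, keep the result in a variable, and hand that same string to both the logger and Gson. The class below uses a hypothetical stand-in for OkHttp's one-shot body (not the real okhttp3 API) just to show why the stored copy is needed:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class BodyOnceDemo {
    // Hypothetical stand-in for okhttp3.ResponseBody: string() drains the
    // underlying source, so any second call yields an empty string.
    static class OneShotBody {
        private final String content;
        private final AtomicBoolean consumed = new AtomicBoolean(false);
        OneShotBody(String content) { this.content = content; }
        String string() { return consumed.getAndSet(true) ? "" : content; }
    }

    // Consume once and reuse the stored copy for both logging and parsing.
    static String readOnce(OneShotBody body) {
        String json = body.string(); // the single consumption
        System.out.println(json);    // safe: prints the stored copy
        return json;                 // safe: pass this same copy to gson.fromJson(...)
    }

    public static void main(String[] args) {
        OneShotBody body = new OneShotBody("{\"valid\":false}");
        String json = readOnce(body);
        System.out.println(json.equals("{\"valid\":false}")); // true
        System.out.println(body.string().isEmpty());          // true: already consumed
    }
}
```

With the real OkHttp response, the same pattern applies: `String json = response.body().string();` once, then log and parse `json`.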
Gaither wins barnburner, will face Leto in 7A-8 final
Gaither sophomore RHP Austin Grause pitched 6 1/3 innings in relief, allowing just one run in the Cowboys' 7-6 win over Hillsborough in the 7A-8 district semifinal on Tuesday night. (Photo by Mike Camunas)
By Mike Camunas
TAMPA — Gaither and Leto will face off in the Class 7A-8 district final on Thursday, as the host Cowboys (12-11) won a wild one: a 7-6 barnburner over Hillsborough, which came after the top-seeded Falcons dispatched King 8-2.
"I'm just happy we get to go to regionals again," Leto coach J.J. Pizzio said when asked which team — Gaither or Hillsborough — he'd rather face in the district final. "You know, you're just playing with house money on Thursday."
The Falcons (20-6) led just 2-1 in the bottom of the fifth inning in the first game, but added four runs to widen the lead over the Lions (7-19). Freshman Coltin Pizzio and sophomore Damien Breton were both 2-for-3 with two runs scored, and senior Javy Hernandez had a two-run double in the win.
"We knew that if we stuck to our game plan and hit balls to the right, then we'd be successful — that's always our plan, and we never want to get to trying too hard and start pulling it," Pizzio added.
In the second game, the Terriers (8-15) and the Cowboys got off to a wild start, with each team scoring five runs in the first inning. Juan Jamie-Nunez had an RBI double and Dwayne House drove in two, yet Gaither came right back thanks to RBI singles by Frank Perez and Willie Jackson, and a bases clearing, two-out double by Adison Dubin.
Gaither would take the lead thanks to K'Wality Williams, who drove in two with a double in the very next inning.
Jamie-Nunez would cut the lead to one thanks to an RBI single in the fourth, but the Cowboys turned to their usual starter, sophomore RHP Austin Grause, to shut down the Terriers.
"A lot of hits in the first inning, but I was glad we were able to come back and make it a whole new ballgame," Gaither coach Nelson North said. "We rolled the dice a little bit and hoped to get some innings out of (starter C.J. Hanson), but Grause has been our starter all year and he came in and did a great job."
North was also impressed not only by K'Wality Williams' game Thursday, but by his whole season. The go-ahead hit in the second was big, as was the final play of the game, when the second baseman made an over-the-shoulder catch in shallow centerfield on what could have been a bloop single that would have tied the game with a Terriers runner on third.
"We've had a lot of injuries and adversity throughout the year," North added, "and (Williams) was a guy who started off the year as a backup and came in and started and done a great, great job for us."
- Mike Camunas is a longtime veteran journalist who is always seeking true stories, trained under J. Jonah Jameson and takes better photos of Spider-Man than Peter Parker. Follow Mike on Twitter @MikeCamunas
Q: Hide container div below menu bar div using jquery and css I have a fixed header and menu bar, and there is a container div. When I scroll down, the container div does not hide itself below the menu bar as shown in the image. Below is the jQuery code I am using. Please help solve my issue.
var header= $('.header');
var start_div = $(header).offset().top;
var menu_div = $('.menu');
var menu = $(menu_div ).offset().top;
$.event.add(window, "scroll", function() {
var p = $(window).scrollTop();
$(header).css('position',((p)>start_div ) ? 'fixed' : 'static');
$(header).css('top',((p)>start_div ) ? '0px' : '');
$(header).css('width','840px');
$(header).css('min-height','108px');
});
$.event.add(window, "scroll", function() {
var p = $(window).scrollTop()+100;
$(menu_div).css('position',((p)>menu) ? 'fixed' : 'static');
$(menu_div).css('top',((p)>menu) ? '110px' : '');
$(menu_div).css('width','575px');
$(menu_div).css('height','57px');
});
A: Unless I'm missing something you don't need jQuery or even JS to do that.
Check the snippet (codePen here)
html,
body {
width: 100%;
height: 100%;
background-color: white;
}
.header-wrapper {
top: 0;
right: 0;
left: 0;
position: fixed;
height: 160px;
background-color: white;
}
.header {
background-color: cyan;
height: 100px;
width: 100%;
}
.menu {
width: 100%;
height: 50px;
background-color: green;
margin-top: 10px;
}
.content {
color: #fff;
background-color: black;
margin-top: 170px; /* same as height of header + nav + margins + 10px for coolness*/
}
<body>
<div class="header-wrapper">
<div class="header">Blue Header</div>
<div class="menu">Green Menu</div>
</div>
<div class="content">
My content<br>
<br><br><br><br><br><br><br><br><br><br><br><br><br><br>
blabla
<br><br><br><br><br><br><br><br><br><br><br><br><br><br>
blabla
<br><br><br><br><br><br><br><br><br><br><br><br><br><br>
</div>
</body>
A: Use the css z-index property. Note that z-index only has an effect on positioned elements (position other than static); your header and menu are already fixed when scrolled, but the container needs a position too.
.header, .menu {
    z-index: 2;
}
.container {
    position: relative;
    z-index: 1;
}
http://www.w3schools.com/cssref/pr_pos_z-index.asp
Q: Laravel Validation - how to return the same category and subcategory that had been selected? Good evening, I'm developing a form that contains categories and subcategories; the subcategories only appear after a category has been selected. So far so good. But when the form is submitted and contains a validation error handled by the validation method, the system returns to the form screen with all the fields filled in except the subcategory field, which is only populated once the category is chosen...
How do I do when an error occurs and return to the form screen come with the same subcategory options that had been selected?
Note: I have a provider that sends the categories to the view, but I do not know how to retrieve the category_id to send the subcategories, nor do I know if this would be the best way ...
A: try this
<select class="form-control" name="warehouse_id" id="_source">
<option disabled="" selected="">Selecciona Almacén</option>
    @foreach($warehouses as $warehouse)
        <option value="{{ $warehouse->id }}" @if(old('warehouse_id') == $warehouse->id) selected @endif>{{ $warehouse->name }}</option>
@endforeach
</select>
Q: What are partials in ruby on rails I am trying to figure out how partials work in Rails.
Here is code :
<%= render "layouts/appended_pages", pages: {partial: "items",
collection: @results,
locals: {user_flag: false}} %>
I understand that this render will render _appended_pages, but what does the pages: part do? How does it work?
A: Partials allow you to easily organize and reuse your view code in a Rails application. Partial filenames typically start with an underscore ( _ ) and end in the same .html.erb extension as your views. They help so you don't have to re-write your code over and over
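Regarding the pages: part of the call in the question: render treats trailing hash keys as local variables for the partial, so inside _appended_pages there is a local named pages holding that hash (with its :partial, :collection and :locals entries), which the partial can then use for its own nested render. Rails-specific helpers aside, the collection-plus-locals mechanics can be sketched with stdlib ERB (the template and variable names here are illustrative, not the question's real files):

```ruby
require "erb"

# A stand-in for a collection partial like "_items.html.erb": it is
# rendered once per collection element, with extra locals also bound.
item_partial = ERB.new("<li><%= item %><%= ' (admin)' if user_flag %></li>")

results   = ["alpha", "beta"]  # plays the role of @results
user_flag = false              # plays the role of locals: { user_flag: false }

html = results.map { |item|
  # Rails binds each collection element plus :locals inside the partial;
  # with plain ERB we expose them through the current binding.
  item_partial.result(binding)
}.join

# html == "<li>alpha</li><li>beta</li>"
```

In real Rails, `render partial: "items", collection: @results, locals: { user_flag: false }` does the looping and binding for you.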
Lidia Morawska (born 1952 in Tarnów, Poland) is a Polish-Australian physicist. Her research includes the effects of very fine airborne particles on human health. In 2019, as part of a research project on viruses in aerosols, the research group around Lidia Morawska demonstrated that microdroplets pose a considerable infection risk at short and medium distances. In 2020 she therefore recommended protective measures such as ventilation and the use of air filters to reduce the transmission of COVID-19.
Life and career
Lidia Morawska was born in Tarnów and lived in Przemyśl, Poland, from the age of two until her school-leaving examination. Her parents were Zofia Jaskuła and the yacht captain Henryk Jaskuła, who in May 1980 became the first Pole and the third sailor in history to circumnavigate the Earth single-handedly without calling at any port.
Against the advice of her mother, who feared for her daughter's safety, Lidia Morawska chose to study nuclear physics and received her doctorate in 1982 from the Jagiellonian University in Kraków with a thesis on radon and its decay products. After her studies, however, she did not work on nuclear reactor physics but on environmental radiation. Until 1987 she was at the Institute of Physics and Nuclear Techniques of the Academy of Mining in Kraków. With a scholarship from the International Atomic Energy Agency for a one-year stay at a Canadian university, she moved to Canada with her family. For family reasons she stayed in Canada longer, and research stays took her to McMaster University in Hamilton and the University of Toronto between 1987 and 1991. One of her colleagues was Australian and infected her with his enthusiasm for his home country, so she applied to several Australian universities, among them the Queensland University of Technology, where she has spent the rest of her research career. In 1991 she was appointed Senior Lecturer there, and in 2003 Professor. There Morawska founded the Environmental Aerosol Laboratory, which in 2002, under the name International Laboratory for Air Quality and Health (ILAQH), became the WHO Collaborating Centre for Air Quality and Health, where she has conducted research ever since. In her long-standing collaboration with the WHO, Morawska contributed to all of its air quality guidelines.
Family
Morawska's first husband died. Her second marriage is to a Portuguese.
Main research areas
Global Burden of Disease studies
From 2012 on she took part in international research projects such as the Global Burden of Disease studies, in which air pollution is examined as a disease risk.
Ultrafine particle research
Morawska's project Ultrafine Particles from Traffic Emissions and Children's Health investigated the effects of situations in which people were exposed to ultrafine particles emitted into the air by vehicles. Links were found with inflammation of the respiratory tract, but also with systemic inflammation, i.e. harmful effects on health. In 2015 these results convinced the WHO and also several states to revise their standards for protecting children from ultrafine particles. One result was the revision of air quality guidelines, which now include recommendations on ultrafine particles.
Covid-19
Together with her colleague Donald K. Milton she researched the transmission of viruses in aerosols. In 2019 the research group demonstrated that microdroplets pose a considerable infection risk at short and medium distances, and recommended protective measures to reduce airborne transmission. In July 2020 she published a corresponding appeal together with 239 international researchers. Regular ventilation, above all in hospitals, nursing homes and schools, air purification, and avoiding crowded public transport and other indoor spaces were urgently advised.
Honours and memberships (selection)
2006–2021: Vice-chair of the WHO Guideline Development Group
Since 2010: Co-editor of the journal Science of the Total Environment
Since 2020: Fellow of the Australian Academy of Science
2020: Named one of Australia's 40 top scientists by The Australian Research Magazine
2021: Named by Time magazine as one of the hundred most influential people of 2021 (Time 100)
Publications (selection)
Indoor environment: airborne particles and settled dust. Wiley-VCH, ISBN 3527305254.
Wei Huang, Lidia Morawska: Face masks could raise pollution risks. In: Nature. Vol. 574, London 2019, pp. 29–30.
Lidia Morawska, Junji Cao: Airborne transmission of SARS-CoV-2: The world should face the reality. In: Environment International, Vol. 139, 2020, Article 105730.
References
Physicist (21st century)
University teacher (Queensland)
Polish emigrant
Emigrant to Australia
Pole
Australian
Born 1952
Woman
Q: What version of Windows 7 Ultimate do I need? I completely wiped out my Windows XP and removed it from the hard drive. I want to get Windows 7 Ultimate, but the prices are all over the ball park. So is it the OS program that I need? Will I need something else? Yeah, I'm confused. Any help would be great
A: There are three classes of pricing for Windows. In order of cost:
*
*Upgrade. The upgrade option runs about $173. Even though you removed the Windows XP install, you should still have the license. That means you should be eligible. However, some Windows XP editions are not eligible and you didn't share which edition you had used previously. I remember reading this, but I can't find a reference so I'm striking it for now.
*OEM. The OEM version runs about $175 and is technically only for distribution with new hardware. You probably don't qualify. That said, it will probably install, run and validate just fine.
*Retail. The full retail version runs about $275. You will be eligible for this edition, but of course you must pay an extra $100 to get it. For that $100 you also get the right to transfer it to a new machine some day, a privilege that is not included with either of the other editions.
If you find prices lower than this, it's probably not a legitimate offer.
*All prices current at time of original post, but are likely to have changed since.
A: The answer to your question is yes, you need the OS version, because Windows 7 is an OS. If you have questions about other specifics, please ask.
John Daly is a journalist, news anchor, writer, author, spokesperson, and TV host. He is best known as a pioneer in reality TV for hosting the ground-breaking show Real TV, the first all-video news magazine show.
To broaden his knowledge base on the financial industry and the economy, Daly worked as a development officer for BNY Mellon's Wealth Management Group. Daly holds a license in life and health insurance as well.
John was also tapped by Tavis Smiley to host his own show, based on Informed Not Inflamed, on Blog Talk Radio (www.blogtalkradio.com/johndalytv) covering the media, new media, and media bias in news, sports, and entertainment. The first month, the show had more than a quarter million listeners.
Daly has hosted numerous radio shows covering politics, the media, news, and sports. He has filled in for conservative talker Heidi Harris on KDWN 970 AM in Las Vegas even though he is a self-described Screaming Moderate.
Previous to his stint in Las Vegas, Daly worked at WFSB TV 3 in Hartford, Connecticut, his hometown, where he also covered politics and the economy while anchoring the morning and noon newscasts, sometimes working fourteen hours a day. He covered the election campaign of the state's attorney general who pulled off the upset; his name: Joe Lieberman.
Granular formula raises the pH of spa water.
Features:
Effective water balance control
Easy to use
To Raise pH Level:
When the pH value of the water drops below 7.2, add the following amounts of pH UP, while the pump is in operation, for each 500 gallons of water. When the pH is between 6.8 and 7.2, add 2 tbs. of pH UP. When the pH is below 6.8, add 4 tbs. of pH UP. Allow 4 to 6 hours for circulation, then retest. Repeat the above procedures if pH remains low.
If the pH value is continually low, the water should be tested for total alkalinity. If the total alkalinity is low, it should be corrected by adding ALKALINITY UP.
Compatible with: chlorine, bromine, ozone and biguanide sanitizers.
Product Size: 1 lb.
Salt Water / Ozone Pump Seal. PS-3866. Pre-1998 only.
Circuit Board. Balboa 51429. EL2001 Mach 2 Jacuzzi H276. Analog Duplex.
Jet Insert. Hydro-Air. Caged Freedom Series. Directional.
\section{Introduction}
\IEEEPARstart{C}{onvolutional} Neural Networks (CNNs) have made revolutionary breakthroughs on various computer vision tasks. For example, single-label image recognition~(SLR), as a fundamental vision task, has surpassed human-level performance~\cite{he2015delving} on large-scale ImageNet. Unlike SLR, multi-label image recognition~(MLR) needs to predict a set of objects or attributes of interest present in a given image. Meanwhile, these objects or attributes usually exhibit complex variations in spatial location, object scale, occlusion,~\emph{etc}\onedot. Nonetheless, MLR has wide applications such as scene understanding~\cite{shao2015deeply}, face or human attribute recognition~\cite{liu2015deep,li2016human} and multi-object perception~\cite{wei2015hcp},~\emph{etc}\onedot. These make MLR a practical and challenging task. In recent years, a significant number of learning approaches have been proposed to deal with multi-label data~\cite{zhang2013review}.
MLR can be simply addressed by using an SLR framework to predict whether each category of object is present or not. Recently, many works have used deep CNNs to improve the performance of MLR. These works can be roughly divided into three types: spatial information~\cite{wei2015hcp,yang2016exploit}, visual attention~\cite{chen2018recurrent,wang2017multi,zhu2017learning,guo2019visual} and label dependency~\cite{wang2016cnn,chen2018order,chen2019multi,chenlearning}.
Since the goal of MLR is to predict a set of object categories instead of producing accurate spatial locations of all possible objects, we argue that it is not necessary to waste computational resources on hundreds of object proposals as in HCP~\cite{wei2015hcp}, or to pay the labor cost of bounding box annotation as in Fev+Lv~\cite{yang2016exploit}. RARL~\cite{chen2018recurrent} and RDAL~\cite{wang2017multi} introduce a reinforcement learning module and a spatial transformer layer, respectively, to localize attentional regions, and sequentially predict the label distribution based on the generated regions. The main problem of these two methods is that the generated attentional regions are always category-agnostic, and it is also difficult to guarantee the diversity of these local regions. In fact, the number of attentional regions should be as small as possible while maintaining high diversity. Recently, MLGCN~\cite{chen2019multi,chen2021learning} and SSGRL~\cite{chenlearning} try to model label dependency with graph CNNs to boost the performance of MLR. However, in this paper, we aim to improve the performance of MLR with only image semantics.
In order to exploit the semantic information of an image, let us recall how we humans recognize multiple objects appearing in an image. First, people take a glimpse of a given image to discover some possible object regions from a global view. Then, these possible object regions guide the eye movements and help to make decisions on specific object categories in a region-by-region manner. In other words, most of the time we humans can hardly recognize multiple objects with a single glance; we need at least two steps, from a global view to local regions. In fact, there is evidence in cognitive science that global visual processing precedes local reaction in visual perception~\cite{navon1969forest}. Such a global-to-local mechanism is also supported by studies in neurobiology~\cite{hegde2008time} and psychology~\cite{flevaris2014attending}. In this paper, we wonder if machines can acquire a similar learning ability to recognize multi-objects.
Inspired by this observation, we propose a novel multi-label image recognition framework with Multi-Class Attentional Regions~(MCAR) as illustrated in Fig.~\ref{fig:pipeline}. This framework contains a global image stream, a local region stream, and a multi-class attentional region module. Firstly, the global image stream takes an image as the input for a deep CNN and learns global representations supervised by the corresponding labels. Then, the multi-class attentional region module is used to discover possible object regions with the information from the global stream, which is similar to the way we recognize multiple objects. Finally, these localized regions are fed to the~\emph{shared} CNN to obtain their predicted class distributions using the local region stream. The local region stream can recognize objects better since it flexibly focuses on details of each object which helps to alleviate the difficulty of recognition for these objects at different spatial locations and object scales.
The contributions of this paper can be summarized as follows.
\squishlist
\item Firstly, we present a multi-label image recognition framework that can efficiently and effectively recognize multi-objects in a global-to-local manner. To the best of our knowledge, this is the first time a global-to-local learning mechanism within a unified model has been proposed to find possible regions for multi-label images.
\item Secondly, we propose a simple but effective multi-class attentional region module which includes three steps: generation, selection, and localization. In practice, it can dynamically generate a small number of attentional regions while keeping their diversity as high as possible.
\item Thirdly, we achieve new state-of-the-art results on three widely used benchmarks with only a single model. Our method provides an affordable computation cost and needs no extra parameters.
\item In addition, we also extensively demonstrate the effectiveness of the proposed method under different conditions like global pooling strategy, input size and network architecture.
\squishend
The rest of this paper is organized as follows. We first review the related work in Section~\ref{rws}.
Then, Section~\ref{mcarf} proposes our approach, including two-stream framework, MCAR module (from global to local) and two-stream learning. After that, the experiments are reported in Section~\ref{exps}. Finally, Section~\ref{discuss} presents discussions and the conclusion is given in Section~\ref{cons}.
\section{Related Works}\label{rws}
Recently, many efforts have been devoted to multi-label image recognition, using spatial information~\cite{wei2015hcp,yang2016exploit}, visual attention~\cite{chen2018recurrent,wang2017multi,zhu2017learning,guo2019visual} and label dependency~\cite{wang2016cnn,chen2018order,chen2019multi,chenlearning}. In this section, we briefly
review these related approaches.
\noindent \textbf{Spatial Information.}
Utilizing the spatial information of an image is crucial for almost all visual recognition tasks, such as image recognition~\cite{lazebnik2006beyond,he2015spatial}, object detection~\cite{girshick2014rich} and semantic segmentation~\cite{zhao2017pyramid,chen2017rethinking}. It is closely related to how to design (or learn) effective features, because objects usually appear at different scales and spatial locations. HCP~\cite{wei2015hcp} uses EdgeBox~\cite{zitnick2014edge} or BING~\cite{cheng2014bing} to generate hundreds of object proposals for each image in an RCNN-like~\cite{girshick2014rich} manner, and aggregates the prediction scores of these proposals to obtain the final prediction. However, a large number of proposals usually brings a huge computation cost. Fev+Lv~\cite{yang2016exploit} generates proposals using bounding box annotations and combines the local proposal features with global CNN features to produce the final feature representations. It reduces the number of proposals but introduces the labor cost of annotation.
\noindent \textbf{Visual Attention.}
Attention mechanisms have been widely used in many vision tasks, such as visual tracking~\cite{bazzani2011learning}, fine-grained image recognition~\cite{fu2017look}, image captioning~\cite{xu2015show}, image question answering~\cite{anderson2018bottom}, and semantic segmentation~\cite{hong2016learning}. RARL~\cite{chen2018recurrent} uses a recurrent attention reinforcement learning module~\cite{mnih2014recurrent} to localize a sequence of attention regions and further predict label scores conditioned on these regions. Instead of the reinforcement learning in RARL, RDAL~\cite{wang2017multi} introduces a spatial transformer layer~\cite{jaderberg2015spatial,yu2019delta} for localizing attentional regions in an image and an LSTM unit to sequentially predict the category distribution based on the features of these localized regions. Unlike RARL and RDAL, SRN~\cite{zhu2017learning} and ACfs~\cite{guo2019visual} combine an attention regularization loss with the multi-label loss to improve performance. Specifically, SRN~\cite{zhu2017learning} captures both spatial semantic and label correlations based on the weighted attention map, while ACfs~\cite{guo2019visual} enforces attention consistency: the classification attention map should follow the same transformation when the input image is spatially transformed.
\noindent \textbf{Label Dependency.}
In order to exploit label dependency, CNN-RNN~\cite{wang2016cnn} jointly learns image features and label correlations in a unified framework composed of a CNN module and an LSTM layer. The limitation is that it requires a pre-defined label order for model training. Similar to~\cite{wang2016cnn},~\cite{xu2020joint} also jointly learns multi-label classifiers with both spatial object relationships and semantic label correlations.
Order-Free RNN~\cite{chen2018order} relaxes the label order constraint via learning a visual attention model and a confidence-ranked LSTM. However, it requires an explicit module for removing duplicate predicted labels and a threshold for stopping the sequence outputs. To alleviate these issues, PLA~\cite{yazici2020orderless} proposes two alternative losses which dynamically order the labels based on the prediction label sequence of an LSTM model. Recently, SSGRL~\cite{chenlearning} directly uses a graph convolutional network to model the dependency among all labels.
There have been some other attempts at multi-label research, such as multi-label image retrieval~\cite{7438833}, multi-label dictionary learning~\cite{jing2016multi}, zero-shot~\cite{ji2020deep} and few-shot~\cite{9207855} multi-label classification. In this paper, we deliberately avoid using any information from label dependency and aim to improve the performance of multi-label recognition with image semantic information only. We leave integrating label correlation and other paradigms into our framework as future work to further boost recognition performance or extend application fields.
\section{MCAR Framework}\label{mcarf}
In this section, we first present the two-stream framework, which contains a global image stream and a local region stream. Then, we elaborate on the multi-class attentional region module, which bridges the gap between the global and local views. Finally, we present the optimization details of our framework.
\begin{figure*}[t]
\centering
{\includegraphics[width= 0.9\textwidth]{MCAR}}
\caption{The pipeline of our MCAR framework for multi-label image recognition. MCAR firstly feeds an input image into a deep CNN model to extract its global feature representation through the global image stream. Then, the multi-class attentional region module roughly localizes possible object regions by integrating that information from the global stream. Finally, these localized regions are fed to the shared CNN to obtain their predicted class distributions through the local region stream. At the inference stage, MCAR aggregates predictions from global and local streams with category-wise max-pooling and produces the final prediction.}\label{fig:pipeline}
\end{figure*}
\subsection{Two-Stream Framework}
\noindent \textbf{Global Image Stream.}
Given an input image $I \in \mathbb{R}^{h\times w\times 3}$, where $h$ and $w$ are the image's height and width, we denote its corresponding label as $\vec y={[y^1, y^2, \cdots, y^C]}^T$, where $y^i$ is a binary indicator: $y^i=1$ if image $I$ is tagged with label $i$, otherwise $y^i=0$. $C$ is the number of all possible categories in the dataset.
We assume that $A ={\mathcal F} (I; \vec \theta)$ is the activation map of the last convolutional layer of a CNN, where $\theta$ denotes the parameters of the CNN and $A \in \mathbb{R}^{h^\prime \times w^\prime \times d^\prime}$. Then, a global pooling function $\mathcal {P}(\cdot)$ encodes the activation map $A$ to a single vector $\vec f \in \mathbb{R}^{ 1 \times 1\times d^\prime}$, \emph{i.e}\onedot, $ \vec f = \mathcal {P}(A) $. Here $\vec f$ can be considered as a global feature representation of the image $I$. In order to get its prediction score, a 1$\times$1 fully convolutional layer transfers $\vec f$ to $\vec x \in \mathbb{R}^{C}$ by
\begin{equation}\label{eq:linear}
\vec x = W^T\vec f + \vec b.
\end{equation}
We then use a sigmoid function $\sigma(\cdot)$ to turn $\vec x$ into a range $[0,1]$, that is
\begin{equation}\label{eq:sigmoid}
\vec {\hat {y}_g} = \frac {1}{1+\exp(-\vec x)},
\end{equation}
where $\vec {\hat{y}_g}$ stands for the global prediction distribution.
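The global stream above can be sketched in a few lines. Below is a minimal NumPy sketch, assuming global average pooling as $\mathcal{P}(\cdot)$; the actual model uses a PyTorch CNN, and the function and toy tensors here are illustrative assumptions:

```python
import numpy as np

def global_stream(A, W, b):
    """Sketch of the global image stream: pool the activation map A
    (h' x w' x d') into a feature vector f, then apply the 1x1 conv
    (a linear map, Eq. for x) and a sigmoid to get the prediction y_g."""
    f = A.mean(axis=(0, 1))            # global average pooling: P(A) -> (d',)
    x = W.T @ f + b                    # x = W^T f + b, shape (C,)
    y_g = 1.0 / (1.0 + np.exp(-x))     # sigmoid squashes scores into [0, 1]
    return y_g

# Toy example: d' = 3 features, C = 2 categories; zero weights give
# zero logits, so both category scores are exactly 0.5.
A = np.ones((4, 4, 3))
W = np.zeros((3, 2)); b = np.zeros(2)
print(global_stream(A, W, b))
```

With real weights, `y_g` plays two roles later on: it is supervised by the image labels, and its top scores select which class attentional maps to localize.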
\noindent \textbf{Local Regions Stream.}
The local stream in fact performs multi-instance multi-label learning~\cite{zhou2012multi}. By decomposing an image into object regions, each image becomes a bag containing several positive instances, \emph{i.e}\onedot, regions with the target objects, and negative instances, \emph{i.e}\onedot, regions with background or other objects. We assume that $\{L_1, L_2, \cdots, L_N\}$ is a set of $N$ local regions cropped from the input image $I$. These local regions are first resized to the input size by bilinear upsampling. Then, they are fed to the shared CNN (with the global stream) to get the prediction distributions $\{\vec {\hat {y}_{L_1}}, \vec {\hat {y}_{L_2}}, \cdots, \vec {\hat {y}_{L_N}}\}$ with Eq.~\ref{eq:linear} and~\ref{eq:sigmoid}. Finally, these local region distributions are aggregated by a category-wise max-pooling operation:
\begin{equation}\label{eq:classmax}
\hat {y}_l^i = \max \big(\hat y_{L_1}^i , \hat y_{L_2}^i , \cdots, \hat y_{L_N}^i \big),
\end{equation}
where $\hat {y}_l^i$ is the $i$-th category score of the local prediction $\vec {\hat {y}_l}$. The subscript $l$ means the distribution is from $N$ local regions.
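The category-wise max-pooling above reduces the $N$ region distributions to one local distribution: each category keeps its best score over all regions. A minimal NumPy sketch (the function name is our own):

```python
import numpy as np

def category_wise_max_pool(region_scores):
    """Aggregate per-region prediction distributions (N x C) into one
    local distribution y_l by taking, per category, the max over regions."""
    return np.max(region_scores, axis=0)

# Three regions, four categories: each region is confident about a
# different object, and the pooled vector keeps every best score.
scores = np.array([[0.9, 0.1, 0.2, 0.0],
                   [0.3, 0.8, 0.1, 0.0],
                   [0.2, 0.2, 0.7, 0.1]])
print(category_wise_max_pool(scores))   # [0.9 0.8 0.7 0.1]
```

The same operation fuses the global and local distributions at inference time.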
Note that \emph{the global and local streams share the same network without introducing additional parameters}. This is clearly different from the classical two-stream architecture, which usually contains two parallel subnetworks. The inputs of our two streams are the whole image and local regions from it, respectively, and these local regions are dynamically generated using the information of the global stream. It is therefore also different from existing methods whose inputs are always two parallel views, such as video frames and optical flow in video classification~\cite{Simonyan14}.
During the training stage, we jointly train these two streams. At the early stage of learning, there may be little difference between the number of positive and negative instances (local regions). With the gradual convergence of the global stream, positive instances will dominate the local stream and thus also tend to converge. At the inference stage, we fuse the predictions from global stream ($\vec {\hat {y}_g}$) and local stream ($\vec {\hat {y}_l}$) with a category-wise max-pooling operation to generate the final predicted distribution of image $I$.
\subsection{From Global to Local}
Potential object regions are not annotated in image-level labels, so they must be generated in an efficient manner. A desirable generation module and its candidate regions should satisfy some basic principles. First, the diversity of the candidate regions should be as high as possible, such that they can cover all possible objects in a given multi-label image. Second, the number of candidate regions should be as small as possible to ensure efficiency, since more candidate regions require more computation resources: these regions need to be fed to the shared CNN simultaneously. Last but not least, the candidate region generation module should have a simple network architecture and few parameters to alleviate the computation cost and storage overhead.
\noindent \textbf{Attentional Maps Generation.}
The class activation mapping method~\cite{zhou2016learning} intuitively shows the discriminative image regions and helps us understand how to identify a particular category with a CNN. To obtain class-specific activation maps, we directly apply the 1$\times$1 convolutional layer to the class-agnostic activation maps $A$ from the global stream, that is
\begin{equation}\label{eq:cam}
F = W^T A + \vec b,
\end{equation}
where $F \in \mathbb{R}^{h^\prime \times w^\prime \times C}$. The class-specific activation map of the $i$-th category is denoted as $F^{i}\in \mathbb{R}^{h^\prime \times w^\prime}$, and it directly indicates the importance of each spatial location for classifying the image as class $i$.
The discriminative class regions of a specific $F^{i}$ are significantly different among all possible class maps $\{F^i\}_{i=1}^C$. If we employ class maps $\{F^i\}_{i=1}^C$ to localize the potential object regions then it is easy to satisfy the first principle: to increase the diversity of different proposals.
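Because the 1$\times$1 classifier is shared, the class-specific maps come almost for free: the same weights that score the pooled feature are applied at every spatial location. A minimal NumPy sketch, with shapes chosen only for illustration:

```python
import numpy as np

def class_activation_maps(A, W, b):
    """Apply the shared 1x1 classifier (W, b) at every spatial location
    of the class-agnostic map A (h' x w' x d') to obtain class-specific
    maps F of shape (h', w', C), one spatial map per category."""
    return np.einsum('hwd,dc->hwc', A, W) + b

rng = np.random.default_rng(0)
A = rng.random((7, 7, 16))    # class-agnostic features from the CNN
W = rng.random((16, 20))      # shared 1x1 conv weights (d' x C)
b = np.zeros(20)
F = class_activation_maps(A, W, b)
print(F.shape)                # (7, 7, 20)
```

Note that at each location `F[i, j]` equals `A[i, j] @ W + b`, i.e., the classifier applied pointwise, which is what makes the maps class-specific.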
\noindent \textbf{Attentional Maps Selection.}
The number of class activation maps is equal to the number of categories in a dataset. For example, there are 20 and 80 categories on the PASCAL VOC and MS-COCO datasets, respectively. If we were to use all class maps, two problems would arise. First, the generated regions would be too many to ensure efficiency. Second, a majority of the regions would be redundant or meaningless, because an image usually contains only a few instances.
\begin{figure}[t]
\centering
{\includegraphics[width= 0.6\columnwidth]{mcar-loc}}
\caption{The visualization of local region localization with class attentional map. We firstly decompose the class attentional map into two marginal distributions along row and column. Then, the class attentional region is localized by these two marginal distributions.}\label{fig:mcar}
\end{figure}
As the network learns under the supervision of ground-truth labels, the predicted distribution gradually approaches the ground-truth distribution. It is therefore reasonable to assume that a high category confidence means the corresponding object is present in the image with high probability. Therefore, we sort the predicted scores $\vec {\hat {y}_g}$ (whose dimension is equal to the number of classes) in descending order and select the $topN$ class attentional maps. In the experiments, we can see that satisfactory performance can be achieved when $topN$ is a small number (such as 2 or 4), which is \emph{far less than the number of all categories}. Another benefit is that the proposed method may force the network to implicitly learn label correlation if the selected attentional maps do not fully cover all object categories, because the local stream is also supervised by the ground-truth label distribution.
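The selection step amounts to a top-$N$ lookup over the global scores. A minimal NumPy sketch (function and variable names are our own):

```python
import numpy as np

def select_topN_maps(y_g, F, topN=4):
    """Sort the global prediction scores y_g (length C) in descending
    order and keep the topN class attentional maps from F (h' x w' x C)."""
    idx = np.argsort(y_g)[::-1][:topN]   # indices of the most confident classes
    return idx, F[:, :, idx]

# Five categories, of which classes 1 and 3 are the most confident.
y_g = np.array([0.1, 0.9, 0.4, 0.8, 0.05])
F = np.zeros((7, 7, 5))
idx, F_top = select_topN_maps(y_g, F, topN=2)
print(idx)        # [1 3]
```

Only these `topN` maps are passed on to the localization step, which keeps the number of crops, and hence the extra forward passes, small.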
\noindent \textbf{Local Regions Localization.}
We still denote the $topN$ class attentional maps as $\{F^i\}_{i=1}^{topN}$ for notational simplicity. Each $F^i$ is normalized to the range $[0,1]$ by a sigmoid function~(Eq.~\ref{eq:sigmoid}). Furthermore, we simply upsample $F^i$ to the input size to align the spatial semantics between $F^i$ and the input image $I$.
The value of $F^i(x,y)$ represents a probability that it belongs to the $i$-th category at spatial location $(x,y)$.
In order to efficiently localize regions of interest, we decompose each selective attentional map $F^i$ into a row and a column marginal distribution, which represents a probability distribution of objects present at the corresponding location (as shown in Fig.~\ref{fig:mcar}). We compute the marginal distribution based on the class attentional map $F^i$ over $x$ and $y$ axis, respectively, which is
\begin{equation}\label{eq:margin}
\begin{aligned}
\vec p'_x &= \max_{1\leq y \leq h}F^i(x,y), \\
\vec p'_y &= \max_{1\leq x \leq w}F^i(x,y).
\end{aligned}
\end{equation}
Then, $\vec p'_x$ and $\vec p'_y$ are normalized by min-max normalization such that the distribution is scaled to the range $[0,1]$, that is
\begin{equation}
\begin{aligned}
\vec p_x &= \big(\vec p'_x - \min_i ({p'_x}^i)\big)/\big (\max_i ({p'_x}^i)- \min_i ({p'_x}^i)\big), \\
\vec p_y &= \big(\vec p'_y - \min_j ({p'_y}^j)\big)/\big(\max_j ({p'_y}^j)- \min_j ({p'_y}^j) \big),
\end{aligned}
\end{equation}
where ${p'_x}^i$ represents the $i$-th element of $\vec p'_x$. In order to localize one discriminative region, we need to solve the following integer inequalities:
\begin{equation}\label{eq:pxpytau}
\begin{aligned}
&{p_x}^i \geq \tau, &~~i&\in\{1,2,\cdots,w\},\\
&{p_y}^j \geq \tau, &~~j&\in\{1,2,\cdots, h\},
\end{aligned}
\end{equation}
where $\tau \in (0,1)$ is a constant threshold. The solution of Eq.~\ref{eq:pxpytau} may be a single interval or a union of multiple ones, and each interval corresponds to the spatial location of a specific object region. Note that $\vec p_x$ or $\vec p_y$ may have one peak when the input image contains only one object, as in Fig.~\ref{fig:loceg1}, and may have multiple peaks when the input image contains multiple objects of the same category at different spatial locations, as in Fig.~\ref{fig:loceg2} and~\ref{fig:loceg3}. However, our objective is to recognize multi-class objects in a given image, and only one discriminative region needs to be selected for each category. Therefore, some constraints have to be added such that a unique interval among multiple feasible intervals can be chosen. To achieve this goal, we pick the interval containing the global maximum peak in the case of multiple local maximum peaks, as shown in Fig.~\ref{fig:loceg2}, and choose the widest interval in the case of multiple global maximum peaks, as shown in Fig.~\ref{fig:loceg3}. For all selected $topN$ class attentional maps, $topN$ discriminative regions are generated by solving Eq.~\ref{eq:pxpytau} under the above constraints.
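The localization steps above (marginals, min-max normalization, thresholding, interval selection) can be sketched as follows. This is a minimal NumPy sketch under the stated tie-breaking rules; the function names and the toy map are our own illustrative assumptions:

```python
import numpy as np

def localize_1d(p, tau=0.5):
    """Along one axis: min-max normalize the marginal distribution p,
    threshold it at tau, split the feasible set into contiguous intervals,
    and keep the interval containing the global maximum peak (among
    intervals reaching the peak, the widest one wins)."""
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)   # min-max normalization
    mask = p >= tau
    intervals, start = [], None
    for i, m in enumerate(mask):                      # contiguous True runs
        if m and start is None:
            start = i
        if not m and start is not None:
            intervals.append((start, i - 1)); start = None
    if start is not None:
        intervals.append((start, len(p) - 1))
    peak = p.max()
    return max((iv for iv in intervals if p[iv[0]:iv[1] + 1].max() >= peak),
               key=lambda iv: iv[1] - iv[0])

def localize_region(F_i, tau=0.5):
    """Localize one box from a class attentional map F_i (h x w) via its
    row/column marginals (max over the other axis)."""
    p_x = F_i.max(axis=0)          # marginal along x (per column)
    p_y = F_i.max(axis=1)          # marginal along y (per row)
    x0, x1 = localize_1d(p_x, tau)
    y0, y1 = localize_1d(p_y, tau)
    return x0, y0, x1, y1          # box corners

# Toy attentional map with one bright blob at rows 1..3, columns 2..4.
F_i = np.zeros((8, 8)); F_i[1:4, 2:5] = 1.0
print(localize_region(F_i))        # (2, 1, 4, 3)
```

With two disjoint peaks of different heights, `localize_1d` keeps only the interval containing the global maximum, matching the constraint described for Fig.~3(b).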
\begin{figure}[t]
\centering
\subfloat[single peak]
{\includegraphics[width= 0.28\columnwidth]{loc-eg1}\label{fig:loceg1}} \hspace{15pt}
\subfloat[multiple peaks]
{\includegraphics[width= 0.28\columnwidth]{loc-eg2}\label{fig:loceg2}}\hspace{15pt}
\subfloat[multiple peaks]
{\includegraphics[width= 0.28\columnwidth]{loc-eg3}\label{fig:loceg3}}
\caption{Some examples of marginal distributions. Black curves represent the marginal distribution, the blue dashed line is the threshold $\tau$, and the interval between the two red dashed lines is the desired localization. } \label{fig:loceg}
\end{figure}
\subsection{Two-Stream Learning}
Given a training dataset $\{I_i, \vec {y_i}\}_{i=1}^M$, where $I_i$ is the $i$-th image and $\vec {y_i}=[y_i^1, \cdots, y_i^C]^T$ represents the corresponding labels. The learning goal of our framework
is to find $\vec \theta$, $W$ and $\vec b$ via jointly learning the global and local streams in an end-to-end manner. Thus, our overall loss function is formulated as the sum of the two stream losses,
\begin{equation}\label{eq:loss}
\mathcal L = \mathcal {L}_g + \mathcal {L}_l,
\end{equation}
where $\mathcal {L}_g$ and $\mathcal{L}_l$ represent the global and the local loss, respectively. Specifically, we adopt the binary cross entropy loss for global and local stream,
\begin{equation}
\begin{aligned}
\mathcal {L}_g &= -\sum_{i=1}^{M}\sum_{j=1}^{C} \big[ y_i^j \log(\hat {y_g}_i^j) + (1- y_i^j)\log(1-\hat {y_g}_i^j)\big]\\
\mathcal {L}_l &= -\sum_{i=1}^{M}\sum_{j=1}^{C} \big[ y_i^j \log(\hat {y_l}_i^j) + (1- y_i^j)\log(1-\hat {y_l}_i^j)\big],
\end{aligned}
\end{equation}
where $\hat {y_g}_i^j$ and $\hat {y_l}_i^j$ are the prediction scores of the $j$-th category of the $i$-th image from global and local streams, respectively. Optimization is performed using SGD and standard back propagation.
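As a sanity check, the joint objective can be sketched with NumPy; the real training computes the same binary cross entropy over mini-batches in PyTorch (e.g., with `nn.BCELoss`), and the function names here are our own:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-12):
    """Binary cross entropy summed over categories, with a leading minus
    so the quantity is minimized; eps clipping avoids log(0)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def two_stream_loss(y, y_g, y_l):
    """Overall loss: sum of the global-stream and local-stream BCE terms."""
    return bce(y, y_g) + bce(y, y_l)

# One image tagged with categories 0 and 2; both streams are fairly
# confident, so the total loss is a small positive number.
y   = np.array([1.0, 0.0, 1.0])
y_g = np.array([0.9, 0.1, 0.8])
y_l = np.array([0.95, 0.05, 0.9])
print(two_stream_loss(y, y_g, y_l))
```

Both streams see the same labels, which is what lets the local stream implicitly learn label correlation when the selected maps miss a category.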
\section{Experiments}\label{exps}
In this section, we first report extensive experimental results and comparisons that demonstrate the effectiveness of the proposed method. Then, we present ablation studies to carefully evaluate and discuss the contributions of the crucial components of MCAR.
\begin{table*}[t]
\centering
\caption{Comparisons of mAP, CP, CR, CF1 and OP, OR, OF1 in $\%$ of our model and state-of-the-art methods on the MS-COCO dataset. * indicates that the results are reproduced by using the open-source code~\cite{chenlearning}, and - denotes the corresponding result is not provided.}\label{table:coco2014}
\footnotesize
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c||c||c|c|c|c|c|c||c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Methods} &\multirow{2}{*}{Input Size} &\multirow{2}{*}{Backbone} &\multirow{2}{*}{mAP} & \multicolumn{6}{c||}{{All}} &\multicolumn{6}{c|}{{Top3}} \\
\cline{5-16} & & & &CP &CR &CF1 &OP &OR &OF1 &CP &CR &CF1 &OP &OR &OF1\\
\hline
CNN-RNN~\cite{wang2016cnn} &- &VGG16 &61.2 &– &– &– &– &– &– &66.0 &55.6 &60.4 &69.2 &66.4 &67.8\\
RDAL~\cite{wang2017multi} & - &VGG16 &– &– &– &– &– &– &– &79.1 &58.7 &67.4 &84.0 &63.0 &72.0\\
Order-Free RNN~\cite{chen2018order} &- &ResNet-152 &– &– &– &– &– &– &– &71.6 &54.8 &62.1 &74.2 &62.2 &67.7\\
ML-ZSL~\cite{lee2018multi} &- &ResNet-152 &– &– &– &– &– &– &– &74.1 &64.5 &69.0 &–& – &–\\
SRN~\cite{zhu2017learning} &224$\times$224 &ResNet-101 &77.1 &81.6 &65.4 &71.2 &82.7 &69.9 &75.8 &85.2 &58.8 &67.4 &87.4 &62.5 &72.9\\
ACfs~\cite{guo2019visual} &288$\times$288 &ResNet-101 &77.5 &77.4 &68.3 &72.2 &79.8 &73.1 &76.3 &85.2 &59.4 &68.0 &86.6 &63.3 &73.1 \\
PLA~\cite{yazici2020orderless} &288$\times$288 &ResNet-101 &– &80.4 &68.9 &74.2 &81.5 &73.3 &77.1 &–&–&–&–&– &–\\
ResNet-101~\cite{ge2018multi} &448$\times$448 &ResNet-101 &– &73.8 &72.9 &72.8 &77.5 &75.1 &76.3 &78.3 &63.7 &69.5 &83.8 &64.9 &73.1\\
Multi-Evidence~\cite{ge2018multi} &448$\times$448 &ResNet-101 &– &80.4 &70.2 &74.9 &85.2 &72.5 &78.4 &84.5 &62.2 &70.6 &89.1 &64.3 &74.7\\
SSGRL*~\cite{chenlearning} &448$\times$448 &ResNet-101 &81.9 &84.2 &70.3 &76.6 &85.8 &72.4 &78.6 &88.0 &63.1 &73.5 &90.2 &64.5 &75.2\\
\hline\hline
MCAR &288$\times$288 &ResNet-101 &80.5 &81.8 &69.2 &75.0 &84.9 &72.2 &78.0 &85.8 &62.6 &72.4 &88.9 &64.7 &74.9 \\
\hline\hline
Baseline &448$\times$448&ResNet-101 &77.1 &72.7 &72.3 &72.5 &77.4 &75.5 &76.5 &77.8 &63.5 &69.9 &84.0 &65.5 &73.6\\
MCAR &448$\times$448&ResNet-101 &\textbf{83.8} &\textbf{85.0} &\textbf{72.1} &\textbf{78.0} &\textbf{88.0} &\textbf{73.9} &\textbf{80.3} &\textbf{88.1} &\textbf{65.5} &\textbf{75.1} &\textbf{91.0} &\textbf{66.3} &\textbf{76.7}\\
\hline\hline
SSGRL~\cite{chenlearning} & 576$\times$576 &ResNet-101 &83.8 &\textbf{89.9} &68.5 &76.8 &\textbf{91.3} &70.8 &79.7 &\textbf{91.9} &62.5 &72.7 &\textbf{93.8} &64.1 &76.2 \\
MCAR &576$\times$576 &ResNet-101 &\textbf{84.5} &84.3 &\textbf{73.9} &\textbf{78.7} &86.9 &\textbf{76.1} &\textbf{81.1} &87.8 &\textbf{65.9} &\textbf{75.3} &90.4 &\textbf{67.1} &\textbf{77.0}\\
\hline
\end{tabular}}
\end{table*}
\subsection{Experiment Setting}
\noindent \textbf{Implementation Details.}
We perform experiments to validate the effectiveness of the proposed MCAR on three benchmarks in multi-label classification: MS-COCO~\cite{lin2014microsoft}, PASCAL VOC 2007 and 2012~\cite{everingham2010pascal}, using the open-source framework PyTorch.
Following recent MLR works, we compare the proposed method with state-of-the-art methods using the powerful ResNet-50 and ResNet-101~\cite{he2016deep} models. Some popular lightweight models, such as MobileNet-v2~\cite{sandlermobilenetv2}, are also used to further evaluate our method. In general, for each of these networks we remove the fully-connected layers before the final output and replace them with global pooling followed by a 1$\times$1 convolutional layer and a sigmoid layer. These models are all pre-trained on ImageNet, and we train them using image-level labels only. The stochastic gradient descent (SGD) optimizer is used with a momentum of 0.9 and a weight decay of $0.0001$. The initial learning rate is set to 0.001 for all layers except the 1$\times$1 convolution, for which it is 0.01; both are decreased by a factor of 10 at the $30^{th}$ and $50^{th}$ epochs, and the network is trained for 60 epochs in total.
During training, all input images are resized to a fixed size (\emph{i.e}\onedot, 256$\times$256 or 448$\times$448) with random horizontal flips and color jittering for data augmentation. To speed up the convergence of the network, we do not use random cropping; although it can bring a performance improvement, it needs more training time. Unless otherwise stated, we set $topN$ to 4 and $\tau$ to 0.5 in our experiments. The effects of the hyper-parameters ($topN$ and $\tau$) are discussed in Section~\ref{exps:as}.
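The step schedule described above can be written as a tiny helper; this is a sketch of the schedule only (in practice one would use PyTorch's `MultiStepLR`), and the function name is our own:

```python
def learning_rate(epoch, base_lr=0.001, milestones=(30, 50), gamma=0.1):
    """Step schedule: start from base_lr and multiply by gamma (0.1)
    at each milestone epoch, i.e., the 30th and 50th of 60 epochs."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# lr is 1e-3 for epochs 0-29, 1e-4 for 30-49, and 1e-5 for 50-59.
print([learning_rate(e) for e in (0, 29, 30, 49, 50, 59)])
```

The 1$\times$1 convolution layer uses the same schedule with `base_lr=0.01`.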
\noindent \textbf{Evaluation Metrics.}
MLR performance is mainly evaluated with two metrics: the average precision (AP) for each category and the mean average precision (mAP) over all categories. We first employ AP and mAP to evaluate all methods. Following the conventional setting~\cite{wei2015hcp,chen2019multi,chenlearning}, we also compute the precision, recall and F1-measure for performance comparison on the MS-COCO dataset. For each image, we assign a positive label if its prediction probability is greater than a threshold~(0.6) and compare the predictions with the ground-truth labels. The overall precision~(OP), recall~(OR), F1-measure (OF1) and per-category precision~(CP), recall~(CR), F1-measure (CF1) are computed as follows:
\begin{equation}
\begin{aligned}
\mathrm{OP} &= \frac{\sum_i M_c^i}{\sum_i M_p^i}, &\mathrm{OR} &= \frac{\sum_i M_c^i}{\sum_i M_g^i}, \\
\mathrm{CP}&=\frac{1}{C}\sum_i \frac{M_c^i}{M_p^i}, &\mathrm{CR}&=\frac{1}{C}\sum_i \frac{M_c^i}{M_g^i}, \\
\mathrm{OF1}&=\frac{2*\mathrm{OP}*\mathrm{OR}}{\mathrm{OP}+\mathrm{OR}},
&\mathrm{CF1}&=\frac{2*\mathrm{CP}*\mathrm{CR}}{\mathrm{CP}+\mathrm{CR}},
\end{aligned}
\end{equation}
where $M_c^i$ is the number of images correctly predicted for the $i$-th category, $M_p^i$ is the number of predicted images for the $i$-th category, and $M_g^i$ is the number of ground-truth images for the $i$-th category. We also compute the above metrics in another way, assigning each image the labels with the top-3 highest scores. It is worth noting that these metrics may be affected by the threshold. Among them, OF1 and CF1 are more stable than OP, CP, OR and CR, while AP and mAP are the most important metrics, providing a more comprehensive comparison.
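The definitions above translate directly into code. A minimal NumPy sketch over an $M \times C$ score matrix; the `np.maximum(..., 1)` guard against empty categories is our own addition:

```python
import numpy as np

def overall_and_per_class_metrics(y_true, y_pred, thr=0.6):
    """Compute OP/OR/OF1 and CP/CR/CF1 from the counts M_c, M_p, M_g.
    y_true: (M, C) binary ground truth; y_pred: (M, C) scores in [0, 1]."""
    pred = (y_pred > thr).astype(int)     # positive label if score > threshold
    Mc = np.sum(pred * y_true, axis=0)    # correct predictions per category
    Mp = np.sum(pred, axis=0)             # predicted positives per category
    Mg = np.sum(y_true, axis=0)           # ground-truth positives per category
    OP, OR = Mc.sum() / Mp.sum(), Mc.sum() / Mg.sum()
    CP = np.mean(Mc / np.maximum(Mp, 1))  # guard against empty categories
    CR = np.mean(Mc / np.maximum(Mg, 1))
    OF1 = 2 * OP * OR / (OP + OR)
    CF1 = 2 * CP * CR / (CP + CR)
    return OP, OR, OF1, CP, CR, CF1

# Two images, three categories: one ground-truth positive is missed.
y_true = np.array([[1, 0, 1], [0, 1, 1]])
y_pred = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3]])
print(overall_and_per_class_metrics(y_true, y_pred))
```

For the top-3 variant, `pred` would instead be built from the three highest-scoring categories of each image.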
\subsection{Comparisons with State-of-the-Arts}\label{cstoa}
To verify the effectiveness of our method, we compare the proposed method with state-of-the-arts on MS-COCO~\cite{lin2014microsoft} and PASCAL VOC 2007 \& 2012~\cite{everingham2010pascal}.
\noindent \textbf{MS-COCO.}
MS-COCO~\cite{lin2014microsoft} is a widely used dataset to evaluate multiple tasks such as object detection, semantic segmentation and image caption, and it has been adopted to evaluate multi-label image recognition recently.
It contains 82,081 images as the training set and 40,137 images as the validation set, and covers 80 object categories.
Compared to VOC 2007~\& 2012~\cite{everingham2010pascal}, both the size of training set and the number of object categories are increased. Meanwhile, the number of labels of different images, the scale of different objects and the number of images in each category vary considerably, which makes it more challenging.
\begin{figure*}[t]
\centering
\vspace{0pt}
{\includegraphics[width= 0.95\textwidth]{base-mcar}}
\vspace{0pt}
\caption{AP (in $\%$) of each category of our proposed framework and the ResNet-101 baseline on MS-COCO dataset. Our MCAR has significant improvements on almost all categories, especially for some difficult categories such as ``toaster" and ``hair drier". } \label{fig:cocap}
\end{figure*}
\noindent \textbf{Results on MS-COCO.}
The results on MS-COCO are reported in Table~\ref{table:coco2014}. When the input size is 448$\times$448 (the most common setting in MLR), our method is already comparable to the state-of-the-art SSGRL~\cite{chenlearning} which uses additional label dependency and larger input to boost performance. Moreover, if we simply resize the input image to 576$\times$576 during the testing stage while still using the model weights trained with 448$\times$448 inputs, our method achieves 84.5\% mAP which outperforms the SSGRL by 0.7\%.
In order to fairly compare with the SSGRL, we re-implement the experiment with 448$\times$448 input following the same setting as described in the SSGRL. In Table~\ref{table:coco2014}, we can see that our method significantly beats the SSGRL and improves it by 1.9 points (83.8\% vs. 81.9\%). Note that PLA~\cite{yazici2020orderless} models label correlation through exploiting LSTM model. Using the same input size~(288$\times$288), our method gets higher F1 scores than PLA, which further indicates that it is very important to exploit image semantics for multi-label image recognition.
The performance of our method is also significantly better than that of Multi-Evidence~\cite{ge2018multi}, and it improves CF1 by 3.1\%, OF1 by 1.9\%, CF1-top3 by 4.5\%, OF1-top3 by 2.0\%.
Note that our baseline ResNet-101 model achieves 77.1\% mAP, which should be close to that of the baseline of Multi-Evidence~\cite{ge2018multi} because of nearly the same F1-measures.
In comparison to the baseline, our method is 6.7\% higher in mAP~(83.8\% vs. 77.1\%).
Meanwhile, we show the AP performance of each class for further comparison with the baseline model in Fig.~\ref{fig:cocap}. It is obvious that our method has significant improvements on almost all categories, especially for some difficult categories such as ``toaster" and ``hair drier". In short, MCAR outperforms all state-of-the-art methods and surpasses the baseline by a large margin even though it needs neither a large number of proposals nor label dependency information. This further demonstrates the effectiveness of the proposed method for large-scale multi-label image recognition.
\begin{table*}[t]
\centering
\caption{Comparisons of AP and mAP in $\%$ of our model and state-of-the-art methods on the PASCAL VOC 2007. $*$ indicates methods using larger input size (576$\times$576).}\label{table:voc07}
\footnotesize
\resizebox{\textwidth}{!}{
\begin{tabular}{|@{\,}c@{\,}||@{\,}c@{\,}|| *{20}{@{\,}c@{\,}}||@{\,}c@{\,}|}
\hline
Methods &Backbone &aero &bike &bird &boat &bottle &bus &car &cat &chair &cow &table &dog &horse &mbike &person &plant &sheep &sofa &train &tv &mAP\\
\hline\hline
CNN-RNN~\cite{wang2016cnn} &VGG16 &96.7 &83.1 &94.2 &92.8 &61.2 &82.1 &89.1 &94.2 &64.2 &83.6 &70.0 &92.4 &91.7 &84.2 &93.7 &59.8 &93.2 &75.3 &\textbf{99.7} &78.6 &84.0\\
VGG+SVM~\cite{Simonyan15} &~VGG16\&19~ &98.9 &95.0 &96.8 &95.4 &69.7 &90.4 &93.5 &96.0 &74.2 &86.6 &87.8 &96.0 &96.3 &93.1 &97.2 &70.0 &92.1 &80.3 &98.1 &87.0 &89.7\\
Fev+Lv~\cite{yang2016exploit} &VGG16 &97.9 &97.0 &96.6 &94.6 &73.6 &93.9 &96.5 &95.5 &73.7 &90.3 &82.8 &95.4 &97.7 &95.9 &98.6 &77.6 &88.7 &78.0 &98.3 &89.0 &90.6\\
HCP~\cite{wei2015hcp} &VGG16 &98.6 &97.1 &98.0 &95.6 &75.3 &94.7 &95.8 &97.3 &73.1 &90.2 &80.0 &97.3 &96.1 &94.9 &96.3 &78.3 &94.7 &76.2 &97.9 &91.5 &90.9\\
RDAL~\cite{wang2017multi} &VGG16 &98.6 &97.4 &96.3 &96.2 &75.2 &92.4 &96.5 &97.1 &76.5 &92.0 &87.7 &96.8 &97.5 &93.8 &98.5 &81.6 &93.7 &82.8 &98.6 &89.3 &91.9\\
RARL~\cite{chen2018recurrent} &VGG16 &98.6 &97.1 &97.1 &95.5 &75.6 &92.8 &96.8 &97.3 &78.3 &92.2 &87.6 &96.9 &96.5 &93.6 &98.5 &81.6 &93.1 &83.2 &98.5 &89.3 &92.0\\
SSGRL*~\cite{chenlearning} &ResNet-101 &99.5 &97.1 &97.6 &97.8 &82.6 &94.8 &96.7 &98.1 &78.0 &\textbf{97.0} &85.6 &97.8 &98.3 &96.4 &98.1 &\textbf{84.9} &96.5 &79.8 &98.4 &92.8 &93.4\\
\hline
Baseline &ResNet-101 &99.0&97.9&97.2&97.6&80.2&93.6&96.0&98.0&81.8&92.0&84.6&97.5&97.2&95.3&97.9&81.8&94.6&84.1&98.2&93.6 &92.9\\
MCAR &ResNet-101 & \textbf{99.7} &\textbf{99.0}& 98.5&\textbf{98.2}&\textbf{85.4}&\textbf{96.9}&\textbf{97.4}&\textbf{98.9}&\textbf{83.7}& 95.5&\textbf{88.8}&\textbf{99.1}& 98.2& 95.1&\textbf{99.1}& 84.8&\textbf{97.1}&\textbf{87.8}& 98.3&\textbf{94.8} &\textbf{94.8} \\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[t]
\centering
\caption{Comparisons of AP and mAP in $\%$ of our model and state-of-the-art methods on the PASCAL VOC 2012. $*$ indicates methods using larger input size (576$\times$576).}\label{table:voc12}
\footnotesize
\resizebox{\textwidth}{!}{
\begin{tabular}{|@{\,}c@{\,}||@{\,}c@{\,}|| *{20}{@{\,}c@{\,}}||@{\,}c@{\,}|}
\hline
Methods &Backbone &aero &bike &bird &boat &bottle &bus &car &cat &chair &cow &table &dog &horse &mbike &person &plant &sheep &sofa &train &tv &mAP\\
\hline
VGG+SVM~\cite{Simonyan15} &VGG16\&19 &99.0 &89.1 &96.0 &94.1 &74.1 &92.2 &85.3 &97.9 &79.9 &92.0 &83.7 &97.5 &96.5 &94.7 &97.1 &63.7 &93.6 &75.2 &97.4 &87.8 &89.3\\
Fev+Lv~\cite{yang2016exploit} &VGG16 &98.4 &92.8 &93.4 &90.7 &74.9 &93.2 &90.2 &96.1 &78.2 &89.8 &80.6 &95.7 &96.1 &95.3 &97.5 &73.1 &91.2 &75.4 &97.0 &88.2 &89.4\\
HCP~\cite{wei2015hcp} &VGG16 &99.1 &92.8 &97.4 &94.4 &79.9 &93.6 &89.8 &98.2 &78.2 &94.9 &79.8 &97.8 &97.0 &93.8 &96.4 &74.3 &94.7 &71.9 &96.7 &88.6 &90.5\\
SSGRL*~\cite{chenlearning} &ResNet-101 &99.5 &95.1 &97.4 &96.4 &85.8 &94.5 &93.7 &\textbf{98.9} &86.7 &96.3 &84.6 &\textbf{98.9} &\textbf{98.6} &96.2 &98.7 &82.2 &\textbf{98.2} &\textbf{84.2}&98.1 &93.5 &93.9\\
\hline
MCAR &MobileNet-v2 &98.6&92.3&95.4&93.3&77.7&93.8&92.6&97.6&80.8&90.9&82.3&96.5&96.6&95.5&98.3&78.4&92.6&78.7&96.8&90.9 &91.0\\
MCAR &ResNet-50 &99.6&95.6&97.5&95.2&85.1&95.5&94.3&98.6&85.2&95.8&83.9&98.4&98.0&97.2&98.8&81.6&95.5&81.8&98.3&\textbf{93.6}&93.5\\
MCAR &ResNet-101 &\textbf{99.6} &\textbf{97.1}&\textbf{98.3}&\textbf{96.6}&\textbf{87.0}&\textbf{95.5}&\textbf{94.4}&98.8&\textbf{87.0}&\textbf{96.9}&\textbf{85.0}&98.7&98.3&\textbf{97.3}&\textbf{99.0}&\textbf{83.8}&96.8&83.7&\textbf{98.3}&93.5 &\textbf{94.3}\\
\hline
\end{tabular}}
\end{table*}
\noindent \textbf{PASCAL VOC 2007 and 2012.}
PASCAL VOC 2007 and 2012~\cite{everingham2010pascal} are the most widely used datasets for MLR, containing 9,963 and 22,531 images, respectively. Each image carries one or several labels from 20 object categories. The images are divided into~\emph{train}, \emph{val} and~\emph{test} sets. For a fair comparison with other methods, we follow the common setting of training on the~\emph{train-val} set and evaluating the resulting models on the~\emph{test} set. VOC 2007 contains a~\emph{train-val} set of 5,011 images and a~\emph{test} set of 4,952 images; VOC 2012 consists of 11,540~\emph{train-val} images and 10,991~\emph{test} images.
\noindent \textbf{Results on VOC 2007.}
We first report the per-category AP and the mAP over all categories on the VOC 2007~\emph{test} set in Table~\ref{table:voc07}. The current state of the art is SSGRL~\cite{chenlearning}, which uses a GCN to model label dependencies to boost performance. Our method achieves the best mAP among all methods.
It outperforms SSGRL~\cite{chen2019multi} by 1.4 points~(94.8\% vs. 93.4\%), even though SSGRL uses a larger input size of 576$\times$576.
Moreover, the proposed method improves the baseline ResNet-101 model by 1.9\% under the same settings~(data augmentation and optimization hyper-parameters). Last but not least, our framework performs well on difficult categories such as ``bottle", ``table" and ``sofa". This shows that exploiting both global and local visual information is crucial for multi-label recognition.
\noindent \textbf{Results on VOC 2012.}
We report the results on the VOC 2012~\emph{test} set, evaluated with the PASCAL VOC evaluation server, in Table~\ref{table:voc12}, comparing our method against the state of the art over several backbone networks. First, with a ResNet-101 backbone, our method still achieves the best mAP despite using a smaller input size than SSGRL~\cite{chenlearning}. Second, our method achieves better performance with lightweight networks, \emph{i.e}\onedot MobileNet-v2 and ResNet-50, than VGG-based methods. This suggests that our method could be extended to resource-restricted devices such as mobile phones.
\subsection{Ablation Study}\label{exps:as}
To understand how MCAR works, we perform extensive experiments analyzing each of its components.
We first analyze the contribution of each component in our two-stream architecture and demonstrate its effectiveness; the training details are exactly those described in Section~\ref{cstoa}. We then analyze the effect of the attentional map selection criterion and the learning strategy. Next, we study the effect of the hyper-parameters ($topN$ and $\tau$) of the local region localization module. These experiments are conducted on VOC 2007 and MS-COCO with different backbone networks, \eg MobileNet-v2, ResNet-50 and ResNet-101, using an input size of 256$\times$256.
Finally, we analyze the behavior of our method under different conditions, including global pooling strategies, input sizes, and network architectures.
\begin{table}[t]
\centering
\caption{Ablative study of two streams in MCAR with ResNet-101 backbone and the input size of 448$\times$448.}\label{table:voc-coco}
\footnotesize{
\begin{tabular}{|c||c|cc||c||c|}
\hline
Line No. &Methods &{Global} &{Local} &{VOC 2007} &{MS-COCO}\\
\hline
{\textcolor[rgb]{0.6,0.6,0.6}{~~~0~~~}} &Baseline &$\surd$ & &92.9 &77.1\\
\hline
{\textcolor[rgb]{0.6,0.6,0.6}{~~~1~~~}} &\multirow{3}*{MCAR}&$\surd$ & &93.4 {\color{red} $\uparrow$0.5} &81.9 {\color{red} $\uparrow$4.8} \\
{\textcolor[rgb]{0.6,0.6,0.6}{~~~2~~~}} & & &$\surd$ &94.2 {\color{red} $\uparrow$1.3} &82.9 {\color{red} $\uparrow$5.8} \\
{\textcolor[rgb]{0.6,0.6,0.6}{~~~3~~~}} & &$\surd$ &$\surd$ &94.8 {\color{red} $\uparrow$1.9} &83.8 {\color{red} $\uparrow$6.7} \\
\hline
\end{tabular}}
\end{table}
\noindent \textbf{Contributions of proposed global-to-local framework.}
To explore the effectiveness of the two streams, we jointly train the global and local streams in MCAR and, at inference, report the influence of each stream in Table~\ref{table:voc-coco}. Firstly, thanks to the joint training strategy, MCAR significantly outperforms the baseline even when only the global image is taken as input (line 1 vs. line 0). This improvement is intuitive: MCAR becomes more robust and generalizes better by learning not only from the global image but also from local regions at various scales. Secondly, using the local stream alone performs better than using the global stream alone (line 2 vs. line 1), because the local stream can flexibly focus on the details of each object. Nonetheless, we emphasize that the global stream plays an important role in guiding the learning of the local stream. Last but not least, employing both streams achieves the best results (line 3). This mirrors human perception: we usually make a final decision after the brain gathers information from different spatial locations and object scales.
\begin{figure*}[t]
\centering
\vspace{-5pt}
\subfloat[MobileNet-v2]
{\includegraphics[width= 0.3\columnwidth]{mobilenetv2-mcar-topN}\label{fig:voc-m2topn}}
\subfloat[ResNet-50]
{\includegraphics[width= 0.3\columnwidth]{resnet50-mcar-topN}\label{fig:voc-res50topn}}
\subfloat[ResNet-101]
{\includegraphics[width= 0.3\columnwidth]{resnet101-mcar-topN}\label{fig:voc-res101topn}}
\quad \vrule \quad
\subfloat[MobileNet-v2]
{\includegraphics[width= 0.3\columnwidth]{coco-mobilenetv2-mcar-topN}\label{fig:coco-m2topn}}
\subfloat[ResNet-50]
{\includegraphics[width= 0.3\columnwidth]{coco-resnet50-mcar-topN}\label{fig:coco-res50topn}}
\subfloat[ResNet-101]
{\includegraphics[width= 0.3\columnwidth]{coco-resnet101-mcar-topN}\label{fig:coco-res101topn}}\\
\subfloat[MobileNet-v2]
{\includegraphics[width= 0.3\columnwidth]{mobilenetv2-mcar-thresh}\label{fig:voc-m2thresh}}
\subfloat[ResNet-50]
{\includegraphics[width= 0.3\columnwidth]{resnet50-mcar-thresh}\label{fig:voc-res50thresh}}
\subfloat[ResNet-101]
{\includegraphics[width= 0.3\columnwidth]{resnet101-mcar-thresh}\label{fig:voc-res101thresh}}
\quad \vrule \quad
\subfloat[MobileNet-v2]
{\includegraphics[width= 0.3\columnwidth]{coco-mobilenetv2-mcar-thresh}\label{fig:coco-m2thresh}}
\subfloat[ResNet-50]
{\includegraphics[width= 0.3\columnwidth]{coco-resnet50-mcar-thresh}\label{fig:coco-res50thresh}}
\subfloat[ResNet-101]
{\includegraphics[width= 0.3\columnwidth]{coco-resnet101-mcar-thresh}\label{fig:coco-res101thresh}}
\caption{mAP comparisons of our MCAR with different values of $topN$ and $\tau$. The left three columns are based on PASCAL-VOC 2007 and the right three columns are based on MS-COCO dataset.} \label{fig:hyperpa}
\vspace{-10pt}\label{fig:topntao}
\end{figure*}
\noindent \textbf{Importance of attentional map selection.}
In our method, all attentional maps are first sorted in descending order of their global-stream scores, and the $topN$ maps are chosen. To further verify this selection strategy, we test other criteria to see whether performance is sensitive to the score ranking. Specifically, we compare two alternatives with our $topN$ strategy: picking the $bottom$ $N$ maps after the same sorting, and randomly $sampling$ $N$ maps from all attentional maps.
For simplicity, we test MCAR with $top4$, $random4$ and $bottom4$ local regions, reusing the weights trained under the $top4$ setting, in Table~\ref{table:voc-coco-ams}. The performance of MCAR with high-confidence local regions~($top4$) is significantly better than with low-confidence ones~($bottom4$) or random sampling~($random4$). This confirms the effectiveness of selecting local regions by the ranking of the global scores.
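The three selection criteria can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def select_regions(global_scores, attention_maps, n=4, mode="top"):
    """Pick n class-specific attentional maps ranked by global-stream scores.

    global_scores: (C,) class confidences from the global stream.
    attention_maps: (C, H, W) class-specific attentional maps.
    mode: "top" (proposed), "bottom" or "random" (ablation criteria).
    """
    order = np.argsort(global_scores)[::-1]  # indices sorted by descending score
    if mode == "top":
        chosen = order[:n]
    elif mode == "bottom":
        chosen = order[-n:]
    else:  # "random": sample n maps uniformly without replacement
        chosen = np.random.choice(len(global_scores), size=n, replace=False)
    return chosen, attention_maps[chosen]
```

The ablation simply swaps `mode` while keeping the trained weights fixed, which matches the comparison reported in Table~\ref{table:voc-coco-ams}.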
\begin{table}[t]
\centering
\caption{Ablative study of attentional maps selection strategy in MCAR with ResNet-101 backbone and the input size of 448$\times$448.}\label{table:voc-coco-ams}
\begin{tabular}{|c|c||c||c|}
\hline
Methods &Selection criteria &{VOC 2007} &{MS-COCO}\\
\hline
\multirow{3}*{MCAR}
&$bottom4$ &93.8 &81.2\\
&$random4$ &94.2 &82.1\\
&$top4$ &\textbf{94.8} &\textbf{83.8}\\
\hline
\end{tabular}
\end{table}
\noindent \textbf{Single or Pair loss?}
Instead of two-stream learning with a pair of parallel losses as in Eq.~\ref{eq:loss}, we also design a simpler way to train the network with a single loss. Specifically, we first fuse the global and local prediction scores with a max-wise aggregation
\begin{equation}
\hat {y}^i = \max \big(\hat y_{g}^i , \hat y_{l}^i \big),
\end{equation}
and then train the network with a single BCE loss of the same form as Eq.~\ref{eq:loss},
\begin{equation}
\mathcal {L} = -\sum_{i=1}^{M}\sum_{j=1}^{C} \Big( y_i^j \log(\hat {y}_i^j) + (1- y_i^j)\log(1-\hat {y}_i^j) \Big).
\end{equation}
Using a ResNet-101 backbone and keeping all other settings the same, we report the results in Table~\ref{table:voc-coco-loss}. MCAR with the single loss obtains 94.4 mAP on VOC 2007 and 82.6 mAP on MS-COCO, improving the baseline by 1.5 and 5.5 mAP, but falling 0.4 and 1.2 mAP short of our main method (pair loss).
Why is MCAR with a pair of losses better than with a single loss? Our main insight is that global visual processing usually precedes local processing. The pair loss lets the global stream converge quickly and then guide the local stream towards promising local regions. Indeed, we find that the single-loss setting usually needs more epochs to reach similar performance, indicating that it converges more slowly than our pair loss.
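The single-loss variant above can be sketched in a few lines. This is a hedged NumPy illustration (the function name and the numerical-clipping `eps` are our additions, not part of the paper); it fuses the two streams with an element-wise max and applies one BCE loss to the fused score.

```python
import numpy as np

def fused_bce(y_true, y_global, y_local, eps=1e-7):
    """Single-loss variant: fuse global/local scores with an element-wise
    max, then apply one binary cross-entropy loss to the fused prediction."""
    y_hat = np.maximum(y_global, y_local)   # max-wise aggregation
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # avoid log(0); illustration only
    bce = -(y_true * np.log(y_hat) + (1 - y_true) * np.log(1 - y_hat))
    return y_hat, bce.sum()
```

In the pair-loss setting, by contrast, BCE would be applied to `y_global` and `y_local` separately, so the global stream receives its own gradient signal.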
\begin{table}[t]
\centering
\caption{Ablative study of learning strategy in MCAR with ResNet-101 backbone and the input size of 448$\times$448.}\label{table:voc-coco-loss}
\begin{tabular}{|c|c||c||c|}
\hline
Methods &Learning Strategy &{VOC 2007} &{MS-COCO}\\
\hline
Baseline &single &92.9 &77.1\\
\hline
\multirow{2}*{MCAR} &single &94.4 &82.6\\
&pair &\textbf{94.8} &\textbf{83.8}\\
\hline
\end{tabular}
\end{table}
\noindent \textbf{Number of local regions.} We fix $\tau$ to 0.5 and choose $topN$ from $\{0, 1, 2, 4, 6, 8\}$. Note that $topN=0$ means the model is trained with the global stream only, which is equal to our baseline. The first row of Fig.~\ref{fig:hyperpa} shows the mAP curves for different values of $topN$. First, mAP trends upward as $topN$ increases, meaning that more local regions help improve multi-label classification performance. Second, performance plateaus when $topN$ reaches 4 or 6, implying that the gains from a large $topN$ are limited. Third, even a small $topN$~(\eg, 1, 2, or 4) is significantly better than a pure global stream~(\emph{i.e}\onedot, $topN$=0), which further verifies the effectiveness of the proposed selection of high-confidence local regions. Another benefit of the region selection strategy is that it reduces computation cost.
\begin{table*}[t]
\centering
\caption{Comparisons of mAP in $\%$ of our methods and baseline on the MS-COCO dataset. Compared to the baseline method, the improvements of our method are highlighted in red.}\label{table:coco-as}
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
Methods &\multicolumn{2}{c||}{MobileNet-v2} &\multicolumn{2}{c||}{ResNet-50} &\multicolumn{2}{c|}{ResNet-101}\\
\hline
Input Size &256 &448 &256 &448 &256 &448 \\
\hline\hline
Baseline &61.5 &67.8 &70.1 &75.4 &71.2 &77.1 \\
\hline
MCAR~(GAP) &66.6 {\color{red} $\uparrow$5.1} &74.3 {\color{red} $\uparrow$6.5} &75.9 {\color{red} $\uparrow$5.8} &78.0 {\color{red} $\uparrow$2.6} &77.4 {\color{red} $\uparrow$6.2} &80.5 {\color{red} $\uparrow$3.4} \\
MCAR~(GWP) &69.8 {\color{red} $\uparrow$8.3} &75.0 {\color{red} $\uparrow$7.2} &78.0 {\color{red} $\uparrow$7.9} &82.1 {\color{red} $\uparrow$6.7} &79.4 {\color{red} $\uparrow$8.2} &83.8 {\color{red} $\uparrow$6.7} \\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[t]
\centering
\caption{Comparisons of mAP in $\%$ of our methods and baseline on the PASCAL VOC 2007 dataset. Compared to the baseline method, the improvements of our method are highlighted in red.}\label{table:voc07-as}
\footnotesize{
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
Backbone &\multicolumn{2}{c||}{MobileNet-v2} &\multicolumn{2}{c||}{ResNet-50} &\multicolumn{2}{c|}{ResNet-101}\\
\cline{1-7}
Input Size &256 &448 &256 &448 &256 &448 \\
\hline\hline
Baseline &85.5 &89.5 &89.1 &91.8 &89.2 &92.9 \\
\hline
MCAR~(GAP) &88.1 {\color{red} $\uparrow$2.6} &91.3 {\color{red} $\uparrow$1.8} &92.3 {\color{red} $\uparrow$3.2}&94.1 {\color{red} $\uparrow$2.3} &93.0 {\color{red} $\uparrow$3.8} &94.8 {\color{red} $\uparrow$1.9}\\
MCAR~(GWP) &88.5 {\color{red} $\uparrow$3.0} &91.7 {\color{red} $\uparrow$2.2} &92.0 {\color{red} $\uparrow$2.9}&93.7 {\color{red} $\uparrow$1.9} &92.6 {\color{red} $\uparrow$3.4} &94.3 {\color{red} $\uparrow$1.4}\\
\hline
\end{tabular}}
\end{table*}
\noindent \textbf{Threshold of localization.} To explore sensitivity to $\tau$ in Eq.~\ref{eq:pxpytau}, we fix $topN$ to 4 and test $\tau$ values from $\{0, 0.1, 0.3, 0.5, 0.7, 0.9\}$. When $\tau$ equals 0, the whole image is treated as a local region, which is equivalent to the baseline method. The second row of Fig.~\ref{fig:hyperpa} shows mAP as a function of $\tau$.
First, performance is better whenever $\tau$ is greater than 0. Second, performance drops when $\tau$ is either too small or too large. If $\tau$ is too small, local regions stay close to the original input image, so they carry more context but lack discriminative features; if $\tau$ is too large, local regions contain only the most discriminative parts of an object, which easily leads to over-fitting. Values of $\tau$ between 0.3 and 0.7 are a good choice.
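The role of $\tau$ can be sketched as follows. Since Eq.~\ref{eq:pxpytau} is not reproduced in this section, we make a simplifying assumption for illustration: pixels whose activation exceeds $\tau$ times the map's maximum define the region, and we take their bounding box. This is a sketch, not the paper's exact localization rule.

```python
import numpy as np

def localize_region(att_map, tau=0.5):
    """Threshold an attentional map at tau * max and return the bounding box
    (x0, y0, x1, y1) of the surviving pixels. tau=0 keeps the whole image."""
    mask = att_map >= tau * att_map.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Note how a small $\tau$ admits many low-activation pixels (a large, context-heavy box), while a large $\tau$ shrinks the box to the peak activation only, consistent with the over-fitting discussion above.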
\noindent \textbf{Global pooling strategy.} Encoding spatial feature descriptors into a single vector is a necessary step in state-of-the-art CNNs. Early works,~\eg, AlexNet and VGGNet, use a fully connected layer, while the more recent ResNet employs global average pooling~(GAP), which outputs the spatial average of each feature map. Specifically, consider the class-agnostic feature map $A$ from the top block of a backbone network. GAP outputs the spatial average of $A$, returning a vector $\vec f^{a} \in {\mathbf{R}^{d^\prime}}$ whose $k$-th element is
\begin{equation}
f_k^{a} = \frac{1}{h^\prime w^\prime} \sum_{i=1}^{h^\prime}\sum_{j=1}^{w^\prime} A_{i,j,k}.
\end{equation}
We denote the output of global maximum pooling~(GMP) as $\vec f^{m}\in {\mathbf{R}^{d^\prime}}$, whose $k$-th element is
\begin{equation}
f_k^{m} = \max_{1 \le i \le h^\prime,\, 1 \le j \le w^\prime} A_{i,j,k}.
\end{equation} GMP is prone to over-fitting because it forces the network to learn only the most discriminative feature, so GAP usually generalizes better than GMP. However, GAP may lead to under-fitting and slow convergence because it assigns equal importance to all spatial feature descriptors. Our local region localization needs to discover discriminative regions, which seems opposed to the objective of GAP. To alleviate this conflict, we propose a simple solution termed \emph{Global Weighted Pooling}~(GWP), an average of $\vec f^a$ and $\vec f^m$:
\begin{equation}
\vec f= \lambda \vec f^a + (1-\lambda) \vec f^m,
\end{equation}
where $\lambda \in [0,1]$ is a weight that balances the importance of GAP and GMP. In our paper, $\lambda$ is empirically set to 0.5.
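The three pooling variants above reduce to a few array reductions. The following NumPy sketch (function name and layout are our assumptions; the paper's tensors may be channel-first) implements GAP, GMP, and their $\lambda$-weighted combination GWP:

```python
import numpy as np

def global_weighted_pooling(A, lam=0.5):
    """A: (H, W, D) feature maps. Returns the GWP vector
    f = lam * GAP(A) + (1 - lam) * GMP(A), one value per channel."""
    f_avg = A.mean(axis=(0, 1))  # GAP: spatial average per channel
    f_max = A.max(axis=(0, 1))   # GMP: spatial maximum per channel
    return lam * f_avg + (1 - lam) * f_max
```

With `lam=1.0` this degenerates to plain GAP and with `lam=0.0` to plain GMP, so the ablation between pooling strategies amounts to varying a single scalar.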
Table~\ref{table:coco-as} shows that MCAR with GWP further boosts performance on the MS-COCO dataset, improving mAP by 4.1 and 3.3 points over the common GAP on ResNet-50 and ResNet-101 with an input size of 448$\times$448. On PASCAL-VOC, however, GWP performs comparably to GAP, as reported in Table~\ref{table:voc07-as}. This may be dataset-specific: the PASCAL-VOC task is relatively simpler than MS-COCO, with fewer samples, fewer classes, and fewer instances per image. Overall, MCAR equipped with GWP is better than with GAP, especially on more challenging tasks.
\noindent \textbf{Network architecture.} Recent state-of-the-art methods usually report performance with a ResNet-101 backbone, but real applications widely adopt lightweight networks. To meet such requirements, we also evaluate the proposed method with MobileNet-v2 and ResNet-50 on PASCAL-VOC and MS-COCO, and report the results in Tables~\ref{table:coco-as} and \ref{table:voc07-as}. Deeper networks tend to perform better, which is not surprising: a larger network has more parameters and a deeper structure, ensuring stronger capacity and transferability. Note that our method still performs well with the lightweight MobileNet-v2. In addition, the proposed method yields significant improvements for all backbones; on MS-COCO, MCAR with GWP improves the baseline by about 7\% at an input size of 448$\times$448.
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup[subfigure]{justification=centering, font=tiny, labelfont=bf}
\centering
\captionsetup[subfigure]{oneside,margin={0cm,0cm,5cm,5cm}}
\subfloat[] [\hspace{-1.4cm} \textbf{Baseline:} {\it {car, person}} \\ \hspace{-1.04cm} \textbf{MCAR:} {\it {car, person, train}}]{\includegraphics[width= 0.165\textwidth]{2008_002531.jpg}}
\subfloat[][\it boat\\ boat, person] {\includegraphics[width= 0.165\textwidth]{2008_000005.jpg}}
\subfloat[][ \it cat \\ car, cat] {\includegraphics[width= 0.165\textwidth]{2008_000017.jpg}}
\subfloat[][ \it none \\car] {\includegraphics[width= 0.165\textwidth]{2008_000055.jpg}}
\subfloat[] [ \it car \\ bird, car]{\includegraphics[width= 0.165\textwidth]{2008_000091.jpg}}
\subfloat[] [ \it horse, person\\horse, person]{\includegraphics[width= 0.165\textwidth]{2008_000046.jpg}}
\\ \vspace{-10pt}
\subfloat[] [\hspace{-1.4cm} \textbf{Baseline:} {\it {car, person}}\\ \hspace{-1.0cm} \textbf{MCAR:} {\it {car, chair, person}}]{\includegraphics[width= 0.165\textwidth]{2008_000542.jpg}}
\subfloat[] [ \it mbike, person\\car, chair, mbike, person]{\includegraphics[width= 0.165\textwidth]{2008_000326.jpg}}
\subfloat[][ \it person \\person, pottedplant] {\includegraphics[width= 0.165\textwidth]{2008_000387.jpg}}
\subfloat[] [ \it person \\cat, person]{\includegraphics[width= 0.165\textwidth]{2008_006263.jpg}}
\subfloat[] [ \it none \\ sofa]{\includegraphics[width= 0.165\textwidth]{2008_002534.jpg}}
\subfloat[] [ \it bicycle, person\\bicycle, car, person]{\includegraphics[width= 0.165\textwidth]{2008_002490.jpg}}
\\ \vspace{-10pt}
\subfloat[][\hspace{-1.7cm} \textbf{Baseline:} {\it{person}} \\ \hspace{-0.9cm} \textbf{MCAR:} {\it{person, snowboard}} \\\hspace{-1.15cm}\textbf{GT:} {\it{person, snowboard}} ] {\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000000360.jpg}}
\subfloat[][\it{bench, elephant, person \\bench, elephant, person \\bench, elephant, person}] {\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000000974.jpg}}
\subfloat[][\it{person, umbrella \\car, person, umbrella\\car, person, umbrella}] {\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000007088.jpg}}
\subfloat[][\it{bbat, bglove, person \\bbat, bglove, bottle, person\\bbat, bglove, bench, bottle, person}] {\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000006471.jpg}}
\subfloat[] [\it{bicycle, car, chair, person, truck\\car, kite, mbike, person, tlight, truck\\car, kite, mbike, person, truck}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000004246.jpg}}
\subfloat[][\it{cow \\ cow, dog \\cow, dog}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000011169.jpg}}
\\ \vspace{-10pt}
\subfloat[] [\hspace{-1.34cm}\textbf{Baseline:} {\it{cat, person}}\\ \hspace{-0.63cm} \textbf{MCAR:} {\it{cat, cell phone, person}}\\ \hspace{-0.95cm} \textbf{GT:} {\it{cat, cell phone, person}}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000012827.jpg}}
\subfloat[] [\it{airplane \\train \\train}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000007666.jpg}}
\subfloat[] [\it{clock \\bottle, clock\\ bottle, clock, cup}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000004157.jpg}}
\subfloat[][\it{apple, orange, vase \\ apple, bowl, vase \\ apple, bowl, vase }] {\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000003742.jpg}}
\subfloat[] [\it{cat\\cat, mouse\\cat, couch, mouse}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000015497.jpg}}
\subfloat[] [\it{laptop, person, tie\\ laptop, person, tie\\book, person, tie}]{\includegraphics[width= 0.165\textwidth]{COCO_val2014_000000011655.jpg}}
\caption{Selected examples of region localization and classification results on PASCAL VOC 2012 testing images (first two rows) and MS-COCO validation images (last two rows). Our MCAR achieves 94.3\% mAP on the VOC 2012 testing set and 83.8\% mAP on the MS-COCO validation set using a ResNet-101 backbone and an input size of 448$\times$448. Note that these attentional regions are generated by a model trained on image-level labels only~(without bounding box annotations). Each region box is associated with a category label~($c$), a global stream score~($\hat {y}_{g}^{c}$) and a two-stream score ($\max\{\hat {y}_{g}^{c}$, $\hat {y}_{l}^{c}$\}), organized as ``category name: global score/two-stream score", \eg, ``train: 0.14/0.99" in the image at the fourth row and second column. These region boxes are displayed subject to $\hat {y}_{l}^{c} > 0.1$, $\max\{\hat {y}_l^c, \hat {y}_g^c\} > 0.6$, $topN=4$ and $\tau=0.5$. For each image, one color represents one object category in that image. The proposed two-stream MCAR framework recognizes objects over a wide range of scales, especially small-sized or occluded objects, such as the car in (1,3), the bird in (1,5), the cat in (2,4) and the sofa in (2,5) on VOC 2012 testing images, and the snowboard in (3,1), the car in (3,3), the dog in (3,6), the cell phone in (4,1), the train in (4,2) and the mouse in (4,5) on MS-COCO validation images, where $(i,j)$ denotes the image at the $i$-th row and $j$-th column. It is noteworthy that MCAR may produce incorrect or incomplete predictions when the local region is too small or too blurry, such as the bench in (3,4), the book in (4,6) and the couch in (4,5) on the MS-COCO testing images. (Best viewed in color and zoomed in.)} \label{fig:vis}
\end{figure*}
\noindent \textbf{Input size.} Multi-label recognition performance is sensitive to the input size. Generally, larger inputs perform better, as reported in Tables~\ref{table:coco-as} and \ref{table:voc07-as}. However, small inputs are more practical on resource-restricted devices. Somewhat surprisingly, MCAR performs relatively better with small inputs: in Tables~\ref{table:coco-as} and \ref{table:voc07-as}, our method tends to produce larger improvements when a smaller input size is employed. This advantage comes from the two-stream architecture, which examines an image comprehensively (global to local), and indicates that our method is well suited to low-resolution inputs.
\section{Discussion}\label{discuss}
In this section, we examine how the network recognizes multiple objects in a multi-label image by visualizing the produced local regions, and discuss why MCAR is a simple and efficient multi-label framework.
\noindent \textbf{Visualization.}
To analyze where our model focuses in an image, we show the class-specific attentional regions generated by the multi-class attentional region module in Fig.~\ref{fig:vis}. These attentional regions cover almost all possible objects in each image, which is consistent with our initial intention. Furthermore, the global prediction scores of some small-scale objects are low,~\eg~the train in (1,1), the car in (1,3), the bird in (1,5), the chair in (1,1), the cat in (2,4) and the sofa in (2,5) on the PASCAL VOC 2012 testing set, and the snowboard in (3,1), the car in (3,3), the dog in (3,6), the cell phone in (4,1), the train in (4,2) and the mouse in (4,5) on the MS-COCO validation images, where $(i,j)$ denotes the image at the $i$-th row and $j$-th column of Fig.~\ref{fig:vis}. This indicates that using the global image stream alone is suboptimal, especially for small-scale and partly occluded objects. Our two-stream network alleviates this limitation because it recognizes such objects from a closer view (high two-stream scores). Compared to the baseline, our method significantly improves multi-label image recognition performance. Note that MCAR may produce incorrect or incomplete predictions when local regions are too small or too blurry, such as the bench in (3,4), the book in (4,6) and the couch in (4,5) on MS-COCO testing images.
Furthermore, the local region stream cannot be guaranteed to cover all target objects even with a large $topN$. However, it does cover the majority of target objects thanks to the high diversity of local regions, and the two streams complement each other by finding missing discriminative regions. Considering that this is a weakly supervised problem and that computation efficiency matters, we consider this situation acceptable.
\noindent \textbf{Simplicity.}
Our framework aims to be a simple and efficient method that learns global and local image semantics in a single unified model.
On the one hand, we generate object proposals using the network itself, whereas HCP relies on external tools such as EdgeBox~\cite{zitnick2014edge} or BING~\cite{cheng2014bing}. On the other hand, our method efficiently obtains multi-class regions with a parameter-free region localization module, thanks to the parameter sharing mechanism in Eq.~\ref{eq:linear} and~\ref{eq:cam}. In contrast, existing attention-based methods often need a somewhat complex module, such as the LSTM unit in~\cite{jaderberg2015spatial,yu2019delta} or the reinforcement learning module in RARL~\cite{chen2018recurrent}.
\begin{table}[t]
\centering
\caption{Comparisons of average inference time of per-image between our MCAR (including each component) and baselines with different backbones and input sizes. The time is measured in milliseconds~(ms) on one P40 GPU.}\label{table:inferenetime}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{|c||c||r||r||r|r|r|}
\hline
\multirow{2}{*} {Methods} &\multirow{2}{*} {ImgSize} &\multirow{2}{*} {Baseline} &\multicolumn{4}{c|}{MCAR~($topN$=$4$) }\\
\cline{4-7}
& & &Total &Global &G-to-L &Local \\
\hline
\multirow{2}{*} {MobileNet-v2} &256$\times$256 &6.7 &22.7 &6.6 &9.2 &6.9 \\
&448$\times$448 &6.9 &34.3 &6.9 &13.0 &14.4 \\
\hline
\multirow{2}{*} {ResNet-50} &256$\times$256 &8.5 &31.7 &8.2 &9.1 &14.3 \\
&448$\times$448 &11.2 &55.4 &11.1 &13.2 &31.2 \\
\hline
\multirow{2}{*} {ResNet-101} &256$\times$256 &15.7 &46.8 &16.1 &8.6 &22.1 \\
&448$\times$448 &18.4 &82.8 &18.8 &13.5 &50.6 \\
\hline
\end{tabular}}
\end{table}
\noindent \textbf{Complexity.}
The computation complexity grows linearly with the number of regions (\emph{i.e}\onedot, $topN$). However, our framework significantly reduces the number of local regions compared to region-based methods such as HCP (\eg, 500 proposals). Our method works well with a small $topN$ (\eg, 4), so the complexity is controllable and the computation cost affordable. For example, HCP processes 500 object proposals, whereas we reduce this number to 4, yielding roughly a 100-fold reduction in region processing.
We test the forward running time of each model with input sizes of 256$\times$256 and 448$\times$448 on one P40 GPU accelerated by cuDNN v7.4.1. The actual inference times are reported in Table~\ref{table:inferenetime}. The total time of MCAR~($topN$=4) is about 4 to 5 times that of the baselines, which is not surprising since our method incurs at least $topN$+1 forward passes. We also report the inference time of each component (global stream, global-to-local, and local stream) of MCAR. The generation of local regions and their forward inference dominate the computation cost, so reducing the number of local regions would greatly accelerate inference. In addition, MCAR significantly outperforms the baseline~(81.9\% vs. 77.1\% mAP on MS-COCO, Table~\ref{table:voc-coco}) even when only the global image~(without local regions) is taken as input, which implies that our method is still better than the baseline under the same inference time. Meanwhile, our method needs no additional parameters for generating local regions because of the parameter sharing mechanism between the global and local streams.
\section{Conclusion}\label{cons}
We observe that humans recognize multiple objects in two steps, usually following a global-to-local rule: by first looking at the whole image, people discover places that deserve closer attention, and these attentional regions are then examined more closely for a better decision. Inspired by this observation, we develop a two-stream framework that recognizes multi-label images from global to local, as the human perception system does. To localize object regions, we propose an efficient multi-class attentional region module that significantly reduces the number of regions while keeping their diversity. Our method efficiently and effectively recognizes multi-class objects with an affordable computation cost and a parameter-free region localization module, and achieves state-of-the-art results on three prevalent multi-label benchmarks. In the future, we will try to integrate label dependency into our method to further boost performance. It is also an interesting direction to extend the proposed method to weakly supervised object detection and semantic segmentation.
\bibliographystyle{IEEEtran}
I run Windows XP SP2 and have encountered this problem on shutdown: a box appears saying "dw.exe DLL INITIALIZATION FAILED", then an end-of-program box for msn6.exe. I know it isn't causing me problems, just annoying. I have run Ad-Aware, Registry Mechanic and AVG, and uninstalled and reinstalled MSN, but the problem still persists. Could it be a registry problem?
I have the same exact issue, but I have never used MSN.
In order to get rid of this minor irritant, save your money! Instead of buying new software to get rid of it, do this:
Go to "My Computer", Search, and type in dw.exe (tick "search subfolders" and "hidden files"). It will find 5 files, one having a strange extension (mine was fp.). Delete this file, close out the window, and it won't come up again.
I don't understand about "dw.exe dll". It is not clear. What problem?
I'm also having this problem, Geoff. I've been on a few sites and they say it's adware or spyware and to delete it right away, but I'm not sure. If you get any closer to sorting it out, post the answer.
I have found out how to stop this happening. All you have to do is sign out of MSN and this seems to stop the problem! However, I had been using MSN with no change and it just started happening out of the blue. I still do not know what caused the problem. Hope this helps anyone.
I'm having the same problem. It is annoying, isn't it? And I do a lot on MSN, it's really bugging me. Help!
I've been getting "dw.exe dll failed" etc. on shut-down. It is not causing grief, just annoying. The other night I had to shut down using the power button; then it was okay after that. I also downloaded from MS the other day.
My experience is exactly like yours, Geoff. Could be related to a recent MSN upgrade which I uninstalled.
I am getting the same problem. I do not know the answer and have searched the Microsoft database and there is nothing on there about this. I believe it has something to do with MSN Messenger.
Updated: Report corrected after falsely accusing anti-fracking campaigners of "grooming" 14-year-old boy
By Ruth Hayhurst on July 30, 2018
Anti-fracking rally in Manchester in November 2016. Photo: DrillOrDrop
Update 29/8/2018: See section headed Swapping fracking at the bottom of this post.
A report published today on social cohesion in Greater Manchester has been rewritten this evening after it emerged that a case study falsely accused anti-fracking activists of "grooming" a 14-year-old boy.
The first version of the report, commissioned after the Manchester Arena bombing, said the boy was referred to Channel, part of the government's anti-extremist Prevent programme. His school was said to have been concerned about what were described as his "extreme beliefs" around fracking.
The boy, named Aaron, was said to have been targeted aggressively via social media after signing an online petition. The case study said he was encouraged to participate in local protests and was on "the periphery of engaging in criminal behaviour". He engaged with local activists through the dark web, it added.
Paragraph from version 1 of the report, p89
But this evening, a revised version of the 124-page report, titled A Shared Future, has removed all reference to anti-fracking campaigners.
Revised version of the paragraph
A statement from the Greater Manchester Combined Authority said:
"The A Shared Future report contains a number of case studies where some details have been changed to protect the identities of those involved. This is standard practice where sensitive information is being used in a report.
"However, in one of these case studies – Case Study J – a factual detail has been altered which should not have been.
"The case study mistakenly said that concerns were raised around fracking. They were actually raised around a form of environmental extremism – but it had nothing to do with fracking.
"Although this change was made with the good intention of protecting the individual's identity, ultimately it was the wrong thing to do. We apologise for this error.
"Because of a genuine fear that this vulnerable child could be identified, we cannot give more specific details about the type of extremism."
The report, commissioned by the Mayor of Greater Manchester, Andy Burnham, was designed to improve social cohesion across the region. The conclusion included a recommendation to submit it as the Greater Manchester response to the Government's Green Paper consultation on the Integrated Communities Strategy.
The Green Party peer, Baroness Jones, told The Guardian:
"To potentially drag the name of fracking activists through the mud like this is totally unacceptable. We should not stand by and watch while environmental campaigners are discredited in this way.
"Disguising the identity of a vulnerable young person and ensuring appropriate safeguards are in place is of course very important, but we must also make sure we are not wrongly implicating activists in this fashion."
Anti-fracking campaigners reacted angrily in 2016 when opposition to fracking was listed alongside terrorist organisations, including the IRA, al-Qaeda, the PKK and ISIL, in official counter-extremism documents.
The police monitoring group, Netpol, recently won a case in its challenge to establish how many anti-fracking campaigners have been referred to Channel. Five police forces refused to respond to a question asked in Freedom of Information requests, citing "national security".
But in June, a tribunal ruled that police could not use this reason and ordered the police forces to respond to the requests. Netpol said today that it was still waiting.
Swapping fracking
A Freedom of Information request has revealed some details of how the case study falsely accused anti-fracking activists of "grooming" the teenager.
According to correspondence released in response, the author of the report (name redacted) was concerned that the original case study might identify the teenager.
Eleven days before the report was released, the author, employed by the Greater Manchester Combined Authority, contacted someone (name also redacted) who knew the teenager and asked for advice.
The contact replied the following day:
"People working in [redacted] could easily work out who this is as it was such an unusual – [redacted].
"I wonder if the subject matter being changed might help. [redacted]
"Would it work to changing it to something like 'anti-fracking' or something like that? The methodologies of grooming and being 'pulled into that world etc would be the same but it would be harder for someone reading it to make the connection to the real case of both the [redacted] matter were different."
The report author replied within seven minutes with a new version of the case study. The author wrote:
"Thank you for your quick response. I completely share your concerns. Your idea around changing the motivation is a good one – I have changed to anti-fracking (please see below), what are your thoughts on the edit?"
Within 16 minutes the contact replied:
"Yes that feels more comfortable and less identifiable."
The contact suggested another change to the case study text: that the teenager became known to activists by attending a local protest or signing a petition.
The FOI request was made by John Hobson, a Lancashire anti-fracking campaigner. He has asked the Greater Manchester Combined Authority to reconsider its decision not to identify the report author.
Categories: policing
Tagged as: A Shared Future, accuse, Andy Burnham, anti- extremism, Baroness Jones, case study, correct, false, Fracking, Greater Manchester Combined Authority, Netpol, Prevent, report, social cohesion, tribunal
Alan J Tootill says:
Drivers being groomed by anti-frackers – shame! 😉
https://www.blackpoolgazette.co.uk/news/m55-hit-with-anti-fracking-graffiti-again-1-9277221
Thanks Ruth and Paul for this update on the accusations of grooming of a 14 year old child. The redaction is telling, but I am sure that it produced the desired effect of smearing the anti-fracking movement.
Perhaps nothing but a police announcement and a newspaper and media headline retraction will suffice?
If 300 were arrested outside one pub, Alan it might even be more stupid? Yes, they would be extremely stupid.
You only get arrested, by the way, if you have broken the law-apart from the odd one where a genuine error is made.
You can, and will, justify that no extremists involved-although some of your anti colleagues have disagreed with that, a love in for fresh faced youngsters, all arrests a result of heavy handed policing and injunctions only granted because the Judges were misled, or part of the conspiracy.
It's par for the course, but outside of the antis shrinking community will gain little support. Most of the public will also think that 85,000 people who drink drive are extremists, as well. (Unless they read the Guardian, and are told, and believe, they only do so to overcome the pains of austerity.)
Yes, I do agree with you that it is fortunate you don't work for a fracking company. I believe they require Gold Standard operatives.
Another blue paint graffiti flag isn't it?
The colour is a dead giveaway. Genuine would be green.
Pure tory industry blue flag fear smear campaign.
Number three on the list.
Kisheny says:
They were actually raised around a form of environmental extremism ???
Environment extremists? Seen any of them recently?
Looked in a mirror recently?
You need gas to attain those high temperatures to produce glass.
Funny how even the smallest sentences can include products made using gas…
Is that an environmental extremist view? Let's look at mirrors a bit closer shall we? Closer? Closer? Whoops! Stop! You are steaming it up, try backing off a little.
The reflection you see is reflected light off the object, in this case it is called kish, sounds a bit like breaking glass doesn't it?
And the glass is just the top transparent medium, its not necessary for the reflection.
In fact, reflection is the very thing you don't want the glass itself to do; the reflective element behind the glass is metallic foil or plating such as silver nitrate. Mirrors used to be polished metal or stone such as obsidian, without glass, because glass then was not clear enough and could not be made perfectly flat. I believe mercury beds are used to get the molten glass perfectly flat and silver nitrate is used to get the reflective element; NASA use gold reflective elements, I believe?
Glass has been made for thousands of years going back into pre history without the use of gas, but by use of coal, or coke or wood gas even:
https://en.wikipedia.org/wiki/Wood_gas
Today aside from natural gas, concentrated sunlight will produce the required temperature in a solar furnace in hot countries, nuclear powered furnace from thorium is more available than uranium, and can be made small scale, how about electromagnetic? Gas is just a very recent medium, but it is not needed for the reflective element.
TW says:
Any kind of grooming for extreme activism is despicable [edited by moderator]. Especially using young impressionable minds. A very shameful act.
Didn't Ineos attempt to fund child sports and make them wear Ineos tee shirts, no doubt encouraging compliance with Ineos propaganda?
[Edited by moderator]
Ineos have a healthy sporting culture. Promoting and funding Children's sports is fantastic. [Edited by moderator]
[Edited by moderator] the child has autism; there is no such thing as an 'autistic' child.
Yes Sherwulfe
The term used is ASD or Autism Spectrum Disorder and covers a whole range or spectrum of disorders which cannot be clinically defined by any one condition or set of symptoms, each child is unique and special as we all are. The term ASD also covers conditions such as Asperger syndrome which is treated as a related condition.
The spectrum range represents both mental and physical disability spectrum ranges and as such children are extremely vulnerable and deserve and receive every protection.
There are very strict and far reaching laws about the responsible treatment, education, care and human rights of such children.
I am allowed to say, that for a child to have been put in such a vulnerable position from the very outset is outside of the strict guidelines and the recommendations and the law.
A newspaper report giving any details not approved by the care provider contravenes the legal framework that regulates such reporting.
This text was submitted for approval and approved before posting it.
This is the final approved version.
PREVENT was designed to record the behaviour of young people and their influences, to try and spot potential extreme behaviours that may go on to manifest in terrorist activity. It was set up after the extreme responses by a small number of individuals, erroneously in the name of religion. It is also there to spot early signs of child abuse. Many groups and cultural preferences have been included for political correctness.
If giving out leaflets is an early sign of terrorism then all political parties, charities, company sponsorship and event organizers are in trouble.
Let's just keep this all in perspective.
It takes a certain individual to step over the line.
As a 'mistake' has been made in the report and subsequently in the correction of reporting the error in mainstream media it should be rectified at once and an inquiry should commence into the perpetrators of the initial misinformation and the possible motivation for such a statement.
https://news.sky.com/story/amp/british-gas-owner-centrica-hints-at-further-price-rise-over-wholesale-energy-costs-11454938?__twitter_impression=true
What an ill informed remark. Try Giggling Daily Mile Campaign or Go Run for Fun. What rubbish can we heap upon Elaine Wyllie?
"I only post the truth"!
Obviously not. No doubt about that.
I suppose next it will be £110m for the Admirals Cup challenge is to be able to meet sailors?
Then, football academies in Africa should really be an easy target.
"Run for fun"-enhancing mood, behaviour, academic performance and all round wellbeing. Well, start at the younger generation and leave some of the older generation as a symbol of the negative control.
Oooh dear. Reading again martin? Tut Tut! The guys and gals round the Fog and F…rack won't be amused?
Free porkie pies anyone?
Free tee shirts for the kiddies?
Read the comment about posting, if the selectamatic black out blinkers will allow of course?
Personal attacks are not pretty and may scupper your application for the bought and paid for industry Haw Haw Frack Hack?
I can only tell you?
You should read the Times, Sherwulfe. They seem to be able to get it right. Not all the time, not on everything, but in this case, yes. They also serialised part of The Alchemists, which would be a pure bonus!
Sounds like serial abuse?
Sounds like a serial denialist Phil C???
I read as many different reports as available MC. My comment was specific; yours seems to have gone off topic – again…..
Personal attacks are not pretty. Hmm. Pot and kettle and INEOS (i.e. Jim Ratcliffe). Remember "climate change deniers"? No? Remember sending PhilipP off for re-education because he appeared to be a "wolf in sheep's clothing"? No?
Tell you what PhilC, forget generating fog and do just a little research about the subjects and some might then start to believe you post the truth. You can still be biased, an excuse for a poet, but truthful. It takes a bit more effort. But, if you wish to continue to follow the Guardian lead, feel free-and exposed. Just like they were.
By the way-Third Energy are owned by Barclays Bank-still. As such they do not have a share price.
Sorry that you wish to continue to go unchallenged and that you fell for the basic trick of believing what you read on social media, but the only way to stop fake news is to expose it. If it is not me, it will be someone less gentle.
(Actually, I did intend spending less time on DOD, but plans changed because a supplier let me down and I was stuck with computer "stuff" more than I intended.)
Still suffering from personal defamation verbal constipation? Just can't let it go?
Need to unleash a stream of invective and toxic detritus waste?
Try the new Alchemist Smear Fear Porn Suppositories (accent on the tories) guaranteed to loosen even the most treasured democratic principle! One little (Patented ##) "Chill Pill" and its frackboots in the streets and freedom out of the window!
Used by all "good" Dictators and fifth columnist invaders! Endorsed by ## Bigot Oil and Gas below a community near you?
Need to pretend you are not reading all that nasty stuff that cannot be answered?
Well relax!
New improved Bullshit Baffles Brains Virtual Total Blackout Blinkers does the trick every time!
Just plug 'em in to the real world, turn 'em on and descend into (Patented ##) "Comfort Blanket Oblivion" mode and wrap your entire world with virtual (Patented ##) "E-Truth" and no one will know the difference! Everything is automatically converted into your favourite (Patented ##) "Bias Mode" of your choice! Whatever you say, whatever you type, whatever you think, will be automatically translated into (Patented ##) "Hate Speech" or "Total Ignorance" or "Extremist Derog a tory" or "Fracking Fractures Everything" mode of your choice!
Late News! The Latest Endorsement from Tweetmaster Donald F Trump is here! "i said stroke that pussy" and it was turned into "Declare War on Vladimir Putin!" What a boon to international politics this little device is!
And this from The Guardian: "We said "[edited]" and the Blinker's said "[edited]!" Like wow man! We'll use it all the time now!
Sounds….really…..awful!
Fogs rolling in again.
Switch on the light house-when Donald/Vlad lets us have some LPG.
(Saves Jim's plastic if right first time. Or, failing that, once in a while.)
Fog's all yours, it's clear as a bell here.
The last word syndrome…….say anything as irrelevant as you can; gets you a header or footer 🙂
[Comments removed] says:
[Comments removed by moderator]
package com.android.tools.idea.editors.gfxtrace.schema;
import com.android.tools.idea.editors.gfxtrace.rpc.StructInfo;
/**
* A structure instance unpacked using a schema description.
*/
public class Struct {
public final StructInfo info;
public final Field[] fields;
public Struct(StructInfo info, Field[] fields) {
this.info = info;
this.fields = fields;
}
}
Leading the Target

Math and Physics • Posted by alvaro

Where should we aim if we want to hit a moving target with a finite-speed projectile? This is one of the recurrent questions from beginner game developers. If we naively aim at the target's current position, by the time our projectile gets there the target will have moved, so we need to aim ahead of the target's current position. The technical name for the technique is "deflection", but most people call it "leading the target". There are several variations of the problem, where perhaps the projectile is a missile that needs to accelerate, or where the shooter is a turret that is currently aiming in some direction and needs time to aim somewhere else. We'll cover the simple case first, and then present a general template for solving variations of the problem.

Plain-vanilla deflection

Let's assume the shooter can aim and shoot anywhere instantly, the target is moving at a constant velocity and the projectile will travel at a constant speed. We are given as inputs the target's current position, its velocity and the speed of our projectile. We'll use coordinates where the shooter is at the origin and has zero velocity.

First-order correction

As mentioned before, if we naively aim at the target's current position, by the time the projectile gets there the target will have moved. We can compute how long it will take for the projectile to get to the target's current position, compute where the target will be then, and aim there instead.

```cpp
Position compute_first_order_correction(Position target_position, Vector target_velocity, float projectile_speed) {
  float t = distance(Origin, target_position) / projectile_speed;
  return target_position + t * target_velocity;
}
```

This simple piece of code is probably good enough in many cases (if the target is moving slowly compared to the projectile speed, if the target is moving perpendicularly to the shooter-to-target vector, or if we want to sometimes miss because a more precise solution would be detrimental to the fun of the game).

Iterative approximation

For a more precise solution, you could iterate this first-order correction until it converges.

```cpp
Position iterative_approximation(Position target_position, Vector target_velocity, float projectile_speed) {
  float t = 0.0f;
  for (int iteration = 0; iteration < MAX_ITERATIONS; ++iteration) {
    float old_t = t;
    t = distance(Origin, target_position + t * target_velocity) / projectile_speed;
    if (std::abs(t - old_t) < EPSILON)
      break;
  }
  return target_position + t * target_velocity;
}
```

Direct solution

In the iterative approximation, we would stop if we found a time where old_t and t match. This gives us an equation to solve:

    t = distance(Origin, target_position + t * target_velocity) / projectile_speed

Let's do some computations to try to solve it:

    t = sqrt(dot_product(target_position + t * target_velocity, target_position + t * target_velocity)) / projectile_speed
    t^2 * projectile_speed^2 = dot_product(target_position + t * target_velocity, target_position + t * target_velocity)
    t^2 * projectile_speed^2 = dot_product(target_position, target_position) + 2 * t * dot_product(target_position, target_velocity) + t^2 * dot_product(target_velocity, target_velocity)

This is a second-degree equation in t which we can easily solve, leading to the following code:

```cpp
// a*x^2 + b*x + c = 0
float first_positive_solution_of_quadratic_equation(float a, float b, float c) {
  float discriminant = b*b - 4.0f*a*c;
  if (discriminant < 0.0f)
    return -1.0f; // Indicate there is no solution
  float s = std::sqrt(discriminant);
  float x1 = (-b-s) / (2.0f*a);
  if (x1 > 0.0f)
    return x1;
  float x2 = (-b+s) / (2.0f*a);
  if (x2 > 0.0f)
    return x2;
  return -1.0f; // Indicate there is no positive solution
}

Position direct_solution(Position target_position, Vector target_velocity, float projectile_speed) {
  float a = dot_product(target_velocity, target_velocity) - projectile_speed * projectile_speed;
  float b = 2.0f * dot_product(target_position, target_velocity);
  float c = dot_product(target_position, target_position);
  float t = first_positive_solution_of_quadratic_equation(a, b, c);
  if (t <= 0.0f)
    return Origin; // Indicate we failed to find a solution
  return target_position + t * target_velocity;
}
```

The general case

There are many variations of the problem we could consider: accelerating targets, accelerating projectiles, situations where it takes time to aim in a new direction... All of them can be solved following the same template. The things that could change can be encoded in two functions:

- position_of_target_at(time)
- time_to_hit(position)

All we are really doing is finding a time t at which the following equation holds:

    t = time_to_hit(position_of_target_at(t))

We then compute where the target will be at time t and aim there. Just as before, we could do a first-order correction, use iterative approximation or solve the problem directly. It might be the case that an analytical solution can be found, like we did in the previous section, but things can get messy quickly and you may have to resort to a numerical solution.

Conclusion

This article covered three methods to implement deflection in your games: first-order correction, iterative approximation and directly finding the solution. You'll need to use some judgement to decide which one to use. Hopefully this article gives you enough to make an informed decision.

Article Update Log

30 Oct 2015: Initial release
4 Nov 2015: Minor fixes
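The general-case template above amounts to a small fixed-point iteration. The following Python sketch is not from the article (which uses C++); the function names and the linear-target example inputs are illustrative assumptions, and convergence is only guaranteed when the iteration is a contraction (e.g. projectile much faster than target).

```python
import math

def solve_deflection(position_of_target_at, time_to_hit,
                     max_iterations=100, epsilon=1e-6):
    """Find t with t = time_to_hit(position_of_target_at(t)) by
    fixed-point iteration; return the aim point, or None on failure."""
    t = 0.0
    for _ in range(max_iterations):
        old_t = t
        t = time_to_hit(position_of_target_at(t))
        if abs(t - old_t) < epsilon:
            return position_of_target_at(t)
    return None

# Example inputs (assumed, not from the article): a target starting at
# (10, 0) moving with velocity (0, 5), and a projectile flying from the
# origin at constant speed 10.
target_at = lambda t: (10.0, 5.0 * t)
hit_time = lambda p: math.hypot(p[0], p[1]) / 10.0

aim = solve_deflection(target_at, hit_time)
```

For this linear case the aim point matches the quadratic direct solution; the template's value is that `position_of_target_at` can just as well model an accelerating target without changing the solver.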
# Point-In-Polygon Testing Part 1

One of the most basic geometric tests is whether a point, $p$, is inside a polygon, $poly$.

The most common method for this test is casting a ray from the point $p$ either horizontally or vertically and checking how many times it intersects the polygon. If the number of intersections is odd then $p$ is inside $poly$. If the number of intersections is even then $p$ is outside $poly$. In the illustration below the horizontal ray cast from the black point intersects the polygon 3 times, thus the black point is inside the polygon.

There are a few corner cases that have to be carefully handled.

Case 1. The point $p$ is on the boundary of the polygon. In this case the algorithm may return true or false because of floating point issues.

Case 2. The casted ray intersects a vertex of the polygon. In this case a naive implementation of the algorithm would count two intersections, one for the top line segment and one for the bottom line segment. The way to fix this is to only count an intersection if (1) it's not an end-point intersection or (2) it's an end-point intersection and the end-point of the line segment lies below the casted ray.

Case 3. The casted ray overlaps a line segment, $l$. In this case the casted ray intersects an end-point of the overlapping line segment $l$ and the line segment does not lie below the casted ray, so do not count the line segment $l$ intersection(s).

By implementing the special cases this point-in-polygon method should work extremely well. It's a fast method since the only operations used are compares. The big-O is O(n) where n is the number of line segments of the polygon.

# More on Floating Point

In my last post I outlined one of the issues with floating point. My particular geometric problem is testing if two polygons are subsets of each other. In general I want to test if one polygon is a subset of another polygon. The polygons can also be rotated, translated, or reflected; in particular, any rigid motion can be applied to the polygons.

To again illustrate the problem with floating point I'll give an example:

Let polygon $T1 = \{(0, 0), (1, 0), (1, 1)\}$ (a triangle), and polygon $T2 = \{(0, 0), (0.1, 0.1), (0, 1)\}$ (another triangle). See the illustration below.

By rotating triangle T1 by exactly $-\pi/4$ radians, I can make triangle T1 a subset of triangle T2. But the problem with floating point is that $-\pi/4$ is not a floating point number, so I can't rotate by exactly $-\pi/4$. Which means that using the ordinary algorithms for subsetness there is no easy way to make T1 a subset of T2.

That's why I wrote the previous post on using some sort of fuzzy $\epsilon$ in my geometry algorithms. But I believe that there is another way to get the result I want. What I can do is rotate T1 by the closest floating point number to $-\pi/4$ and then do some boolean operations on T1 and T2. In particular, take the boolean intersection of T1 and T2 and call the resulting shape I (the intersection). Then take the boolean difference of T1 and I and call the resulting shape R (the remainder). If R is small enough (say the area is within $\epsilon = 10^{-5}$) then I can say that T1 is a subset of T2.

In this way I can avoid using $\epsilon$ in the algorithm for testing if a point is contained in a polygon (which is what I was supposed to write about in this post).

# Rigorous Geometry

I've mentioned before that floating point numbers are not real numbers. In particular they are not associative or distributive, and also most real numbers cannot be exactly represented. Thus a naive implementation of common geometry operations, such as line intersection or testing if a point is contained inside a polygon (by contained I mean the point is either on the boundary or in the interior of the polygon), can have problems. I'll give a short example of testing if a point is inside a square.

Let $p = (0.5, 0.5)$ and let the square, $s$, have vertices at $(0, 0), (1, 0), (1, 1), (0, 1)$. Obviously $p$ is contained in $s$ and any reasonable algorithm for testing so would yield true. I now want to try rotating $s$ by $\pi/4$ radians. This is the point where the pesky problems with floating point come in. The number $\pi/4$ is not a floating point number, so it is approximated by 0.785398164 (I haven't written all the digits of the floating point number, but that doesn't change the issue). It turns out that 0.785398164 is greater than $\pi/4$. That is, the square was rotated by a little bit more than $\pi/4$ radians. Thus the point $(0.5, 0.5)$ is not contained in the rotated square.

This is a bad thing because intuitively the point $(0.5, 0.5)$ should be contained in the rotated square. After all, we meant to rotate the square by $\pi/4$ radians. I don't know of any way to fix the problem where we can't rotate by the desired number of radians. What is possible is to modify the algorithms for testing if a point is contained inside a polygon so that they return true if the point is within $\epsilon$ of the boundary of the polygon or is actually contained in the polygon. The $\epsilon$ is a fixed number that is large enough to mask the floating point rounding problems.

My next post will give one of the algorithms for testing point containedness with the $\epsilon$ modification.
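The ray-casting test from Part 1 can be sketched in a few lines. The following Python sketch is mine, not from the blog; it uses the common half-open-edge convention, which achieves the same effect as the post's rule of only counting end-point intersections whose segment lies below the ray, and it skips horizontal edges lying on the ray (Case 3) entirely. Boundary points (Case 1) remain undefined, exactly as the post warns.

```python
def point_in_polygon(p, poly):
    """Even-odd ray casting: cast a horizontal ray from p toward +x and
    count how many polygon edges it crosses; an odd count means inside.

    Treating each edge as half-open in y (only endpoints strictly above
    the ray count as "above") handles the vertex-intersection corner
    case (Case 2) without double counting.
    """
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:       # crossing lies to the right of p
                inside = not inside
    return inside
```

Note the division is safe: the straddle test guarantees `y1 != y2`, and a ray passing exactly through a vertex toggles the parity either zero or two... actually exactly as many times as the polygon genuinely crosses the ray there, which is the point of the half-open convention.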
<?php

namespace Alazjj\SimpleBootstrapBundle\Composer;

use Composer\Script\Event;
use Symfony\Component\Finder\SplFileInfo;
use Symfony\Component\Finder\Finder;
use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\Filesystem\Exception\IOException;

class ScriptHandler
{
    const BUNDLE_NAME = 'SimpleBootstrapBundle';

    public static function getVendorDir()
    {
        return realpath(
            __DIR__ . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR
        );
    }

    public static function getBundleCssDir()
    {
        return self::getBundleAssetDir('css');
    }

    public static function getBundleImgDir()
    {
        return self::getBundleAssetDir('img');
    }

    /**
     * Finds third-party CSS and image assets under the vendor directory and
     * symlinks them into this bundle's public asset directories.
     */
    public static function installAssets()
    {
        $cssFinder = new Finder();
        $cssFinder->files()
            ->in(self::getVendorDir())
            ->name('/\.css$/')
            ->exclude(self::BUNDLE_NAME);
        self::symlinkAssets($cssFinder, 'css');

        $imgFinder = new Finder();
        $imgFinder->files()
            ->in(self::getVendorDir())
            ->name('/\.png$/')
            ->name('/\.gif$/')
            ->name('/\.jpg$/')
            ->name('/\.jpeg$/')
            ->exclude(self::BUNDLE_NAME);
        self::symlinkAssets($imgFinder, 'img');
    }

    /**
     * Creates a symlink for each asset into the public/$assetType directory of the bundle.
     */
    private static function symlinkAssets(Finder $finder, $assetType)
    {
        $getBundleAssetDir = "getBundle" . ucfirst($assetType) . "Dir";
        $bundleAssetDir = self::$getBundleAssetDir() . DIRECTORY_SEPARATOR;
        $fs = new Filesystem();

        /** @var $asset SplFileInfo */
        foreach ($finder as $asset) {
            try {
                $fs->symlink($asset->getPathname(), $bundleAssetDir . $asset->getBasename());
            } catch (IOException $e) {
                echo "An error occurred while symlinking the asset {$asset->getBasename()}.\n";
            }
        }
    }

    private static function getBundleAssetDir($assetType)
    {
        return realpath(
            __DIR__ . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . 'Resources' . DIRECTORY_SEPARATOR . 'public' . DIRECTORY_SEPARATOR . $assetType . DIRECTORY_SEPARATOR
        );
    }
}
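For context, a handler like this is typically wired into Composer's script hooks in the package's `composer.json`. The following is a hypothetical sketch (the actual package may register it differently, and the event names here are just the standard Composer install/update hooks):

```json
{
    "scripts": {
        "post-install-cmd": [
            "Alazjj\\SimpleBootstrapBundle\\Composer\\ScriptHandler::installAssets"
        ],
        "post-update-cmd": [
            "Alazjj\\SimpleBootstrapBundle\\Composer\\ScriptHandler::installAssets"
        ]
    }
}
```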
// Copyright (c) Rick Beerendonk. All rights reserved.
//
// The use and distribution terms for this software are covered by the
// Eclipse Public License 1.0 (http://opensource.org/licenses/eclipse-1.0.php)
// which can be found in the file epl-v10.html at the root of this distribution.
// By using this software in any fashion, you are agreeing to be bound by
// the terms of this license.
//
// You must not remove this notice, or any other, from this software.
using System;
namespace Beerendonk.Memoization
{
/// <summary>
/// Identifies a key-value-cache.
/// </summary>
/// <typeparam name="TKey">The type of the key.</typeparam>
/// <typeparam name="TValue">The type of the value.</typeparam>
public interface ICache<TKey, TValue>
{
/// <summary>Removes all keys and values from the cache.</summary>
void Clear();
/// <summary>Adds a key/value pair to the cache if the key does not already exist.</summary>
/// <param name="key">The key of the element to add.</param>
/// <param name="valueFactory">The function used to generate a value for the key.</param>
/// <returns>
/// The value for the key. This will be either the existing value for the key if
/// the key is already in the cache, or the new value for the key as returned by
/// valueFactory if the key was not in the cache.
/// </returns>
TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory);
}
}
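The get-or-add contract above is language-agnostic. As an illustration only (not part of this library), the same memoization contract can be sketched in Python, where `value_factory` runs at most once per key:

```python
class Cache:
    """Minimal key-value cache with a get-or-add (memoization) contract."""

    def __init__(self):
        self._store = {}

    def clear(self):
        """Remove all keys and values from the cache."""
        self._store.clear()

    def get_or_add(self, key, value_factory):
        """Return the value for key: the existing value if the key is already
        cached, otherwise the value produced by value_factory (which is then
        stored)."""
        if key not in self._store:
            self._store[key] = value_factory(key)
        return self._store[key]

calls = []

def slow_square(n):
    calls.append(n)  # record each real computation
    return n * n

cache = Cache()
print(cache.get_or_add(4, slow_square))  # computes: 16
print(cache.get_or_add(4, slow_square))  # cached:   16
print(len(calls))                        # factory ran once: 1
```

Note that, unlike the C# `ConcurrentDictionary`-style implementations this interface is presumably backed by, this sketch is not thread-safe.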
Annales Geophysicae, an interactive open-access journal of the European Geosciences Union

Ann. Geophys., 17, 984-995, 1999
https://doi.org/10.1007/s00585-999-0984-6

31 Aug 1999

# Analysis of thick, non-planar boundaries using the discontinuity analyser

M. W. Dunlop and T. I. Woodward
- Space Physics Group, Physics Department, Imperial College of Science, Technology and Medicine, London, SW7 2BZ, UK
- E-mail: m.dunlop@ic.ac.uk

Abstract. The advent of missions comprised of phased arrays of spacecraft, with separation distances ranging down to at least mesoscales, provides the scientific community with an opportunity to accurately analyse the spatial and temporal dependencies of structures in space plasmas. Exploitation of the multi-point data sets, giving vastly more information than in previous missions, thereby allows unique study of their small-scale physics. It remains an outstanding problem, however, to understand in what way comparative information across spacecraft is best built into any analysis of the combined data. Different investigations appear to demand different methods of data co-ordination. Of the various multi-spacecraft data analysis techniques developed to effect this exploitation, the discontinuity analyser has been designed to investigate the macroscopic properties (topology and motion) of boundaries, revealed by multi-spacecraft magnetometer data, where the possibility of at least mesoscale structure is considered. It has been found that the analysis of planar structures is more straightforward than the analysis of non-planar boundaries, where the effects of topology and motion become interwoven in the data, and we argue here that it becomes necessary to customise the analysis for non-planar events to the type of structure at hand. One issue central to the discontinuity analyser, for instance, is the calculation of normal vectors to the structure. In the case of planar and 'thin' non-planar structures, the method of normal determination is well-defined, although subject to uncertainties arising from unwanted signatures. In the case of 'thick', non-planar structures, however, the method of determination becomes particularly sensitive to the type of physical sampling that is present. It is the purpose of this article to firstly review the discontinuity analyser technique and secondly, to discuss the analysis of the normals to thick non-planar structures detected in magnetometer data.

Key words. Space plasma physics (discontinuities; instruments and techniques)
Q: REST API for Yii2: the authenticator (HttpBearerAuth) is not working on the server. I've just created a project for working with a REST API (using the Yii2 framework).
Everything related to the REST API works well on localhost, but after deploying the project to the server (with the same database), authorization is not available. I'm using "yii\filters\auth\HttpBearerAuth".
Inside the model that implements IdentityInterface, the token-lookup function findIdentityByAccessToken is very simple, and the validateAuthKey function always returns true; see below:
public static function findIdentityByAccessToken($token, $type = null)
{
    return static::findOne(["token" => $token]);
}

public function validateAuthKey($token)
{
    return true;
}
See the picture:
https://www.flickr.com/photos/40158620@N03/20701523349/in/dateposted-public/
If anyone has experience with this problem, can you tell me how to solve it? Thanks for your kindness.
Notes:

*The project follows https://github.com/NguyenDuyPhong/yii2_advanced_api_phong (it works fine on localhost; I also deployed exactly that project to my server, and it raised the same problem).
*To make sure that the server is configured correctly, I created 2 actions: one requires authorization, the other does not. The unauthenticated action works very well:
*actionView is not authorized => getting the API info is OK
*actionIndex is authorized by "yii\filters\auth\HttpBearerAuth" => FAIL
A: In my case the problem was that the server strips the Authorization header.
I needed to add this to .htaccess
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
A: HttpBearerAuth uses $user->loginByAccessToken() to authorize.
validateAuthKey is used by loginByCookie (and it seems it is not used in this case).
Try to use QueryParamAuth for test purposes (it's easier to test)
# Two Uncertain Programming Models for Inverse Minimum Spanning Tree Problem

- Zhang, Xiang; Wang, Qina; Zhou, Jian
- Accepted: 2013.03.05
- Published: 2013.03.31

#### Abstract

An inverse minimum spanning tree problem makes the least modification on the edge weights such that a predetermined spanning tree is a minimum spanning tree with respect to the new edge weights. In this paper, the concept of uncertain $\alpha$-minimum spanning tree is initiated for the minimum spanning tree problem with uncertain edge weights. Using different decision criteria, two uncertain programming models are presented to formulate a specific inverse minimum spanning tree problem with uncertain edge weights, involving a sum-type model and a minimax-type model. By means of the operational law of independent uncertain variables, the two uncertain programming models are transformed into their equivalent deterministic models, which can be solved by classic optimization methods. Finally, some numerical examples on a traffic network reconstruction problem are put forward to illustrate the effectiveness of the proposed models.

#### Keywords

Minimum Spanning Tree; Uncertain Minimum Spanning Tree; Inverse Optimization; Uncertain Programming
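For background on the forward problem the abstract builds on: a (deterministic) minimum spanning tree can be computed with Kruskal's algorithm. This is generic textbook code, not the uncertain programming models from the paper:

```python
def kruskal_mst(n, edges):
    """Return (total_weight, chosen_edges) of a minimum spanning tree of an
    n-node graph. edges is a list of (weight, u, v) with nodes 0..n-1."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):  # greedily take the lightest edge that joins two components
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
weight, tree = kruskal_mst(4, edges)
print(weight)  # 6: edges (0,1) w=1, (2,3) w=2, (1,2) w=3
```

The inverse problem studied in the paper goes the other way: given a target spanning tree, it perturbs the edge weights as little as possible so that this tree becomes minimal.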
The leading lady for the new MBC drama Terius Behind Me has been cast: Jung In Sun (Eulachacha Waikiki). She joins the already confirmed So Ji Sub (Oh My Venus). Lee Yoo Young and Yoo In Na were previously offered this role, but both declined.
Terius Behind Me is a mystery romantic comedy about a widowed single mother (Jung In Sun) who teams up with her legendary NIS agent neighbor (So Ji Sub) to solve a huge conspiracy. The drama comes from the writer of Shopping King Louie and will be directed by one of the PDs of Radiant Office.
I do find this to be an odd pairing, but both actors fit their character descriptions. And Shopping King Louie is one of my all time favorite dramas, so I'm interested to see what the writer brings to a new drama. Hopefully, it all comes together to be a fun spy drama with a hefty dose of romance.
Terius Behind Me is looking to air later in 2018 on MBC.
# I need help with the calculations please

###### Question:

I need help with the calculations, please.

Presented below are selected transactions at Whispering Winds Corp. for 2019.

Jan. 1: Retired a piece of machinery that was purchased on January 1, 2009. The machine cost $61,300 on that date. It had a useful life of 10 years with no salvage value.

June 30: Sold a computer that was purchased on January 1, 2016. The computer cost $36,000. It had a useful life of 5 years with no salvage value. The computer was sold for $14,000.

Dec. 31: Discarded a delivery truck that was purchased on January 1, 2015. The truck cost $36,000. It was depreciated based on a 6-year useful life with a $3,000 salvage value.

Journalize all entries required on the above dates, including entries to update depreciation, where applicable, on assets disposed of. Whispering Winds Corp. uses straight-line depreciation. (Assume depreciation is up to date as of December 31, 2018.) (Credit account titles are automatically indented when an amount is entered. Do not indent manually. Record journal entries in the order presented in the problem. If no entry is required, select "No Entry" for the account titles and enter 0 for the amounts. Do not round intermediate calculations.)

The answer form asks for entries under the columns Date | Account Titles and Explanation | Debit | Credit, in this order: "(To record depreciation to date of disposal)", June 30 "(To record sale of computer)", "(To record depreciation to date of disposal)", and Dec. 31 "(To record retirement of truck)".
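A sketch of the supporting straight-line depreciation arithmetic behind the Whispering Winds entries (the amounts follow directly from the figures in the problem; the journal-entry account titles themselves are left to the reader):

```python
def straight_line(cost, salvage, life_years):
    """Annual straight-line depreciation expense."""
    return (cost - salvage) / life_years

# Machine: bought Jan 1, 2009, 10-year life, no salvage.
# Fully depreciated by Jan 1, 2019, so retirement produces no loss.
machine_accum = straight_line(61300, 0, 10) * 10
print(machine_accum)       # 61300.0 (book value 0)

# Computer: bought Jan 1, 2016, 5-year life, no salvage.
# 3.5 years of depreciation by June 30, 2019 (half a year recorded at disposal).
comp_annual = straight_line(36000, 0, 5)      # 7200 per year
comp_accum = comp_annual * 3.5                # 25200
comp_book = 36000 - comp_accum                # 10800
gain_on_sale = 14000 - comp_book
print(gain_on_sale)        # 3200.0 -> gain on disposal

# Truck: bought Jan 1, 2015, 6-year life, $3,000 salvage.
# 5 full years of depreciation through Dec 31, 2019 (2019's recorded at disposal).
truck_annual = straight_line(36000, 3000, 6)  # 5500 per year
truck_accum = truck_annual * 5                # 27500
loss_on_discard = 36000 - truck_accum         # book value 8500, discarded for nothing
print(loss_on_discard)     # 8500.0 -> loss on disposal
```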
Pyotr Vasilyevich Yesipov (1919-1975) was a Guards colonel of the Soviet Army, a participant in the Great Patriotic War, and a Hero of the Soviet Union (1945).

Biography

Pyotr Yesipov was born on 15 January 1919 in the village of Nizhnyaya Katukhovka (now in Novousmansky District, Voronezh Oblast). In 1935 he graduated from a factory apprenticeship school in Voronezh, after which he worked as a turner. In 1940 Yesipov was called up for service in the Workers' and Peasants' Red Army. In 1941 he graduated from the Yegoryevsk Military Aviation School for Pilots. From October 1942 he served on the fronts of the Great Patriotic War.

By March 1945, Guards Lieutenant Pyotr Yesipov was deputy commander of a squadron and, at the same time, navigator of the 93rd Guards Ground Attack Aviation Regiment of the 5th Guards Ground Attack Aviation Division of the 2nd Guards Ground Attack Aviation Corps of the 2nd Air Army of the 1st Ukrainian Front. By that time he had flown 134 combat sorties attacking concentrations of enemy equipment, manpower, and installations. He took part in 23 air battles, shooting down 2 German aircraft.

By decree of the Presidium of the Supreme Soviet of the USSR of 27 June 1945, for "exemplary performance of combat missions of the command on the front of the struggle against the German invaders and the courage and heroism displayed in doing so", Guards Lieutenant Pyotr Yesipov was awarded the title of Hero of the Soviet Union, with the Order of Lenin and the Gold Star medal, number 6590.

After the end of the war, Yesipov continued to serve in the Soviet Army. In 1948 he graduated from the Higher Officers' School for Navigators. In 1960, holding the rank of colonel, Yesipov was transferred to the reserve. He lived in Voronezh, died on 25 February 1975, and was buried in Voronezh.

He was also awarded two Orders of the Red Banner, two Orders of the Red Star, and a number of medals.

Notes

Literature

Grinko A. I., Ulayev G. F. Bogatyri zemli Voronezhskoy [Heroes of the Voronezh Land]. Voronezh, 1965.

Pilots of the Great Patriotic War
0:05 There wasn't any one particular thing, because looking back I can see how all the dots joined together. But I do recall an incident at the dentist. I've taken my mum to the dentist on many occasions. On this particular occasion she had a tooth that was hurting. We went to the dentist, and she was in the chair. And as the dentist was approaching her with the needle to administer the anaesthetic, she screamed her head off, as if she was being attacked. And it was so loud that we had to close the door. And my mum was begging me to help her. Then unfortunately, the tooth needed extracting, and it broke.

0:52 So the dentist had a real tough job of trying to extract the tooth. So it took a very long time. And during this whole episode my mum was just screaming her head off, begging me to rescue her, almost as if she was dying. And I was just stood there traumatised and helpless. And even the dentist, I think, was getting very, very nervous because she'd never experienced this level of screaming before. And what was strange was that immediately after the extraction, she just calmly got up from the chair, walked out of the room and went to the bathroom, and proceeded to call my father. And just say oh, you know what they did?

1:32 They put me in the chair and they extracted the tooth. But she said it in a very calm manner. And then the dentist said to me, she said something is not right. She said I've treated many people of your mother's age, and I've never seen anything like this. There was a lot of money going from the bank account. We didn't know where it was being spent. That was probably the most worrying thing. My mum was very, very aggressive. Any little tiny disagreement would lead to an explosive argument. One example is my mum would not pay her bills on time, and therefore she would get a late fee on, let's say, a credit card bill.

2:09 And then she would come to me to say, oh, why have they fined me? And I'd say, oh mum, you need to send it off five days before. And then she would just explode and say, yes, but I did, I did. And I was like, no mum, you didn't. And we would end up having a huge row over something very, very simple. In public, she really loved touching young children. Meaning she liked to go up to them and squeeze their cheeks. But she ended up squeezing so hard that obviously the little child would get distressed, and the mother would get very upset. And she liked to pull their hair.

2:48 Whenever we walked in public, her arm would just swing out and just hit whoever was walking by. And so you find yourself apologising continually in public, and saying sorry. But at this point we didn't really know what was wrong. We just thought she was just doing it because she wanted to.
Shaheen describes the symptoms that led to Hosna's diagnosis, including an incident of uncharacteristic behaviour at the dentist, and explosive arguments.
Shaheen also tells how Hosna behaved inappropriately in public, for example squeezing the cheeks of children, and swinging her arm out hitting people walking by.
The upcoming June Interactive Customer Experience (ICX) Summit boasts top leaders in customer experience speaking on a range of topics, from CX strategy to the Internet of Things, and from integration to measuring customer experience and return on investment.
That latter focus will be the subject of a panel featuring Albert Vita, director, strategy and insights for The Home Depot. The session, sponsored by Intel and moderated by Raj Maini, worldwide director of marketing for visual retail at Intel, promises insightful real-life information and best practices.
Vita will discuss metrics, benchmarks and tools retailers can use to determine if the retail experience is hitting the mark or falling below the bottom line.
"I think the buzzword for not just retailers, but retail banks, restaurants and many others, this year is definitely IoT. The Internet of Things isn't exactly a new concept, but it's one that certainly seems to be seizing the imagination of brands looking to get an edge on the competition," said Christopher Hall, Managing Director of the ICX Association.
Another session that will surely draw big attendance targets customer-facing robots and how early leaders such as Lowe's and the makers of Pepper are making robots a real part of today's retail landscape. Sarah Furnari, VP of retail experience for BEHR, will share her insight on how such emerging technologies are playing a more valuable role each year. SoftBank Robotics, the session's sponsor, will have Pepper the robot at the summit.
Also on the agenda for the June 5-7 event, being held at the Four Seasons Resort Dallas at Las Colinas, is a session on how to design the store with digital in mind. Phillip Raub, founder and CMO at b8ta, will speak on best approaches to integrating digital into the brick-and-mortar environment without overwhelming the store. The session, sponsored by NEC Display Solutions, will be moderated by Richard Ventura, VP of business development and solutions at NEC Display Solutions of America.
"This year's summit is going to be bringing in a panel to specifically address one of the biggest areas of concern for brands these days: millennials. That's one thing I think people will find particularly useful and enlightening. Brands are trying to figure out what millennials want and how to approach and engage with them, so we're bringing in a panel of the people themselves to talk to attendees," said Hall.
A deeper look at the agenda and registration details are available on the summit website. Early bird registration is now open through May 5.
## Stream: maths

### Topic: how to prove or find a simple theorem

#### Truong Nguyen (Aug 10 2018 at 17:54):

Hi everyone, I am a new user, so my question may be too simple. Please bear with me.
Can you tell me how to find in the library, or prove, a simple theorem like:

    theorem ttt1 (n m : ℕ) : n ≤ m → n < m ∨ n = m :=
    sorry

#### Mario Carneiro (Aug 10 2018 at 17:56):

the naming convention would call that lt_or_eq_of_le

#### Truong Nguyen (Aug 10 2018 at 22:04):

Thank you, but can you tell me how I can find stuff like this one?
Is there a "search" command to find a theorem in the library? While working on a proof, it sometimes takes me quite a lot of time to find a simple theorem to use. For example, I need this one:

    theorem tq (a b : ℕ) : ¬ a ≤ b ↔ b < a :=
    sorry

Is there a way that I can find or prove it easily? I think it should be easy.
Thank you,
Truong

#### Kenny Lau (Aug 10 2018 at 22:10):

    import tactic.find tactic.ring

    run_cmd tactic.skip

    #find ¬ _ ≤ _ ↔ _ < _
    -- not_le: ∀ {α : Type u} [_inst_1 : linear_order α] {a b : α}, ¬a ≤ b ↔ b < a

#### Kevin Buzzard (Aug 10 2018 at 22:14):

@Truong Nguyen Mario already explained how to find stuff like this -- learn the naming convention :-) Follow the link!

Oh, thank you

#### Truong Nguyen (Aug 31 2018 at 19:20):

Dear Kenny Lau,
Can you give some instructions on how to use the "#find" command?

#### Kevin Buzzard (Aug 31 2018 at 20:35):

    import tactic.find

    def x := 0 -- or anything -- for some reason you can't use #find immediately

    #find _ + _ ≤ _ + _

Last updated: May 10 2021 at 07:15 UTC
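As a footnote to the thread: once the lemma name is known, the first theorem can be closed directly with the library lemma Mario points to. This is a sketch in Lean 3 / mathlib-era syntax (the import is an assumption about where the lemma lives), not tested code:

```lean
import order.basic

-- `lt_or_eq_of_le : a ≤ b → a < b ∨ a = b` is the mathlib lemma whose
-- name follows the convention described in the thread.
theorem ttt1 (n m : ℕ) : n ≤ m → n < m ∨ n = m :=
lt_or_eq_of_le
```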
Pg. 99: David Freidenreich's "Foreigners and Their Food"
The current feature at the Page 99 Test: Foreigners and Their Food: Constructing Otherness in Jewish, Christian, and Islamic Law by David M. Freidenreich.
Foreigners and Their Food explores how Jews, Christians, and Muslims conceptualize "us" and "them" through rules about the preparation of food by adherents of other religions and the act of eating with such outsiders. David M. Freidenreich analyzes the significance of food to religious formation, elucidating the ways ancient and medieval scholars use food restrictions to think about the "other." Freidenreich illuminates the subtly different ways Jews, Christians, and Muslims perceive themselves, and he demonstrates how these distinctive self-conceptions shape ideas about religious foreigners and communal boundaries. This work, the first to analyze change over time across the legal literatures of Judaism, Christianity, and Islam, makes pathbreaking contributions to the history of interreligious intolerance and to the comparative study of religion.
Read more about Foreigners and Their Food at the University of California Press website.
David M. Freidenreich is the Pulver Family Assistant Professor of Jewish Studies at Colby College.
The Page 99 Test: Foreigners and Their Food.
Vox's The Weeds is an open-ended discussion of current events. The inability of the hosts to LIKE refrain from saying LIKE in almost every utterance was LIKE too much. As the LIKEs piled up I was unable to focus on the topic at hand. I'm now monitoring my own LIKE output in conversation, and I can sympathize with the transgressors--- but I'm not irritating an audience of thousands, only my wife and dog. Loudon Wainwright III addressed this social ill in his song Cobwebs.
# Acceleration Vector of a car making a turn without a change in speed

Acceleration Vector (2D): Vector = (Magnitude, Direction)

I am trying to calculate the acceleration vector acting on a car moving at a constant speed but changing direction. Example: a car is traveling 60 mph north (88 fps north, V1). The car comes to a constant curve in the road that takes 15 seconds to complete. The car's velocity is now 88 fps east (V2). No change in speed, just a change in direction.

It appears that the acceleration depends on the speed and the time.

I have written a simulation and iteratively concluded that 9.2155 ft/sec^2 is the answer for my example.

I have:

- V1 = Vector (88 fps, 90 degrees)
- V2 = Vector (88 fps, 0 degrees)
- Change in direction: 90 degrees
- Delta time: 15 seconds
- Degrees change per second: 6
- The acceleration acts perpendicular to the velocity vector

What I have not figured out: a formula to calculate the acceleration given the speed, the change in direction in degrees, and the seconds to complete the change/turn.

**Answer:** The basic formulas are $a=\frac{v^2}{r}$ and, for the time to complete a full circle, $t=\frac{2\pi r}{v}$. You can use these to get whatever you want. So if you take $t$ seconds to complete a quarter turn at speed $v$, you cover a linear distance of $vt$, so the circumference is $4vt$, the radius is $\frac{4vt}{2\pi}$, and the acceleration is $a=\frac{2\pi v^2}{4vt}=\frac{\pi v}{2t}$.
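As a sanity check on the closed-form result $a=\frac{\pi v}{2t}$, the worked example can be reproduced in a few lines of Python (the function name and structure are my own, not from the original post):

```python
import math

def turn_acceleration(speed, turn_seconds, turn_fraction=0.25):
    """Centripetal acceleration for a constant-speed turn on a circular arc.

    turn_fraction is the fraction of a full circle covered
    (0.25 for a 90-degree turn). Derivation: arc length = speed * turn_seconds,
    circumference = arc length / turn_fraction, radius = circumference / (2*pi),
    and acceleration = speed**2 / radius.
    """
    circumference = speed * turn_seconds / turn_fraction
    radius = circumference / (2 * math.pi)
    return speed ** 2 / radius

# 88 fps, quarter turn completed in 15 seconds:
print(round(turn_acceleration(88.0, 15.0), 4))  # 9.2153
```

This agrees with the 9.2155 ft/sec^2 found iteratively; the small discrepancy is consistent with numerical error in the simulation.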
#include <asm/page.h>
#include <linux/clk.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/spi/spi.h>
/* SPI register offsets */
#define BCM2835_SPI_CS 0x00
#define BCM2835_SPI_FIFO 0x04
#define BCM2835_SPI_CLK 0x08
#define BCM2835_SPI_DLEN 0x0c
#define BCM2835_SPI_LTOH 0x10
#define BCM2835_SPI_DC 0x14
/* Bitfields in CS */
#define BCM2835_SPI_CS_LEN_LONG 0x02000000
#define BCM2835_SPI_CS_DMA_LEN 0x01000000
#define BCM2835_SPI_CS_CSPOL2 0x00800000
#define BCM2835_SPI_CS_CSPOL1 0x00400000
#define BCM2835_SPI_CS_CSPOL0 0x00200000
#define BCM2835_SPI_CS_RXF 0x00100000
#define BCM2835_SPI_CS_RXR 0x00080000
#define BCM2835_SPI_CS_TXD 0x00040000
#define BCM2835_SPI_CS_RXD 0x00020000
#define BCM2835_SPI_CS_DONE 0x00010000
#define BCM2835_SPI_CS_LEN 0x00002000
#define BCM2835_SPI_CS_REN 0x00001000
#define BCM2835_SPI_CS_ADCS 0x00000800
#define BCM2835_SPI_CS_INTR 0x00000400
#define BCM2835_SPI_CS_INTD 0x00000200
#define BCM2835_SPI_CS_DMAEN 0x00000100
#define BCM2835_SPI_CS_TA 0x00000080
#define BCM2835_SPI_CS_CSPOL 0x00000040
#define BCM2835_SPI_CS_CLEAR_RX 0x00000020
#define BCM2835_SPI_CS_CLEAR_TX 0x00000010
#define BCM2835_SPI_CS_CPOL 0x00000008
#define BCM2835_SPI_CS_CPHA 0x00000004
#define BCM2835_SPI_CS_CS_10 0x00000002
#define BCM2835_SPI_CS_CS_01 0x00000001
#define BCM2835_SPI_POLLING_LIMIT_US 30
#define BCM2835_SPI_POLLING_JIFFIES 2
#define BCM2835_SPI_DMA_MIN_LENGTH 96
#define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \
| SPI_NO_CS | SPI_3WIRE)
#define DRV_NAME "spi-bcm2835"
struct bcm2835_spi {
void __iomem *regs;
struct clk *clk;
int irq;
const u8 *tx_buf;
u8 *rx_buf;
int tx_len;
int rx_len;
bool dma_pending;
};
static inline u32 bcm2835_rd(struct bcm2835_spi *bs, unsigned reg)
{
return readl(bs->regs + reg);
}
static inline void bcm2835_wr(struct bcm2835_spi *bs, unsigned reg, u32 val)
{
writel(val, bs->regs + reg);
}
static inline void bcm2835_rd_fifo(struct bcm2835_spi *bs)
{
u8 byte;
while ((bs->rx_len) &&
(bcm2835_rd(bs, BCM2835_SPI_CS) & BCM2835_SPI_CS_RXD)) {
byte = bcm2835_rd(bs, BCM2835_SPI_FIFO);
if (bs->rx_buf)
*bs->rx_buf++ = byte;
bs->rx_len--;
}
}
static inline void bcm2835_wr_fifo(struct bcm2835_spi *bs)
{
u8 byte;
while ((bs->tx_len) &&
(bcm2835_rd(bs, BCM2835_SPI_CS) & BCM2835_SPI_CS_TXD)) {
byte = bs->tx_buf ? *bs->tx_buf++ : 0;
bcm2835_wr(bs, BCM2835_SPI_FIFO, byte);
bs->tx_len--;
}
}
static void bcm2835_spi_reset_hw(struct spi_master *master)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS);
/* Disable SPI interrupts and transfer */
cs &= ~(BCM2835_SPI_CS_INTR |
BCM2835_SPI_CS_INTD |
BCM2835_SPI_CS_DMAEN |
BCM2835_SPI_CS_TA);
/* and reset RX/TX FIFOS */
cs |= BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX;
/* and reset the SPI_HW */
bcm2835_wr(bs, BCM2835_SPI_CS, cs);
/* as well as DLEN */
bcm2835_wr(bs, BCM2835_SPI_DLEN, 0);
}
static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id)
{
struct spi_master *master = dev_id;
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* Read as many bytes as possible from FIFO */
bcm2835_rd_fifo(bs);
/* Write as many bytes as possible to FIFO */
bcm2835_wr_fifo(bs);
/* based on flags decide if we can finish the transfer */
if (bcm2835_rd(bs, BCM2835_SPI_CS) & BCM2835_SPI_CS_DONE) {
/* Transfer complete - reset SPI HW */
bcm2835_spi_reset_hw(master);
/* wake up the framework */
complete(&master->xfer_completion);
}
return IRQ_HANDLED;
}
static int bcm2835_spi_transfer_one_irq(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* fill in fifo if we have gpio-cs
* note that there have been rare events where the native-CS
* flapped for <1us which may change the behaviour
* with gpio-cs this does not happen, so it is implemented
* only for this case
*/
if (gpio_is_valid(spi->cs_gpio)) {
/* enable HW block, but without interrupts enabled
	 * this would trigger an immediate interrupt
*/
bcm2835_wr(bs, BCM2835_SPI_CS,
cs | BCM2835_SPI_CS_TA);
/* fill in tx fifo as much as possible */
bcm2835_wr_fifo(bs);
}
/*
* Enable the HW block. This will immediately trigger a DONE (TX
* empty) interrupt, upon which we will fill the TX FIFO with the
* first TX bytes. Pre-filling the TX FIFO here to avoid the
* interrupt doesn't work:-(
*/
cs |= BCM2835_SPI_CS_INTR | BCM2835_SPI_CS_INTD | BCM2835_SPI_CS_TA;
bcm2835_wr(bs, BCM2835_SPI_CS, cs);
/* signal that we need to wait for completion */
return 1;
}
/*
* DMA support
*
 * this implementation currently has a few issues, insofar as it does
 * not work around limitations of the HW.
*
* the main one being that DMA transfers are limited to 16 bit
* (so 0 to 65535 bytes) by the SPI HW due to BCM2835_SPI_DLEN
*
* also we currently assume that the scatter-gather fragments are
* all multiple of 4 (except the last) - otherwise we would need
* to reset the FIFO before subsequent transfers...
* this also means that tx/rx transfers sg's need to be of equal size!
*
* there may be a few more border-cases we may need to address as well
* but unfortunately this would mean splitting up the scatter-gather
 * list, making it slightly impractical...
*/
static void bcm2835_spi_dma_done(void *data)
{
struct spi_master *master = data;
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* reset fifo and HW */
bcm2835_spi_reset_hw(master);
/* and terminate tx-dma as we do not have an irq for it
* because when the rx dma will terminate and this callback
* is called the tx-dma must have finished - can't get to this
* situation otherwise...
*/
dmaengine_terminate_all(master->dma_tx);
/* mark as no longer pending */
bs->dma_pending = 0;
	/* and mark as completed */
complete(&master->xfer_completion);
}
static int bcm2835_spi_prepare_sg(struct spi_master *master,
struct spi_transfer *tfr,
bool is_tx)
{
struct dma_chan *chan;
struct scatterlist *sgl;
unsigned int nents;
enum dma_transfer_direction dir;
unsigned long flags;
struct dma_async_tx_descriptor *desc;
dma_cookie_t cookie;
if (is_tx) {
dir = DMA_MEM_TO_DEV;
chan = master->dma_tx;
nents = tfr->tx_sg.nents;
sgl = tfr->tx_sg.sgl;
flags = 0 /* no tx interrupt */;
} else {
dir = DMA_DEV_TO_MEM;
chan = master->dma_rx;
nents = tfr->rx_sg.nents;
sgl = tfr->rx_sg.sgl;
flags = DMA_PREP_INTERRUPT;
}
/* prepare the channel */
desc = dmaengine_prep_slave_sg(chan, sgl, nents, dir, flags);
if (!desc)
return -EINVAL;
/* set callback for rx */
if (!is_tx) {
desc->callback = bcm2835_spi_dma_done;
desc->callback_param = master;
}
/* submit it to DMA-engine */
cookie = dmaengine_submit(desc);
return dma_submit_error(cookie);
}
static inline int bcm2835_check_sg_length(struct sg_table *sgt)
{
int i;
struct scatterlist *sgl;
/* check that the sg entries are word-sized (except for last) */
for_each_sg(sgt->sgl, sgl, (int)sgt->nents - 1, i) {
if (sg_dma_len(sgl) % 4)
return -EFAULT;
}
return 0;
}
static int bcm2835_spi_transfer_one_dma(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
int ret;
/* check that the scatter gather segments are all a multiple of 4 */
if (bcm2835_check_sg_length(&tfr->tx_sg) ||
bcm2835_check_sg_length(&tfr->rx_sg)) {
dev_warn_once(&spi->dev,
"scatter gather segment length is not a multiple of 4 - falling back to interrupt mode\n");
return bcm2835_spi_transfer_one_irq(master, spi, tfr, cs);
}
/* setup tx-DMA */
ret = bcm2835_spi_prepare_sg(master, tfr, true);
if (ret)
return ret;
/* start TX early */
dma_async_issue_pending(master->dma_tx);
/* mark as dma pending */
bs->dma_pending = 1;
/* set the DMA length */
bcm2835_wr(bs, BCM2835_SPI_DLEN, tfr->len);
/* start the HW */
bcm2835_wr(bs, BCM2835_SPI_CS,
cs | BCM2835_SPI_CS_TA | BCM2835_SPI_CS_DMAEN);
/* setup rx-DMA late - to run transfers while
* mapping of the rx buffers still takes place
* this saves 10us or more.
*/
ret = bcm2835_spi_prepare_sg(master, tfr, false);
if (ret) {
/* need to reset on errors */
dmaengine_terminate_all(master->dma_tx);
bcm2835_spi_reset_hw(master);
return ret;
}
/* start rx dma late */
dma_async_issue_pending(master->dma_rx);
/* wait for wakeup in framework */
return 1;
}
static bool bcm2835_spi_can_dma(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr)
{
/* only run for gpio_cs */
if (!gpio_is_valid(spi->cs_gpio))
return false;
/* we start DMA efforts only on bigger transfers */
if (tfr->len < BCM2835_SPI_DMA_MIN_LENGTH)
return false;
/* BCM2835_SPI_DLEN has defined a max transfer size as
* 16 bit, so max is 65535
* we can revisit this by using an alternative transfer
* method - ideally this would get done without any more
* interaction...
*/
if (tfr->len > 65535) {
dev_warn_once(&spi->dev,
"transfer size of %d too big for dma-transfer\n",
tfr->len);
return false;
}
/* if we run rx/tx_buf with word aligned addresses then we are OK */
if ((((size_t)tfr->rx_buf & 3) == 0) &&
(((size_t)tfr->tx_buf & 3) == 0))
return true;
/* otherwise we only allow transfers within the same page
* to avoid wasting time on dma_mapping when it is not practical
*/
if (((size_t)tfr->tx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) {
dev_warn_once(&spi->dev,
"Unaligned spi tx-transfer bridging page\n");
return false;
}
if (((size_t)tfr->rx_buf & (PAGE_SIZE - 1)) + tfr->len > PAGE_SIZE) {
dev_warn_once(&spi->dev,
"Unaligned spi rx-transfer bridging page\n");
return false;
}
/* return OK */
return true;
}
static void bcm2835_dma_release(struct spi_master *master)
{
if (master->dma_tx) {
dmaengine_terminate_all(master->dma_tx);
dma_release_channel(master->dma_tx);
master->dma_tx = NULL;
}
if (master->dma_rx) {
dmaengine_terminate_all(master->dma_rx);
dma_release_channel(master->dma_rx);
master->dma_rx = NULL;
}
}
static void bcm2835_dma_init(struct spi_master *master, struct device *dev)
{
struct dma_slave_config slave_config;
const __be32 *addr;
dma_addr_t dma_reg_base;
int ret;
/* base address in dma-space */
addr = of_get_address(master->dev.of_node, 0, NULL, NULL);
if (!addr) {
dev_err(dev, "could not get DMA-register address - not using dma mode\n");
goto err;
}
dma_reg_base = be32_to_cpup(addr);
/* get tx/rx dma */
master->dma_tx = dma_request_slave_channel(dev, "tx");
if (!master->dma_tx) {
dev_err(dev, "no tx-dma configuration found - not using dma mode\n");
goto err;
}
master->dma_rx = dma_request_slave_channel(dev, "rx");
if (!master->dma_rx) {
dev_err(dev, "no rx-dma configuration found - not using dma mode\n");
goto err_release;
}
/* configure DMAs */
slave_config.direction = DMA_MEM_TO_DEV;
slave_config.dst_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO);
slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
ret = dmaengine_slave_config(master->dma_tx, &slave_config);
if (ret)
goto err_config;
slave_config.direction = DMA_DEV_TO_MEM;
slave_config.src_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO);
slave_config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
ret = dmaengine_slave_config(master->dma_rx, &slave_config);
if (ret)
goto err_config;
/* all went well, so set can_dma */
master->can_dma = bcm2835_spi_can_dma;
master->max_dma_len = 65535; /* limitation by BCM2835_SPI_DLEN */
/* need to do TX AND RX DMA, so we need dummy buffers */
master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX;
return;
err_config:
dev_err(dev, "issue configuring dma: %d - not using DMA mode\n",
ret);
err_release:
bcm2835_dma_release(master);
err:
return;
}
static int bcm2835_spi_transfer_one_poll(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr,
u32 cs,
unsigned long long xfer_time_us)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
unsigned long timeout;
/* enable HW block without interrupts */
bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA);
/* fill in the fifo before timeout calculations
* if we are interrupted here, then the data is
* getting transferred by the HW while we are interrupted
*/
bcm2835_wr_fifo(bs);
/* set the timeout */
timeout = jiffies + BCM2835_SPI_POLLING_JIFFIES;
/* loop until finished the transfer */
while (bs->rx_len) {
/* fill in tx fifo with remaining data */
bcm2835_wr_fifo(bs);
/* read from fifo as much as possible */
bcm2835_rd_fifo(bs);
/* if there is still data pending to read
* then check the timeout
*/
if (bs->rx_len && time_after(jiffies, timeout)) {
dev_dbg_ratelimited(&spi->dev,
"timeout period reached: jiffies: %lu remaining tx/rx: %d/%d - falling back to interrupt mode\n",
jiffies - timeout,
bs->tx_len, bs->rx_len);
/* fall back to interrupt mode */
return bcm2835_spi_transfer_one_irq(master, spi,
tfr, cs);
}
}
/* Transfer complete - reset SPI HW */
bcm2835_spi_reset_hw(master);
/* and return without waiting for completion */
return 0;
}
static int bcm2835_spi_transfer_one(struct spi_master *master,
struct spi_device *spi,
struct spi_transfer *tfr)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
unsigned long spi_hz, clk_hz, cdiv;
unsigned long spi_used_hz;
unsigned long long xfer_time_us;
u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS);
/* set clock */
spi_hz = tfr->speed_hz;
clk_hz = clk_get_rate(bs->clk);
if (spi_hz >= clk_hz / 2) {
cdiv = 2; /* clk_hz/2 is the fastest we can go */
} else if (spi_hz) {
/* CDIV must be a multiple of two */
cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
cdiv += (cdiv % 2);
if (cdiv >= 65536)
cdiv = 0; /* 0 is the slowest we can go */
} else {
cdiv = 0; /* 0 is the slowest we can go */
}
spi_used_hz = cdiv ? (clk_hz / cdiv) : (clk_hz / 65536);
bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv);
/* handle all the 3-wire mode */
if ((spi->mode & SPI_3WIRE) && (tfr->rx_buf))
cs |= BCM2835_SPI_CS_REN;
else
cs &= ~BCM2835_SPI_CS_REN;
/* for gpio_cs set dummy CS so that no HW-CS get changed
* we can not run this in bcm2835_spi_set_cs, as it does
* not get called for cs_gpio cases, so we need to do it here
*/
if (gpio_is_valid(spi->cs_gpio) || (spi->mode & SPI_NO_CS))
cs |= BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01;
/* set transmit buffers and length */
bs->tx_buf = tfr->tx_buf;
bs->rx_buf = tfr->rx_buf;
bs->tx_len = tfr->len;
bs->rx_len = tfr->len;
/* calculate the estimated time in us the transfer runs */
xfer_time_us = (unsigned long long)tfr->len
* 9 /* clocks/byte - SPI-HW waits 1 clock after each byte */
* 1000000;
do_div(xfer_time_us, spi_used_hz);
/* for short requests run polling*/
if (xfer_time_us <= BCM2835_SPI_POLLING_LIMIT_US)
return bcm2835_spi_transfer_one_poll(master, spi, tfr,
cs, xfer_time_us);
/* run in dma mode if conditions are right */
if (master->can_dma && bcm2835_spi_can_dma(master, spi, tfr))
return bcm2835_spi_transfer_one_dma(master, spi, tfr, cs);
/* run in interrupt-mode */
return bcm2835_spi_transfer_one_irq(master, spi, tfr, cs);
}
static int bcm2835_spi_prepare_message(struct spi_master *master,
struct spi_message *msg)
{
struct spi_device *spi = msg->spi;
struct bcm2835_spi *bs = spi_master_get_devdata(master);
u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS);
cs &= ~(BCM2835_SPI_CS_CPOL | BCM2835_SPI_CS_CPHA);
if (spi->mode & SPI_CPOL)
cs |= BCM2835_SPI_CS_CPOL;
if (spi->mode & SPI_CPHA)
cs |= BCM2835_SPI_CS_CPHA;
bcm2835_wr(bs, BCM2835_SPI_CS, cs);
return 0;
}
static void bcm2835_spi_handle_err(struct spi_master *master,
struct spi_message *msg)
{
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* if an error occurred and we have an active dma, then terminate */
if (bs->dma_pending) {
dmaengine_terminate_all(master->dma_tx);
dmaengine_terminate_all(master->dma_rx);
bs->dma_pending = 0;
}
/* and reset */
bcm2835_spi_reset_hw(master);
}
static void bcm2835_spi_set_cs(struct spi_device *spi, bool gpio_level)
{
/*
* we can assume that we are "native" as per spi_set_cs
* calling us ONLY when cs_gpio is not set
* we can also assume that we are CS < 3 as per bcm2835_spi_setup
* we would not get called because of error handling there.
* the level passed is the electrical level not enabled/disabled
* so it has to get translated back to enable/disable
* see spi_set_cs in spi.c for the implementation
*/
struct spi_master *master = spi->master;
struct bcm2835_spi *bs = spi_master_get_devdata(master);
u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS);
bool enable;
/* calculate the enable flag from the passed gpio_level */
enable = (spi->mode & SPI_CS_HIGH) ? gpio_level : !gpio_level;
/* set flags for "reverse" polarity in the registers */
if (spi->mode & SPI_CS_HIGH) {
/* set the correct CS-bits */
cs |= BCM2835_SPI_CS_CSPOL;
cs |= BCM2835_SPI_CS_CSPOL0 << spi->chip_select;
} else {
/* clean the CS-bits */
cs &= ~BCM2835_SPI_CS_CSPOL;
cs &= ~(BCM2835_SPI_CS_CSPOL0 << spi->chip_select);
}
/* select the correct chip_select depending on disabled/enabled */
if (enable) {
/* set cs correctly */
if (spi->mode & SPI_NO_CS) {
/* use the "undefined" chip-select */
cs |= BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01;
} else {
/* set the chip select */
cs &= ~(BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01);
cs |= spi->chip_select;
}
} else {
/* disable CSPOL which puts HW-CS into deselected state */
cs &= ~BCM2835_SPI_CS_CSPOL;
/* use the "undefined" chip-select as precaution */
cs |= BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01;
}
/* finally set the calculated flags in SPI_CS */
bcm2835_wr(bs, BCM2835_SPI_CS, cs);
}
static int chip_match_name(struct gpio_chip *chip, void *data)
{
return !strcmp(chip->label, data);
}
static int bcm2835_spi_setup(struct spi_device *spi)
{
int err;
struct gpio_chip *chip;
/*
* sanity checking the native-chipselects
*/
if (spi->mode & SPI_NO_CS)
return 0;
if (gpio_is_valid(spi->cs_gpio))
return 0;
if (spi->chip_select > 1) {
/* error in the case of native CS requested with CS > 1
* officially there is a CS2, but it is not documented
* which GPIO is connected with that...
*/
dev_err(&spi->dev,
"setup: only two native chip-selects are supported\n");
return -EINVAL;
}
/* now translate native cs to GPIO */
/* get the gpio chip for the base */
chip = gpiochip_find("pinctrl-bcm2835", chip_match_name);
if (!chip)
return 0;
/* and calculate the real CS */
spi->cs_gpio = chip->base + 8 - spi->chip_select;
/* and set up the "mode" and level */
dev_info(&spi->dev, "setting up native-CS%i as GPIO %i\n",
spi->chip_select, spi->cs_gpio);
/* set up GPIO as output and pull to the correct level */
err = gpio_direction_output(spi->cs_gpio,
(spi->mode & SPI_CS_HIGH) ? 0 : 1);
if (err) {
dev_err(&spi->dev,
"could not set CS%i gpio %i as output: %i",
spi->chip_select, spi->cs_gpio, err);
return err;
}
/* the implementation of pinctrl-bcm2835 currently does not
* set the GPIO value when using gpio_direction_output
* so we are setting it here explicitly
*/
gpio_set_value(spi->cs_gpio, (spi->mode & SPI_CS_HIGH) ? 0 : 1);
return 0;
}
static int bcm2835_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct bcm2835_spi *bs;
struct resource *res;
int err;
master = spi_alloc_master(&pdev->dev, sizeof(*bs));
if (!master) {
dev_err(&pdev->dev, "spi_alloc_master() failed\n");
return -ENOMEM;
}
platform_set_drvdata(pdev, master);
master->mode_bits = BCM2835_SPI_MODE_BITS;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->num_chipselect = 3;
master->setup = bcm2835_spi_setup;
master->set_cs = bcm2835_spi_set_cs;
master->transfer_one = bcm2835_spi_transfer_one;
master->handle_err = bcm2835_spi_handle_err;
master->prepare_message = bcm2835_spi_prepare_message;
master->dev.of_node = pdev->dev.of_node;
bs = spi_master_get_devdata(master);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
bs->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(bs->regs)) {
err = PTR_ERR(bs->regs);
goto out_master_put;
}
bs->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(bs->clk)) {
err = PTR_ERR(bs->clk);
dev_err(&pdev->dev, "could not get clk: %d\n", err);
goto out_master_put;
}
bs->irq = platform_get_irq(pdev, 0);
if (bs->irq <= 0) {
dev_err(&pdev->dev, "could not get IRQ: %d\n", bs->irq);
err = bs->irq ? bs->irq : -ENODEV;
goto out_master_put;
}
clk_prepare_enable(bs->clk);
bcm2835_dma_init(master, &pdev->dev);
/* initialise the hardware with the default polarities */
bcm2835_wr(bs, BCM2835_SPI_CS,
BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX);
err = devm_request_irq(&pdev->dev, bs->irq, bcm2835_spi_interrupt, 0,
dev_name(&pdev->dev), master);
if (err) {
dev_err(&pdev->dev, "could not request IRQ: %d\n", err);
goto out_clk_disable;
}
err = devm_spi_register_master(&pdev->dev, master);
if (err) {
dev_err(&pdev->dev, "could not register SPI master: %d\n", err);
goto out_clk_disable;
}
return 0;
out_clk_disable:
clk_disable_unprepare(bs->clk);
out_master_put:
spi_master_put(master);
return err;
}
static int bcm2835_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct bcm2835_spi *bs = spi_master_get_devdata(master);
/* Clear FIFOs, and disable the HW block */
bcm2835_wr(bs, BCM2835_SPI_CS,
BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX);
clk_disable_unprepare(bs->clk);
bcm2835_dma_release(master);
return 0;
}
static const struct of_device_id bcm2835_spi_match[] = {
{ .compatible = "brcm,bcm2835-spi", },
{}
};
MODULE_DEVICE_TABLE(of, bcm2835_spi_match);
static struct platform_driver bcm2835_spi_driver = {
.driver = {
.name = DRV_NAME,
.of_match_table = bcm2835_spi_match,
},
.probe = bcm2835_spi_probe,
.remove = bcm2835_spi_remove,
};
module_platform_driver(bcm2835_spi_driver);
MODULE_DESCRIPTION("SPI controller driver for Broadcom BCM2835");
MODULE_AUTHOR("Chris Boot <bootc@bootc.net>");
MODULE_LICENSE("GPL v2");
Paraleprodera triangularis is a species of beetle in the family Cerambycidae. It was described by James Thomson in 1865, originally under the genus Epicedia. It is known from India, Vietnam, Laos, Thailand, and Myanmar.
References
Lamiini
Beetles described in 1865 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,786 |
Activision CEO Tells Execs He Will Consider Leaving If Harassment Issues Aren't Fixed 'With Speed'
1,200 Activision Blizzard employees have signed a petition demanding CEO Bobby Kotick step down
Jeremy Fuster | November 21, 2021 @ 5:50 PM
Activision Blizzard CEO Bobby Kotick told senior managers at the videogame company that he will consider leaving if issues of sexual misconduct and harassment are not resolved "with speed," according to the Wall Street Journal.
After more than 1,200 employees signed a petition demanding his resignation, Kotick met with executives at Blizzard on Friday, ready to step away if he couldn't reform the company's culture. He stopped short of outright leaving, however.
In a separate meeting with Activision executives, Kotick said he was "ashamed of some of the incidents that had happened on his watch and apologized for how he has handled the unfolding problems," but was told by executives that some employees would only be satisfied by his departure.
Activision Board Backs CEO Bobby Kotick After Report He Knew For Years About Sexual Misconduct Accusations
In July, the gaming studio behind popular franchises like "Call of Duty" and "World of Warcraft" was hit with a civil lawsuit from California's Department of Fair Employment and Housing, claiming the company was "akin to working in a frat house." The alleged sexual harassment included inappropriate comments about women's bodies, rape jokes and unsolicited touching of female employees.
The company dismissed the claims and said in part in a statement to NPR, "The DFEH includes distorted, and in many cases false, descriptions of Blizzard's past."
Activision Blizzard employees later staged a mass walkout at the company's offices in Irvine, California, a move intended to pressure the company into creating a better work environment for non-male employees as well as equalize pay. And in an open letter to management published via Polygon, nearly 3,100 Activision Blizzard employees urged the company to do better.
Activision CEO Bobby Kotick Laments 'Tone Deaf' Response to Harassment Allegations
On Oct. 27, Kotick announced a "zero-tolerance policy" along with a series of reform policies, including a pledge to increase the number of female and non-binary employees at the company by 50%, waiving arbitration for any employee filing a harassment or discrimination claim, and slashing his pay to the minimum annual salary of $62,500 required by California law.
"I truly wish not a single employee had had an experience at work that resulted in hurt, humiliation, or worse – and to those who were affected, I sincerely apologize," Kotick said. "You have my commitment that we will do everything possible to honor our values and create the workplace every member of this team deserves."
But this past week, the heat was turned up on Kotick again after a Wall Street Journal report revealed his direct involvement in the harassment of women at Activision Blizzard, specifically threatening in a 2006 voicemail to have a female employee killed and threatening to "destroy" a private jet flight attendant who sued him for sexual harassment committed by his jet's pilot.
Activision CEO Slashes His Own Pay, Pledges 'Zero-Tolerance' Harassment Policy
After the story was published, ABK Workers Alliance, an advocacy group formed after the systemic issues of harassment at Activision Blizzard came to light, staged a walkout to demand that Kotick step down. A group of Activision shareholders led by SOC Investment Group joined in the calls; and Microsoft gaming EVP Phil Spencer, who heads the company's Xbox division, said in a memo to employees that he is "evaluating all aspects of our relationship with Activision Blizzard."
Despite the renewed pressure, Activision's board of directors has said in a statement that it stands by Kotick and "remains confident that he appropriately addressed workplace issues brought to his attention."
Legacy Ornamental Mill Demo Video YouTube. Jan 3, 2008 Routers Manual, Legacy Ornamental Mill Model 1000. Other 1000.
Apr 02, 2015 · legacy ornamental mill 1200 for sale Excellent cond. all access + carbide bits I have a Legacy Model 1200 Ornamental Mill we would like to sell.
Apr 24, 2008 · Legacy Ornamental Mill for sale, EX1000. This is an older model, but in excellent shape with light use. I have used it for a number of different types of jobs, but lately my woodworking has taken a different direction and it is just taking up space.
Legacy Ornamental Mill Owners. Many of you have been with Legacy for a long time and we sincerely appreciate your business and your loyalty.
legacy cnc ornamental mill for sale Mach 3 CNC lathe turning an Egg Shape In 6061 Aluminum . . legacy ornamental mill for sale, legacy ornamental mill model 900.
Legacy Ornamental Mill 1000 EXL BRAND NEW UNUSED CONDITION (80" capacity) Private Party "One of the hard to find manual models!"
\section{Introduction}
\label{sec:intro}
Galaxy clusters have been one of the most important probes of
cosmology (e.g., Bartlett \& Silk 1994; Eke, Cole, \& Frenk 1996; Viana \& Liddle 1996; Kitayama \& Suto 1996, 1997; Kitayama, Sasaki, \& Suto 1998; Holder et al. 2000; Haiman, Mohr, \& Holder 2001; Majumdar \& Mohr 2004). In the context
of dark energy surveys, which attracts much of the attention of the
cosmology and particle physics communities, galaxy cluster surveys are
also unique in that they most directly probe the growth of structure
rather than relying solely on distance measurements
\citep[e.g.,][]{albrecht2006}. In order to capitalize on this, galaxy
cluster surveys, in particular those utilizing the Sunyaev-Zel'dovich
effect \citep[for reviews see, for example,][]{carlstrom02,
birkinshaw99, rephaeli95, sunyaev80}, are currently operating
and many more are planned in the near future. However, one of the
biggest challenges in interpreting these surveys is relating physical
quantities of galaxy clusters, namely mass, to observable ones. In
particular, these mass-observable relations may be sensitive to the
inherent complex structure of clusters. Therefore we must better
understand galaxy clusters to utilize fully the potential of galaxy
cluster surveys in constraining cosmological parameters.
Recent observations of galaxy clusters have revealed a rich variety of
structural complexity. Recent X-ray satellites with their improved
angular resolution, collecting area, and simultaneous spectral
measurement capabilities have unveiled complex temperature structure
\citep[e.g.,][]{markevitch2000, furusho01}, shock fronts
\citep[e.g.,][]{jones2002}, cold fronts
\citep[e.g.,][]{markevitch2000}, and X-ray holes
\citep[e.g.,][]{fabian2002}. Improved observational strategies and
analysis methods of lensing observations of galaxy clusters show that
the mass distribution, as opposed to just the gas, is often
complicated as well \citep[e.g.,][]{bradac06}. Both X-ray and lensing
observations of galaxy clusters reveal that clusters are frequently
undergoing mergers \citep[e.g.,][]{briel04}. Given such varied and
complex structure, can galaxy clusters reliably be used as
cosmological probes?
The complex structure seen in galaxy clusters motivates our
investigation of the intracluster medium (ICM) inhomogeneity. We
note, however, that we take a statistical approach to modeling the
inhomogeneity rather than directly modeling such complex phenomena as
shocks, cold fronts, etc. Motivated by results from cosmological
hydrodynamic simulations we explore the ramifications of a lognormal
model of the inhomogeneity of the ICM. This model was first proposed
in this context (Kawahara et al.\ 2007, hereafter Paper I; Kawahara et
al.\ 2008) to explain the discrepancies between emission-weighted and
spectroscopic temperature estimates for galaxy clusters
\citep[][]{mazzotta04, rasia05, vikhlinin06}. They found that local
inhomogeneities of the ICM play an essential role in producing the
systematic bias between spectroscopic and emission-weighted
temperatures.
Thus far, the lognormal model has been motivated by and applied only
to clusters from cosmological hydrodynamic simulations. Therefore it
is crucial to see if inhomogeneities in real galaxy clusters also show
the lognormal signature. In reality, this is not a straightforward task
since one can observe clusters in X-rays only through their projection
along the line of sight. Thus we develop a method of extracting
statistical information of the three-dimensional properties of
fluctuations from the two-dimensional X-ray surface brightness.
The rest of the paper is organized as follows. We first summarize the
log-normal model in \S\ref{sec:model}. We create synthetic clusters
to explore the relationship between the intrinsic cluster
inhomogeneity and X-ray observables in \S\ref{sec:synthetic}. In \S
\ref{sec:obs} we apply our methodology to \chandra observations of the
galaxy cluster Abell 3667, and then attempt to quantify the nature of
cluster inhomogeneity. We also compare our synthetic cluster results
with cosmological hydrodynamic simulations in
\S\ref{sec:con}. Finally, we summarize our results in \S\ref{sec:sum}.
Throughout the paper the Hubble constant is parameterized by $h$ in
the usual way, $H_0 = 100\, h$ km s$^{-1}$ Mpc$^{-1}$.
\section{Model of the ICM Inhomogeneity}
\label{sec:model}
\subsection{Lognormal Distribution}
\label{sec:analytic}
In order to characterize the inhomogeneity of the ICM, we define the
density and temperature fluctuations as the ratios $\mbox{$\delta_n$} \equiv
n({\bf r})/\overline{n}(r)$ and $\mbox{$\delta_T$} \equiv T({\bf r})/\overline{T}(r)$, where
$n({\bf r})$ and $T({\bf r})$ are the local density and temperature at radius
${\bf r}$, and $\overline{n}(r)$ and $\overline{T}(r)$ are the angular average
profiles defined by
\begin{eqnarray}
\label{eq:averagen}
\overline{n}(r) &\equiv& \frac{1}{4 \pi}
\int n(r,\theta,\phi) \sin\theta\; d\theta\; d\phi\\
\label{eq:averageT}
\overline{T}(r) &\equiv& \frac{1}{4 \pi} \int T(r,\theta,\phi)
\sin\theta\; d\theta\; d\phi
\end{eqnarray}
where $\theta$ and $\phi$ are polar and azimuthal angles,
respectively. Analysis of hydrodynamical
simulations (Paper I) found that $\mbox{$\delta_n$}$ and $\mbox{$\delta_T$}$ are approximately
independent and follow the radially independent lognormal probability
density function (PDF) given by
\begin{equation}
p(\delta_x; \sigma_{\mathrm{LN}, \, x}) \, d \delta_x = \frac{1}{\sqrt{2 \pi} \sigma_{\mathrm{LN}, \, x}}
\exp{\left[ \frac{-\left(\log{\delta_x}+\sigma_{\mathrm{LN}, \, x}^2/2 \right)^2}{2 \sigma_{\mathrm{LN}, \, x}^2}
\right]} \, \frac{d \delta_x}{\delta_x},
\label{eq:pdf_delta}
\end{equation}
where $x$ denotes $n$ or $T$, $\delta_x \equiv x({\bf r})/\overline{x}(r)$, and
$\sigma_{\mathrm{LN}, \, x}$ is the standard deviation of the logarithm of density or
temperature.
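As a quick illustration (not part of the original analysis), the normalization built into equation (\ref{eq:pdf_delta}) can be checked by Monte Carlo: because the log-mean is set to $-\sigma_{\mathrm{LN}, \, x}^2/2$, the fluctuation $\delta_x$ averages to unity while the standard deviation of $\log \delta_x$ equals $\sigma_{\mathrm{LN}, \, x}$. The NumPy sketch below uses our own variable names:

```python
import numpy as np

# Draw lognormal deviates with log-mean -sigma_ln**2/2, as in Eq. (3),
# so that the fluctuation delta_x = x/xbar averages to unity.
rng = np.random.default_rng(0)
sigma_ln = 0.3
log_delta = rng.normal(loc=-sigma_ln**2 / 2.0, scale=sigma_ln, size=1_000_000)
delta = np.exp(log_delta)

mean_delta = delta.mean()       # close to 1 by construction
std_log = np.log(delta).std()   # close to sigma_ln
```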
To construct the two-dimensional surface brightness profile from the
three-dimensional density and temperature distribution, we also need
the properties of the power spectra of the density and temperature
fluctuations. We adopt statistically isotropic fluctuations with a
power-law type power spectrum for both the density fluctuations
$P_{n}(k) \propto k^{\alpha_{n}}$ and the temperature fluctuations
$P_{T}(k) \propto k^{\alpha_{T}}$. These assumptions are based on the
results of the cosmological hydrodynamic simulations described in \S
\ref{subsec:hydro_sim}.
We use this model to generate synthetic clusters to explore the
relationship between the three-dimensional inhomogeneity in the ICM and
the two-dimensional X-ray surface brightness.
\subsection{Cosmological Hydrodynamic Simulated Clusters}
\label{subsec:hydro_sim}
When one considers the projection of galaxy clusters to two dimensions
for mock X-ray observations, the power spectrum of the fluctuations is
important in addition to the PDF of the inhomogeneity. Here, we once
again turn to simulations to investigate the power spectrum of the
fluctuations.
We extract the six massive clusters from cosmological hydrodynamic
simulations of the local universe performed by \citet{dolag05}. The
simulations utilize the smoothed particle hydrodynamic (SPH) method, and
assume a flat $\Lambda $ CDM universe with $\Omega_m=0.3, \Omega_b=0.04,
\sigma_8=0.9$, and a dimensionless Hubble parameter $h=0.7$. The number
of dark matter and SPH particles is $\sim 20$ million each within a
high-resolution sphere of radius $\sim 110 $ Mpc, which is embedded in a
periodic box $\sim 343$ Mpc on a side that is filled with nearly seven
million low-resolution dark matter particles. The simulation is designed
to reproduce the matter distribution of the local universe by adopting
the initial conditions based on the {\it IRAS} galaxy distribution
smoothed over a scale of $4.9 h^{-1} \mathrm{Mpc}$. Thus, the six
massive clusters are identified as Coma, Perseus, Virgo, Centaurus,
A3627, and Hydra. A cubic region with 6 $h^{-1}$ Mpc on a side centered
on each simulated cluster is extracted and divided into $512^3$
cells. The density and temperature of each mesh point are calculated
from SPH particles using the B-spline smoothing kernel. A detailed
description of this procedure is given in Paper I. The distance between
two adjacent grid points is given by $ d_{\mathrm{grid}} = 6 h^{-1}\mathrm{Mpc}/512
\sim 12 h^{-1}$ kpc, which is comparable to the gravitational force
resolution (14 kpc) and the inter-particle separation reached by SPH
particles in the dense centers of clusters. Therefore, the (maximum)
resolution is $d_{\mathrm{grid}}/r_{\mathrm{c}} \approx 0.1$ assuming $r_{\mathrm{c}} \sim 100$ kpc. This
is about one order of magnitude worse than that of both the synthetic
clusters (\S~\ref{sec:synthetic}) and the observational data
(\S~\ref{sec:obs}).
For each simulated cluster, we compute the radially averaged density
and temperature profiles, $\overline{n}(r)$ and $\overline{T}(r)$,
respectively (Eqn. [\ref{eq:averagen}] and [\ref{eq:averageT}]), and
use them to compute the density and temperature fluctuations $\mbox{$\delta_n$} =
n/\overline{n}$ and $\mbox{$\delta_T$} = T/\overline{T}$ at each grid point. We
extract $128^3$ cells of $\mbox{$\delta_n$}$ and $\mbox{$\delta_T$}$ around the center of a
simulated cluster and compute the power spectrum. The distance from the
center to the corner of the $128^3$ cells is $\sim 1.3\ h^{-1}$ Mpc
which is approximately equal to the virial radius of the simulated
clusters ($r_{\mathrm{200}} = 1.0$-$1.6\ h^{-1}$ Mpc). The virial
radius, $r_{\mathrm{200}}$, is the radius within which the mean
interior density is 200 times the critical density of the universe.
Figure~\ref{fig:chspower} shows the power spectra for each simulated
cluster for both $\mbox{$\delta_n$}$ (upper panel) and $\mbox{$\delta_T$}$ (lower panel). In each
panel a simple power law, $P(k) \propto k^{-3}$ (dashed line), is also
plotted for comparison. The power spectra for both the density and
temperature are relatively well approximated by a single power law.
We therefore adopt a power-law spectral model for the density and
temperature fluctuations for the synthetic cluster analysis.
\begin{figure}[!tbh]
\centerline{\includegraphics[width=85mm]{f1.ps}}
\caption{The power spectra of $\mbox{$\delta_n$}$ (upper) and $\mbox{$\delta_T$}$ (lower) of the
six simulated clusters. Dashed lines indicate $P(k) \propto
k^{-3}$. \label{fig:chspower}}
\end{figure}
\section{Synthetic Clusters}
\label{sec:synthetic}
Cosmological hydrodynamic simulations provide a useful test-bed for
exploring cluster structure. Simulated clusters exhibit complex
density and temperature structure akin to that of real galaxy
clusters. The resolution of our current simulations, however, is
limited, especially when compared to the resolution available from
current generation X-ray satellites. In addition, we need to
systematically survey the parameter space of $\sigma_{\mathrm{LN}, \, n}$ and $\alpha_n$ in
order to relate the X-ray surface brightness fluctuations to the
density fluctuations. Thus we create a set of synthetic clusters at
higher resolution that have lognormal fluctuations around their mean
profile. Analysis of mock observations of these synthetic clusters
enables us to investigate the relation between the X-ray surface
brightness and the statistical properties of the three-dimensional
density fluctuations, namely $\sigma_{\mathrm{LN}, \, n}$ and $\alpha_n$.
\subsection{Method \label{ssec:method}}
\subsubsection{Synthetic Cluster Generation}
\label{sss:syn_cl_gen}
The three-dimensional synthetic clusters will be projected to two
dimensions when considering the X-ray surface brightness. In order to
incorporate a power-law type power spectrum of spatial fluctuations into
the synthetic clusters, we follow a similar methodology as that of
several studies of the interstellar medium \citep{Elmegreen02,FD04}.
First a Gaussian random field with a power-law power spectrum is
constructed and that field is mapped into a lognormal field. Therefore,
our assumption for the power spectrum is adopted for the Gaussian field
$q$ as opposed to $\delta_n$. However, we will verify that the
ensemble-averaged power spectra of $q$ and $\mbox{$\delta_n$}$ ($ P_q(k) \propto
k^{\alpha_q}$ and $ P_{n}(k) \propto k^{\alpha_{n}}$) have almost the same
power-law indices, $\alpha_q \sim \alpha_{n}$.
We generate the lognormal density fluctuation field as follows. We
first generate the real random fields, $a({\bf k})$ and
$b({\bf k})$, in $k$-space, whose distribution functions obey
\begin{equation}
\label{eq:deviate1}
{p}(a)da = \frac{1}{ \sqrt{\pi f(k)} }
\exp{\left[-\frac{a^2}{f (k)}\right]} da,
\quad
{p}(b)db = \frac{1}{ \sqrt{\pi f(k)} }
\exp{\left[-\frac{b^2}{f (k)}\right]} db,
\end{equation}
where $f(k) \equiv A k^{\alpha_q}$. Then we compute $q({\bf r})$, the
Fourier transform of a complex field $\tilde{q} ({\bf k}) \equiv a({\bf
k}) + i b({\bf k})$. With the additional conditions $a({\bf k})=a(-{\bf
k})$ and $b({\bf k})=-b(-{\bf k})$, $q({\bf r})$ becomes a real Gaussian
random field, and its power spectrum, $P_q(k)$, is equal to the input
function $f(k) \equiv A k^{\alpha_q}$. The amplitude $A$ is related to
the variance of the Gaussian random field:
\begin{equation}
\sigma_g^2 \equiv
4 \pi \int_{k_{\rm min}}^{k_{\rm max}} k^2 f(k) dk,
\end{equation}
where $k_{\rm min}$ and $k_{\rm max}$ denote the minimum and maximum
value of the wavenumber. Finally the lognormal deviate, $\delta_{x}
({\bf r})$, is obtained from the Gaussian deviate, $q({\bf r})$, using the
relation
\begin{equation}
\delta_{x} ({\bf r}) =\exp{ \left( \frac{\sigma_{\mathrm{LN}, \, x}}{\sigma_g} q({\bf r}) -
\frac{\sigma_{\mathrm{LN}, \, x}^2}{2} \right)},
\label{eq:lognorm_deviate}
\end{equation}
where $\sigma_{\mathrm{LN}, \, x}$ is the standard deviation of the lognormal field.
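The two-step recipe above (a Gaussian random field filtered to a power-law spectrum in Fourier space, then mapped through Eq.~[\ref{eq:lognorm_deviate}]) can be sketched in a few lines of NumPy. This is an illustrative implementation under our own conventions (the function name, grid size, and normalization choices are ours, not from the paper); zeroing the $k=0$ mode enforces a zero-mean Gaussian field before the exponential mapping:

```python
import numpy as np

def lognormal_field(n_grid=64, alpha_q=-3.0, sigma_ln=0.3, seed=0):
    """Lognormal fluctuation field with an (approximately) power-law
    spectrum, via the Gaussian -> lognormal mapping of Eq. (8)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n_grid) * n_grid
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1.0               # placeholder; the DC mode is zeroed below
    amp = kmag ** (alpha_q / 2.0)     # sqrt of the target power spectrum f(k)
    amp[0, 0, 0] = 0.0                # remove the mean (k = 0) mode
    # Filtering white noise with a |k|-symmetric amplitude preserves
    # Hermitian symmetry, so the inverse transform q is real
    white = np.fft.fftn(rng.normal(size=(n_grid,) * 3))
    q = np.fft.ifftn(white * amp).real
    q *= sigma_ln / q.std()           # rescale so sigma_g = sigma_ln
    return np.exp(q - sigma_ln**2 / 2.0)   # Eq. (8) with sigma_LN/sigma_g = 1
```

By construction the standard deviation of $\log \delta$ over the box equals $\sigma_{\mathrm{LN}}$ exactly, and the box mean of $\delta$ is close to unity.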
We construct synthetic clusters with average density given by the
$\beta$ model and $\mbox{$\delta_n$}$ drawn from a lognormal distribution taking
into account the power-law type power spectrum of spatial
fluctuations. The $\beta$ model is given by
\citep{cavaliere1976,cavaliere1978}
\begin{equation}
\overline{n}(r) = n_0 \left[ 1 + \left( \frac{r}{r_{\mathrm{c}}} \right)^2
\right]^{-3 \beta / 2},
\label{eq:beta_model}
\end{equation}
where $n_0$ is the central electron number density, $r_{\mathrm{c}}$ is the core
radius, and $\beta$ specifies a power-law index. For simplicity, we first
adopt a fiducial value of $\beta=2/3$, and assume isothermality for
the synthetic clusters. Later, we examine the effects of varying
$\beta$ (\S~\ref{sss:vary_beta}) and of temperature structure using a
polytropic temperature profile (\S~\ref{ss:tstruct}).
The density at an arbitrary point is given by
\begin{equation}
n ({\bf r}) = \delta_{n} \overline{n} (r) .
\label{eq:n_ijk}
\end{equation}
The X-ray surface brightness profile is obtained by projecting the
three-dimensional synthetic cluster down to two dimensions. For the
isothermal case the projected X-ray surface brightness profile is
\begin{equation}
S_{\mathrm{X}} ({\bf R}) \propto \int [n({\bf r})]^2 d l,
\label{eq:sx_jk}
\end{equation}
where ${\bf R}$ indicates the position on the projected plane and $l$ is
the projection of ${\bf r}$ onto the line of sight direction.
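For the smooth (fluctuation-free) isothermal $\beta$ model, the projection in equation (\ref{eq:sx_jk}) can be done analytically and yields the surface-brightness exponent $-3\beta+1/2$ used later in equation (\ref{eq:ave1}). The sketch below (our own helper names; a brute-force line-of-sight quadrature, not the paper's pipeline) verifies this numerically for the fiducial $\beta=2/3$:

```python
import numpy as np

def beta_density(r, rc=1.0, beta=2.0 / 3.0):
    """Angle-averaged beta-model density profile, Eq. (9), with n0 = 1."""
    return (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

def project_sx(R, rc=1.0, beta=2.0 / 3.0, l_max=200.0, n_l=200_001):
    """Line-of-sight integral S_X(R) ~ int n(r)^2 dl (trapezoid rule)."""
    l = np.linspace(-l_max, l_max, n_l)
    r = np.sqrt(R**2 + l**2)
    n2 = beta_density(r, rc=rc, beta=beta) ** 2
    dl = l[1] - l[0]
    return np.sum(0.5 * (n2[1:] + n2[:-1])) * dl

# Compare the numerical projection with the analytic beta-model shape,
# S_X(R) ~ [1 + (R/rc)^2]^(-3*beta + 1/2)
R = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
sx = np.array([project_sx(Ri) for Ri in R])
shape = (1.0 + R**2) ** (-3 * (2.0 / 3.0) + 0.5)
ratio = (sx / sx[0]) / (shape / shape[0])   # ~1 at every radius
```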
\begin{figure}[!tbh]
\centerline{\includegraphics[width=80mm]{f2.ps}}
\caption{The change of the power-law spectral index of the density
($\alpha_{n}$) and density squared fields ($\alpha_{nn}$) compared to
that of the Gaussian field ($\alpha_q$). Solid and dashed lines
indicate $\alpha_{n}/\alpha_q -1$ (density) and $\alpha_{nn}/\alpha_q -1$
(density squared), respectively. Each symbol indicates a different
value of $\sigma_{\mathrm{LN}, \, n}$ (cross, square, and triangle correspond to
$\sigma_{\mathrm{LN}, \, n}=0.1$, $0.3$, and $0.5$, respectively.) The power-law index of
the density field is very close ($\lesssim 3$\%) to that of the
Gaussian field used to generate the lognormal distribution and that
of the square of the density is within $\sim 13$\% for larger values
of $\sigma_{\mathrm{LN}, \, n}$ and $\lesssim 5$\% for smaller values ($\sigma_{\mathrm{LN}, \, n} \lesssim
0.3$). }
\label{fig:ipchange1}
\end{figure}
Performing the procedure described above, we set up a cubic mesh of
$n({\bf r})$ in which our three-dimensional synthetic cluster is located
with $N_{\mathrm{grid}}=512$ grid points along each axis. We choose the box size
$L_{\mathrm{box}} = 10 \, r_{\mathrm{c}}$, which results in the distance between two
adjacent grid points being $d_{\mathrm{grid}}=10 \, r_{\mathrm{c}} / N_{\mathrm{grid}} \sim 0.02 \, r_{\mathrm{c}} $.
We fit the power spectrum of the $\mbox{$\delta_n$}$ field by a power-law spectrum so
that $P_{n}({\bf k}) \propto k^{\alpha_{n}}$. We also fit the power spectrum
of the square density field, $\delta_{nn} \equiv n^2/\langle n^2 \rangle = \mbox{$\delta_n$}^2
\exp{(-\sigma_{\mathrm{LN}, \, n}^2)}$ (Appendix B), by the power-law $P_{nn}(k) \propto
k^{\alpha_{nn}}$, relevant to X-ray surface brightness since $S_{\mathrm{X}} \propto
\int d\ell \; n^2$. Throughout this
paper, the notation $\langle x \rangle$ is used to denote the ensemble
average of quantity $x$ over many clusters.
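Fitting $P_{n}(k)$ and $P_{nn}(k)$ by power laws requires an isotropically (spherically) binned power spectrum of a 3D field. A minimal estimator might look like the following; the binning choices and normalization (power averaged per mode, so white noise of unit variance gives $P \approx 1$ in every bin) are our own illustrative conventions:

```python
import numpy as np

def isotropic_power_spectrum(field, n_bins=15):
    """Spherically averaged power spectrum of a cubic 3D field.

    Returns bin-center wavenumbers (in units of the fundamental mode)
    and the mean power per mode in each bin, up to the Nyquist mode.
    """
    n = field.shape[0]
    fk = np.fft.fftn(field - field.mean())
    pk3 = (np.abs(fk) ** 2 / field.size).ravel()
    k = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(1.0, n // 2, n_bins + 1)
    idx = np.digitize(kmag, edges)
    pk = np.array([pk3[idx == i].mean() for i in range(1, n_bins + 1)])
    kc = 0.5 * (edges[1:] + edges[:-1])
    return kc, pk
```

A power-law index is then obtained by a straight-line fit to $\log P$ versus $\log k$.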
Figure~\ref{fig:ipchange1} shows the change of the power-law spectral
index of the density ($\alpha_{n}$) and density squared fields
($\alpha_{nn}$) compared to that of the Gaussian field ($\alpha_q$). The
change in the power-law index for the density and density squared
distributions compared to the initial Gaussian field are small ($<$3\%
and $<$ 13\%, respectively), and therefore, $\alpha_q \sim \alpha_{n}
\sim \alpha_{nn}$, consistent with the results of \cite{FD04}.
\subsubsection{X-ray Surface Brightness}
\label{ss:em}
To quantify the relationship between the inhomogeneity of the density
and the X-ray surface brightness, $S_{\mathrm{X}}$, we introduce the X-ray
surface brightness fluctuation from the average radial surface
brightness profile $\overline{S}_{\mathrm{X}} (R)$
\begin{equation}
\mbox{$\delta_\mathrm{Sx}$}({\bf R}) \equiv \frac{S_{\mathrm{X}}({\bf R})}{\overline{S}_{\mathrm{X}} (R)},
\label{eq:dsx}
\end{equation}
where $R \equiv |{\bf R}|$. We define the average
profile $\overline{S}_{\mathrm{X}} (R)$ for an individual cluster by fitting the
projected synthetic clusters to an isothermal $\beta$ model
\begin{equation}
\overline{S}_{\mathrm{X}} (R) = S_{\mathrm{X},0} \left[ 1 +
\left(\frac{R}{r_{\mathrm{c,X}}}\right)^2 \right]^{-3\beta_{\mathrm{X}}+1/2},
\label{eq:ave1}
\end{equation}
where $S_{\mathrm{X},0}$ is the central X-ray surface brightness,
$r_{\mathrm{c,X}}$ is the core radius, and $\beta_{\mathrm{X}}$ specifies the power-law
index for the X-ray surface brightness distribution. These three
parameters are derived from a model fit to each synthetic cluster. It
is important to emphasize that the average in equation (\ref{eq:ave1})
is defined for {\it an individual cluster}. We note that if we adopt
directly the average X-ray surface brightness profile instead of a
$\beta$ model fit (Eqn. [\ref{eq:ave1}]), the results are unchanged.
This is because the radial profile is well approximated by the $\beta$
model for the synthetic clusters. However, for observations of real
galaxy clusters, the $\beta$ model approximation might break down and
one should instead use an average of $S_{\mathrm{X}}({\bf R})$ directly in such
cases. In \S 3.2, we will investigate the relation between the
standard deviation of the X-ray surface brightness fluctuations,
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$, and that of the intrinsic density fluctuations, $\sigma_{\mathrm{LN}, \, n}$.
Here, we consider the relation of
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and $\sigma_{\mathrm{LN}, \, n}$ for the {\it ensemble average} of clusters
assuming they all obey the $\beta$ model with the same $\beta$, $r_{\mathrm{c}}$,
$\alpha_q$ and $\sigma_{\mathrm{LN}, \, n}$:
\begin{eqnarray}
\A S_{\mathrm{X}} \E (R) &\equiv& \langle S_{\mathrm{X}} (|{\bf R}|) \rangle
\label{eq:ens1}\\
\langle S_{\mathrm{X}} ({\bf R}) \rangle &\propto& e^{\sigma_{\mathrm{LN}, \, n}^2} \int
\overline{n}^2 d l,
\label{eq:ens2}
\end{eqnarray}
where the exponential term of the right hand side of
equation~(\ref{eq:ens2}) comes from the second moment of the lognormal
distribution (Paper I). Although the ensemble average is {\it not} an
observable quantity, we can describe an analytical prediction of
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}} (R)$ assuming the isothermal $\beta$ model
(Appendix~\ref{a1:den_sb}). In addition, one expects that $\overline{S}_{\mathrm{X}}
\sim \A S_{\mathrm{X}} \E $ if there is a large enough volume compared with the
size of fluctuations when calculating $\overline{S}_{\mathrm{X}}$. In other words, the
spatial average approaches the ensemble average. For these reasons,
it is useful to consider the ensemble average. Using equations
(\ref{eq:ens1}) and (\ref{eq:ens2}), we define the ensemble average of
fluctuations in the X-ray surface brightness as
\begin{equation}
\delta_{\mathrm{Sx,ens}}({\bf R}) \equiv \frac{S_{\mathrm{X}}({\bf R})}{\A S_{\mathrm{X}} \E (R)}.
\label{eq:ensd}
\end{equation}
We note that the distribution of the square of density fluctuations,
which is proportional to the local emissivity in the isothermal case,
is also distributed according to the lognormal function with a
lognormal standard deviation of $2 \sigma_{\mathrm{LN}, \, n}$ if the density fluctuations
follow the lognormal distribution with standard deviation $\sigma_{\mathrm{LN}, \, n}$
(Appendix~\ref{sec:a2_densquared}).
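This factor-of-two property follows immediately from $\log \delta_{nn} = 2\log\mbox{$\delta_n$} - \sigma_{\mathrm{LN}, \, n}^2$, and is easy to confirm by Monte Carlo. In this sketch (our own variable names), both the doubled lognormal width and the unit mean of $\delta_{nn}$ come out as expected:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_ln = 0.25
# Lognormal density fluctuations with <delta_n> = 1
delta_n = np.exp(rng.normal(-sigma_ln**2 / 2.0, sigma_ln, size=500_000))
# Normalized squared field delta_nn = delta_n^2 * exp(-sigma_ln^2)
delta_nn = delta_n**2 * np.exp(-sigma_ln**2)

std_log_nn = np.log(delta_nn).std()   # ~ 2 * sigma_ln
mean_nn = delta_nn.mean()             # ~ 1
```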
\subsection{Statistical Analysis of the Synthetic Clusters}
Here, we investigate the distribution of $\mbox{$\delta_\mathrm{Sx}$}$ of the synthetic
clusters and relate quantities obtainable from observations, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$
and $\alpha_{\mathrm{Sx}}$, to that of the underlying density, $\sigma_{\mathrm{LN}, \, n}$ and $\alpha_n$.
\subsubsection{Lognormal nature and the relation between $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and $\sigma_{\mathrm{LN}, \, n}$}
\label{ss:synthetic_clusters}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=120.0mm]{f3.ps}}
\caption{The probability distribution of the ensemble-averaged
distribution of $\mbox{$\delta_\mathrm{Sx}$}$ illustrating the radial dependence. The
distributions in shells of thickness $0.5 \, r_{\mathrm{c}}$ are shown. Each color
indicates a different radial interval: $R<1.5 \, r_{\mathrm{c}}$ (red), $1.5 \, r_{\mathrm{c}} <
R < 3.5 \, r_{\mathrm{c}}$ (black), and $R>3.5 \, r_{\mathrm{c}}$ (blue). \label{fig:shells}}
\end{figure}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=80mm]{f4.ps}}
\caption{The radial dependence of the standard deviations of the
logarithm of X-ray surface brightness, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$. Two values of
$\sigma_{\mathrm{LN}, \, n}$ are plotted, 0.1 and 0.5, as indicated in the figure. Solid
and dotted lines show $\sigma_{\mathrm{LN}, \, \mathrm{Sx}} (R)$ calculated using the average
profile defined by the $\beta$ model (Eq.~[\ref{eq:ave1}]) and the
ensemble average (Eq.~[\ref{eq:ens2}]), respectively. Dashed lines
show the analytical prediction (Eq.~[\ref{eq:thickr}]). Dash-dotted
lines indicate the case including the temperature
structure. Although we show results only for a single power-law
index, $\alpha_q=-3.0$, similar results are obtained in other
cases. \label{fig:rdt}}
\end{figure}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=120.0mm]{f5.ps}}
\caption{The probability distribution of $\mbox{$\delta_\mathrm{Sx}$}$ for five individual
synthetic clusters (solid) along with the best-fit lognormal
distributions (dashed). Each color shows a different individual
synthetic cluster. Each panel shows a different value of the power
law index of the Gaussian field, $\alpha_q$, between $-2$ and $-4$ as
indicated in each panel.}
\label{fig:ind}
\end{figure}
We investigate the distribution of $\mbox{$\delta_\mathrm{Sx}$}$ as a function of radial
distance $R$ from the cluster center. We first divide the $\mbox{$\delta_\mathrm{Sx}$}$ field
into shells of thickness $0.5 \, r_{\mathrm{c}}$. The distributions of $\mbox{$\delta_\mathrm{Sx}$}$
within each shell, $p(\mbox{$\delta_\mathrm{Sx}$} ; R)$, averaged over 256 synthetic clusters
are shown in Figure~\ref{fig:shells} for various values of $\alpha_q$. We
find that $\mbox{$\delta_\mathrm{Sx}$}$ also approximately follows the lognormal distribution.
The standard deviation of the logarithm of $\mbox{$\delta_\mathrm{Sx}$}$ versus radius, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}
(R)$, constructed from the averaged shells is displayed in
Figure~\ref{fig:rdt}. Two values of $\sigma_{\mathrm{LN}, \, n}$ are plotted, 0.1 and
0.5, in addition to using the average profile defined by both
the $\beta$ model (Eq.~[\ref{eq:ave1}]; solid) and that for the ensemble
(Eq.~[\ref{eq:ens2}]; dotted). The analytic prediction
(Eq.~[\ref{eq:thickr}]; dashed) and the case including the temperature
structure (\S \ref{ss:tstruct}; dot-dashed) are also plotted. At large
$R$, $\sigma_{\mathrm{LN}, \, \mathrm{Sx,ens}} (R)$ is approximately $\sigma_{\mathrm{LN}, \, \mathrm{Sx}} (R)$ because the spatial
average tends to the ensemble average due to the large volume used for
averaging. However, the agreement is poor near the center, where the
ensemble average is not a good approximation. Although only one value
for $\alpha_q$ is shown, similar results are obtained for other values.
Figures ~\ref{fig:shells} and \ref{fig:rdt} indicate that the
probability density function is weakly dependent on the projected
radius $R$. This radial dependence is caused mainly by two competing
effects. Consider first the case where the typical nonlinear scale of
fluctuations is much smaller than the size of the cluster itself
(shallow spectrum). As equation (A1) indicates, the surface brightness
at $R$ is given by
\begin{equation}
S_{\mathrm{X}}(R) \propto \int \delta_{nn}
\left[1+ \left(\frac{l^2}{r_{\mathrm{c}}^2+R^2}\right)\right]^{-3\beta} dl.
\end{equation}
This implies that the mean value of $S_{\mathrm{X}}(R)$ is effectively determined
by the integration over the line of sight weighted towards the cluster
center, roughly between $-\sqrt{r_{\mathrm{c}}^2+R^2}$ and $+\sqrt{r_{\mathrm{c}}^2+R^2}$.
This is also true for the variance of $S_{\mathrm{X}}(R)$. Since the effective
number of independent cells contributing to the variance of $S_{\mathrm{X}}(R)$
is smaller at smaller projected radii, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ slightly increases for
smaller $R$. This explains the behavior of the shallow spectra results
for $\alpha_q=-2$ and $-2.5$ in Figure~\ref{fig:shells}. On the
contrary, if the typical nonlinear scale of fluctuations is comparable
to or even larger than the cluster size (steep spectrum), the sampling
at the central region significantly underestimates the real
variance. So the $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ should increase toward the outer region.
This is seen in Figure~\ref{fig:shells} for the steeper spectra, $\alpha_q=-3.5$ and
$-4$.
Note the first effect is very small and the second effect becomes
significant only when $\alpha_q < -3$. The cosmological hydrodynamic
simulations imply that the typical value of $\alpha_q$ is $-3$.
Therefore we neglect the radial dependence of the $\mbox{$\delta_\mathrm{Sx}$}$ field in the
following analysis.
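As an illustration (not part of the original analysis), the kind of lognormal fluctuation field used for the synthetic clusters can be sketched in a few lines of NumPy. This is a simplified two-dimensional, periodic version of the construction; the grid size, the unit-variance normalization, and the absence of the $\beta$-model envelope are simplifications of ours:

```python
import numpy as np

def gaussian_random_field(n, alpha_q, rng):
    """n x n periodic Gaussian field with power spectrum P(k) ~ k**alpha_q."""
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kk = np.hypot(kx, ky)
    kk[0, 0] = 1.0                      # avoid dividing by zero at the DC mode
    amp = kk ** (alpha_q / 2.0)         # P(k) ~ k^alpha_q  =>  |F(k)| ~ k^(alpha_q/2)
    amp[0, 0] = 0.0                     # zero-mean field
    phases = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    field = np.fft.ifft2(amp * phases).real
    return field / field.std()          # normalize to unit variance

rng = np.random.default_rng(0)
q = gaussian_random_field(256, -3.0, rng)
sigma_ln_n = 0.3
delta_n = np.exp(sigma_ln_n * q)        # lognormal fluctuation field
```

By construction the logarithm of $\delta_n$ is Gaussian with standard deviation $\sigma_{\mathrm{LN}, \, n}$ and power-law index $\alpha_q$.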
\begin{figure}[!tbh]
\centerline{
\includegraphics[width=80mm]{f6a.ps}
\includegraphics[width=83mm]{f6b.ps}}
\caption{The average of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ from the 256 synthetic cluster sample
as functions of $\alpha_q$ (left) and $\sigma_{\mathrm{LN}, \, n}$ (right). The left panel
also shows the standard deviation of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ from the 256 synthetic
clusters and black, red, and blue represent different values of $\sigma_{\mathrm{LN}, \, n}$,
$0.1$, $0.3$, and $0.5$, respectively. In both panels, symbols indicate
values of $\alpha_q$ (cross, square, triangle, asterisk, and circle
correspond to $\alpha_q= -2$, $-2.5$, $-3$, $-3.5$, and $-4$,
respectively). Dashed lines show the best-fit approximately linear
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$-$\sigma_{\mathrm{LN}, \, n}$ relation (Eqs.~\ref{eq:fitsKa} and \ref{eq:Kalpha})
for each pair of $\sigma_{\mathrm{LN}, \, n}$, $\alpha_q$. \vfill \label{fig:mgsigout} }
\end{figure}
From actual observations, we obtain the $\mbox{$\delta_\mathrm{Sx}$}$ map for an individual
cluster, not the ensemble average. Therefore, we evaluate the
distributions of $\mbox{$\delta_\mathrm{Sx}$}$ in individual synthetic clusters.
Figure~\ref{fig:ind} shows the PDF for five individual synthetic
clusters (solid) along with the best-fit lognormal distributions
(dashed). We neglect the radial dependence and use the distribution for
the whole cluster within a diameter of $L_{\mathrm{box}}=10 \, r_{\mathrm{c}}$. Each color
represents a different individual synthetic cluster and each panel shows
a different value of the power-law index of the Gaussian field,
$\alpha_q$, with values between $-2$ and $-4$. Even if the analysis is
done for a single cluster, the distribution approximately follows the
lognormal distribution.
The noisy behavior for steeper spectra ($\alpha_q=-3.5$, $-4$) in
Figure~\ref{fig:ind} is due to the presence of fluctuations on scales
larger than that of the cluster, similar to the discussion above for
Figure~\ref{fig:shells}. In other words, steeper spectra ($\alpha_q<-3$)
have relatively more power in large-scale fluctuations than shallower
spectra ($\alpha_q>-3$). Cosmological hydrodynamic simulations suggest
that $\alpha_q \approx -3$, placing galaxy clusters in the less noisy
regime. We do not consider the noisy regime further in this paper.
The standard deviations of the logarithm of $\mbox{$\delta_\mathrm{Sx}$}$, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$, for the
different sets of $\alpha_q$ (symbols) and $\sigma_{\mathrm{LN}, \, n}$ (colors) are shown in
Figure~\ref{fig:mgsigout}. The relation between $\sigma_{\mathrm{LN}, \, n}$ and $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$
is approximately linear (right panel) although the proportionality
coefficient depends on $\alpha_q$. Therefore, we can write
\begin{eqnarray}
\label{eq:fitsKa}
\sigma_{\mathrm{LN}, \, \mathrm{Sx}} = Q(\alpha_q) \sigma_{\mathrm{LN}, \, n}.
\end{eqnarray}
We find that $Q(\alpha_q)$ can be approximated well by the following
function
\begin{equation}
Q(\alpha_q) = \frac{c_1}{c_2 + |\alpha_q|^{-4}}.
\label{eq:Kalpha}
\end{equation}
We calculate the average of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}/\sigma_{\mathrm{LN}, \, n}$ for each $\alpha_q$ over
three different values of $\sigma_{\mathrm{LN}, \, n}$ ($\sigma_{\mathrm{LN}, \, n}=0.1,0.3,$ and $0.5$). By
fitting $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}/\sigma_{\mathrm{LN}, \, n} (\alpha_q)$ using equation (\ref{eq:Kalpha}), we
obtain $c_1 = 2.05 \times 10^{-2}$ and $c_2= 1.53 \times 10^{-2}$.
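Equations (\ref{eq:fitsKa}) and (\ref{eq:Kalpha}) together give a direct recipe for converting a measured $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ into an estimate of $\sigma_{\mathrm{LN}, \, n}$. A minimal sketch using the fitted coefficients above (the function names are ours):

```python
C1, C2 = 2.05e-2, 1.53e-2   # best-fit coefficients quoted in the text

def Q(alpha_q):
    """Proportionality coefficient in sigma_LN,Sx = Q(alpha_q) * sigma_LN,n."""
    return C1 / (C2 + abs(alpha_q) ** -4)

def sigma_ln_n(sigma_ln_sx, alpha_q):
    """Invert the linear relation to estimate the 3D density scatter."""
    return sigma_ln_sx / Q(alpha_q)
```

For example, a measured $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and an estimate of $\alpha_q$ yield $\sigma_{\mathrm{LN}, \, n}$ immediately, subject to the dispersion discussed above.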
\begin{figure}[!tbh]
\centerline{\includegraphics[width=65mm]{f7.ps}}
\caption{Comparison of the X-ray surface brightness ($\alpha_{\mathrm{Sx}}$) and
the input Gaussian field ($\alpha_q$) power-law indices. Symbols and
error bars indicate the average and the standard deviation,
respectively, of $\alpha_{\mathrm{Sx}}$ for 256 samples for different sets of
$\alpha_q$ and $\sigma_{\mathrm{LN}, \, n}$. Symbols correspond to different values of
$\sigma_{\mathrm{LN}, \, n}$, with cross, square, and triangle symbols indicating
$\sigma_{\mathrm{LN}, \, n}=0.1$, $0.3$, and $0.5$, respectively, and the relations
$\alpha_{\mathrm{Sx}}=\alpha_q$ and $\alpha_{\mathrm{Sx}}=\alpha_q+0.2$ are also shown (dotted
and solid lines, respectively). We obtain $\alpha_{\mathrm{Sx}}$ for each
individual synthetic cluster by fitting $P_{S_{\mathrm{X}}}({\bf K})$ of an
individual cluster under the assumption of both statistical isotropy
and a power-law ($ \propto K^{\alpha_{\mathrm{Sx}}}$).
\label{fig:ipchange2}}
\end{figure}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=80mm]{f8.ps}}
\caption{The average of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ over the 256 synthetic clusters as a
function of $\alpha_q$ for different values of the $\beta$ model
power-law index, $\beta$. Symbols correspond to different values of
$\alpha_q$ as in Figure~\ref{fig:mgsigout}. Each color shows a
different value of $\beta$ (black, red, and blue correspond to
$\beta=1.0, 2/3, $ and $0.5$, respectively). Solid, dashed, and
dotted lines are fits using equation (\ref{eq:fitsKa}),
corresponding to $\beta=1.0, 2/3,$ and $0.5$, respectively. The top,
middle, and bottom sets of three different lines indicate
$\sigma_{\mathrm{LN}, \, n}=0.5, 0.3$, and $0.1$, respectively, as indicated in the
figure.
\label{fig:betabeta}}
\end{figure}
\subsubsection{Spectral Considerations}
\label{ss:sx_ps}
Because $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ is strongly dependent on the power-law index $\alpha_q$,
an estimate of $\alpha_q$ from the $\mbox{$\delta_\mathrm{Sx}$}$ map is crucial for
interpreting the value of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$. Since $\alpha_q$ is not directly
observable, we investigate the relationship between the
power spectra of $\mbox{$\delta_n$}$ and $\mbox{$\delta_\mathrm{Sx}$}$ by fitting the power spectrum of
$\mbox{$\delta_\mathrm{Sx}$}$ under the assumptions of both statistical isotropy and a power
law so that $ P_{S_{\mathrm{X}}}({\bf K}) \propto K^{\alpha_{\mathrm{Sx}}}$, where ${\bf K}$ indicates
the two-dimensional wave vector.
Figure~\ref{fig:ipchange2} shows the power-law index of the X-ray
surface brightness, $\alpha_{\mathrm{Sx}}$, as a function of its counterpart
Gaussian field, $\alpha_q$. Averages and standard deviations over 256
synthetic clusters are shown for three values of the standard
deviation of the logarithm of density, $\sigma_{\mathrm{LN}, \, n}$, where crosses,
squares, and triangles correspond to $\sigma_{\mathrm{LN}, \, n}$ of $0.1$, $0.3$, and
$0.5$, respectively. The dotted line corresponds to the relation
$\alpha_{\mathrm{Sx}} = \alpha_q$ and the solid line shows $\alpha_{\mathrm{Sx}} = \alpha_q +
0.2$. We find that $\alpha_{\mathrm{Sx}} \approx \alpha_q + 0.2$ and since $\alpha_q
\approx \alpha_{n}$, this implies $\alpha_{\mathrm{Sx}} \approx \alpha_{n} + 0.2$.
This can be understood as follows. As we have seen in
\S~\ref{ssec:method}, the difference between $\alpha_{n}$ and
$\alpha_{nn}$ is relatively small ($\lesssim 13$\% and often $\lesssim
5$\%). If one assumes $\mbox{$\delta_\mathrm{Sx}$}$ is the projection of $\delta_{nn}$ (although
this is only strictly true if the average of the surface brightness is
defined by the ensemble average, as in Eq.~[\ref{eq:ens1}]), $\mbox{$\delta_\mathrm{Sx}$}$ can be
described as
\begin{equation}
\mbox{$\delta_\mathrm{Sx}$}({\bf \Theta}) = \int d l \, \delta_{nn} \, W({\bf \Theta},l),
\label{eq:dsx_theta}
\end{equation}
where ${\bf \Theta}$ indicates celestial coordinates and $W({\bf \Theta},l)$ is
the window function. If we neglect the ${\bf \Theta}$-dependence of the
window function and set $W({\bf \Theta},l)=W(l)$, then $P_{S_{\mathrm{X}}}({\bf K})$ can
be written as
\begin{equation}
P_{S_{\mathrm{X}}}({\bf K}) = \frac{1}{2 \pi} \int d k_l \, P_{nn} ({\bf k}) \,
|\widetilde{W}(k_l)|^2,
\label{eq:ps_dsx}
\end{equation}
where $\widetilde{W}(k_l)$ is the Fourier transform of $W(l)$. The
assumption that the size of the cluster is much larger than the typical
scales of the fluctuations yields $|\widetilde{W}(k_l)|^2 \sim 2 \pi
\delta(k_l)$, where $\delta(k_l)$ is the Dirac delta function, and
therefore $K^{\alpha_{\mathrm{Sx}}} \propto k^{\alpha_{nn}}$. Thus, we find $\alpha_{n}
\sim \alpha_{nn} \sim \alpha_{\mathrm{Sx}} $ ($\sim \alpha_q$).
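The Dirac-delta limit leading to $K^{\alpha_{\mathrm{Sx}}} \propto k^{\alpha_{nn}}$ can be checked numerically. The sketch below assumes a Gaussian line-of-sight window of width $L$ much larger than the fluctuation scales; the window shape and the units are our choices:

```python
import numpy as np

alpha = -3.0                 # 3D power-law index of P_nn
L = 50.0                     # window width; cluster size >> fluctuation scale
kl = np.linspace(-1.0, 1.0, 20001)
dkl = kl[1] - kl[0]
# |W~(k_l)|^2 for the Gaussian window W(l) = exp(-l^2 / 2L^2)
W2 = 2.0 * np.pi * L**2 * np.exp(-(kl * L) ** 2)

K = np.logspace(np.log10(0.3), np.log10(3.0), 10)   # 2D wavenumbers
# Projected spectrum: P_2D(K) = (1/2pi) \int P_3D(sqrt(K^2 + k_l^2)) |W~|^2 dk_l
P2D = np.array([((Ki**2 + kl**2) ** (alpha / 2) * W2).sum() * dkl / (2.0 * np.pi)
                for Ki in K])

# Recovered 2D log-log slope is close to the 3D index alpha
slope = np.polyfit(np.log(K), np.log(P2D), 1)[0]
```

For $L$ comparable to the fluctuation scale the recovered slope departs from $\alpha$, which is the steep-spectrum regime discussed above.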
In this section, we have found that, in principle, one can estimate the
value of $\sigma_{\mathrm{LN}, \, n}$ from analysis of X-ray observations. From the
observations one measures $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and $\alpha_{\mathrm{Sx}}$ and uses them to
infer $\sigma_{\mathrm{LN}, \, n}$, noting that $\alpha_q = \alpha_{\mathrm{Sx}} - 0.2$. Therefore, one
can estimate the statistical nature of the intrinsic three dimensional
fluctuations from two dimensional X-ray observations.
\subsection{Potential Systematics}
\label{ss:pot_sys}
Using mock observations of isothermal $\beta$ models we found a
relation between the intrinsic inhomogeneity of the three dimensional
cluster gas and the fluctuations in the X-ray surface brightness. We
turn our attention to the effects of departures from this idealized
model.
\subsubsection{$\beta$ Model Power-law Index}
\label{sss:vary_beta}
In the above description, we have fiducially assumed the $\beta$ model
power-law index $\beta=2/3$. We investigate two other cases,
$\beta=0.5$, and $\beta=1.0$, in Figure~\ref{fig:betabeta}, where we
show $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ as a function of $\alpha_q$ for different cases of $\beta$
(colors). The corresponding fits using equation (\ref{eq:fitsKa}) are
also shown. Although $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ tends to increase with increasing
$\beta$, the change is relatively small ($<10$\%).
\subsubsection{Temperature Structure}
\label{ss:tstruct}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=70.0mm]{f9.ps}}
\caption{The distribution of $\mbox{$\delta_\mathrm{Sx}$}$ for five individual clusters
including the effects of temperature structure. Synthetic clusters
(solid histogram) and best-fit lognormal model (dashed lines) are
both shown for each cluster. Each color corresponds to a different
individual synthetic cluster. Although we display only one example
of the power-law index, $\alpha_q=-3.0$, similar results are also
obtained in other cases. }
\label{fig:tind}
\end{figure}
In the above discussion, we assumed isothermality for the ICM.
However, the X-ray surface brightness also depends on the underlying
cluster temperature structure, including a non-isothermal average
temperature profile and local inhomogeneity. We investigate these
effects for the X-ray surface brightness distribution.
We assume a polytropic profile for the temperature radial
distribution expressed as
\begin{equation}
\overline{T}(r) = T_0 \left(\frac{\overline{n}(r)}{n_0}\right)^{\gamma-1},
\label{eq:t_polytrope}
\end{equation}
with polytropic index $\gamma = 1.2$ and $T_0 = 6$ keV, which is the
typical set of values in simulated clusters (Paper I). The ensemble
average of the power spectrum of $\mbox{$\delta_T$}$ is assumed to have a power-law
form ($\langle P_{T}(k) \rangle \sim P_q(k) \propto k^{\alpha_{q,T}}$). Because
$\alpha_{T} \approx \alpha_{q,T}$ for the same reasons as described in
\S~\ref{ssec:method} for density fluctuations, we fiducially adopt the
power-law index $\alpha_{q,T}=-3$ based on the results of cosmological
hydrodynamic simulations (for details see \S~\ref{subsec:hydro_sim}).
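For concreteness, the mean profiles entering equation (\ref{eq:t_polytrope}) can be sketched as follows; the $\beta$-model central density and core radius here are placeholder values:

```python
import numpy as np

def nbar(r, n0=1.0, r_c=1.0, beta=2.0 / 3.0):
    """Beta-model mean density profile (r and r_c in the same units)."""
    return n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def Tbar(r, T0=6.0, gamma=1.2, n0=1.0, r_c=1.0, beta=2.0 / 3.0):
    """Polytropic mean temperature profile in keV (Eq. t_polytrope)."""
    return T0 * (nbar(r, n0, r_c, beta) / n0) ** (gamma - 1.0)
```

With $\gamma = 1.2$ the temperature declines slowly with radius, reaching $T_0 \, 2^{-0.2} \approx 5.2$ keV at one core radius.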
\begin{figure}[!tbh]
\centerline{\includegraphics[width=80mm]{f10.ps}}
\caption{The effect of the PSF on $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ as a function of radius,
$R/r_{\mathrm{c}}$, for the case of $\alpha_q=-3$ and $\beta=2/3$. Solid curves show
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}}(R)$ without convolution of the PSF. Dashed, dash-dotted, and dotted
curves correspond to $\theta_{\mathrm{HPD}}/\theta_\mathrm{c}=0.1,0.2$ and $0.5$,
respectively. Two values of $\sigma_{\mathrm{LN}, \, n}$ are plotted, 0.1 and 0.5, as
indicated in the figure.
} \label{fig:psf}
\end{figure}
We create the lognormal distribution $\delta_{T}$ for temperature
fluctuations in the same manner as for the density fluctuations
described in \S~\ref{ssec:method}. The temperature of an arbitrary
point is assigned according to
\begin{equation}
T({\bf r}) = \delta_{T}({\bf r}) \overline{T} (r).
\label{eq:t_ijk}
\end{equation}
We adopt $\sigma_{\mathrm{LN}, \, T}=0.3$, because it is the typical value for simulated
clusters (Paper I). In addition, we assume that $\mbox{$\delta_n$}$ and $\mbox{$\delta_T$}$ are
distributed independently, following Paper I. The X-ray surface
brightness is given by
\begin{equation}
S_{\mathrm{X}} ({\bf R}) \; \propto \int [n({\bf r})]^2 \, \Lambda[T({\bf r})] \, dl,
\label{eq:sx_jk2}
\end{equation}
where $\Lambda(T)$ is the X-ray cooling function. We calculate
$\Lambda(T)$ in the energy range 0.5-10.0 keV using SPEX 2.0
\citep{1996uxsa.conf..411K} on the assumption of collisional
ionization equilibrium and a constant metallicity of 30\% solar
abundances.
Examples of the distribution of $\mbox{$\delta_\mathrm{Sx}$}$ in individual clusters are shown
in Figure~\ref{fig:tind} (solid histogram) along with the best fit
lognormal distributions (dashed lines). Each color corresponds to a
different individual synthetic cluster. Although only one value for the
power-law index, $\alpha_q=-3$, is shown, similar results are obtained for
other values. The radial dependence of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ including the effects
of temperature structure is shown in Figure~\ref{fig:rdt} (dot-dashed).
There are only small differences between the isothermal and
non-isothermal cases. The X-ray surface brightness depends on the
square of the density but only roughly as $\sqrt{T}$ on the temperature
for bremsstrahlung emission.
Therefore, the temperature structure effects on $\mbox{$\delta_\mathrm{Sx}$}$ are much less
important than those of the density structure. Hereafter, we neglect
the effects of temperature structure and focus only on the effects of
density inhomogeneity.
\subsubsection{Finite Spatial Resolution}
\label{ss:resolution}
Actual observations by X-ray satellites have finite spatial
resolution, characterized by the point spread function (PSF). We
assume that the PSF is a circularly symmetric Gaussian with standard
deviation $\sigma$. The PSF can then be parameterized by a single
parameter called the {\it half power diameter} ($\theta_{\mathrm{HPD}}$) in which
50\% of the X-rays are enclosed ($\theta_{\mathrm{HPD}}/\sigma = 2 \sqrt{2 \log
2}$).
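The conversion between $\theta_{\mathrm{HPD}}$ and $\sigma$, and the smoothing itself, can be sketched with a periodic FFT convolution. This is a simplification of ours: real PSFs are neither Gaussian nor spatially uniform, and image boundaries are not periodic.

```python
import numpy as np

def hpd_to_sigma(theta_hpd):
    """Gaussian PSF standard deviation from the half power diameter."""
    return theta_hpd / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def apply_psf(image, theta_hpd, pixel_size):
    """Convolve an image with a circular Gaussian PSF (periodic boundaries)."""
    sigma_pix = hpd_to_sigma(theta_hpd) / pixel_size
    kx = 2.0 * np.pi * np.fft.fftfreq(image.shape[0])[:, None]
    ky = 2.0 * np.pi * np.fft.fftfreq(image.shape[1])[None, :]
    # Fourier transform of a unit-area Gaussian kernel
    kernel_ft = np.exp(-0.5 * sigma_pix**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(image) * kernel_ft).real
```

The kernel has unit response at $k=0$, so the mean surface brightness is preserved while small-scale fluctuations are suppressed.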
We investigate three cases, $\theta_{\mathrm{HPD}}/\theta_\mathrm{c}=0.1,0.2$ and $0.5$.
Figure~\ref{fig:psf} shows the effect of the PSF on $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ as a
function of radius. In each case, the average over 256 synthetic
clusters is shown. Results for no PSF correction ($\theta_{\mathrm{HPD}}=0$, solid)
and $\theta_{\mathrm{HPD}}/\theta_\mathrm{c}=0.1$ (dashed), 0.2 (dot-dashed), and 0.5 (dotted)
are shown. As $\theta_{\mathrm{HPD}}/\theta_\mathrm{c}$ increases, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ near the center
of the cluster decreases. This can be understood as follows. In each
radial shell, fluctuations smaller than roughly the radius of the shell
predominantly contribute to the fluctuations, namely $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}(R)$. The
PSF effectively smooths out the smaller scale fluctuations (roughly up
to the size of the PSF), reducing $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$, while preserving the large
scale fluctuations. Since the inner shells only contain small scale
fluctuations, they are more strongly affected by the PSF. The case of
$\theta_{\mathrm{HPD}}/\theta_\mathrm{c}=0.5$ best illustrates these effects. The reduction
of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ from the PSF is seen at all radii. However, it is only a
slight reduction at large radii, increasing as the radius decreases,
with a very large effect near the cluster center.
In summary, when $\mbox{$\delta_n$}$ in three dimensions follows the lognormal
distribution, $\mbox{$\delta_\mathrm{Sx}$}$ in two dimensions also approximately follows the
lognormal distribution. The mean value of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ for an individual
cluster is strongly dependent on both $\sigma_{\mathrm{LN}, \, n}$ and $\alpha_q$. Because
$\alpha_q$ is approximately equal to $\alpha_{\mathrm{Sx}}$, in principle, one can
infer $\sigma_{\mathrm{LN}, \, n}$ from $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ although there is still some dispersion even
if $\alpha_q$ is known. In addition, the effect of the temperature
structure is minimal.
\section{Application to Abell 3667}
\label{sec:obs}
Simulations suggest that the lognormal model (Eq.~[\ref{eq:pdf_delta}])
is a reasonable approximation of the small scale structure in galaxy
clusters. We compare this model with \chandra X-ray observations of the
nearby galaxy cluster Abell 3667 at a redshift $z=0.056$
\citep{struble1999}. A3667 is a well-observed, bright, nearby galaxy
cluster that does not exhibit a cool core in \chandra observations. With
its complex structure, including a cold front \citep{vikhlinin2001} and a
possible merger history \citep[e.g.,][]{knopp96}, A3667 serves as a
difficult test case for the lognormal model of density fluctuations.
\begin{figure}[!tbh]
\centerline{
\includegraphics[height=8.5cm]{f11a.ps}
\includegraphics[height=8.5cm]{f11b.ps}
}
\caption{\chandra image of the galaxy cluster Abell 3667 (left)
and the corresponding $\mbox{$\delta_\mathrm{Sx}$}$ image (right). The
counts image has been divided by the exposure map to yield X-ray
surface brightness (cnt s$^{-1}$ cm$^{-2}$ arcmin$^{-2}$), including
scaling for the pixel size. Point sources in the field have been masked.
}
\label{fig:a3667_image}
\end{figure}
\begin{deluxetable}{cccc}
\tablewidth{0pt}
\tablecolumns{4}
\tablecaption{A3667 \chandra Observations
\label{tab:a3667_data}}
\tablehead{
\colhead{} & \colhead{$t_{\mathrm{exp}}$} & \colhead{RA} & \colhead{DEC} \\
\colhead{obsID} & \colhead{(ks)} & \colhead{(h m s)} & \colhead{(d m s)}
}
\startdata
$\phn513$ & $\phn45$ & $20\ 12\ 50.30$ & $-56\ 50\ 56.99$\\
$\phn889$ & $\phn51$ & $20\ 11\ 50.00$ & $-56\ 45\ 34.00$\\
$5751$ & $131$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$\\
$5752$ & $\phn61$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$\\
$5753$ & $105$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$\\
$6292$ & $\phn47$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$\\
$6295$ & $\phn50$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$\\
$6296$ & $\phn50$ & $20\ 13\ 07.25$ & $-56\ 53\ 24.00$
\enddata
\end{deluxetable}
\subsection{Data Reduction}
\label{subsec:obs_data_reduce}
\chandra observations of the galaxy cluster A3667 are summarized in
Table~\ref{tab:a3667_data}. Listed are the observation identification
numbers, exposure times, and pointing centers of each of the eight
archival \chandra observations of A3667 used in this analysis. The data
are reduced with CIAO version 4.0 and calibration data base version
3.4.2. The data are processed starting with the level 1 events data,
removing cosmic ray afterglows, correcting for charge transfer
inefficiency and optical blocking filter contamination, and other
standard corrections, in addition to generating a customized bad pixel
file. The data are filtered for \asca grades 0, 2, 3, 4, 6 and status=0
events and the good time interval data provided with the observations
are applied. Periods of high background count rate are excised using an
iterative procedure involving creating light curves in background
regions with 500 s bins, and excising time intervals that are in excess
of 4 $\sigma$ from the median background count rate. This sigma
clipping procedure is iterated until all remaining data lie within 4
$\sigma$ of the median. The final events list is limited to energies
0.7-7.0 keV to exclude the low and high energy data that are more
strongly affected by calibration uncertainties. Finally, the images are
binned by a factor of eight, resulting in a pixel size of 3.94\arcsec.
This pixel size matches the resolution of the synthetic clusters
considered in \S\ref{sec:synthetic}. In particular, the ratio of pixel
size to the cluster core radius of the \chandra image is similar to the
synthetic cluster grid spacing compared to the synthetic cluster core
radius, namely, for $\theta_\mathrm{c} \sim 180\arcsec$
\citep{rb02,knopp96}, $\theta_{\mathrm{pix}} / \theta_\mathrm{c} \sim
d_{\mathrm{grid}} / r_{\mathrm{c}} \sim 0.02$. Exposure maps are constructed for each
observation at an energy of 1 keV. The binned images and exposure maps
for each observation are then combined to make the single image and
exposure map used for the analysis.
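The iterative $4\,\sigma$ clipping of the background light curves can be sketched as follows; the function name is ours, and deviations are measured symmetrically about the median of the remaining bins:

```python
import numpy as np

def clip_flares(rate, nsigma=4.0):
    """Boolean mask of good light-curve bins after iterative sigma clipping.

    Bins deviating by more than nsigma standard deviations from the median
    of the remaining bins are excised; the procedure is iterated until all
    remaining bins lie within nsigma of the median.
    """
    good = np.ones(rate.size, dtype=bool)
    while True:
        med = np.median(rate[good])
        sig = rate[good].std()
        keep = good & (np.abs(rate - med) <= nsigma * sig)
        if keep.sum() == good.sum():
            return good
        good = keep
```

Applied to a light curve in 500 s bins, the returned mask defines the good time intervals kept for imaging.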
A wavelet based source detector is used to find and generate a list of
potential point sources. The list is examined by eye, removing spurious
or suspect detections, and then used as the basis for our point source
mask. Figure~\ref{fig:a3667_image} (left) shows the \chandra merged
image of A3667, the counts image divided by the exposure map, where
the point source mask has been applied. Also shown is the $\mbox{$\delta_\mathrm{Sx}$}$
image (right), discussed below. A cold front \citep{vikhlinin2001} is
clearly visible in the south-eastern region of the $\mbox{$\delta_\mathrm{Sx}$}$ image.
\subsection{Analysis and Results}
\label{subsec:obs_analysis}
In order to determine the center of A3667, a $\beta$ model is fit to the
data with fixed core radius ($180\arcsec$) and $\beta$ (2/3), using
software originally developed for the combined analysis of X-ray and
Sunyaev-Zel'dovich effect observations
\citep{reese00,reese02,bonamente06}. Because A3667 is nearby and
appears very large, \chandra observations do not encompass the entire
cluster but provide a wealth of information on the complexities inherent
in galaxy cluster gas. By using a $\beta$ model fit to the diffuse
emission of the cluster gas we obtain a better measurement of its center
than simply using the brightest pixel or other simple estimates, which
fail to take into account the complex structure manifest in this
cluster. A circular region of radius $\sim 8\arcmin$ centered on A3667
is used in the analysis, corresponding to two and a half times the
cluster's core radius, the largest usable region given the arrangement of
the combined \chandra observations.
The average X-ray surface brightness is required to compute $\mbox{$\delta_\mathrm{Sx}$} =
S_{\mathrm{X}} / \overline{S}_{\mathrm{X}}$. If one computes the average surface brightness,
$\overline{S}_{\mathrm{X}}$, in annular shells, then one will tend to under (over)
estimate $\overline{S}_{\mathrm{X}}$ toward the inner (outer) radius of each annulus.
Therefore, this will lead to an over (under) estimate of $\mbox{$\delta_\mathrm{Sx}$}$
toward the inner (outer) radius of each annulus. To alleviate this
systematic, we adopt the azimuthally averaged X-ray surface brightness
as the model for $\overline{S}_{\mathrm{X}}$, and use cubic spline interpolation between
radial bins. The X-ray surface brightness radial profile for A3667 is
shown in Figure~\ref{fig:a3667_radprof}, along with the interpolated
model (line).
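The construction of the $\mbox{$\delta_\mathrm{Sx}$}$ map from the interpolated profile can be sketched as follows; the number of radial bins and the use of SciPy's cubic spline are our choices:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def delta_sx_map(sx, center, nbins=16):
    """delta_Sx = Sx / Sx_bar with a spline-interpolated azimuthal-average model."""
    yy, xx = np.indices(sx.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    edges = np.linspace(0.0, r.max(), nbins + 1)
    mid = 0.5 * (edges[1:] + edges[:-1])
    prof = np.full(nbins, np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (r >= lo) & (r < hi)
        if sel.any():
            prof[i] = sx[sel].mean()        # azimuthal average in the annulus
    ok = np.isfinite(prof)
    model = CubicSpline(mid[ok], prof[ok])  # interpolated mean profile Sx_bar(R)
    return sx / model(r)
```

Unlike linear interpolation, the cubic spline avoids kinks in $\overline{S}_{\mathrm{X}}(R)$ at the bin centers, which would otherwise imprint ring-like artifacts on the $\mbox{$\delta_\mathrm{Sx}$}$ map.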
\begin{figure}[!tbh]
\centerline{\includegraphics[height=6.3cm]{f12.ps}}
\caption{\chandra radial profile of the galaxy cluster Abell
3667 (points) with the interpolated model (solid line). This model is
used as the average X-ray surface brightness distribution in the
calculation of $\mbox{$\delta_\mathrm{Sx}$}$.}
\label{fig:a3667_radprof}
\end{figure}
\begin{figure}[!tbh]
\centerline{\includegraphics[height=6.3cm]{f13.ps}}
\caption{Probability distribution of $\mbox{$\delta_\mathrm{Sx}$}$ from \chandra
observations of the galaxy cluster Abell 3667 (blue histogram) along
with the best fit lognormal distribution (red line) with $\sigma_{\mathrm{LN}, \, \mathrm{Sx}} =
0.30$. The lognormal distribution seems to be a reasonable
description of the ICM inhomogeneity in A3667. Also shown are the
best-fit Gaussian model (dashed green) and a Poisson model
(dot-dashed magenta) using the average counts per pixel within the
fitting region.}
\label{fig:a3667_hist}
\end{figure}
\begin{figure}[!tbh]
\centerline{\includegraphics[height=6.3cm]{f14.ps}}
\caption{Power spectrum of $\mbox{$\delta_\mathrm{Sx}$}$ (thick solid) from \chandra
observations of the galaxy cluster Abell 3667, normalized to one at
the largest scale. Also plotted are three power-law power spectra
with spectral indices of $-2$ (dashed), $-3$ (dot-dashed), and $-4$
(dotted) for comparison.}
\label{fig:a3667_ps}
\end{figure}
The probability distribution of $\mbox{$\delta_\mathrm{Sx}$}$, $p(\mbox{$\delta_\mathrm{Sx}$})$, is computed from
the histogram of pixels calculated from the $\mbox{$\delta_\mathrm{Sx}$}$ image and shown in
Figure~\ref{fig:a3667_hist}. The lognormal distribution
(Eq.~[\ref{eq:pdf_delta}]) is fit to the $p(\mbox{$\delta_\mathrm{Sx}$})$ of A3667, where the
only free parameter is the standard deviation of the logarithm of
$\mbox{$\delta_\mathrm{Sx}$}$, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$. The best fit value for the lognormal model is
$\sigma_{\mathrm{LN}, \, \mathrm{Sx}} = 0.30$. In addition, a Gaussian distribution is also fit to
the data, with its usual two parameters, the mean and standard
deviation. Figure~\ref{fig:a3667_hist} shows the PDF of $\mbox{$\delta_\mathrm{Sx}$}$ for
the \chandra observations of the galaxy cluster A3667 (solid blue
histogram). The best fit lognormal (solid red) and Gaussian (dashed
green) models are also shown. A Poisson distribution (dot-dashed
magenta) is also shown for comparison, using the average counts per
pixel in the fitting region as the parameter for the Poisson
distribution. Clearly, what is seen is not the result of Poisson
statistics. The lognormal model seems to be a reasonable match to the
observed PDF.
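The comparison with the lognormal model can be sketched as follows. For simplicity we use the moment estimator $\mathrm{std}(\ln \mbox{$\delta_\mathrm{Sx}$})$ rather than the binned-histogram fit described above, and we assume the lognormal PDF is normalized to unit mean ($\mu = -\sigma^2/2$), which may differ in detail from the convention of Eq.~[\ref{eq:pdf_delta}]:

```python
import numpy as np

def lognormal_pdf(delta, sigma):
    """Lognormal PDF with unit mean: ln(delta) ~ N(-sigma^2/2, sigma^2)."""
    mu = -0.5 * sigma**2
    norm = delta * sigma * np.sqrt(2.0 * np.pi)
    return np.exp(-((np.log(delta) - mu) ** 2) / (2.0 * sigma**2)) / norm

def fit_sigma_ln(delta):
    """Moment estimate of sigma_LN from a sample of delta_Sx values."""
    return np.log(delta).std()
```

Applied to the pixel values of a $\mbox{$\delta_\mathrm{Sx}$}$ image, `fit_sigma_ln` returns the estimate of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$, and `lognormal_pdf` gives the model curve overplotted on the histogram.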
However, without information on the power spectrum of the $\mbox{$\delta_\mathrm{Sx}$}$
fluctuations, it is difficult to interpret the value of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$
(\S\ref{ss:sx_ps}) and relate it to the fluctuations in the density
distribution (Eqs.~[\ref{eq:fitsKa}, \ref{eq:Kalpha}];
Fig.~\ref{fig:mgsigout}). Therefore, we take the Fourier transform of
the $\mbox{$\delta_\mathrm{Sx}$}$ image and compute the average power spectrum in wavenumber
annuli. The power spectrum of $\mbox{$\delta_\mathrm{Sx}$}$ fluctuations is shown in
Figure~\ref{fig:a3667_ps} (thick solid) along with three power-law
spectra with spectral indices of $-2$ (dashed), $-3$ (dot-dashed), and $-4$
(dotted) for comparison. The power spectrum of $\mbox{$\delta_\mathrm{Sx}$}$ has been
normalized to one at the largest scales. A simple power-law model fit
to the power spectrum yields a spectral index of $\alpha_{\mathrm{Sx}} = -2.7$
using the entire spectrum, and a spectral index of $\alpha_{\mathrm{Sx}} = -3.0$
if excluding the larger wavenumbers ($\gtrsim 2$ arcmin$^{-1}$),
roughly where the power spectrum changes shape.
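The spectral measurement itself (FFT of the $\mbox{$\delta_\mathrm{Sx}$}$ image, averaging in wavenumber annuli, and a log-log power-law fit) can be sketched as follows; the binning choices are ours and the image is assumed square:

```python
import numpy as np

def ps_powerlaw_index(delta_sx, nbins=20):
    """Azimuthally averaged power spectrum of a (square) delta_Sx image
    and a simple power-law index from a log-log fit."""
    ft = np.fft.fft2(delta_sx - delta_sx.mean())
    power = np.abs(ft) ** 2
    k = np.fft.fftfreq(delta_sx.shape[0])
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kk = np.hypot(kx, ky)
    edges = np.logspace(np.log10(k[1]), np.log10(0.5), nbins + 1)
    mids, pk = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (kk >= lo) & (kk < hi)
        if sel.any():                       # skip annuli with no discrete modes
            mids.append(np.sqrt(lo * hi))   # geometric bin center
            pk.append(power[sel].mean())
    slope = np.polyfit(np.log(mids), np.log(pk), 1)[0]
    return np.array(mids), np.array(pk), slope
```

Restricting the fit to the wavenumbers below a chosen break (as done above for A3667) is a matter of slicing `mids` and `pk` before the fit.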
\subsection{Implications}
\label{subsec:obs_disc}
\begin{figure}[!tbh]
\centerline{\includegraphics[width=95mm]{f15.ps}}
\caption{An example of a $\mbox{$\delta_\mathrm{Sx}$}$ map from a cosmological hydrodynamic
simulated cluster (``Centaurus'') both before (left) and after
(right) removal of a quadrant with a large clump. Circles show the
projected virial radius ($R_\mathrm{200}$). Although within the
projected virial radius, $R_\mathrm{200}$, these structures often
reside outside of the three-dimensional virial radius,
$r_\mathrm{200}$.}
\label{fig:clump}
\end{figure}
Both the standard deviation of the logarithm of X-ray surface
brightness fluctuations, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}} = 0.30$, and the power spectrum
power-law index $\alpha_{\mathrm{Sx}} \approx -3$, fall into the range expected
from hydrodynamic simulations of galaxy clusters and therefore used in
the synthetic cluster analysis (\S\ref{subsec:hydro_sim}). By combining
these pieces of information, we can relate the information obtained
from the X-ray surface brightness distribution to that of the
underlying density distribution, using the results of the synthesized
cluster analysis. Using the synthetic cluster result that the
spectral indices of the X-ray surface brightness fluctuations and that
of the Gaussian field are simply related by $\alpha_{\mathrm{Sx}} \approx \alpha_q
+0.2$, and the relation between $\sigma_{\mathrm{LN}, \, n}$, $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$, and $\alpha_q$
(Eqs.~[\ref{eq:fitsKa}, \ref{eq:Kalpha}]; Fig.~\ref{fig:mgsigout}),
the \chandra results of $\sigma_{\mathrm{LN}, \, \mathrm{Sx}} = 0.30$ and $\alpha_{\mathrm{Sx}} = -2.7$ imply
that the fluctuations in the underlying density distribution have
$\sigma_{\mathrm{LN}, \, n} = 0.43$. A value of $\alpha_{\mathrm{Sx}} = -3.0$ implies $\sigma_{\mathrm{LN}, \, n} = 0.36$.
The difficult test case of the A3667 X-ray surface brightness seems to
follow the lognormal distribution of density fluctuations, thus
enabling an estimate of the statistical properties of the underlying
ICM density fluctuations.
\section{Application to the Cosmological Hydrodynamic Simulated Clusters}
\label{sec:con}
Results from cosmological hydrodynamic simulations motivated the
lognormal model for ICM inhomogeneity. In \S \ref{sec:synthetic}, we
found that synthetic clusters with lognormal fluctuations show a linear
relation between $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and $\sigma_{\mathrm{LN}, \, n}$. We now return to clusters
extracted from cosmological hydrodynamic simulations to further explore
these results.
\begin{figure}[!tbh]
\centerline{\includegraphics[width=120mm]{f16.ps}}
\caption{The distribution of $\mbox{$\delta_\mathrm{Sx}$}$ for each of the six clusters from
a cosmological hydrodynamic simulation (points and solid
histogram). Each color indicates the projection along a different,
orthogonal line of sight. For each line of sight, we show the
number of quadrants used for the analysis. For example, ``3/4''
indicates that one quadrant is excluded and three remain. The best
fit lognormal model for each projection is also shown (dotted
lines).
\label{fig:chs}
}
\end{figure}
For each cluster extracted from the simulations, we create X-ray
surface brightness maps towards three orthogonal directions, and
compute $\mbox{$\delta_\mathrm{Sx}$}({\bf R}) = S_{\mathrm X}({\bf R}) / \overline{S}_{\mathrm{X}}(R)$ in a similar
manner as described for the synthetic clusters in \S~\ref{ss:em}. The
regions we consider are within the projected virial radius,
$R_{\mathrm{200}}$, the projected radius within which the mean interior
density is 200 times the critical density.
\begin{figure}[!tbh]
\centerline{\includegraphics[width=95mm]{f17.ps}}
\caption{ The density fluctuation standard deviation predicted by our
model, $\sigma_{\mathrm{LN}, \, n}(\mbox{model}) = \sigma_{\mathrm{LN}, \, \mathrm{Sx}} / Q (\alpha_q)$ versus that from
the simulations, $\sigma_{\mathrm{LN}, \, n}(\mbox{sim})$. Symbols show different
simulated clusters (see figure legend) and colors indicate different
orthogonal lines of sight. Also plotted is the simple linear
relation $\sigma_{\mathrm{LN}, \, n} \mbox{(model)} = \sigma_{\mathrm{LN}, \, n} \mbox{(sim)}$ for comparison.
}
\label{fig:chsss}
\end{figure}
Although the lognormal distribution is a good fit to the density (and
temperature) of simulated galaxy clusters in three dimensions, the
projection to X-ray surface brightness suffers from the additional
complexity of projection effects. If large clumps are present, the
distribution of X-ray surface brightness fluctuations, $\mbox{$\delta_\mathrm{Sx}$}$, is not
well approximated by the lognormal distribution. The large clumps
artificially distort the average profile of the cluster and therefore
bias the value of $\mbox{$\delta_\mathrm{Sx}$}$, which depends on the average profile. We
also note that although these clumps fall within the projected virial
radius, $R_{\mathrm{200}}$, they usually fall outside of the three
dimensional virial radius, $r_{\mathrm{200}}$. We therefore exclude
quadrants that contain large clumps, using $\mbox{$\delta_\mathrm{Sx}$} >10$ as the
exclusion criterion. Then, we recompute $\overline{S}_{\mathrm{X}} (R)$ and $\mbox{$\delta_\mathrm{Sx}$}$. The
complex structure of simulated clusters is illustrated in the $\mbox{$\delta_\mathrm{Sx}$}$
images shown in Figure~\ref{fig:clump}, where examples of a simulated
cluster both before and after removal of a quadrant are displayed.
The circles show the projected virial radius, $R_\mathrm{200}$.
In Figure~\ref{fig:chs} the probability distributions of $\mbox{$\delta_\mathrm{Sx}$}$ for
the simulated clusters (histograms) along with the best-fit lognormal
model (dotted lines) are displayed. Each color indicates the
projection along a different, orthogonal line of sight. Overall, the
probability distributions of $\mbox{$\delta_\mathrm{Sx}$}$ are reasonably well approximated
by the lognormal function, consistent with the results from the
synthetic clusters (\S~\ref{ss:synthetic_clusters}).
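Given the lognormal form, the width parameter can be estimated directly from samples of $\mbox{$\delta_\mathrm{Sx}$}$. A minimal moment-based sketch is below; the paper fits the full lognormal PDF, so this estimator is a simplification.

```python
import numpy as np

def sigma_ln(delta):
    """Moment-based estimate of the lognormal width sigma_LN:
    the standard deviation of ln(delta) over positive fluctuation
    values. (A simpler stand-in for fitting the lognormal PDF.)
    """
    d = np.asarray(delta, dtype=float)
    return np.log(d[d > 0]).std()
```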
We now come full circle to compare our results from the synthetic
clusters directly to the simulations. In order to do this, we look at
the relationship between $\sigma_{\mathrm{LN}, \, n}\mbox{(sim)}$ measured in the
simulated clusters and $\sigma_{\mathrm{LN}, \, n}\mbox{(model)}$ predicted from the
synthetic cluster results, equations~(\ref{eq:fitsKa}) and
(\ref{eq:Kalpha}), where we adopt $\alpha_q = \alpha_{\mathrm{Sx}} - 0.2$ (see \S
\ref{ss:sx_ps}). The value of $\alpha_{\mathrm{Sx}}$ for each simulated cluster is
obtained by fitting a power-law model, $P(K) \propto K^{\alpha_{\mathrm{Sx}}}$, to
the power spectra of $\mbox{$\delta_\mathrm{Sx}$}$. Because the resolution of the simulations
is much poorer than that of the synthetic clusters, we must recompute
the coefficients $c_1$ and $c_2$ in equation~(\ref{eq:Kalpha}) from a set
of lower resolution synthetic clusters. Assuming $r_{\mathrm{c}} \sim 100$ $h^{-1}$
kpc for the simulated clusters, we choose the resolution $\sim 0.1
d_{\mathrm{grid}}/r_{\mathrm{c}}$, noting that this value corresponds to the {\it maximum}
resolution of the simulations. Performing the same procedure described
in \S\ref{sec:synthetic}, we obtain $c_1 = 3.99 \times 10^{-2}$ and
$c_2= 3.36 \times 10^{-2}$.
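The slope $\alpha_{\mathrm{Sx}}$ above is obtained by fitting $P(K) \propto K^{\alpha_{\mathrm{Sx}}}$ to the measured power spectra. A minimal log-log least-squares sketch follows; the authors' exact fitting range and weighting are not specified here.

```python
import numpy as np

def fit_powerlaw_slope(k, p):
    """Least-squares slope of log P versus log K, i.e. alpha in
    P(K) ~ K^alpha.  A plain log-log fit over all supplied modes
    (range restrictions and error weighting are omitted)."""
    logk, logp = np.log(k), np.log(p)
    slope, _ = np.polyfit(logk, logp, 1)
    return slope
```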
We compare $\sigma_{\mathrm{LN}, \, n} \mbox{(model)}$ and $\sigma_{\mathrm{LN}, \, n} \mbox{(sim)}$ in
Figure~\ref{fig:chsss}. Each color corresponds to a different line of
sight. Although there is large scatter, these results indicate that
it is possible to estimate $\sigma_{\mathrm{LN}, \, n}$ within a factor of two only using
the information obtained from the X-ray surface brightness
distribution.
\section{Summary}
\label{sec:sum}
We have developed a method of extracting statistical information on the
ICM inhomogeneity from X-ray observations of galaxy clusters. With a
lognormal model for the fluctuations motivated by cosmological
hydrodynamic simulations, we have created synthetic clusters, and have
found that their X-ray surface brightness fluctuations retain the
lognormal nature. In addition, the result that $\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and $\sigma_{\mathrm{LN}, \, n}$ are
linearly related implies that one can, in principle, estimate the
statistical properties of the three dimensional density inhomogeneity
($\sigma_{\mathrm{LN}, \, n}$) from X-ray observations of galaxy clusters ($\sigma_{\mathrm{LN}, \, \mathrm{Sx}}$ and
$\alpha_{\mathrm{Sx}}$).
We have compared the predictions of our model to \chandra X-ray
observations of the galaxy cluster A3667. For the first time, we were
able to detect in a real galaxy cluster the lognormal signature of
X-ray surface brightness fluctuations originally motivated by
simulations. Based on the synthetic cluster results, this enabled
an estimate of the statistical properties of the inhomogeneity of the
ICM of A3667. In the context of lognormally distributed
inhomogeneity, we obtain $\sigma_{\mathrm{LN}, \, n} \approx 0.4$ for the gas density
fluctuations of A3667. It is encouraging that the value of the
fluctuation amplitude for Abell 3667 is in reasonable agreement with
typical values from the simulated clusters.
Finally, we check the validity and limitations of our method using
several clusters from cosmological hydrodynamic simulations. Unlike
the fairly idealized synthetic clusters, simulated clusters exhibit
complex structure more akin to real galaxy clusters. As a result, the
empirical relation between the two- and three-dimensional fluctuation
properties calibrated with synthetic clusters when applied to
simulated clusters shows large scatter. Nevertheless we are able to
reproduce the true value of the fluctuation amplitude of simulated
clusters within a factor of two from their two-dimensional X-ray
surface brightness alone.
Our current methodology combined with existing observational data is
useful in describing and inferring the statistical properties of the
three dimensional inhomogeneity in galaxy clusters. The fluctuations
in the ICM have several implications for properly interpreting galaxy
cluster data. In particular, our current model may be useful in
interpreting data from current and future galaxy cluster surveys using
the Sunyaev-Zel'dovich effect, which have the potential to provide
tight constraints on cosmology.
\acknowledgments
We thank Naomi Ota, Noriko Y. Yamasaki, and Kazuhisa Mitsuda for
useful discussions and Klaus Dolag for providing a set of simulated
clusters. HK is supported by a JSPS (Japan Society for Promotion of
Science) Grant-in-Aid for science fellows. EDR gratefully acknowledges
support from a JSPS Postdoctoral Fellowship for Foreign Researchers
(P07030). This work is also supported by Grant-in-Aid for Scientific
research from JSPS and from the Japanese Ministry of Education,
Culture, Sports, Science and Technology (Nos. 20$\cdot$10466,
19$\cdot$07030, 16340053, 1874012, 20340041, and 20540235), and by the
JSPS Core-to-Core Program ``International Research Network for Dark
Energy''.
The City of DeBary contracts with the Volusia County Sheriff's Office (VCSO) for law enforcement services.
The cost of providing law enforcement services through the VCSO for fiscal year 2017-2018 will be $3,461,258, which is a 5.3% increase over this fiscal year's cost of $3,286,645.
Approve the Interlocal Agreement for the Provision of Law Enforcement Services with the Volusia County Sheriff's Office for FY 2017-2018 in the amount of $3,461,258.
\section{Conclusion}
We presented the \textsc{AutoSeM}{} framework, a two-stage multi-task learning pipeline, where the first stage automatically selects the relevant auxiliary tasks for the given primary task and the second stage automatically learns their optimal mixing ratio. We showed that \textsc{AutoSeM}{} performs better than strong baselines on several GLUE tasks. Further, we ablated the importance of each stage of our \textsc{AutoSeM}{} framework and also discussed the intuition of selected auxiliary tasks.
\section{Experiment Setup}
\label{sec:setup}
\noindent\textbf{Datasets}:
We evaluate our models on several datasets from the GLUE benchmark~\cite{wang2018glue}: RTE, QNLI, MRPC, SST-2, and CoLA. For all these datasets, we use the standard splits provided by~\newcite{wang2018glue}. For dataset details, we refer the reader to the GLUE paper.\footnote{We did not include the remaining tasks as primary tasks, because STS-B is a regression task; MNLI is a very large dataset and does not benefit much from MTL with other tasks in the GLUE benchmark; and QQP and WNLI have dev/test discrepancies and adversarial label issues as per the GLUE website's FAQ: \url{https://gluebenchmark.com/faq}}
\noindent\textbf{Training Details}:
We use pre-trained ELMo\footnote{\url{https://allennlp.org/elmo}} to obtain sentence representations as inputs to our model~\cite{peters2018Deep}, and the Gaussian Process implementation is based on Scikit-Optimize\footnote{\url{https://scikit-optimize.github.io}}, and we adopt most of the default configurations. We use accuracy as the validation criterion for all tasks. For all of our experiments except QNLI and SST-2, we apply early stopping on the validation performance plateau.\footnote{In our initial experiments, we found early stopping on larger datasets led to sub-optimal performance, and hence we used a pre-specified maximum number of steps instead.} The set of candidate auxiliary tasks consists of all 2-sentence classification tasks when the primary task is a classification of two sentences, whereas it consists of all two-sentence and single-sentence classification tasks when the primary task is a classification of a single sentence.\footnote{We made this design decision because there are only two single-sentence tasks in GLUE, so we mix them with 2-sentence tasks to allow more auxiliary choices.} Since the utility estimates from the multi-armed bandit controller are noisy, we choose the top two tasks based on expected task utility estimates, and include additional tasks if their utility estimate is above 0.5. All the results reported are the aggregate of the same experiment with two runs (with different random seeds) unless explicitly mentioned.\footnote{We use the average of validation results across runs as the tuning criterion, and use the ensemble of models across runs for reporting the test results.} We use a two-layer LSTM-RNN with hidden size of 1024 for RTE and 512 for the rest of the models, and use Adam Optimizer~\cite{Kingma2014AdamAM}. The prior parameters of each task in stage-1 are set to be $\alpha_0=1$, $\beta_0=1$, which are commonly used in other literature. 
For stage-1, the bandit controller iteratively selects batches of data from different tasks during training to learn the approximate importance of each auxiliary task~\cite{graves2017automated}. In stage-2 (Gaussian Process), we sequentially draw samples of mixing ratios and evaluate each sample after full training~\cite{snoek2012practical}. Without much tuning, we used approximately 200 rounds for the stage-1 bandit-based approach, where each round consists of approximately 10 mini-batches of optimization. For stage-2, we experimented with 15 and 20 as the number of samples to draw and found that 15 samples for MRPC and 20 samples for the rest of the tasks work well. This brings the total computational cost for our two-stage pipeline to approximately (15+1)x and (20+1)x, where x represents the time taken to run the baseline model for the given task. This is significantly more efficient than a grid-search based manually-tuned mixing ratio setup (which would scale exponentially with the number of tasks).
\section{Introduction}
Multi-task Learning (MTL)~\cite{caruana1997multitask} is an inductive transfer mechanism which leverages information from related tasks to improve the primary model's generalization performance. It achieves this goal by training multiple tasks in parallel while sharing representations, where the training signals from the auxiliary tasks can help improve the performance of the primary task. Multi-task learning has been applied to a wide range of natural language processing problems~\cite{luong2015multi,pasunuru2017multitask,hashimoto2017ajm,ruder2017sluice,kaiser2017one,mccann2018natural}.
Despite its impressive performance, the design of a multi-task learning system is non-trivial.
In the context of improving the primary task's performance using knowledge from other auxiliary tasks~\cite{luong2015multi,pasunuru2017multitask}, two major challenges include selecting the most relevant auxiliary tasks and also learning the balanced mixing ratio for synergized training of these tasks.
One can achieve this via manual intuition or hyper-parameter tuning over all combinatorial task choices, but this introduces human inductive bias or is not scalable when the number of candidate auxiliary tasks is considerable. To this end, we present \textsc{AutoSeM}{}, a two-stage Bayesian optimization pipeline to this problem.
In our \textsc{AutoSeM}{} framework\footnote{We make all our code and models publicly available at: \url{https://github.com/HanGuo97/AutoSeM}}, the first stage addresses automatic task selection from a pool of auxiliary tasks. For this, we use a non-stationary multi-armed bandit controller (MAB)~\cite{bubeck2012regret,raj2017taming} that dynamically alternates among task choices within the training loop, and eventually returns estimates of the utility of each task w.r.t. the primary task. We model the utility of each task as a Beta distribution, whose expected value can be interpreted as the probability of each task making a non-negative contribution to the training performance of the primary task. Further, we model the observations as Bernoulli variables so that the posterior distribution is also Beta-distributed. We use Thompson sampling~\cite{chapelle2011empirical,russo2018tutorial} to trade off exploitation and exploration.
The second stage then takes the auxiliary tasks selected in the first stage and automatically learns the training mixing ratio of these tasks, through the framework of Bayesian optimization, by modeling the performance of each mixing ratio as a sample from a Gaussian Process (GP) to sequentially search for the optimal values~\cite{rasmussen2004gaussian,snoek2012practical}.
For the covariance function in the GP, we use the Matern kernel which is parameterized by a smoothness hyperparameter so as to control the level of differentiability of the samples from GP.
Further, following~\citet{hoffman2011portfolio}, we use a portfolio of optimistic and improvement-based policies as acquisition functions~\cite{shahriari2016taking} for selecting the next sample point from the GP search space.
We conduct several experiments on the GLUE natural language understanding benchmark~\cite{wang2018glue}, where we choose each of RTE, MRPC, QNLI, CoLA, and SST-2 as the primary task, and treat the rest of the classification tasks from the GLUE benchmark as candidate auxiliary tasks. Results show that our \textsc{AutoSeM}{} framework can successfully find useful auxiliary tasks and automatically learn their mixing ratio, achieving significant performance boosts on top of strong baselines for several primary tasks, e.g., 5.2\% improvement on QNLI, 4.7\% improvement on RTE, and 2.8\%/0.8\% improvement on MRPC.
We also ablate the usefulness of our two stages of auxiliary task selection and automatic mixing ratio learning. The first ablation removes the task selection stage and instead directly performs the second GP mixing ratio learning stage on all auxiliary tasks. The second ablation performs the task selection stage (with multi-armed bandit) but replaces the second stage Gaussian Process with manual tuning on the selected tasks. Our 2-stage model performs better than both these ablations, showing that both of our stages are crucial. Further, we also discuss the learned auxiliary task choices in terms of their intuitive relevance w.r.t. the corresponding primary task.
\section*{Acknowledgments}
We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), ONR (N00014-18-1-2871), Google, Facebook, Baidu, Salesforce, and Nvidia.
The views contained in this article are those of the authors and not of the funding agency.
\section{Models}
We will first introduce our baseline model and its integration for multiple classification tasks in a multi-task learning (MTL) setup. Next, we will introduce our \textsc{AutoSeM}{} framework, an automatic way of selecting auxiliary tasks and learning their optimal training mixing ratio w.r.t. the primary task, via a Beta-Bernoulli bandit with Thompson Sampling and a Gaussian Process framework.
\subsection{Bi-Text Classification Model}
\label{subsec:baseline}
Let ${\bm{s}}_1$ and ${\bm{s}}_2$ be the input sentence pair in our classification task, where we encode these sentences via bidirectional LSTM-RNN, similar to that of~\newcite{conneau2017supervised}. Next, we do max-pooling on the output hidden states of both encoders, where ${\bm{u}}$ and ${\bm{v}}$ are the outputs from the max-pooling layer for ${\bm{s}}_1$ and ${\bm{s}}_2$ respectively. Later, we map these two representations (${\bm{u}}$ and ${\bm{v}}$) into a single rich dense representation vector ${\bm{h}}$:
\begin{equation}
{\bm{h}} = [{\bm{u}} ; {\bm{v}}; {\bm{u}} \star {\bm{v}}; |{\bm{u}}-{\bm{v}}|]
\end{equation}
where $[;]$ represents the concatenation and ${\bm{u}}\star {\bm{v}}$ represents the element-wise multiplication of ${\bm{u}}$ and ${\bm{v}}$. We project this final representation ${\bm{h}}$ to label space to classify the given sentence pair (see Fig.~\ref{fig:baseline}). We also use ELMo~\cite{peters2018Deep} representations for word embeddings in our model. For this, we extract the three ELMo layer representations for each of the sentence pair and use their weighted sum as the ELMo output representation, where the weights are trainable.
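The feature combination itself can be sketched directly. A minimal NumPy version of building ${\bm{h}}$ from the two pooled sentence vectors is below; shapes and framework details are simplified relative to the actual ELMo/LSTM model.

```python
import numpy as np

def combine(u, v):
    """Joint representation h = [u; v; u * v; |u - v|] used by the
    bi-text classifier (concatenation, element-wise product, and
    absolute difference of the two pooled sentence vectors)."""
    return np.concatenate([u, v, u * v, np.abs(u - v)])
```

For $d$-dimensional inputs, ${\bm{h}}$ has dimension $4d$, which is then fed to the task-specific projection layer.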
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figs/baseline.pdf}
\caption{Overview of our baseline model where we use different projection layers for each task during MTL, while sharing the rest of the model parameters.
\label{fig:baseline}
\vspace{-10pt}
}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figs/tsgp-2.pdf}
\vspace{-5pt}
\caption{Overview of our \textsc{AutoSeM}{} framework. \textbf{Left}: the multi-armed bandit controller used for task selection, where each arm represents a candidate auxiliary task. The agent iteratively pulls an arm, observes a reward, updates its estimates of the arm parameters, and samples the next arm. \textbf{Right}: the Gaussian Process controller used for automatic mixing ratio (MR) learning. The GP controller sequentially makes a choice of mixing ratio, observes a reward, updates its estimates, and selects the next mixing ratio to try, based on the full history of past observations.
\label{fig:mtl}
}
\vspace{-15pt}
\end{figure*}
\subsection{Multi-Task Learning}
\label{subsec:mtl-intro}
In this work, we focus on improving a task (primary task) by allowing it to share parameters with related auxiliary tasks via multi-task learning (MTL). Let $\{D_1, ..., D_N\}$ be a set of $N$ tasks, where we set $D_1$ to be the primary task and the rest of them as auxiliary tasks. We can extend our single-task learning baseline (see Sec.~\ref{subsec:baseline}) into multi-task learning model by augmenting the model with $N$ projection layers while sharing the rest of the model parameters across these $N$ tasks (see Fig.~\ref{fig:baseline}). We employ MTL training of these tasks in alternate mini-batches based on a mixing ratio $\eta_1{:}\eta_2{:}..\eta_N$, similar to previous work~\cite{luong2015multi}, where we optimize $\eta_i$ mini-batches of task $i$ and go to the next task.
In MTL, choosing the appropriate auxiliary tasks and properly tuning the mixing ratio can be important for the performance of multi-task models.
The naive way of trying all combinations of task selections is hardly tractable.
To solve this issue, we propose \textsc{AutoSeM}{}, a two-stage pipeline in the next section. In the first stage, we automatically find the relevant auxiliary tasks (out of the given $N-1$ options) which improve the performance of the primary task. After finding the relevant auxiliary tasks, in the second stage, we take these selected tasks along with the primary task and automatically learn their training mixing ratio.
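The alternating mini-batch scheme driven by a mixing ratio can be sketched as a simple scheduler; this is a toy illustration of the sampling order, not the authors' training loop.

```python
def task_schedule(mixing_ratio, n_rounds):
    """Yield task indices for alternating mini-batch MTL training:
    eta_i consecutive mini-batches of task i, then the next task.

    `mixing_ratio` is a list of integers eta_1, ..., eta_N; one
    "round" runs every task once for its eta_i batches (a sketch
    of the alternating scheme described above)."""
    for _ in range(n_rounds):
        for task, eta in enumerate(mixing_ratio):
            for _ in range(eta):
                yield task
```

A ratio of 2:1 over two tasks, for example, trains two batches of task 0 for every batch of task 1.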
\subsection{Automatic Task Selection: Multi-Armed Bandit with Thompson Sampling}
\label{subsec:task-selection}
Tuning the mixing ratio for $N$ tasks in MTL becomes exponentially harder as the number of auxiliary tasks grows very large. However, in most circumstances, only a small number of these auxiliary tasks are useful for improving the primary task at hand. Manually searching for this optimal choice of relevant tasks is intractable. Hence, in this work, we present a method for automatic task selection via multi-armed bandits with Thompson Sampling (see the left side of Fig.~\ref{fig:mtl}).
Let $\{a_1,...,a_N\}$ represent the set of $N$ arms (corresponding to the set of tasks $\{D_1,..., D_N\}$) of the bandit controller in our multi-task setting, where the controller selects a sequence of actions/arms over the current training trajectory to maximize the expected future payoff. At each round $t_b$, the controller selects an arm based on the noisy value estimates and observes rewards $r_{t_b}$ for the selected arm.
Let $\theta_k \in [0, 1]$ be the utility (usefulness) of task $k$. Initially, the agent begins with an independent prior belief over $\theta_k$. We take these priors to be Beta-distributed with parameters $\alpha_k$ and $\beta_k$, and the prior probability density function of $\theta_k$ is:
\begin{equation}
\label{eq:beta}
p(\theta_k) = \frac{\Gamma(\alpha_k+\beta_k)}{\Gamma(\alpha_k)\Gamma(\beta_k)} \theta_k^{\alpha_k-1} (1-\theta_k)^{\beta_k-1}
\end{equation}
where $\Gamma$ denotes the gamma function. We formulate the reward $r_{t_b} \in \{0,1\}$ at round $t_b$ as a Bernoulli variable, where an action $k$ produces a reward of $1$ with a chance of $\theta_k$ and a reward of $0$ with a chance of $1-\theta_k$. The true utility of task $k$, i.e., $\theta_k$, is unknown, and may or may not change over time (depending on whether the task utility is stationary or non-stationary). We define the reward as whether sampling the task $k$ improves (or maintains) the validation metric of the primary task,
\begin{equation}
\label{eq:initial-binary-reward}
r_{t_b}=
\begin{cases}
1, & \text{if}\ R_{t_b} \geq R_{t_b - 1} \\
0, & \text{otherwise}
\end{cases}
\end{equation}
where $R_{t_b}$ represents the validation performance of the primary task at time $t_b$. With our reward setup above, the utility of each task ($\theta_k$) can be intuitively interpreted as the probability that multi-task learning with task $k$ can improve (or maintain) the performance of the primary task. The conjugacy properties of the Beta distribution assert that the posterior distribution is also Beta with parameters that can be updated using a simple Bayes rule, which is defined as follows~\cite{russo2018tutorial},
\begin{equation}
\begin{split}
p(\theta_k \lvert r)
& \propto \text{Bern}_{\theta}(r) \text{Beta}_{\alpha, \beta}(\theta_k) \\
& \propto \text{Beta}_{\alpha + r, \beta + 1 - r}(\theta_k)
\end{split}
\end{equation}
\vspace{-10pt}
\begin{equation}
\label{eq:stationary-posterior-update}
(\alpha_k, \beta_k) =
\begin{cases}
(\alpha_k, \beta_k), & \hspace{-6pt} \text{if}\ x^s_{t_b} \neq k \\
(\alpha_k, \beta_k) \mbox{+} (r_{t_b}, 1 - r_{t_b}), & \hspace{-6pt} \text{if}\ x^s_{t_b} = k
\end{cases}
\end{equation}
where $x^s_{t_b}$ is the sampled task at round $t_{b}$. Finally, at the end of the training, we calculate the expected value of each arm as follows:
\begin{equation}
\label{eq:utility}
\mathbb{E}_p [\theta_k] = \frac{\alpha_k}{\alpha_k + \beta_k}
\end{equation}
Here, the expectation measures the probability of improving (or maintaining) the primary task by sampling this task. To decide the next action to take, we apply Thompson Sampling~\citep{russo2018tutorial,chapelle2011empirical} to trade off exploitation (maximizing immediate performance) and exploration (investing to accumulate new information that might improve performance in the future). In Thompson Sampling~\cite{russo2018tutorial}, instead of taking action $k$ that maximizes the expectation (i.e., $\arg\max_k \mathbb{E}_p [\theta_k]$), we randomly sample the primary task improvement probability $\hat{\theta}_k$ from the posterior distribution $\hat{\theta}_k \sim p(\theta_k)$, and take the action $k$ that maximizes the sampled primary task improvement probability, i.e., $\arg\max_k \hat{\theta}_k$.
At the end of the training, the task selection can proceed either via a threshold on the expectation, or take the top-$K$ tasks, and run stage-2 using the selected task subset as auxiliary tasks (details in Sec.~\ref{subsec:gp}).
\paragraph{Stronger Prior for Primary Task} Note that at the beginning of training, model performance is usually guaranteed to improve from the initial random choices. This causes issues in updating arm values because less useful tasks will be given high arm values when they happen to be sampled at the beginning. To resolve this issue, we initially set a slightly stronger prior/arm-value in favor of the arm corresponding to the primary task. Intuitively, the bandit will then sample the primary model more often at the beginning, and then start exploring auxiliary tasks when the primary model's performance stabilizes (as the arm value of the primary model will start decreasing because sampling it in later rounds produces smaller additional improvements).
\paragraph{Non-Stationary Multi-Armed Bandit} Also note that the intrinsic usefulness of each task varies throughout the training (e.g., the primary task might be more important at the beginning, but not necessarily at the end), and thus the agent faces a non-stationary system. In such cases, the agent should always be encouraged to explore in order to track changes as the system drifts. One simple approach to inject non-stationarity is to discount the relevance of previous observations. Thus we introduce a tunable decay ratio $\gamma$, and modify Eq.~\ref{eq:stationary-posterior-update} as follows:
\begin{equation}
(\alpha_k, \beta_k) =
\begin{cases}
(\hat{\alpha}_k, \hat{\beta}_k), & \hspace{-6pt} \text{if}\ k \neq x^s_{t_b} \\
(\hat{\alpha}_k, \hat{\beta}_k) \mbox{+} (r_{t_b}, 1 - r_{t_b}), & \hspace{-6pt} \text{if}\ k = x^s_{t_b}
\end{cases}
\end{equation}
where $\hat{\alpha}_k = (1-\gamma) \alpha_k + \gamma \alpha_0$ and $\hat{\beta}_k = (1-\gamma) \beta_k + \gamma \beta_0$, and $\gamma$ controls how quickly uncertainty is injected into the system ($\alpha_0, \beta_0$ are parameters of the prior). Algorithm~\ref{alg:BernoulliTS} presents the Thompson Sampling algorithm with a Beta-Bernoulli MAB.
\begin{algorithm}[t]
\begin{small}
\caption{$\text{BernThompson}(N, \alpha, \beta, \gamma, \alpha_0, \beta_0)$}\label{alg:BernoulliTS}
\begin{algorithmic}[1]
\For{$t_b=1,2,\ldots $}
\State \# sample model:
\For{$k=1, \ldots, N$}
\State Sample $\hat{\theta}_k \sim \text{Beta}(\alpha_k, \beta_k)$
\EndFor
\State \# select and apply action:
\State $x^s_{t_b} \leftarrow \arg\max_k \hat{\theta}_k$
\State Apply $x^s_{t_b}$ and observe $r_{t_b}$
\State \# non-stationarity
\For{$k=1, \ldots, N$}
\State $\hat{\alpha}_k = (1-\gamma) \alpha_k + \gamma \alpha_0$
\State $\hat{\beta}_k = (1-\gamma) \beta_k + \gamma \beta_0$
\If {$k \neq x^s_{t_b}$}
\State $(\alpha_{k}, \beta_{k}) \leftarrow (\hat{\alpha}_k, \hat{\beta}_k)$
\Else
\State $(\alpha_{k}, \beta_{k}) \leftarrow (\hat{\alpha}_k, \hat{\beta}_k) \mbox{+} (r_{t_b}, 1-r_{t_b})$
\EndIf
\EndFor
\EndFor
\end{algorithmic}
\end{small}
\end{algorithm}
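The Thompson Sampling procedure above can be sketched in plain Python, with one Beta-Bernoulli arm per task and the decay-to-prior update for non-stationarity. The reward callback and hyperparameter values here are placeholders; in the paper the reward is whether a mini-batch of the sampled task improves the primary-task validation metric.

```python
import random

def bern_thompson(n_arms, pull, n_rounds, gamma=0.1, alpha0=1.0, beta0=1.0):
    """Non-stationary Beta-Bernoulli Thompson sampling.

    `pull(k)` must return a 0/1 reward for arm k.  Returns the final
    (alpha, beta) parameters per arm; E[theta_k] = alpha_k /
    (alpha_k + beta_k) is the estimated utility of task k."""
    alpha = [alpha0] * n_arms
    beta = [beta0] * n_arms
    for _ in range(n_rounds):
        # Sample a utility estimate per arm; act greedily on the samples
        theta = [random.betavariate(alpha[k], beta[k]) for k in range(n_arms)]
        arm = max(range(n_arms), key=lambda k: theta[k])
        r = pull(arm)
        for k in range(n_arms):
            # Decay every arm toward the prior to track non-stationarity
            alpha[k] = (1 - gamma) * alpha[k] + gamma * alpha0
            beta[k] = (1 - gamma) * beta[k] + gamma * beta0
        alpha[arm] += r
        beta[arm] += 1 - r
    return alpha, beta
```

With a deterministic reward (one arm always pays off), the controller concentrates its pulls on the rewarding arm and its estimated utility dominates.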
\subsection{Automatic Mixing Ratio Learning via Gaussian Process}
\label{subsec:gp}
The right side of Fig.~\ref{fig:mtl} illustrates our Gaussian Process controller for automatic learning of the MTL training mixing ratio (see definition in Sec.~\ref{subsec:mtl-intro}). Given the selected auxiliary tasks from the previous section, the next step is to find a proper mixing ratio of training these selected tasks along with the primary task.\footnote{Note that ideally Gaussian Process can also learn to set the mixing ratio of less important tasks to zero, hence allowing it to essentially also perform the task selection step. However, in practice, first applying our task selection Thompson-Sampling model (Sec.~\ref{subsec:task-selection}) allows GP to more efficiently search the mixing ratio space for the small number of filtered auxiliary tasks, as shown in results of Sec.~\ref{subsec:analysis}.}
Manual tuning of this mixing ratio via a large grid search over the hyperparameter values is very time and compute expensive (even when the number of selected auxiliary tasks is small, e.g., 2 or 3). Thus, in our second stage, we instead apply a non-parametric Bayesian approach to search for the approximately-optimal mixing ratio. In particular, we use a `Gaussian Process' to sequentially search for the mixing ratio by trading off exploitation and exploration automatically.
Next, we describe our Gaussian Process approach in detail.
A Gaussian Process~\cite{rasmussen2004gaussian,snoek2012practical,shahriari2016taking}, $\text{GP}(\mu_0, k)$, is a non-parametric model that is fully characterized by a mean function $\mu_0: \mathcal{X} \mapsto \mathbb{R}$ and a positive-definite kernel or covariance function $k: \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$. Let ${\bm{x}}_1, {\bm{x}}_2, ..., {\bm{x}}_n$ denote any finite collection of $n$ points, where each ${\bm{x}}_i$ represents a choice of the mixing ratio (i.e., the ratio $\eta_1{:}\eta_2{:}..\eta_N$ described in Sec.~\ref{subsec:mtl-intro}), and $f_i = f({\bm{x}}_i)$ is the (unknown) function value evaluated at ${\bm{x}}_i$ (true performance of the model given the selected mixing ratio). Let $y_1, y_2, ..., y_n$ be the corresponding noisy observations (the validation performance at the end of training). In the context of GP Regression (GPR), ${\bm{f}} =\{f_1, ..., f_n\}$ are assumed to be jointly Gaussian~\cite{rasmussen2004gaussian}, i.e., ${\bm{f}} \lvert {\bm{X}} \sim \mathcal{N}({\bm{m}}, {\bm{K}})$, where ${\bm{m}}_i = \mu_0({\bm{x}}_i)$ is the mean vector, and ${\bm{K}}_{i,j} = k({\bm{x}}_i, {\bm{x}}_j)$ is the covariance matrix. Then the noisy observations ${\bm{y}} = y_1, ..., y_n$ are normally distributed around ${\bm{f}}$ as follows: ${\bm{y}} \lvert {\bm{f}} \sim \mathcal{N}({\bm{f}}, \sigma^2 {\bm{I}})$.
Let $\mathcal{D}=({\bm{x}}_1, y_1), ..., ({\bm{x}}_{n_0}, y_{n_0})$ be the set of random initial observations, where ${\bm{x}}_i$ represents a mixing ratio and $y_i$ represents the corresponding model's validation performance. We model the GP based on these initial observations as described above. We then sample a next point ${\bm{x}}_{n_0+1}$ (a mixing ratio in our case) from this GP, obtain its corresponding model performance $y_{n_0+1}$, and update the GP again by now considering the $n_0+1$ points~\cite{rasmussen2004gaussian}. We continue this process for a fixed number of steps. Next, we discuss how we perform the sampling (based on acquisition functions) and the kernels used for calculating the covariance.
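The GP posterior underlying this loop can be sketched with NumPy: a zero-mean GP with a Matern-5/2 kernel, conditioned on noisy observations. This is a minimal sketch with fixed hyperparameters and scalar inputs; the actual implementation uses Scikit-Optimize with an ARD Matern kernel and tuned hyperparameters.

```python
import numpy as np

def matern52(a, b, length=1.0):
    """Matern-5/2 kernel on scalar inputs (a single shared length
    scale; the paper's ARD variant uses one per dimension)."""
    r = np.abs(a[:, None] - b[None, :]) / length
    return (1.0 + np.sqrt(5.0) * r + 5.0 * r**2 / 3.0) * np.exp(-np.sqrt(5.0) * r)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4, length=1.0):
    """Posterior mean and standard deviation of a zero-mean GP at
    x_new, conditioned on noisy observations (x_obs, y_obs)."""
    K = matern52(x_obs, x_obs, length) + noise * np.eye(len(x_obs))
    Ks = matern52(x_new, x_obs, length)
    Kss = matern52(x_new, x_new, length)
    mu = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

Near observed mixing ratios the posterior is confident (small standard deviation); far from all observations it reverts to the prior, which is what lets the acquisition functions trade off exploitation and exploration.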
\begin{table*}
\begin{center}
\begin{small}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Models & RTE & MRPC & QNLI & CoLA & SST-2 \\
\hline
BiLSTM+ELMo (Single-Task) ~\cite{wang2018glue} & 50.1 & 69.0/80.8 & 69.4 & \textbf{35.0} & 90.2 \\
BiLSTM+ELMo (Multi-Task) ~\cite{wang2018glue} & 55.7 & 76.2/83.5 & 66.7 & 27.5 & 89.6 \\
\hline
\hline
Our Baseline &
54.0 & 75.7/83.7 & 74.0 & 30.8 & 91.3 \\
Our \textsc{AutoSeM}{} &
\textbf{58.7} & \textbf{78.5/84.5} & \textbf{79.2} & 32.9 & \textbf{91.8} \\
\hline
\end{tabular}
\end{small}
\end{center}
\vspace{-10pt}
\caption{Test GLUE results of previous work, our baseline, and our \textsc{AutoSeM}{} MTL framework. We report accuracy and F1 for MRPC, Matthews correlation for CoLA, and accuracy for all others.
\label{table:test-results}
\vspace{-12pt}
}
\end{table*}
\paragraph{Acquisition Functions}
Here, we describe the acquisition functions for deciding where to sample next. While one could select the points that maximize the mean function, this does not always lead to the best outcome~\cite{hoffman2011portfolio}. Since we also have the variance of the estimates along with the mean value of each point ${\bm{x}}_i$, we can incorporate this information into the optimization.
In this work, we use the GP-Hedge approach~\cite{hoffman2011portfolio,auer1995gambling}, which probabilistically chooses one of three acquisition functions: probability of improvement, expected improvement, and upper confidence bound. Probability of improvement acquisition functions measure the probability that the sampled mixing ratio ${\bm{x}}_i$ leads to an improvement upon the best observed value so far ($\tau$), $\mathbb{P}(f({\bm{x}}_i) > \tau)$. Expected improvement additionally incorporates the amount of improvement, $\mathbb{E}[(f({\bm{x}}_i) - \tau)\mathbb{I}(f({\bm{x}}_i) > \tau)]$. The Gaussian Process upper confidence bound (GP-UCB) algorithm measures the optimistic performance upper bound of the sampled mixing ratio~\cite{srinivas2009gaussian}, $\mu_i({\bm{x}}_i) + \lambda\sigma_i({\bm{x}}_i)$, for some hyper-parameter $\lambda$.
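Each of the three acquisition functions has a simple closed form under the Gaussian posterior. A minimal sketch for scalar $\mu$, $\sigma$ is below, with $\tau$ the best observed value so far and $\lambda$ a free hyperparameter (values here are illustrative; GP-Hedge's probabilistic choice among the three is omitted).

```python
import math

def _phi(z):  # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_improvement(mu, sigma, tau):
    """P(f(x) > tau) under the Gaussian posterior N(mu, sigma^2)."""
    return _Phi((mu - tau) / sigma)

def expected_improvement(mu, sigma, tau):
    """E[(f(x) - tau) * 1{f(x) > tau}] in closed form."""
    z = (mu - tau) / sigma
    return (mu - tau) * _Phi(z) + sigma * _phi(z)

def ucb(mu, sigma, lam=2.0):
    """Optimistic upper confidence bound mu + lam * sigma."""
    return mu + lam * sigma
```

The next mixing ratio to evaluate is the candidate maximizing the chosen acquisition value over the posterior.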
\paragraph{Matern Kernel} The covariance function (or kernel) defines the nearness or similarity of two points in the Gaussian Process. Here, we use the automatic relevance determination (ARD) Matern kernel~\cite{rasmussen2004gaussian}, which is parameterized by $\nu > 0$ that controls the level of smoothness. In particular, samples from a GP with such a kernel are differentiable $\floor{\nu - 1}$ times. When $\nu$ is half-integer (i.e. $\nu = p + 1/2$ for non-negative integer $p$), the covariance function is a product of an exponential and a polynomial of order $p$. In the context of machine learning, usual choices of $\nu$ include $3/2$ and $5/2$~\cite{shahriari2016taking}.
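For instance, for $\nu = 5/2$ the kernel in its isotropic form is $k_{5/2}(r) = \left(1 + \frac{\sqrt{5}r}{\ell} + \frac{5r^2}{3\ell^2}\right)\exp\left(-\frac{\sqrt{5}r}{\ell}\right)$, where $r$ is the distance between the two inputs and $\ell$ is a length-scale hyper-parameter (one per input dimension in the ARD variant).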
\section{Related Work}
\label{section:related-works}
Multi-task learning~\cite{caruana1998multitask}, known for improving the generalization performance of a task with auxiliary tasks, has successfully been applied to many domains of machine learning, including natural language processing~\cite{collobert2008unified,girshick2015fast,luong2015multi,pasunuru2017multitask,Pasunuru2017TowardsIA}, computer vision~\cite{misra2016cross,kendall2017multi,dai2016instance}, and reinforcement learning~\cite{teh2017distral,parisotto2015actor,jaderberg2016reinforcement}. Although there are many variants of multi-task learning~\cite{ruder2017sluice,hashimoto2017ajm,luong2015multi,mccann2018natural}, our goal is to improve the performance of a primary task using a set of relevant auxiliary tasks, where different tasks share some common model parameters with alternating mini-batches optimization, similar to~\citet{luong2015multi}.
To address the problem of automatic shared parameter selection,~\newcite{ruder2017learning} automatically learned the latent multi-task sharing architecture, and~\newcite{xiao2018gated} used a gate mechanism that filters the feature flows between tasks.
On the problem of identifying task relatedness,~\citet{ben2003exploiting} provided a formal framework for task relatedness and derived generalization error bounds for learning of multiple tasks.
~\citet{bingel2017identifying} explored task relatedness via exhaustively experimenting with all possible two task tuples in a non-automated multi-task setup. Other related works explored data selection, where the goal is to select or reorder the examples from one or more domains (usually in a single task) to either improve the training efficiency or enable better transfer learning. These approaches have been applied in machine translation~\cite{van2017dynamic}, language models~\cite{moore2010intelligent,duh2013adaptation}, dependency parsing~\cite{sogaard2011data}, etc. In particular,~\citet{ruder2017learning2} used Bayesian optimization to select relevant training instances for transfer learning, and~\citet{tsvetkov2016learning} applied it to learn a curriculum for training word embeddings via reordering data.~\citet{graves2017automated} used the bandit approach (Exp3.S algorithm) in the context of automated curriculum learning, but in our work, we have two stages with each stage addressing a different problem (automatic task selection and learning of the training mixing ratio). Recently,~\newcite{sharma2017online} used multi-armed bandits (MAB) to learn the choice of hard vs. easy domain data selection as input feed for the model.
\newcite{guo2018dynamic} used MAB to effectively switch across tasks in a dynamic multi-task learning setup.
In our work, we use MAB with Thompson Sampling for the novel paradigm of automatic auxiliary task selection; and next, we use a Matern-kernel Gaussian Process to automatically learn an exact (static) mixing ratio (i.e., relatedness ratio) for the small number of selected tasks.
Many control problems can be cast as a multi-armed bandits problem, where the goal of the agent is to select the arm/action from one of the $N$ choices that minimizes the regrets~\cite{bubeck2012regret}. One problem in bandits learning is the trade-off between exploration and exploitation, where the agent needs to make a decision between taking the action that yields the best payoff on current estimates or exploring new actions whose payoffs are not yet certain.
Many previous works have explored various exploration and exploitation strategies to minimize regret, including Boltzmann exploration~\cite{kaelbling1996reinforcement}, adversarial bandits~\cite{auer2002nonstochastic}, UCB~\cite{auer2002finite}, and information gain using variational approaches~\cite{houthooft2016vime}. In this work, for task selection, we use Thompson Sampling~\citep{russo2018tutorial,chapelle2011empirical}, an algorithm for sequential decision making problems, which addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use.
A Gaussian Process (GP) is a non-parametric Bayesian approach; it can capture a wide variety of underlying functions or relations between inputs and outputs by taking advantage of the full information provided by the history of observations, and is thus very data-efficient~\cite{rasmussen2004gaussian,shahriari2016taking,schulz2018tutorial}. Gaussian Processes have been widely used for black-box optimization and hyper-parameter optimization~\cite{snoek2012practical,brochu2010tutorial,GPflowOpt2017,cully2018limbo,swersky2013multi,golovin2017google}. In our work, we use a Gaussian Process in our stage-2 to automatically learn the multi-task mixing ratio among the tasks selected in stage-1.
\section{Results}
\subsection{Baseline Models}
Table~\ref{table:test-results} shows the results of our baseline and previous works~\cite{wang2018glue}. We can see that our single-task baseline models achieve stronger performance on almost all tasks in comparison to previous work's single-task models.\footnote{\label{note1}Note that we do not report previous works which \emph{fine-tune} large external language models for the task (e.g., OpenAI-GPT and BERT), because they are not fairly comparable w.r.t. our models. Similarly, we report the non-attention based best GLUE models (i.e., BiLSTM+ELMo) for a fair comparison to our non-attention baseline. Our approach should ideally scale to large pre-training/fine-tuning models like BERT, given appropriate compute resources.} Next, we present the performance of our \textsc{AutoSeM}{} framework on top of these strong baselines.
\subsection{Multi-Task Models}
Table~\ref{table:test-results} also presents the performance of our \textsc{AutoSeM}{} framework-based MTL models. As can be seen, our MTL models improve significantly (see Table~\ref{table:validation-results} for standard deviations) upon their corresponding single-task baselines for all tasks, and achieve strong improvements as compared to the fairly-comparable\textsuperscript{\ref{note1}} multi-task results of previous work~\cite{wang2018glue}.\footnote{Note that even though the performance improvement gaps of~\newcite{wang2018glue} (MTL vs. baseline) and our improvements (\textsc{AutoSeM}{} vs. our improved baseline) are similar, these are inherently two different setups. \newcite{wang2018glue} MTL is based on a `one model for all' setup~\cite{kaiser2017one,mccann2018natural}, whereas our approach interpretably chooses the 2-3 tasks that are most beneficial for the given primary task. Also see Sec.~\ref{sec:setup} for comparison of training speeds for these two setups.}
During the task selection stage of our \textsc{AutoSeM}{} framework, we observe that MultiNLI is chosen as one of the auxiliary tasks in all of our MTL models. This is intuitive given that MultiNLI contains multiple genres covering diverse aspects of the complexity of language~\cite{conneau2017supervised}. Also, we observe that WNLI is sometimes chosen in the task selection stage; however, it is always dropped (mixing ratio of zero) by the Gaussian Process controller, showing that it is not beneficial to use WNLI as an auxiliary task (intuitive, given its small size). Next, we discuss the improvements on each of the primary tasks and the corresponding auxiliary tasks selected by \textsc{AutoSeM}{} framework.
\noindent\textbf{RTE}:
Our \textsc{AutoSeM}{} approach achieves stronger results w.r.t. the baseline on RTE (58.7 vs. 54.0). During our task selection stage, we found that QQP and MultiNLI are important auxiliary tasks for RTE. In the second stage of automatic mixing ratio learning via Gaussian Process, the model learns that a mixing ratio of 1:5:5 works best to improve the primary task (RTE) using the related auxiliary tasks QQP and MultiNLI.
\noindent\textbf{MRPC}:
\textsc{AutoSeM}{} here performs much better than the baseline on MRPC (78.5/84.5 vs. 75.7/83.7). During our task selection stage, we found that RTE and MultiNLI are important auxiliary tasks for MRPC. In the second stage, \textsc{AutoSeM}{} learned a mixing ratio of 9:1:4 for these three tasks (MRPC:RTE:MultiNLI).
\noindent\textbf{QNLI}:
Again, we achieve substantial improvements with \textsc{AutoSeM}{} w.r.t. baseline on QNLI (79.2 vs. 74.0). Our task selection stage learned that WNLI and MultiNLI tasks are best as auxiliary tasks for QNLI.
We found that the Gaussian Process further drops WNLI by setting its mixing ratio to zero, and returns 20:0:5 as the best mixing ratio for QNLI:WNLI:MultiNLI.
\noindent\textbf{CoLA}:
We also observe a strong performance improvement on CoLA with our \textsc{AutoSeM}{} model w.r.t. our baseline (32.9 vs. 30.8). During our task selection stage, we found that MultiNLI and WNLI are important auxiliary tasks for CoLA. In the second stage, the GP learned to drop WNLI and found a mixing ratio of 20:5:0 for CoLA:MultiNLI:WNLI.
\noindent\textbf{SST-2}:
Here also our \textsc{AutoSeM}{} approach performs better than the baseline (91.8 vs. 91.3). The task selection stage chooses MultiNLI, MRPC, and WNLI as auxiliary tasks and the stage-2 Gaussian Process model drops MRPC and WNLI by setting their mixing ratio to zero (learns ratio of 13:5:0:0 for SST-2:MultiNLI:MRPC:WNLI).
\begin{table}
\centering
\begin{tabular}{|l|c|c|}
\hline
Name & Validation & Test \\
\hline
Baseline & 78.3 & 75.7/83.7 \\
w/o Stage-1 & 80.3 & 76.3/83.8 \\
w/o Stage-2 & 80.3 & 76.7/83.8 \\
Final MTL & 81.2 & 78.5/84.5 \\
\hline
\end{tabular}
\vspace{-7pt}
\caption{Ablation results on the two stages of our \textsc{AutoSeM}{} framework on MRPC.
\label{table:ablation-results}
\vspace{-9pt}}
\end{table}
\section{Analysis}
\subsection{Ablation on MTL stages}
\label{subsec:analysis}
In this section, we examine the usefulness of each stage of our two-stage MTL pipeline.\footnote{We present this ablation only on MRPC for now, because GP stage-2 takes a lot of time without the task selection stage.}
\noindent\textbf{Removing Stage-1}:
The purpose of the Beta-Bernoulli MAB in stage-1 is to find useful auxiliary tasks for the given primary task. Here, to understand its importance, we remove the task selection part and instead directly run the Gaussian Process (GP) model on all tasks (see `w/o Stage-1' row in Table~\ref{table:ablation-results}). We can see that, even without the task selection stage, the Gaussian Process model still outperforms the baseline, indicating the usefulness of the GP; however, the much larger mixing-ratio search space prevents the GP from efficiently finding the best mixing ratio setting.
\noindent\textbf{Removing Stage-2}:
Given the selected tasks from stage-1, the goal of the Gaussian Process in stage-2 is to efficiently find the approximately-optimal mixing ratio. To examine its usefulness, we replace the Gaussian Process controller by manually tuning a grid of mixing ratios, where the number of tuning experiments equals the number of steps used in the Gaussian Process model (for a fair comparison). Table~\ref{table:ablation-results} shows the results of removing stage-2. We can see that a grid search over hyper-parameters can improve upon the baseline, indicating the usefulness of stage-1 task selection, but a reasonable-sized fair-comparison grid search (i.e., not exhaustive over all ratio values) is not able to match our stage-2 GP, which leverages prior experimental results to find the best setting more efficiently.
\subsection{Stability of MTL Models}
In this section, we provide the mean and standard deviation of our baseline and multi-task models (over three runs) on the validation set. Note that the test set is hidden, so we cannot do these studies on it. As seen in Table~\ref{table:validation-results}, our multi-task models clearly surpass the performance of baseline models w.r.t. standard deviation gaps, in all tasks.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figs/bandit-visualization.pdf}
\vspace{-4pt}
\caption{Visualization of task utility estimates from the multi-armed bandit controller on SST-2 (primary task). The x-axis represents the task utility, and the y-axis represents the corresponding probability density. Each curve corresponds to a task and the bar corresponds to their confidence interval.
\label{fig:visualization}
\vspace{-1pt}
}
\end{figure}
\subsection{Visualization of Task Selection}
In Fig.~\ref{fig:visualization}, we show an example of the task utility estimates from the stage-1 multi-armed bandit controller (Eq.~\ref{eq:beta}) on SST-2. The x-axis represents the task utility, and the y-axis represents the probability density over task utility. Each curve represents a task (the blue curve corresponds to the primary task, SST-2, and the rest of the curves correspond to auxiliary tasks), and the width of the bars represents the confidence interval of their estimates. We can see that the bandit controller gives the highest (and most confident) utility estimate for the primary task, which is intuitive given that the primary task should be the most useful task for learning itself. Further, it gives 2-3 tasks moderate utility estimates (the corresponding expected values are around 0.5), and relatively lower utility estimates for the remaining tasks (the corresponding expected values are lower than 0.5).
\subsection{Educated-Guess Baselines}
We additionally experimented with `educated-guess' baseline models, where MTL is performed using manual intuition mixtures that seem a priori sensible.\footnote{These educated-guess models replace our stage-1 automatic auxiliary task selection with manual intuition task-mixtures; but we still use our stage-2 Gaussian Process for mixing ratio learning, for fair comparison.} For example, with MRPC as the primary task, our first educated-guess baseline is to choose other similar paraphrasing-based auxiliary tasks, i.e., QQP in case of GLUE. This MRPC+QQP model achieves 80.8, whereas our \textsc{AutoSeM}{} framework chose MRPC+RTE+MultiNLI and achieved 81.2. Furthermore, as our second educated-guess baseline, we added MultiNLI as an auxiliary task (in addition to QQP), since MultiNLI was helpful for all tasks in our MTL experiments. This educated-guess MRPC+QQP+MultiNLI model achieves 80.9 (vs. 81.2 for our \textsc{AutoSeM}{} model). This suggests that our \textsc{AutoSeM}{} framework (which automatically chose the seemingly less-related RTE task for MRPC) is equal or better than manual intuition based educated-guess models.
\begin{table}
\small
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Name & RTE & MRPC & QNLI & CoLA & SST-2 \\
\hline
\multicolumn{6}{|c|}{\textsc{Baselines}} \\
\hline
Mean & 58.6 & 78.3 & 74.9 & 74.6 & 91.4 \\
Std & 0.94 & 0.31 & 0.30 & 0.44 & 0.36 \\
\hline
\multicolumn{6}{|c|}{\textsc{Multi-Task Models}} \\
\hline
Mean & 62.0 & 81.1 & 76.0 & 75.7 & 91.8 \\
Std & 0.62 & 0.20 & 0.18 & 0.18 & 0.29 \\
\hline
\end{tabular}
\vspace{-5pt}
\caption{Validation-set mean and standard deviation (over three runs) of our baseline and multi-task models, in accuracy.
\label{table:validation-results}
\vspace{-13pt}}
\end{table}
# Measuring V.2: Density and Arithmetic Progressions

In our last post, we introduced the notions of density, upper density, and lower density as means of measuring subsets of natural numbers. We ended with two exercises. We recall them now; solutions are at the end of this post.

Exercise 1: Find subsets $A,B \subseteq \mathbb{N}$ such that:

- $A \cap B = \emptyset$;
- $A \cup B = \mathbb{N}$;
- $\overline{d}(A) = \overline{d}(B) = 1$;
- $\underline{d}(A) = \underline{d}(B) = 0$.

Exercise 2: Show that, if $n \in \mathbb{N}$ and $A_0, \ldots, A_n$ are subsets of $\mathbb{N}$ such that $A_0 \cup A_1 \cup \ldots \cup A_n = \mathbb{N}$, then there is $k \leq n$ such that $\overline{d}(A_k) > 0$.

Before getting to the solutions, though, let's take an interesting detour through a simple topic that has been the focus of a large amount of groundbreaking mathematics over the last century: arithmetic progressions in natural numbers. The journey will stop at three beautiful theorems and one beautiful conjecture and will involve some of the mathematical giants of the twentieth and twenty-first centuries.

Definition: An arithmetic progression of natural numbers is an increasing sequence of natural numbers such that the difference between successive terms of the sequence is constant. The length of an arithmetic progression is the number of terms in the sequence.

For example, the sequence $\langle 4,7,10,13 \rangle$ is an arithmetic progression of length 4, as the difference between each pair of successive terms is 3. The sequence of even numbers, $\langle 0,2,4,6,\ldots \rangle$, and the sequence of odd numbers, $\langle 1,3,5,7, \ldots \rangle$, are both arithmetic progressions of infinite length.

In an earlier post, we introduced the mathematical field of Ramsey Theory. Roughly speaking, Ramsey Theory studies the phenomenon of order necessarily arising in sufficiently large structures. Ramsey Theory takes its name from Frank Ramsey, who published a seminal paper in 1930. Arguably the first important theorem in Ramsey Theory, though, came three years earlier and is due to the Dutch mathematician Bartel Leendert van der Waerden.

Theorem (van der Waerden, 1927): Suppose that the natural numbers are partitioned into finitely many sets, $A_0, A_1, \ldots, A_n$ (so we have $\bigcup_{i \leq n} A_i = \mathbb{N}$). Then there is an $i \leq n$ such that $A_i$ contains arithmetic progressions of every finite length.

This is naturally seen as a statement in Ramsey Theory: arithmetic progressions are very orderly things, and Van der Waerden's Theorem states that, no matter how deviously you divide the natural numbers into finitely many pieces, you cannot avoid the appearance of arbitrarily long arithmetic progressions in one of the pieces. It is also worth noting that Van der Waerden's Theorem cannot be strengthened to ensure the existence of infinite arithmetic progressions: it is relatively easy to partition the natural numbers into even just two sets such that neither set contains an arithmetic progression of infinite length (try it!).

You might notice a similarity between the statement of Van der Waerden's Theorem and our Exercise 2: both say that, if the natural numbers are divided into finitely many pieces, at least one of the pieces must be "large" in a certain sense. In Van der Waerden's Theorem, "large" means containing arbitrarily long arithmetic progressions, while in Exercise 2, it means having positive upper density.

Is there a connection between these two notions of largeness? It turns out that there is! In 1936, Erdős and Turán conjectured that, if $A \subseteq \mathbb{N}$ and $A$ has positive density, then $A$ contains arbitrarily long arithmetic progressions. Almost forty years later, the conjecture was confirmed (and, in fact, strengthened, as it was proven for upper density rather than density) in a celebrated proof by Hungarian mathematician Endre Szemerédi.

Theorem (Szemerédi, 1975): Suppose that $A \subseteq \mathbb{N}$ and $\overline{d}(A) > 0$. Then $A$ contains arithmetic progressions of every finite length.

Note that Szemerédi's Theorem is a true generalization and strengthening of Van der Waerden's theorem: when combined with Exercise 2, Szemerédi's Theorem directly implies Van der Waerden's Theorem. However, Szemerédi's Theorem does not give any information about sets of natural numbers with zero upper density. There is one particularly interesting such set, which we encountered in our last post: the set of prime numbers, $P = \{2,3,5,7,11, \ldots \}$.

According to experts, it is likely that it was conjectured as early as 1770, by Lagrange and Waring, that the set of primes contains arbitrarily long arithmetic progressions, and the question experienced renewed interest in the wake of the proof of Szemerédi's Theorem. An answer finally came in 2004, in a paper by Ben Green and Terence Tao.

Theorem (Green-Tao, 2004): The set of prime numbers contains arithmetic progressions of every finite length.

The story has not ended yet, as many very natural questions about arithmetic progressions remain unsolved. Perhaps the most famous is the following conjecture of Erdős, strengthening his conjecture with Turán that was eventually confirmed by Szemerédi.

Conjecture (Erdős): Suppose $A \subseteq \mathbb{N}$ (with $0 \not\in A$) and $\sum_{n \in A} \frac{1}{n} = \infty$. Then $A$ contains arithmetic progressions of every finite length.

This conjecture, unlike Szemerédi's Theorem, would cover the case in which $A$ is the set of all primes, so it would generalize both Szemerédi's Theorem and the Green-Tao Theorem. The problem currently carries a cash prize of \$5000 (and, of course, the promise of mathematical fame).

Solution to Exercise 1: Recursively define a sequence $\langle a_n \mid n \in \mathbb{N} \rangle$ by letting $a_0 = 1$, and, given $a_0, \ldots, a_n$, letting $a_{n+1} = (n+1)(a_0 + a_1 + \ldots + a_n)$. The first few terms of the sequence are thus $1,1,4,18,96, \ldots$. Now divide $\mathbb{N}$ into pairwise disjoint blocks $\langle A_n \mid n \in \mathbb{N} \rangle$, arranged one after the other, in which, for all $n \in \mathbb{N}$, $|A_n| = a_n$. The first few blocks are thus as follows:

- $A_0 = \{0\}$;
- $A_1 = \{1\}$;
- $A_2 = \{2,3,4,5\}$;
- $A_3 = \{6,7,\ldots,23\}$;
- $A_4 = \{24,25,\ldots,119\}$.

Now let $A$ be the union of all of the even-numbered blocks, i.e. $A = \bigcup_{n \in \mathbb{N}}A_{2n}$, and let $B$ be the union of all of the odd-numbered blocks.

We clearly have $A \cap B = \emptyset$ and $A \cup B = \mathbb{N}$. We next show $\overline{d}(A) = 1$. Recall that $\overline{d}(A) = \limsup_{n \rightarrow \infty} \frac{|A \cap n|}{n}$. Given $n \in \mathbb{N}$, let $m_n$ be the largest element of $A_{2n}$. Think about the quantity $\frac{|A \cap (m_n+1)|}{m_n+1}$. Since $A_{2n} \subseteq A$, we clearly have $|A \cap (m_n+1)| \geq |A_{2n}| = a_{2n}$. We also have $m_n+1 = |A_0| + |A_1| + \ldots + |A_{2n}| = a_0 + a_1 + \ldots + a_{2n}$. But this yields:

$\frac{|A \cap (m_n+1)|}{m_n+1} \geq \frac{a_{2n}}{a_0 + \ldots + a_{2n}} = \frac{(2n)(a_0 + \ldots + a_{2n-1})}{a_0 + \ldots + a_{2n-1} + (2n)(a_0 + \ldots + a_{2n-1})} = \frac{2n}{2n+1}$.

As $n$ gets arbitrarily large, $\frac{2n}{2n+1}$ gets arbitrarily close to $1$, so we in fact get $\overline{d}(A) = 1$. The proofs of $\overline{d}(B) = 1$ and $\underline{d}(A) = \underline{d}(B) = 0$ are essentially the same and are thus left to the reader.

Solution to Exercise 2: We will in fact show something stronger: there is $k \leq n$ such that $\overline{d}(A_k) \geq \frac{1}{n+1}$. Suppose for the sake of a contradiction that this is not the case. Then, by the definition of upper density, for each $k \leq n$ we can find $m_k \in \mathbb{N}$ such that, for all natural numbers $m \geq m_k$, $\frac{|A_k \cap m|}{m} < \frac{1}{n+1}$. Now we can choose a natural number $m^*$ large enough so that, for all $k \leq n$, $m_k \leq m^*$.

For each $k \leq n$, by our choice of $m_k$, we know that $\frac{|A_k \cap m^*|}{m^*} < \frac{1}{n+1}$. Also, by the fact that $A_0 \cup \ldots \cup A_n = \mathbb{N}$, we have that $|A_0 \cap m^*| + \ldots + |A_n \cap m^*| \geq m^*$. But this yields:

$1 = \frac{m^*}{m^*} \leq \frac{|A_0 \cap m^*| + \ldots + |A_n \cap m^*|}{m^*} = \frac{|A_0 \cap m^*|}{m^*} + \ldots + \frac{|A_n \cap m^*|}{m^*} < \frac{1}{n+1} + \ldots + \frac{1}{n+1} = (n+1)\frac{1}{n+1} = 1$.

We have thus shown $1 < 1$. This is, of course, a contradiction, which finishes the exercise.

Cover Image: Vir Heroicus Sublimis by Barnett Newman
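The block construction in the solution to Exercise 1 is easy to sanity-check numerically. The short Python sketch below (function and variable names are my own, not from the post) recomputes the block sizes $a_n$ and verifies the lower bound $\frac{2n}{2n+1}$ derived above:

```python
from fractions import Fraction

def block_sizes(n_blocks):
    """Return [a_0, ..., a_{n_blocks-1}] with a_0 = 1 and
    a_{k+1} = (k+1) * (a_0 + ... + a_k)."""
    sizes = [1]
    for k in range(n_blocks - 1):
        sizes.append((k + 1) * sum(sizes))
    return sizes

sizes = block_sizes(9)
assert sizes[:5] == [1, 1, 4, 18, 96]  # matches the post's first terms

# Lower bound on the partial density of A (union of even-numbered blocks)
# at the right end of block A_{2n}: a_{2n} / (a_0 + ... + a_{2n}).
for n in range(1, 5):
    ratio = Fraction(sizes[2 * n], sum(sizes[: 2 * n + 1]))
    assert ratio == Fraction(2 * n, 2 * n + 1)
```

Using exact rational arithmetic (`fractions.Fraction`) confirms the algebraic simplification in the proof rather than relying on floating point.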
using System;
using _2._3._19;
using Quick;
var quickNormal = new QuickSort();
var quickMedian3 = new QuickSortMedian3();
var quickMedian5 = new QuickSortMedian5();
var arraySize = 200000; // Initial array size.
const int trialTimes = 4; // Repetitions per trial.
const int trialLevel = 6; // Number of doubling rounds.
Console.WriteLine("n\tmedian5\tmedian3\tnormal\tmedian5/normal\t\tmedian5/median3");
for (var i = 0; i < trialLevel; i++)
{
double timeMedian3 = 0;
double timeMedian5 = 0;
double timeNormal = 0;
for (var j = 0; j < trialTimes; j++)
{
var a = SortCompare.GetRandomArrayInt(arraySize);
var b = new int[a.Length];
var c = new int[a.Length];
a.CopyTo(b, 0);
a.CopyTo(c, 0);
timeNormal += SortCompare.Time(quickNormal, a);
timeMedian3 += SortCompare.Time(quickMedian3, b);
timeMedian5 += SortCompare.Time(quickMedian5, c);
}
timeMedian5 /= trialTimes;
timeMedian3 /= trialTimes;
timeNormal /= trialTimes;
Console.WriteLine(
arraySize
+ "\t"
+ timeMedian5
+ "\t"
+ timeMedian3
+ "\t"
+ timeNormal
+ "\t"
+ timeMedian5 / timeNormal
+ "\t"
+ timeMedian5 / timeMedian3);
arraySize *= 2;
}
package uk.ac.ox.zoo.seeg.abraid.mp.common.dto.json;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import uk.ac.ox.zoo.seeg.abraid.mp.common.domain.DiseaseOccurrence;
import uk.ac.ox.zoo.seeg.abraid.mp.common.service.workflow.support.ModellingLocationPrecisionAdjuster;
@JsonPropertyOrder({ "longitude", "latitude", "weight", "admin", "gaul", "disease", "date" })
public class JsonModellingDiseaseOccurrence extends JsonBiasModellingDiseaseOccurrence {
@JsonProperty("Weight")
private double weight;
public JsonModellingDiseaseOccurrence(ModellingLocationPrecisionAdjuster precisionAdjuster,
double longitude, double latitude, double weight,
int admin, String gaul, int disease, String date) {
super(precisionAdjuster, longitude, latitude, admin, gaul, disease, date);
setWeight(weight);
}
public JsonModellingDiseaseOccurrence(ModellingLocationPrecisionAdjuster precisionAdjuster,
DiseaseOccurrence occurrence) {
this(precisionAdjuster,
occurrence.getLocation().getGeom().getX(),
occurrence.getLocation().getGeom().getY(),
occurrence.getFinalWeighting(),
occurrence.getLocation().getPrecision().getModelValue(),
extractGaulString(occurrence.getLocation()),
occurrence.getDiseaseGroup().getId(),
extractDateString(occurrence.getOccurrenceDate()));
}
public double getWeight() {
return weight;
}
public void setWeight(double weight) {
this.weight = weight;
}
}
# How do you write congruence modulo (mod n) in LaTeX?

The congruence modulo syntax consists of two individual commands, \equiv and \mod.

```latex
\documentclass{article}
\usepackage{mathtools}
\begin{document}
% Use mathtools for \mod
$$b \equiv c \mod{m}$$
% \pmod is a default command
$$a \equiv b \pmod{n}$$
$$a \equiv b \pmod{\frac{m}{(k,m)}}$$
$$a \equiv b \left(\mod{\frac{m}{(k,m)}}\right)$$
\end{document}
```

Output:

Notice in the output above that a lot of space is created by the \mod and \pmod commands. To solve this problem, use the \bmod command, or handle the spacing manually with \mathrm.

```latex
\documentclass{article}
\begin{document}
$$b \equiv c\;(\bmod{m})$$
$$10+5 \equiv 3\;(\bmod{12})$$
$$\frac{p}{q} \equiv f\prod^{n-1}_{i=0}p_i\;(\mathrm{mod}\;m)$$
\end{document}
```

Output:

## Some congruence modulo properties in LaTeX

Best practice is shown by discussing some properties below.

```latex
\documentclass{article}
\usepackage{mathabx}
\begin{document}
\begin{enumerate}
\item Equivalence: $a \equiv b\; \modx{0}\Rightarrow a=b$
\item Determination: either $a\equiv b\; \modx{m}$ or $a \notequiv b\; \modx{m}$
\item Reflexivity: $a\equiv a \;\modx{m}$.
\item Symmetry: $a\equiv b\; \modx{m}\Rightarrow b\equiv a \;\modx{m}$.
\end{enumerate}
\end{document}
```

Output:

One request! Don't forget to share if I have added any value to your education life. See you again in another tutorial. Thank you!
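If you use this spacing often, it may be worth wrapping it in a small macro of your own (the macro name `\Mod` below is my own choice, not a standard command):

```latex
% A reusable helper: \Mod{m} prints "(mod m)" with tight spacing.
\newcommand{\Mod}[1]{\;(\mathrm{mod}\;#1)}
% Usage: $a \equiv b \Mod{n}$
```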
Q: Bolt CMS not routing to singular page of contenttype? I have a contenttype called testimonials in my contenttypes.yml file, like so:
#Testimonials
testimonials:
name: Testimonials
singular_name: Testimonial
fields:
name:
type: text
class: large
position:
type: text
body:
type: textarea
height: 150px
listing_template: testimonials.twig
record_template: testimonial.twig
Now the documentation says the following:
Whenever your browser gets a page on a Bolt website, it uses an URL
like /entries or /page/lorem-ipsum. Bolt knows how to handle URLs like
these, and displays the information the browser requested. Bolt does
this by mapping the URL to a so-called Route. This Route is the
controller that (when called) fetches the content from the database,
selects the template to use, renders the HTML page according to that
template and the content and serves it to the browser.
At the same time, if you create a new record, Bolt will know what the
URL for that content is. So when that URL is requested by a browser,
it can map it back to the correct content.
For example, if you have a 'Pages' contenttype, with 'Page' as a
singular_name, your site will automatically have pages like:
http://example.org/pages
http://example.org/page/lorem-ipsum-dolor
Well, I have Bolt installed on localhost, so when I navigate to http://localhost:8080/boltCMS/testimonials I see my testimonials.twig, but when I navigate to http://localhost:8080/boltCMS/testimonials/1 I get an error of:
Page testimonials/1 not found.
Why? My database is populated, so why am I getting this error?
The templates and routes documentation can be found HERE.
Thank you.
A: There is also a singular_slug setting for a contenttype.
Bolt tries to automatically work it out, but you can configure it to be whatever you want too.
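A minimal sketch of how that could look in contenttypes.yml (the `singular_slug` value here is illustrative, not taken from the question):

```yaml
testimonials:
    name: Testimonials
    singular_name: Testimonial
    singular_slug: testimonial
    record_template: testimonial.twig
```

With an explicit singular slug, a single record would then be served from a URL like /testimonial/1 rather than /testimonials/1, matching the pages/page pattern quoted from the documentation above.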
A: Try the singular_name 'Testimonial'.
http://localhost:8080/boltCMS/Testimonial/1
\section{Introduction}
\label{sec:intro}
The classic equations for tidal evolution in two-body systems derived or utilized in seminal papers [e.g.,
\citet{macd64,gold66moon,gold66,mign79,mign81}], reviews [e.g., \citet{burn77,weid89,peal99}], and
textbooks [e.g.,~\citet{murr99,danb92}] are based upon the underlying assumption that the two spherical
components in the system are separated by several times the radius of the larger primary component. While
this assumption is valid in planet-satellite systems\footnote{There are small natural satellites of the
outer planets that orbit very close to their primaries, but we must keep in mind that these satellites are
part of much more complex dynamical systems than simple two-component binaries in addition to having
negligible masses compared to their primaries.} such as Earth-Moon, Jupiter-Galilean satellites, and
Saturn-Titan, as well as for Pluto-Charon and the majority of binary main-belt asteroids (with 100-km-scale
primaries), it is not completely accurate for all binary asteroids, especially those in the near-Earth
region. Based upon the compilation by~\citet{wals06} of measured and estimated binary asteroid component
size and semimajor axis parameters, nearly 75\% of near-Earth and Mars-crossing binaries have inter-component
separations between 3 and 5 primary radii. An updated compilation of parameters by~\citet{prav07} including
small main-belt binaries, those with primaries less than 10~km in diameter, confirms that 75\% of binary
systems among these three populations have close mutual orbits. In addition, double asteroids, those
systems with equal-size components that were not counted in the above tallies, such as (69230)
Hermes~\citep{marg03,prav03,marg06iau}, (90) Antiope~\citep{merl00iauc,mich04,desc07}, (854) Frostia, (1089)
Tama, (1313) Berna, and (4492) Debussy~\citep{behr06}, have separations within 5 primary radii. The favored
formation mechanism for near-Earth, Mars-crossing, and small main-belt binaries is rotational fission or
mass shedding~\citep{marg02s,rich06,desc08} most likely due to YORP spin-up~\citep{prav07}, a torque on the
asteroid spin state due to re-emission of absorbed sunlight~\citep{rubi00,vokr02}, where the typical
binaries produced have equatorial mutual orbits with semimajor axes between 2 and 4.5 primary radii and
eccentricities below 0.15~\citep{wals08yorp}. Though all binary systems in these three populations may
not have separations of less than 5 primary radii at present, if formed via spin-up, these systems likely
have tidally evolved outward from a closer orbit.
Complex generalized formulae for tidal evolution are presented by~\citet{kaul64} and \citet{mign80} as
extensions of the work of~\citet{darw79a,darw79b,darw80} that account for higher-order terms in the
expansion of the tidal potential, though, nearly universally, even by Darwin, Kaula, and Mignard
themselves, only the leading order is applied in practice under the assumption of a distant secondary
and the negligibility of higher-order terms. To date, the most common application of higher-order
expansions of the tidal potential is in the Mars-Phobos system where tides on Mars raised by Phobos
orbiting at 2.76 Mars radii are causing the gradual infall of Phobos's orbit. As the separation between
Mars and Phobos decreases, higher-order terms in the potential expansion must gain importance. With
this in mind, attempts to understand the observed secular acceleration of Phobos and the past history
of its orbit date back to~\citet{redm64} and have continued with~\citet{smit76},~\citet{lamb79},
and~\citet{szet83}, among others, with~\citet{bill05} presenting the most recent treatment of the subject.
Because many binaries exist in a regime where traditional assumptions break down, and because tidal
evolution is most important at small separations, we are motivated to examine tidal interactions in
close orbits. Here, we expand the gravitational potential between two spherical bodies to arbitrary
order as well as allow for a secondary of non-negligible mass. We then present the resulting equations
for the evolution of the component spin rates and the semimajor axis due to the tidal bulges raised on
both components when restricted to systems with mutual orbits that are both circular and equatorial as
suggested for small binaries formed via spin-up. The effect of accounting for close orbits is examined
and compared to the effect of uncertainties in physical parameters of the binary system.
\section{Tidal Potential of Arbitrary Order}
\label{sec:tidesl}
The potential $V$ per unit mass at a point on the surface of the primary body of mass $M_{\rm p}$, radius
$R_{\rm p}$, and uniform density $\rho_{\rm p}$ due to a secondary of mass $M_{\rm s}$, radius $R_{\rm s}$,
and uniform density $\rho_{\rm s}$ orbiting on a prograde circular path in the equator plane of the
primary with semimajor axis $a$ measured from the center of mass of the primary is
\begin{equation}
V=-G\,\frac{M_{\rm s}}{\Delta},
\label{eq:Vfull}
\end{equation}
\noindent
where $G$ is the gravitational constant and $\Delta$ is the distance between the center of the secondary
and the point of interest given by
\begin{equation}
\Delta=a\left[1-2\left(\frac{R_{\rm p}}{a}\right)\cos\psi+\left(\frac{R_{\rm p}}{a}\right)^2\right]^{1/2},
\end{equation}
\noindent
with $\psi$ measured from the line joining the centers of the primary and secondary [e.g., \citet{murr99}].
In the spherical polar coordinate system ($r$, $\theta$, $\phi$) shown in Fig.~\ref{fig:sec2}, with the
polar angle $\theta$ measured from the rotation axis of the primary and the azimuthal angle $\phi$ measured
from an arbitrary reference direction fixed in space, the separation angle $\psi$ between the secondary and
the point of interest on the primary is
\begin{equation}
\cos\psi~=~\cos\theta_{\rm p}\cos\theta_{\rm s}+\sin\theta_{\rm p}\sin\theta_{\rm s}\cos\left(\phi_{\rm p}-\phi_{\rm s}\right).
\label{eq:psi}
\end{equation}
\noindent
For widely separated binary systems where the semimajor axis $a$ is much larger than the radius of the
primary $R_{\rm p}$, the potential is expanded in powers of the small term $R_{\rm p}/a$ such that
\begin{equation}
V=-G\,\frac{M_{\rm s}}{a}\left[1+\left(\frac{R_{\rm p}}{a}\right)\cos\psi+\left(\frac{R_{\rm p}}{a}\right)^2\,\frac{1}{2}\left(3\cos^2\psi-1\right)+\ldots\right].
\label{eq:Vexpansion}
\end{equation}
\noindent
The first term is independent of the position of the point of interest and thus produces no force on the
primary. The second term provides the force that keeps the mass element at the point of interest in a
circular orbit about the center of mass of the system. The third term is the tidal potential
\begin{equation}
U=-G\,\frac{M_{\rm s}R_{\rm p}^2}{a^3}\,\frac{1}{2}\left(3\cos^2\psi-1\right)
\label{eq:Ushort}
\end{equation}
\noindent
that is the focus of past studies of tidal evolution where the bodies are widely separated, such as in
the Earth-Moon system. However, truncation of the expansion of $V$ in (\ref{eq:Vexpansion}) at three terms
accurately estimates the true potential in (\ref{eq:Vfull}) only for separations exceeding 5$R_{\rm p}$.
For smaller separations, as are often found among binary asteroids, higher orders in the expansion of $V$
are necessary.
The full expansion of the potential $V$ in~(\ref{eq:Vexpansion}) may be written concisely as the sum over
Legendre polynomials $P_{\ell}(\cos\psi)$, i.e., zonal harmonic or azimuthally independent
surface harmonic functions, as
\begin{equation}
V=-G\,\frac{M_{\rm s}}{a}\,\sum_{\ell=0}^{\infty}\left(\frac{R_{\rm p}}{a}\right)^{\ell} P_{\ell}\left(\cos\psi\right),
\label{eq:Vlegendre}
\end{equation}
\noindent
where the $\ell=2$ term of the expansion of $V$ is the dominant tidal term in (\ref{eq:Ushort}). The full
tidal potential $U$ including all orders becomes
\begin{equation}
U=-G\,\frac{M_{\rm s}}{a}\,\sum_{\ell=2}^{\infty}\left(\frac{R_{\rm p}}{a}\right)^{\ell} P_{\ell}\left(\cos\psi\right).
\label{eq:Usum}
\end{equation}
\noindent
While we will derive the tidal evolution equations in terms of an arbitrary order $\ell$,
Table~\ref{tab:legendre} lists the order $\ell$ of the expansion necessary for accurate reproduction of the
potential $V$ at small separations. At 2$R_{\rm p}$, the potential must be expanded to at least $\ell=6$,
requiring four additional, but manageable, terms in the expansion. This separation is convenient in terms
of tidal evolution as it is the contact limit of a binary system with two equal-size components and a
reasonable initial separation for the onset of tidal evolution in a newly formed binary system, regardless
of component size, especially for systems formed through primary spin-up and mass shedding~\citep{wals08yorp}.
Proceeding inward of 2$R_{\rm p}$ rapidly requires an unwieldy number of terms in the expansion
(e.g., twice as many additional terms are needed at 1.5$R_{\rm p}$).
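As an aside (not part of the original text), the required truncation order is easy to check numerically. The sketch below, with illustrative values only, compares the truncated Legendre series for the inverse separation against its closed-form generating function at the worst-case angle $\psi=0$:

```python
# Illustrative numerical check (not from the paper): how many Legendre
# terms are needed to reproduce 1/Delta at small separations.
import math

def legendre(l, x):
    # Bonnet recurrence: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
    p_prev, p = 1.0, x
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p = p, ((2*n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def rel_error(u, lmax, psi=0.0):
    # Truncated series sum_{l=0..lmax} u^l P_l(cos psi) versus the exact
    # generating function 1/sqrt(1 - 2 u cos psi + u^2); psi = 0 is the
    # worst case since every P_l(1) = 1.
    c = math.cos(psi)
    exact = 1.0 / math.sqrt(1.0 - 2.0 * u * c + u * u)
    series = sum(u**l * legendre(l, c) for l in range(lmax + 1))
    return abs(series - exact) / exact

u = 0.5  # R_p / a at a separation of 2 primary radii
for lmax in (2, 4, 6):
    print(f"l_max = {lmax}: relative error = {rel_error(u, lmax):.4f}")
```

At $a=2R_{\rm p}$ the worst-case error falls below 1% only once terms through $\ell=6$ are kept, while at $a=5R_{\rm p}$ the usual truncation at $\ell=2$ already suffices, consistent with the discussion above.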
\section{Roche Limit}
\label{sec:roche}
The well-known classical fluid Roche limit is located at $a=2.46R_{\rm p}$~\citep{chan69} for equal
density components, so that if one considers a secondary just outside the fluid limit, one must include
the Legendre polynomials of orders $\ell\le4$ in the expansion for the potential felt by the primary.
For solid, cohesionless\footnote{A cohesionless material has zero shear strength in the absence of
confining pressure. The interlocking of the constituent particles under pressure, however, can give the
material shear strength.} bodies (gravitational aggregates or so-called rubble piles) modeled as a dry
soil, the Roche limit falls approximately between $1.5R_{\rm p}$ and $2R_{\rm p}$~\citep{hols06,hols08,shar09}.
The cohesionless Roche limit is based upon a binary system that is not tidally evolving, but the secondary
remains stressed by its self-gravity, rotation (synchronized to the orbital period), and the difference in
gravity from its near, primary-facing side to its far side. \citet{hols06,hols08} illustrate that the mass
ratio of the components has a negligible effect on the Roche limit, but one would expect that allowing the
secondary to have a more rapid spin or allowing for higher-order tidal terms due to its proximity to the
primary will increase the internal stresses and push the Roche limit farther from the primary, though, as
noted by~\citet{shar09}, these issues have not been studied in detail.
With a modest amount of cohesion, the secondary may exist within the stated Roche limit~\citep{hols08}.
For the rough properties of a near-Earth binary of $\rho_{\rm p, s}=2$~g/cm$^{3}$ and $R_{\rm s}=100$~m, a
cohesion value of $<100$~Pa is enough to hold the secondary together at the surface of the
primary\footnote{The cohesion needed to prevent disruption scales as the square of both the density and
size of the secondary. Thus, for a main-belt binary with a $R_{\rm s}=10$~km, the necessary cohesion is of
order 10$^{6}$~Pa, similar to monolithic rock.}. For comparison, the surface material of comet Tempel 1
excavated by the Deep Impact mission projectile is estimated to have a shear strength of
$<65$~Pa~\citep{aher05} and an effective strength of 10$^{3}$~Pa~\citep{rich07}; fine-grained terrestrial
sand is found to have cohesion values up to 250~Pa~\citep{schellart00}. Therefore, it is not unreasonable
that in the tidal field of the primary, the secondary can stably exist at the very least within the fluid
Roche limit (even if cohesionless), if not also within the cohesionless Roche limit (with a cohesion
comparable to comet regolith or sand), justifying our later choice to work to order $\ell=6$ corresponding
to a separation of 2$R_{\rm p}$.
\section{External Potential of Arbitrary Order}
\label{sec:extpotl}
The tidal potential $U_{\ell}$ of arbitrary order $\ell\ge2$ felt by the primary, taken from (\ref{eq:Usum}),
may be written concisely as
\begin{equation}
U_\ell~=~-g_{\rm p}\,\zeta_{\ell, \rm p}\,P_{\ell}\left(\cos\psi\right),
\end{equation}
\noindent
where $g_{\rm p}=GM_{\rm p}/R_{\rm p}^2$ is the surface gravity of the primary and
\begin{equation}
\zeta_{\ell, \rm p}~=~\frac{M_{\rm s}}{M_{\rm p}}\left(\frac{R_{\rm p}}{a}\right)^{\ell+1}R_{\rm p}.
\label{eq:zeta}
\end{equation}
\noindent
The combination $\zeta_{\ell, \rm p}P_{\ell}\left(\cos\psi\right)$ is the equilibrium tide height, due
to the tidal potential of order $\ell$, that defines the equipotential surface about a primary that
is completely rigid (inflexible). Because the mass ratio $M_{\rm s}/M_{\rm p}\le1$ and we assume
$a\ge2R_{\rm p}$, the quantity $\zeta_{\ell, \rm p}/R_{\rm p}\le 1/8$ for all binary systems, and typically
$\zeta_{\ell, \rm p}/R_{\rm p}\ll1$.
For a body with realistic rigidity, the tidal potential $U_{\ell}$ physically deforms the surface of the
primary by a small distance $\lambda_{\ell, \rm p}R_{\rm p}S_{\ell}$ as a function of position on the
primary, where $\lambda_{\ell, \rm p}\ll1$ and $S_{\ell}$ is a surface harmonic function. \citet{darw79a}
and \citet{love27} lay the groundwork for showing that, in general, the deformation of a homogeneous
density, incompressible sphere
\begin{equation}
\lambda_{\ell, \rm p}\,R_{\rm p}\,S_{\ell}~=~-\,h_{\ell, \rm p}\,\frac{U_{\ell}}{g_{\rm p}}~=~h_{\ell,\,\rm p}\,\zeta_{\ell,\,\rm p}\,P_{\ell}\left(\cos\psi\right)
\label{eq:lambdadef}
\end{equation}
\noindent
is given in terms of the displacement Love number $h_{\ell, \rm p}$~\citep{munk60},
\begin{equation}
h_{\ell, \rm p}~=~\frac{2\ell+1}{2\left(\ell-1\right)}\frac{1}{1+\frac{\left(2\ell^2+4\ell+3\right)\mu_{\rm p}}{\ell g_{\rm p}\rho_{\rm p}R_{\rm p}}},
\label{eq:hlove}
\end{equation}
\noindent
introducing $\mu_{\rm p}$ as the rigidity or shear modulus of the primary\footnote{\citet{darw79a}
realized the correspondence between elastic and viscoelastic media and provides a generalized form for
the deformation of a viscous spheroid, a function equivalent to~(\ref{eq:lambdadef}) he calls $\sigma$,
that when applied to an elastic spheroid, in terms of rigidity rather than viscosity, is equivalent to
the expression found here.}. For bodies less than 200 km in radius, as all components of binary asteroid
systems are, the rigidity $\mu$ dominates the stress due to self-gravity
$g\rho R\sim G\rho^2R^2$~\citep{weid89}, even for rubble-pile structures (i.e., the model proposed
by~\citet{gold09}), such that the Love number $h_{\ell}\ll1$ for small bodies. With
$h_{\ell, \rm p}$ and $\zeta_{\ell, \rm p}/R_{\rm p}$ small, and noting from~(\ref{eq:lambdadef}) that
$\lambda_{\ell, \rm p}=h_{\ell, \rm p}\,\zeta_{\ell, \rm p}/R_{\rm p}$, the assumption of a small
deformation factor $\lambda_{\ell, \rm p}$ is justified.
Of particular interest is the external potential felt by the secondary now that the primary has
been deformed. It is this external potential that produces the tidal torque that transfers
angular momentum through the system. Here, we slightly alter our spherical coordinate system such
that $\theta$ now measures the angle from the axis of symmetry of the tidal bulge, as in~\citet{murr99},
such that the surface of the nearly spherical primary is now given by
\begin{equation}
R~=~R_{\rm p}\left(1+\sum_{\ell=2}^{\infty}\lambda_{\ell, \rm p} P_{\ell}\left(\cos\theta\right)\right).
\label{eq:shell}
\end{equation}
\noindent
The potential felt at a point external to the primary is the sum of the potential of a spherical
primary with radius $R_{\rm p}$ and that of the deformed shell. However, only that due to the deformed
shell, called the non-central potential by~\citet{murr99}, will contribute to the torque.
In Fig.~\ref{fig:sec4}, the reciprocal of the distance $\Delta$ between the external point
($r$, $\theta$, $\phi$) and a point on the surface of the primary
($r^{\prime}$, $\theta^{\prime}$, $\phi^{\prime}$) separated by an angle $\psi$, where
$r^{\prime}=R$ from~(\ref{eq:shell}), is
\begin{equation}
\frac{1}{\Delta}~=~\frac{1}{r}\sum_{\ell=0}^{\infty}\left(\frac{R_{\rm p}}{r}\right)^{\ell}P_{\ell}\left(\cos\psi\right)~+~O\left(\lambda_{\ell^{\prime}, \rm p}\right).
\label{eq:delta}
\end{equation}
\noindent
The use of $\ell^{\prime}$ denotes terms based upon the surface deformation rather than the expansion
of the distance between the points of interest. The non-central potential (per unit mass of the
object disturbed by the potential) due to the deformed shell with mass element
$\displaystyle{\rho_{\rm p}R_{\rm p}^3\sum_{\ell^{\prime}=2}^{\infty}\lambda_{\ell^{\prime}, \rm p}P_{\ell^{\prime}}\left(\cos\theta^{\prime}\right)d\left(\cos\theta^{\prime}\right)d\phi^{\prime}}$ is
\begin{equation}
U_{\rm nc}~=~-G\rho_{\rm p} R_{\rm p}^2\left(\frac{R_{\rm p}}{r}\right)\sum_{\ell^{\prime}=2}^{\infty}\sum_{\ell=0}^{\infty}\lambda_{\ell^{\prime}, \rm p}\left(\frac{R_{\rm p}}{r}\right)^{\ell}\int\int P_{\ell^{\prime}}\left(\cos\theta^{\prime}\right)P_{\ell}\left(\cos\psi\right)d\left(\cos\theta^{\prime}\right)d\phi^{\prime},
\end{equation}
\noindent
where the double integral goes over the surface of the primary. The integral of the product of
two surface harmonics like the Legendre polynomials over a surface is zero unless $\ell=\ell^{\prime}$
such that for a specific order $\ell\ge2$~\citep{macr67},
\begin{eqnarray}
U_{\ell, \rm nc} & = & -G\rho_{\rm p} R_{\rm p}^2\left(\frac{R_{\rm p}}{r}\right)\lambda_{\ell, \rm p}\,\times\,\frac{4\pi}{2\ell+1}\left(\frac{R_{\rm p}}{r}\right)^{\ell}P_{\ell}\left(\cos\theta\right)\nonumber\\
& = & -\frac{3}{2\ell+1}h_{\ell, \rm p}\zeta_{\ell, \rm p}g_{\rm p}\left(\frac{R_{\rm p}}{r}\right)^{\ell+1}P_{\ell}\left(\cos\theta\right).
\end{eqnarray}
\noindent
By defining the more familiar potential Love number
\begin{equation}
k_{\ell, \rm p}~=~\frac{3}{2\ell+1}h_{\ell, \rm p}~=~\frac{3}{2\left(\ell-1\right)}\frac{1}{1+\frac{\left(2\ell^2+4\ell+3\right)\mu_{\rm p}}{\ell g_{\rm p}\rho_{\rm p}R_{\rm p}}},
\label{eq:klove}
\end{equation}
\noindent
which is of a similar order as $h_{\ell, \rm p}$, the non-central potential is written in the form
\begin{equation}
U_{\ell, \rm nc}~=~-k_{\ell, \rm p}g_{\rm p}\zeta_{\ell, \rm p}\left(\frac{R_{\rm p}}{r}\right)^{\ell+1}P_{\ell}\left(\cos\theta\right)
\label{eq:Uexternal}
\end{equation}
\noindent
such that $U_{\ell, \rm nc}$ at the surface of the primary is simply $k_{\ell, \rm p}U_{\ell}$.
Because $\mu \gg g\rho R$ for small bodies, the Love number $k_{\ell, \rm p}$ may be approximated by
\begin{equation}
k_{\ell, \rm p}~\simeq~\frac{3}{2\left(\ell-1\right)}\,\frac{\ell}{2\ell^2+4\ell+3}\frac{g_{\rm p}\rho_{\rm p}R_{\rm p}}{\mu_{\rm p}}~=~\frac{2\pi}{\ell-1}\,\frac{\ell}{2\ell^2+4\ell+3}\frac{G\rho_{\rm p}^2 R_{\rm p}^2}{\mu_{\rm p}}.
\label{eq:kapprox}
\end{equation}
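To get a feel for the magnitudes involved, the approximation above can be evaluated for a hypothetical rubble-pile primary; all parameter values in this sketch are assumptions for illustration, not values from the text:

```python
# Illustrative magnitudes (all parameter values are assumptions) for the
# small-body potential Love number approximation k_l above.
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
rho = 2000.0    # bulk density [kg m^-3], assumed
R = 500.0       # body radius [m], assumed
mu = 1.0e7      # rigidity [Pa], assumed rubble-pile-like value

def k_love(l):
    # k_l ~ (2 pi / (l-1)) * l / (2 l^2 + 4 l + 3) * G rho^2 R^2 / mu
    return (2.0 * math.pi / (l - 1)) * l / (2.0*l*l + 4.0*l + 3.0) * \
        G * rho**2 * R**2 / mu

for l in range(2, 7):
    print(f"k_{l} ~ {k_love(l):.2e}")
# k_l << 1 for a small body, and the Love numbers decrease with order l.
```

For these assumed values, $k_\ell$ is of order $10^{-6}$ and shrinks with increasing $\ell$, consistent with the argument that terms of second and higher order in the Love number are negligible.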
\noindent
Taking the external point to be the position of the secondary orbiting at a distance $a$ from the
primary, the complete\footnote{Here, by complete we mean accounting for all orders $\ell$. We have,
however, limited the result to first order in the Love number $k_{\ell, \rm p}$ because terms of order
$\lambda_{\ell, \rm p}$ were ignored in (\ref{eq:delta}). These would have produced higher-order terms
in the Love number in the final form of the potential in (\ref{eq:Ufull}), but because we have argued
$\lambda_{\ell, \rm p}$ and $k_{\ell, \rm p}$ are both small quantities, terms of second and higher
order in the Love number are negligible.} non-central potential per unit secondary mass due to tides
raised on the primary is
\begin{eqnarray}
U_{\rm nc} & = & -g_{\rm p}\sum_{\ell=2}^{\infty}k_{\ell, \rm p}\zeta_{\ell, \rm p}\left(\frac{a}{R_{\rm p}}\right)^{-\left(\ell+1\right)}P_{\ell}\left(\cos\theta\right) \nonumber \\
& = & -\frac{GM_{\rm s}}{R_{\rm p}}\sum_{\ell=2}^{\infty}k_{\ell, \rm p}\left(\frac{a}{R_{\rm p}}\right)^{-2\left(\ell+1\right)}P_{\ell}\left(\cos\theta\right).
\label{eq:Ufull}
\end{eqnarray}
\noindent
The non-central potential drops off quickly with increasing separation as the separation to the sixth
power for $\ell=2$ and by an additional square of the separation for each successive order. The
$\theta$ term in the Legendre polynomial accounts for the angular separation between the external
point of interest and the tidal bulge of the primary. For the specific location of the secondary, we
define the angle $\delta$ as the geometric lag angle between the axis of symmetry of the tidal
bulge and the line connecting the centers of the two components.
\section{Tidal Dissipation Function $Q$}
\label{sec:Q}
In addition to the rigidity $\mu$, the response of a homogeneous, incompressible sphere to a disturbing
potential is characterized by the tidal dissipation function $Q$ defined by
\begin{equation}
Q^{-1}~=~\frac{1}{2\pi E^{*}}\oint \left(-\frac{dE}{dt}\right)dt,
\end{equation}
\noindent where $E^{*}$ is the maximum energy stored in the tidal distortion and the integral is the
energy dissipated over one cycle [see~\citet{gold63} or~\citet{efro09} for detailed discussions]. This
definition is akin to the quality factor in a damped, linear oscillator and does not depend on the
details of how the energy is dissipated. Friction in the response of the body to a tide-raising
potential plus the rotation of the body itself (at a spin rate $\omega$ compared to the mean motion $n$
in the mutual orbit about the center of mass of the system) lead to misalignment by the geometric lag
angle $\delta$.
The geometric lag relates to a phase lag by $\epsilon_{\ell m p q}=-m\,\delta\,{\rm sign}\left(\omega-n\right)$,
where the $\ell m p q$ notation follows~\citet{kaul64}, and we have implicitly assumed a single tidal
bulge as done by~\citet{gers55} and~\citet{macd64} by using a single positive geometric lag $\delta$
independent of the tidal frequencies\footnote{The definition of the phase lag~\citep{kaul64,efro09},
when one ignores changes in the periapse and node, is
$\epsilon_{\ell m p q}= \left[\left(\ell-2p+q\right)n-m\omega\right]\Delta t_{\ell m p q}$, where the
bracketed term is the tidal frequency and $\Delta t_{\ell m p q}$ is the positive time lag in the response
of the material to the tidal potential. In the potential expansion by~\citet{kaul64}, only terms satisfying
$\ell-2p=m$ and $q=0$ survive for mutual orbits that are circular and equatorial such that
$\epsilon_{\ell m p q}=-m\,|\omega-n|\,\Delta t_{\ell m p q}\,{\rm sign}\left(\omega-n\right)=-m\,\delta\,{\rm sign}\left(\omega-n\right)$,
assuming a constant time lag and a single (positive) value for geometric lag for all viable combinations
of ${\ell m p q}$.}. The tidal dissipation function $Q$, in turn, relates to the phase angle as
$Q^{-1}_{\ell m p q}=|\tan \epsilon_{\ell m p q}| \simeq |\epsilon_{\ell m p q}| + O(\epsilon_{\ell m p q}^{2})$~\citep{efro09}
provided energy dissipation is weak ($Q_{\ell m p q} \gg 1$). The absolute value of
$\epsilon_{\rm \ell m p q}$ is required on physical grounds to ensure that $Q_{\ell m p q}$ is positive.
Since the tidal dissipation function is related to the phase lag, a different $Q_{\ell m p q}$
technically applies to each tidal frequency. Compared to the dominant order $\ell=2$, where only the
${\ell m p q}=2200$ term survives in the setup of our problem,
\begin{equation}
Q_{\ell m p q}^{-1}=m\delta=\frac{m}{2}\,Q_{2200}^{-1}=\frac{m}{2}\,Q^{-1}
\label{eq:Q}
\end{equation}
\noindent
in general, where we define $Q \equiv Q_{2200}$ such that $Q_{\ell m p q}$ for any tidal frequency is
proportional to a single value of $Q$. This simple relation between $Q_{\ell m p q}$ and the $Q$ of the
dominant tidal frequency is a direct result of our assumption of a single geometric lag independent of
tidal frequency. Such a choice may not be the most realistic physical model\footnote{In our model, $Q$
varies inversely with the tidal frequency. \citet{efro09} argue in favor of a rheological model where
$Q$ scales to a positive fractional power of the tidal frequency (at least for terrestrial planets). It
is unclear what rheological model is proper for gravitational aggregates like binary asteroids.}, but
does allow for simpler mathematical manipulation. Because $Q$ is necessarily positive regardless of the
sign of the phase lag, we append ${\rm sign}\left(\omega-n\right)$ to our forthcoming equations, where
the spin rate $\omega$ relates to the tidally distorted component. If $\omega > n$, the bulge leads; if
$\omega < n$, the bulge lags behind.
\section{Tidal Torques on the Components}
\label{sec:torques}
The force on the secondary due to the distorted primary at order $\ell$ is $-M_{\rm s}\nabla U_{\ell, \rm nc}$,
and because we have restricted the problem to a circular, equatorial mutual orbit, the tidal bulge
remains in the orbit plane, and the sole component of the force is tangential to the mutual orbit.
Returning to the notation where $\psi$ measures the angle from the axis of symmetry of the tidal
bulge\footnote{In this notation, the tidal potential in~(\ref{eq:Usum}) deforms the shape of the component
according to~(\ref{eq:lambdadef}) and produces the external potential~(\ref{eq:Ufull}) all in terms of the
single angle $\psi$.}, the force at the location of the secondary is proportional to
$\displaystyle{-\left.\partial P_{\ell}/\partial\psi\right|_{\psi=\delta}}$ and pointed in the
$-\hat{\psi}$ direction. The value of $\delta$ is taken to be positive as stated in the previous section
such that, for $\delta$ small, the quantity $\displaystyle{-\left.\partial P_{\ell}/\partial\psi\right|_{\psi=\delta}}$
is positive and the primary bulge attracts the secondary. For a prograde mutual orbit with
$\omega_{\rm p}>n$, the primary bulge pulls the secondary ahead in the orbit; if $\omega_{\rm p}<n$, the
primary bulge retards the motion of the secondary (see Fig.~\ref{fig:sec6}). The resulting torque vector
acting upon the orbit of the secondary, which is located at position $\textbf{r}$ with respect to the
center of mass of the primary, is given by
\textbf{$\Gamma_{\ell, \rm p}$}$=\textbf{r}\times\left(-M_{\rm s}~\nabla U_{\ell, \rm nc}\right)$. Thus,
the torque vector, in general, is proportional to
$\displaystyle{-\left.\partial P_{\ell}/\partial\psi\right|_{\psi=\delta}\,\left(\hat{\psi}\times\hat{r}\right)}$.
As defined, the direction (sign) of $\hat{\psi}\times\hat{r}$ depends on whether the tidal bulge leads or
lags, and we indicate this in the magnitude of the torque via the term ${\rm sign}\left(\omega-n\right)$
such that the torque on the orbit of the secondary due to the $\ell^{\rm th}$-order deformation of the
primary is
\begin{eqnarray}
\Gamma_{\ell, \rm p} & = & -M_{\rm s}\frac{\partial U_{\ell, \rm nc}}{\partial \psi_{\rm p}}\nonumber\\
& = & k_{\ell, \rm p}\frac{GM_{\rm s}^2}{R_{\rm p}}\left(\frac{a}{R_{\rm p}}\right)^{-2\left(\ell+1\right)}\left(\left.-\frac{\partial P_{\ell}\left(\cos\psi_{\rm p}\right)}{\partial \psi_{\rm p}}\right|_{\psi_{\rm p}=\delta_{\rm p}}\right){\rm sign}\left(\omega_{\rm p}-n\right),
\label{eq:torque}
\end{eqnarray}
\noindent
where $\delta_{\rm p}$ is the geometric lag angle between the primary's tidal bulge and the line
of centers. A positive (negative) torque increases (decreases) the energy of the orbit at a rate
$\Gamma_{\rm p}\,n$. An equal and opposite torque alters the rotational energy of the primary at a rate
$-\Gamma_{\rm p} \omega_{\rm p}$ such that the total energy $E$ of the system is dissipated over time at
a rate $\dot{E}=-\Gamma_{\rm p}\left(\omega_{\rm p}-n\right)<0$ as heat inside the primary. Though energy
is dissipated, angular momentum is conserved due to the equal and opposite nature of the torques on the
orbit and the rotation of the primary. Conservation of angular momentum results in the evolution of the
mutual orbit and is discussed in the following section.
A similar torque arises from tides raised on the secondary. By the symmetry of motion about the center
of mass, the torque $\Gamma_{\ell, \rm s}$ is given by swapping the subscripts p and s in (\ref{eq:torque})
such that
\begin{eqnarray}
\Gamma_{\ell, \rm s} & = & k_{\ell, \rm s}\frac{GM_{\rm p}^2}{R_{\rm s}}\left(\frac{a}{R_{\rm s}}\right)^{-2\left(\ell+1\right)}\left(-\left.\frac{\partial P_{\ell}\left(\cos\psi_{\rm s}\right)}{\partial\psi_{\rm s}}\right|_{\psi_{\rm s}=\delta_{\rm s}}\right)\,{\rm sign}\left(\omega_{\rm s}-n\right)\label{eq:torques}\\
& = & k_{\ell, \rm p}\frac{GM_{\rm s}^2}{R_{\rm p}}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-3}\frac{\mu_{\rm p}}{\mu_{\rm s}}\left(\frac{a}{R_{\rm p}}\right)^{-2\left(\ell+1\right)}\left(-\left.\frac{\partial P_{\ell}\left(\cos\psi_{\rm s}\right)}{\partial\psi_{\rm s}}\right|_{\psi_{\rm s}=\delta_{\rm s}}\right)\,{\rm sign}\left(\omega_{\rm s}-n\right),\nonumber
\end{eqnarray}
\noindent
where $\delta_{\rm s}$ is the geometric lag angle between the tidal bulge of the secondary and the line
of centers. This torque changes the orbital energy at a rate $\Gamma_{\rm s}\,n$, and the equal and
opposite torque alters the rotational energy of the secondary at a rate $-\Gamma_{\rm s}\omega_{\rm s}$,
dissipating energy as heat in the secondary at a rate $\dot{E}=-\Gamma_{\rm s}\left(\omega_{\rm s}-n\right)$.
Torques on the primary and secondary weaken for higher orders of $\ell$ and increasing separations, as
expected, and do so in the same manner as the non-central potential in (\ref{eq:Ufull}). Once the
rotation rate of a component synchronizes with the mean motion of the mutual orbit, the associated torque
goes to zero due to the ${\rm sign}\left(\omega-n\right)$ term\footnote{If the mutual orbit were not
circular, a radial tide owing to the eccentricity would continue to act despite the synchronization of
the component spin rate to the mean motion.}. Note that we have ignored interactions between the tidal
bulges of the components as these will depend on the square (or higher powers) of the Love numbers, which
we have argued are negligible (see Footnote 5).
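The falloff of the torque with order $\ell$ can be illustrated numerically. In the sketch below every physical parameter is an assumed value for a hypothetical near-Earth binary, not a value taken from the text; the single lag angle follows $\delta = 1/(2Q)$ from the previous section:

```python
# Illustrative sketch (all parameters assumed): relative strength of the
# tidal torque on the orbit per order l for a binary at a = 2 R_p.
import math

G = 6.674e-11   # [m^3 kg^-1 s^-2]
rho = 2000.0    # component density [kg m^-3], assumed
R_p = 500.0     # primary radius [m], assumed
R_s = 150.0     # secondary radius [m], assumed
mu = 1.0e7      # primary rigidity [Pa], assumed
Q = 100.0       # tidal dissipation function, assumed
delta = 1.0 / (2.0 * Q)   # geometric lag, from Q^{-1} = 2 delta

M_s = rho * 4.0 / 3.0 * math.pi * R_s**3

def legendre(l, x):
    p_prev, p = 1.0, x
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p = p, ((2*n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def k_love(l):
    # small-body approximation of the potential Love number
    return (2.0*math.pi/(l - 1)) * l/(2.0*l*l + 4.0*l + 3.0) * G*rho**2*R_p**2/mu

def minus_dP_dpsi(l, psi, h=1e-6):
    # -dP_l(cos psi)/dpsi by central differences; positive for small psi > 0
    return -(legendre(l, math.cos(psi + h)) - legendre(l, math.cos(psi - h))) / (2.0*h)

def torque(l, a):
    # torque on the orbit from the l-th order deformation of the primary
    return (k_love(l) * G * M_s**2 / R_p *
            (a / R_p)**(-2*(l + 1)) * minus_dP_dpsi(l, delta))

a = 2.0 * R_p
for l in range(2, 7):
    print(f"Gamma_{l} / Gamma_2 = {torque(l, a) / torque(2, a):.3f}")
```

Each successive order is weaker, but at $a=2R_{\rm p}$ the $\ell=3$ term is still a few tens of percent of the $\ell=2$ term, underscoring why the close-orbit corrections matter.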
\section{Spin Rate and Semimajor Axis Evolution for Close Orbits}
\label{sec:eqns}
During tidal evolution, angular momentum is transferred between the spins of the components and the
mutual orbit. For simplicity, assume that the primary and secondary have spin axes parallel to the
normal of the mutual orbit plane and rotate in a prograde sense. Then, the torque on the distorted
primary alters its spin with time at a rate $\dot{\omega_{\rm p}}=-\Gamma_{\rm p}/I_{\rm p}$, where
$I_{\rm p}=\alpha_{\rm p}M_{\rm p}R_{\rm p}^2$ is the moment of inertia of the primary. The pre-factor
$\alpha$ is $2/5$ for a uniform density sphere, but can vary with the internal structure of the body,
and is left as a variable here such that the change in spin rate of the primary is
\begin{equation}
\dot{\omega}_{\ell, \rm p}~=~-\frac{k_{\ell, \rm p}}{\alpha_{\rm p}}\,\frac{\kappa^2}{1+\kappa}\left(\frac{a}{R_{\rm p}}\right)^{-2\ell+1}n^2\left(-\left.\frac{\partial P_{\ell}\left(\cos\psi_{\rm p}\right)}{\partial \psi_{\rm p}}\right|_{\psi_{\rm p}=\delta_{\rm p}}\right){\rm sign}\left(\omega_{\rm p}-n\right),
\label{eq:wp}
\end{equation}
\noindent
recalling that $-\partial P_{\ell}/\partial\psi\ge0$ for small angles and defining the mass ratio
$\kappa \equiv M_{\rm s}/M_{\rm p}=\left(\rho_{\rm s}/\rho_{\rm p}\right)\left(R_{\rm s}/R_{\rm p}\right)^3$.
Also note that $n^2$, which is proportional to $\left(a/R_{\rm p}\right)^{-3}$, was introduced via
Kepler's Third Law, $n^2a^3=G\left(M_{\rm p}+M_{\rm s}\right)$. For rapidly spinning primaries with
$\omega_{\rm p}>n$, the torque will slow the rotation.
To conserve angular momentum in the system, the change in spin angular momentum, given by the torque
$-\Gamma_{\ell, \rm p}$, plus the change in orbital angular momentum must be zero. The orbital
angular momentum of a circular mutual orbit, $M_{\rm p}M_{\rm s}/\left(M_{\rm p}+M_{\rm s}\right)\,na^{2}$,
changes with time as $\left(1/2\right) M_{\rm p}M_{\rm s}/\left(M_{\rm p}+M_{\rm s}\right)\,na\dot{a}$,
such that conservation requires
\begin{equation}
\left(\frac{\dot{a}}{R_{\rm p}}\right)_{\ell, \rm p}~=~2k_{\ell, \rm p}\,\kappa\left(\frac{a}{R_{\rm p}}\right)^{-2\ell}n\left.\left(-\frac{\partial P_{\ell}\left(\cos\psi_{\rm p}\right)}{\partial\psi_{\rm p}}\right|_{\psi_{\rm p}=\delta_{\rm p}}\right) {\rm sign}\left(\omega_{\rm p}-n\right)
\label{eq:adotp}
\end{equation}
\noindent
for each order $\ell$. For rapidly spinning primaries, the orbit will expand as angular momentum is
transferred from the spin of the primary to the mutual orbit and, so long as the geometric lag remains
small, higher orders will cause both more rapid despinning of the primary and faster expansion of the
mutual orbit than $\ell=2$ alone. A large secondary with $\kappa\sim1$ clearly causes the most rapid
tidal evolution. A small secondary with $\kappa\ll1$ will not cause the primary to despin appreciably
due to the $\kappa^2$-dependence of (\ref{eq:wp}), but the separation will evolve more readily as
(\ref{eq:adotp}) scales as $\kappa$.
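The contrast between the $\kappa^{2}$ and $\kappa$ prefactors can be made concrete with a short numerical sketch; it compares only the mass-ratio prefactors of (\ref{eq:wp}) and (\ref{eq:adotp}), holding every other factor fixed:

```python
# Compare the mass-ratio prefactors of the despinning rate, kappa^2/(1+kappa),
# and the orbit-expansion rate, kappa; their quotient kappa/(1+kappa) shows how
# a small secondary migrates the orbit without appreciably despinning the primary.
def despin_over_expand(kappa):
    return (kappa ** 2 / (1 + kappa)) / kappa  # = kappa/(1+kappa)

for kappa in (1.0, 0.1, 0.01):
    print(kappa, despin_over_expand(kappa))
```

For $\kappa=1$ the quotient is $1/2$, while for $\kappa=0.01$ it is roughly $0.01$: the orbit expands readily while the primary barely despins.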
One can derive the change in the semimajor axis in (\ref{eq:adotp}) by other methods, including the work
done on the orbit and Gauss's formulation of Lagrange's planetary equations. Setting the time derivative
of the total energy of the orbit, $-GM_{\rm p}M_{\rm s}/2a$, which is $GM_{\rm p}M_{\rm s}\dot{a}/2a^{2}$,
equal to the work done on the orbit, $\Gamma_{\ell, \rm p}\,n$, recovers (\ref{eq:adotp}). Using Gauss's
formulation for spherical bodies (see \citet{burn76} for a lucid derivation) and a circular mutual orbit,
\begin{equation}
\dot{a}_{\ell}~=~\frac{2}{n}\left(1+\kappa\right)T_{\ell},
\label{eq:gauss}
\end{equation}
\noindent
where $T_{\ell}$ is the tangential component of the disturbing force (per unit mass) from the previous
section, which is $\left(1/a\right)\partial U_{\ell, \rm nc}/\partial \psi~{\rm sign}\left(\omega_{\rm p}-n\right)$
with $U_{\ell, \rm nc}$ given by (\ref{eq:Uexternal}). The $\left(1+\kappa\right)$ term is not typically
present in the Gauss formulation, but is appended here to the disturbing function\footnote{Algebraically,
from the time rate of change of the orbital energy, $\displaystyle{\dot{a}=2a^{2}\dot{E}/GM_{\rm p}M_{\rm s}}$,
and the change in orbital energy is further related to the velocity of the secondary ${\bf \dot{r}}$ and the
disturbing force ${\bf F}=-M_{\rm s}\nabla U_{\rm nc}$ such that $\dot{E}={\bf \dot{r}}\cdot{\bf F}=naM_{\rm s}T$
for a circular mutual orbit. Replacing $\dot{E}$ by $naM_{\rm s}T$ and using Kepler's Third Law,
$n^{2}a^{3}=G\left(M_{\rm p}+M_{\rm s}\right)=GM_{\rm p}\left(1+\kappa\right)$, in the expression for $\dot{a}$
gives (\ref{eq:gauss}) for a specific order $\ell$. If $M_{\rm s}$ were ignored in Kepler's Third Law, the
more familiar form of Gauss's formulation would emerge: $\dot{a}=2T/n$.} due to the non-inertial nature of
the coordinate system centered on the primary~\citep{rubi73} and is necessary, as stated in~\citet{darw80},
because the primary ``must be reduced to rest.'' One can also argue the term is necessary to account for
the reaction of one body to the tidal action of the other~\citep{ferr08} as the perturbation is internal to
the binary system rather than an external element (e.g., drag force, third body). The $\left(1+\kappa\right)$
term is absent in the formulae of~\citet{kaul64}, which is reasonable if the secondary is of negligible mass,
but we wish to allow for an arbitrary mass ratio. Substitution of the disturbing force into (\ref{eq:gauss})
produces (\ref{eq:adotp}). By including the $\left(1+\kappa\right)$ term in Kaula's equation (38), evaluating
Kaula's $F$ and $G$ functions with zero inclination and eccentricity, and recalling that Kaula's
$\epsilon_{\ell m p q} = -m\,\delta\,{\rm sign}\left(\omega_{\rm p}-n\right)$ in our notation, we find our
evolution of the semimajor axis in (\ref{eq:adotp}) is a special case of Kaula's generalization\footnote{The
product of Kaula's $F_{\ell m p}$ and $G_{\ell p q}$ functions is non-zero for circular, equatorial orbits
only if $\ell-2p=m$ and $q=0$. The prefactors on each $\psi$ in the Legendre polynomials listed in
Table~\ref{tab:legendre} are the values of $m$ for each order $\ell$ that satisfy $\ell-2p=m$. Thus, the
cosine terms in the Legendre polynomials we list correspond to $\ell m p q$ of 2200, 3110, 3300, 4210, 4400,
5120, 5310, 5500, 6220, 6410, and 6600. This correspondence allows us to link our equations written in terms
of Legendre polynomials and a geometric lag to Kaula's equations written in terms of the phase lag
$\epsilon_{\ell m p q}$. While the combinations 2010, 4020, and 6030 satisfy $\ell-2p=m$, terms with $m=0$
cannot contribute to the tidal evolution of the system because, by definition, these terms do not produce a
phase lag. These three terms are responsible for the $\psi$-independent components of the Legendre polynomials
with $\ell=2,4,6$ that vanish upon differentiation with respect to $\psi$.}, as one would expect.
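The selection rule quoted in the footnote can be checked mechanically; a minimal sketch enumerating the $\ell m p q$ combinations with $\ell-2p=m$ and $q=0$ for $\ell\le6$:

```python
# Enumerate Kaula's (l, m, p, q) combinations surviving for a circular (q = 0),
# equatorial orbit, where the product F_{lmp} G_{lpq} is non-zero only if
# l - 2p = m. Terms with m = 0 carry no phase lag and cannot drive evolution.
def surviving_combinations(l_max=6):
    contributing, phase_free = [], []
    for l in range(2, l_max + 1):
        for p in range(0, l // 2 + 1):
            m = l - 2 * p
            code = 1000 * l + 100 * m + 10 * p  # digits l, m, p, q with q = 0
            (contributing if m > 0 else phase_free).append(code)
    return sorted(contributing), sorted(phase_free)

contributing, phase_free = surviving_combinations()
print(contributing)  # [2200, 3110, 3300, 4210, 4400, 5120, 5310, 5500, 6220, 6410, 6600]
print(phase_free)    # [2010, 4020, 6030]
```

The first list reproduces the eleven contributing terms named in the footnote; the second reproduces the three $m=0$ combinations that vanish upon differentiation.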
The tidal evolution of the secondary follows similarly. The torque on the distorted secondary alters its
spin with time at a rate $\dot{\omega}_{\rm s}=-\Gamma_{\rm s}/I_{\rm s}$,
\begin{eqnarray}
\dot{\omega}_{\ell, \rm s} & = & -\frac{k_{\ell, \rm s}}{\alpha_{\rm s}}\frac{1}{\kappa\left(1+\kappa\right)}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-1}\left(\frac{a}{R_{\rm p}}\right)^{-2\ell+1}n^2\left(-\left.\frac{\partial P_{\ell}\left(\cos\psi_{\rm s}\right)}{\partial \psi_{\rm s}}\right|_{\psi_{\rm s}=\delta_{\rm s}}\right){\rm sign}\left(\omega_{\rm s}-n\right)\label{eq:ws}\\
& = & -\frac{k_{\ell, \rm p}}{\alpha_{\rm p}}\frac{\kappa}{1+\kappa}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-5}\frac{\alpha_{\rm p}}{\alpha_{\rm s}}\,\frac{\mu_{\rm p}}{\mu_{\rm s}}\,\left(\frac{a}{R_{\rm p}}\right)^{-2\ell+1}n^2\,\left(-\left.\frac{\partial P_{\ell}\left(\cos\psi_{\rm s}\right)}{\partial \psi_{\rm s}}\right|_{\psi_{\rm s}=\delta_{\rm s}}\right){\rm sign}\left(\omega_{\rm s}-n\right),\nonumber
\end{eqnarray}
\noindent
and alters the semimajor axis of the mutual orbit at a rate of
\begin{equation}
\left(\frac{\dot{a}}{R_{\rm p}}\right)_{\ell, \rm s}~=~2k_{\ell, \rm p}\,\kappa\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-3}\,\frac{\mu_{\rm p}}{\mu_{\rm s}}\,\left(\frac{a}{R_{\rm p}}\right)^{-2\ell}n\,\left.\left(-\frac{\partial P_{\ell}\left(\cos\psi_{\rm s}\right)}{\partial\psi_{\rm s}}\right|_{\psi_{\rm s}=\delta_{\rm s}}\right) {\rm sign}\left(\omega_{\rm s}-n\right).
\label{eq:adots}
\end{equation}
\noindent
The Legendre polynomials in Table~\ref{tab:legendre} are written as sums of terms of the form $\cos\,m\psi$
where $m$ is an integer (see Footnote 11). Thus, the derivative
$\displaystyle{\left.\partial P_{\ell}/\partial\psi\right|_{\psi=\delta}}$ is a sum of terms of the form
$\sin\,m\delta$. For small geometric lag angles $(Q \gg 1)$,
$\displaystyle{\left.-\partial P_{\ell}/\partial\psi\right|_{\psi=\delta} \ge 0}$ and
$\sin\,m\delta \simeq m\delta$ such that
$\displaystyle{\left.-\partial P_{\ell}/\partial\psi\right|_{\psi=\delta}}$
$\displaystyle{\propto Q^{-1}}$. Because the derivative of a Legendre polynomial is proportional to $Q^{-1}$,
only the size ratio of the components and their material properties, in terms of their respective $\mu Q$
values, determine the relative strength of the torques and the relative contributions to the orbit expansion,
\begin{equation}
\left|\frac{\Gamma_{\ell, \rm s}}{\Gamma_{\ell, \rm p}}\right|~=~\left|\frac{\dot{a}_{\ell, \rm s}}{\dot{a}_{\ell, \rm p}}\right|~=~\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-3}\,\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}},
\label{eq:aratio}
\end{equation}
\noindent
with the relative contribution of the secondary decreasing at higher orders of $\ell$ and for smaller
secondaries. Note that the relative strength of the torques is independent of the mass and
density\footnote{However, the absolute strengths of the torques in (\ref{eq:torque}) and (\ref{eq:torques})
do depend on the masses and densities of the components.}. For classical $\ell=2$ tides on components with
similar material properties, the torque due to the distorted secondary is a factor of the size ratio
weaker than the torque due to the distorted primary. For each higher order in the expansion, the relative
strength of the torque due to the distorted secondary weakens by the square of the size ratio. The
changes in the spin rates compare as
\begin{equation}
\left|\frac{\dot{\omega}_{\ell, \rm s}}{\dot{\omega}_{\ell, \rm p}}\right|~=~\frac{1}{\kappa}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-5}\,\frac{\alpha_{\rm p}}{\alpha_{\rm s}}\,\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}~=~\frac{\rho_{\rm p}}{\rho_{\rm s}}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2(\ell-4)}\,\frac{\alpha_{\rm p}}{\alpha_{\rm s}}\,\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}.
\label{eq:wratio}
\end{equation}
\noindent
This differs from a generalization of Darwin's result [cf. \citet{darw79b}, p.~521] because we
have included the ratio of the Love numbers of the components. At the dominant orders, $\ell=2$ and $3$,
with similar densities, shapes, and material properties, the spin rate of the secondary changes faster
than the primary. However, interestingly, for $\ell=4$, the contributions to the changes in spin rates are
equal, and for orders $\ell>4$, the contribution to the change in spin rate of the primary is greater than
that of the secondary. As with the torques, the relative strength of the changes in spin rates weakens by
the square of the size ratio for each successive order $\ell$. For smaller secondaries, the changes in
spin rates are smaller than for similar mass components, and, for all cases, the process of changing the
spin of the primary is slower than for the secondary.
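As a sketch of the crossover at $\ell=4$ described above, (\ref{eq:wratio}) can be evaluated for components with equal densities, $\alpha$ values, and $\mu Q$ products, in which case the ratio reduces to $\left(R_{\rm s}/R_{\rm p}\right)^{2(\ell-4)}$; the size ratio below is an arbitrary illustrative value:

```python
# |d(omega_s)/dt / d(omega_p)/dt| from the spin-rate ratio for equal densities,
# alpha values, and muQ products, where it reduces to (Rs/Rp)^(2(l-4)):
# greater than 1 for l = 2, 3; exactly 1 at l = 4; less than 1 for l > 4.
def spin_rate_ratio(l, size_ratio):
    return size_ratio ** (2 * (l - 4))

for l in range(2, 7):
    print(l, spin_rate_ratio(l, 0.5))  # 0.5 is a hypothetical Rs/Rp
```

The $\ell=4$ value is unity for any size ratio, and each successive order weakens the secondary's relative contribution by the square of the size ratio, as stated above.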
Evaluating the Love number $k_{\ell, \rm p}$ in (\ref{eq:kapprox}) and $\partial P_{\ell}/\partial\psi_{\rm p}$
from Table~\ref{tab:legendre} explicitly for orders $\ell\le6$, assuming a small geometric lag angle
$\delta_{\rm p}$, and applying (\ref{eq:Q}), the spin of the primary changes as
\begin{eqnarray}
\dot{\omega}_{\rm p} & = & -\frac{8}{19}\frac{1}{\alpha_{\rm p}}\frac{\pi^2 G^2\rho_{\rm p}^{3}R_{\rm p}^2}{\mu_{\rm p}Q_{\rm p}}\,\kappa^2\left(\frac{a}{R_{\rm p}}\right)^{-6}\,{\rm sign}\left(\omega_{\rm p}-n\right)\nonumber\\
& & \quad \times\left[1+\frac{19}{22}\left(\frac{a}{R_{\rm p}}\right)^{-2}+\frac{380}{459}\left(\frac{a}{R_{\rm p}}\right)^{-4}+\frac{475}{584}\left(\frac{a}{R_{\rm p}}\right)^{-6}+\frac{133}{165}\left(\frac{a}{R_{\rm p}}\right)^{-8}\right],
\label{eq:wpall}
\end{eqnarray}
\noindent
where $n$ has been replaced with Kepler's Third Law to show the full dependence upon the separation
of the components $a/R_{\rm p}$. Using either (\ref{eq:ws}) or (\ref{eq:wratio}), the spin of the
secondary changes as
\begin{eqnarray}
\dot{\omega}_{\rm s} & = & -\frac{8}{19}\frac{1}{\alpha_{\rm s}}\frac{\pi^2 G^2\rho_{\rm p}^{3}R_{\rm p}^2}{\mu_{\rm s}Q_{\rm s}}\,\kappa\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{-1}\left(\frac{a}{R_{\rm p}}\right)^{-6}\,{\rm sign}\left(\omega_{\rm s}-n\right)\nonumber\\
& & \quad \times\left[1+\frac{19}{22}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^2\left(\frac{a}{R_{\rm p}}\right)^{-2}+\frac{380}{459}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^4\left(\frac{a}{R_{\rm p}}\right)^{-4}\right.\nonumber\\
& & \quad \quad \left.+\frac{475}{584}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^6\left(\frac{a}{R_{\rm p}}\right)^{-6}+\frac{133}{165}\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^8\left(\frac{a}{R_{\rm p}}\right)^{-8}\right].
\label{eq:wsall}
\end{eqnarray}
\noindent
Assuming similar densities for the components, the change in the spin rate of the primary scales as the
sixth power of the size ratio of the components ($\propto \kappa^{2}$), while the change in the spin rate
of the secondary scales only as the square of the size ratio at leading order, reinforcing the result of
(\ref{eq:wratio}) that the spin of the secondary evolves more rapidly than that of the primary, especially
for small size ratios.
For close orbits, the separation of the components changes as angular momentum is transferred to
or from the spins of the components such that the overall change in the orbital separation for
$\ell \le 6$ is the sum of (\ref{eq:adotp}) and (\ref{eq:adots}),
\begin{eqnarray}
\frac{\dot{a}}{R_{\rm p}} & = & \frac{8\sqrt{3}}{19}\frac{\pi^{3/2}G^{3/2}\rho_{\rm p}^{5/2}R_{\rm p}^2}{\mu_{\rm p}Q_{\rm p}}\,\kappa\left(1+\kappa\right)^{1/2}\left(\frac{a}{R_{\rm p}}\right)^{-11/2}\nonumber\\
& & \enspace \times\left[{\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right.\nonumber\\
& & \enspace +\frac{19}{22}\left(\frac{a}{R_{\rm p}}\right)^{-2}\left({\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^3\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right)\nonumber\\
& & \enspace +\frac{380}{459}\left(\frac{a}{R_{\rm p}}\right)^{-4}\left({\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^5\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right)\nonumber\\
& & \enspace +\frac{475}{584}\left(\frac{a}{R_{\rm p}}\right)^{-6}\left({\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^7\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right)\nonumber\\
& & \enspace \left.+\frac{133}{165}\left(\frac{a}{R_{\rm p}}\right)^{-8}\left({\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^9\frac{\mu_{\rm p}Q_{\rm p}}{\mu_{\rm s}Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right)\right].
\label{eq:adotboth}
\end{eqnarray}
\noindent
Inside the square brackets, having a secondary of negligible size ($R_{\rm s}/R_{\rm p}\rightarrow0$)
has the same effect as having a synchronous secondary ($\omega_{\rm s}=n$); both make the contribution
from the secondary vanish. Of course, if one considers the factor outside the square brackets, having a
secondary of negligible size makes the mass ratio $\kappa$ negligible, while having a synchronous secondary
does not directly affect $\kappa$. The change in the mean motion of the mutual orbit follows from Kepler's
Third Law and (\ref{eq:adotboth}) as
\begin{equation}
\frac{\dot{n}}{n}~=~-\frac{3}{2}\left(\frac{a}{R_{\rm p}}\right)^{-1}\left(\frac{\dot{a}}{R_{\rm p}}\right).
\label{eq:ndot}
\end{equation}
\noindent
Note that in the above equations (\ref{eq:wpall}--\ref{eq:ndot}), any difference in density between the
components is accounted for in the mass ratio $\kappa$; otherwise, only the size ratio of the components is
involved in the terms due to tides raised on the secondary. Obviously, the contribution of the secondary
is most important when the components are of similar size. Not only is the contribution of the secondary
weakened because of its smaller size, it should also be despun faster than the primary such that its
contribution turns off when $\omega_{\rm s}=n$ long before the primary does the same. Furthermore, each
equation has a strong inverse dependence on the separation of the components even at $\ell=2$, and while
the inclusion of higher-order terms will be strongest at small separations, the orbit of a typical
outwardly evolving system will expand to a wider separation rapidly.
\section{Effect of Close Orbit Expansion on Tidal Evolution}
Inclusion of higher-order terms for the changes in spin rates and semimajor axis in
(\ref{eq:wpall}--\ref{eq:adotboth}) speeds up the evolution of the system and decreases the tidal
timescales. Using up to order $\ell=6$ compared to $\ell=2$ results in the spin rates of the
components changing up to 28\% faster at 2$R_{\rm p}$, but falling off quickly with increasing
separation (Fig.~\ref{fig:wdot}) to less than 4\% at 5$R_{\rm p}$. The size ratio of the components
only affects $\dot{\omega}_{\rm s}$, where the higher-order terms are weaker for smaller secondaries.
Similarly, for the change in semimajor axis with time, assuming both components are causing the
separation to change in the same sense (${\rm sign}\left(\omega_{\rm p}-n\right)$ and
${\rm sign}\left(\omega_{\rm s}-n\right)$ have the same value), using up to order $\ell=6$
(Fig.~\ref{fig:adot}) results in a faster evolution by 21\% to 28\% at 2$R_{\rm p}$ and decreases
quickly with increasing separation. Unlike the changes in spin rates, the largest effect on the
evolution of the semimajor axis occurs when the size ratio is either unity (equal size) or negligible
or when the spin of the secondary has synchronized to the mean motion such that the tidal torque
on the secondary vanishes. The change in semimajor axis with time is least affected by the
higher-order terms for a size ratio of 0.53, with all other size ratios falling between these extremes.
According to (\ref{eq:ndot}) for the change in the mean motion with time, the value of $\dot{n}/n$
using higher-order terms compared to $\ell=2$ has the same form as the change in semimajor axis in
Fig.~\ref{fig:adot}.
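The percentages quoted above follow directly from the bracketed series in (\ref{eq:wpall}); a minimal numerical check for tides on the primary, whose bracket is independent of the size ratio:

```python
# Close-orbit enhancement factor for the spin-rate change of the primary:
# the bracketed series of (wpall), which equals 1 for classical l = 2 tides.
COEFFS = (19 / 22, 380 / 459, 475 / 584, 133 / 165)  # l = 3, 4, 5, 6 terms

def spin_correction(a_over_rp):
    return 1.0 + sum(c * a_over_rp ** (-2 * k) for k, c in enumerate(COEFFS, 1))

print(spin_correction(2.0))  # ~1.28: up to ~28% faster at 2 R_p
print(spin_correction(5.0))  # ~1.04: under 4% at 5 R_p
```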
The strengths of the contributions of the extra terms in the close-orbit correction to the change in
semimajor axis are listed in Table~\ref{tab:astrength}. At 2$R_{\rm p}$, higher-order terms with
$\ell \ge 3$ account for nearly 25\% of the change in semimajor axis with time. Although the $\ell=6$
term is necessary for accurate reproduction of the potential between the bodies to within 1\% at
2$R_{\rm p}$, it does not alter the change in semimajor axis with time at the 1\% level because of the
stronger dependence of (\ref{eq:adotp}) on separation compared to (\ref{eq:Vlegendre}). The net
contribution of the higher-order terms in Table~\ref{tab:astrength} decreases by roughly 5\% at each
value of the separation from Table~\ref{tab:legendre} with only the $\ell=3$ term having much
consequence beyond 3$R_{\rm p}$.
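As a rough stand-in for Table~\ref{tab:astrength}, the fractional contribution of the $\ell\ge3$ terms to (\ref{eq:adotboth}) can be estimated from the coefficients alone; this sketch retains only tides raised on the primary (for equal $\mu Q$ values, tides on the secondary scale each term similarly):

```python
# Fraction of da/dt contributed by the l >= 3 terms of (adotboth), keeping
# only tides raised on the primary; the share falls off quickly with separation.
COEFFS = (19 / 22, 380 / 459, 475 / 584, 133 / 165)  # l = 3, 4, 5, 6 terms

def higher_order_fraction(a_over_rp):
    extra = sum(c * a_over_rp ** (-2 * k) for k, c in enumerate(COEFFS, 1))
    return extra / (1.0 + extra)

for x in (2.0, 3.0, 4.0, 5.0):
    print(x, higher_order_fraction(x))
```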
The total change in the component spin rates as a function of separation, shown in Fig.~\ref{fig:wa},
is given by integration of the ratio of~(\ref{eq:wpall}) and~(\ref{eq:adotboth}) for the primary
and the ratio of~(\ref{eq:wsall}) and~(\ref{eq:adotboth}) for the secondary. Depending on the size
ratio of the components, the total change in the spin rate of the primary is enhanced by up to 6\% at
2$R_{\rm p}$ over using $\ell=2$ tides only, but not by more than a few percent at larger separations.
For the secondary, perhaps counter-intuitively, even though higher-order terms make its spin evolve more
rapidly with time (Fig.~\ref{fig:wdot}), its spin changes less with respect to the
separation than when using $\ell=2$ only; the deficit is as large as 22\% at 2$R_{\rm p}$ when
the size of the secondary is negligible. This is because for smaller secondaries, the effect of
higher-order terms on $\dot{\omega}_{\rm s}$ in~(\ref{eq:wsall}) is reduced, while the effect of
higher-order terms on $\dot{\omega}_{\rm p}$ is independent of the size ratio. Thus, for a rapidly
rotating primary, the higher-order terms transfer more angular momentum from the spin of the primary
to the orbit, expanding the separation faster than by $\ell=2$ tides alone and faster than the spin
rate of the secondary changes such that the net effect on $\Delta\omega_{\rm s}(a)$ is smaller.
Integration of~(\ref{eq:adotboth}) provides the separation as a function of time. For tidal
evolution from an initial separation of 2$R_{\rm p}$ to a final separation of 5$R_{\rm p}$
(Fig.~\ref{fig:at}), the close-orbit correction is strongest at the onset, expanding the
separation more rapidly than $\ell=2$ tides, but only by about 2\% over the same time interval.
The contributions from the higher-order terms lose strength over time as the separation increases,
resulting in a net extra expansion of the separation of only $\sim$1\% when using terms up to $\ell=6$
instead of $\ell=2$. From Figs.~\ref{fig:wa} and~\ref{fig:at}, the integrated effects of the
close-orbit correction are small, typically of order a few percent; the effects are more
noticeable in the instantaneous rates of change of the spin rates, separation, or mean motion
due to the rapid fall-off in strength of the higher-order terms with increasing separation and
how rapidly the system tidally evolves from small separations.
Rearrangement and integration of (\ref{eq:adotboth}) allows one to calculate the combination of the
material properties of the components $\mu Q$ (assuming $\mu_{\rm p}Q_{\rm p}=\mu_{\rm s}Q_{\rm s}$)
and the age of the binary $\Delta t$ based on measurable system parameters. For brevity, we retain
only terms due to tides raised on the primary giving
\begin{eqnarray}
\frac{\mu Q}{\Delta t} & = & \frac{8\sqrt{3}}{19}\pi^{3/2}G^{3/2}\rho_{\rm p}^{5/2}R_{\rm p}^{2}\,\kappa\left(1+\kappa\right)^{1/2}\nonumber\\
& & \times \left[\int_{2}^{a_{\rm f}/R_{\rm p}}\frac{x^{11/2}}{1+\frac{19}{22}x^{-2}+\frac{380}{459}x^{-4}+\frac{475}{584}x^{-6}+\frac{133}{165}x^{-8}}\,{\rm d}x\right]^{-1}
\label{eq:muq}
\end{eqnarray}
\noindent
with $x=a/R_{\rm p}$. Because both quantities on the left-hand side of (\ref{eq:muq}) are unknown, one
may either estimate the material properties by assuming binary ages~\citep{marg02s,marg03,tayl07dpstides},
estimate binary ages by assuming material properties~\citep{wals06,gold09}, or consider both
avenues~\citep{marc08ecc,marc08circ,tayl10material}. Furthermore, precisely because both quantities are
unknown, assuming a value for one directly sets the scale of the other: changing the assumed value by an
order of magnitude changes the calculated result by an order of magnitude. Thus, when one wishes to find
$\mu Q$, for instance, choosing an age for the binary injects a great source of uncertainty into the
calculation.
The close-orbit correction enhances the rate at which the separation changes such that, to provide
the same tidal evolution over the same timescale $\Delta t$, the product $\mu Q$ must increase to
compensate for the inclusion of the higher-order terms. For classical $\ell=2$ tides, the higher-order
terms in the denominator of the integrand in (\ref{eq:muq}) vanish such that the effect of including
terms up to $\ell=6$ alters $\mu Q$ according to
\begin{equation}
\frac{\mu Q_{\ell=6}}{\mu Q_{\ell=2}}~=~\frac{\int_{2}^{a_{\rm f}/R_{\rm p}}x^{11/2}\,{\rm d}x}{\int_{2}^{a_{\rm f}/R_{\rm p}}\frac{x^{11/2}}{1+\frac{19}{22}x^{-2}+\frac{380}{459}x^{-4}+\frac{475}{584}x^{-6}+\frac{133}{165}x^{-8}}\,{\rm d}x}
\label{eq:muqchange}
\end{equation}
\noindent
and is shown as a function of the final separation in Fig.~\ref{fig:muq}. Note that in
Fig.~\ref{fig:muq}, the contribution of the secondary is included in the numerical integration of
(\ref{eq:adotboth}) although it is not explicitly given in (\ref{eq:muqchange}) above. Evolution
from a close initial separation of 2$R_{\rm p}$ to a wide separation of 10$R_{\rm p}$ results in
only a $\sim$1\% increase in $\mu Q$ over the classical value for all size ratios. Thus, the basic
$\ell=2$ tidal mechanism is sufficient for well-separated binaries. On the other hand, if the final
separation is smaller, as is the case for most near-Earth binaries, the correction is larger,
increasing to 5\% for evolution from 2$R_{\rm p}$ to 5$R_{\rm p}$ and 15\% for evolution from
2$R_{\rm p}$ to 3$R_{\rm p}$. When making a coarse estimate of the material properties of the
system, taking the close orbit into account is not of paramount importance; classical tides will
easily provide an order-of-magnitude estimate of $\mu Q$ for even the closest of binary asteroids,
though the result will be slightly underestimated. Conversely, if higher-order terms are
included and $\mu Q$ is held fixed, the age of the binary must decrease by the same factor as in
Fig.~\ref{fig:muq}, meaning that $\ell=2$ tides provide an upper bound on ages for systems with a
given $\mu Q$ value.
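The ratio in (\ref{eq:muqchange}) is straightforward to evaluate numerically; a sketch using a simple composite Simpson rule (neglecting tides on the secondary, as in the equation as written) reproduces the quoted percentages:

```python
# Ratio of muQ computed with terms up to l = 6 to the classical l = 2 value,
# eq. (muqchange), for tidal evolution from 2 R_p out to a_final (in R_p).
COEFFS = (19 / 22, 380 / 459, 475 / 584, 133 / 165)  # l = 3..6 terms

def denom(x):
    return 1.0 + sum(c * x ** (-2 * k) for k, c in enumerate(COEFFS, 1))

def simpson(f, a, b, n=2000):  # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def muq_ratio(a_final):
    num = simpson(lambda x: x ** 5.5, 2.0, a_final)
    den = simpson(lambda x: x ** 5.5 / denom(x), 2.0, a_final)
    return num / den

for af in (3.0, 5.0, 10.0):
    print(af, muq_ratio(af))  # roughly 15%, 5%, and 1% increases
```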
The use of higher-order terms up to $\ell=6$ is sufficient for exploring the tidal evolution of
binary systems with separations greater than 2$R_{\rm p}$. Additional terms with $\ell>6$ make
inconsequential changes to tidal evolution at these separations as illustrated by the rapid fall-off
of the contributions of the higher-order terms beyond 2$R_{\rm p}$ in Table~\ref{tab:astrength}.
Moreover, terms with $\ell>6$ leave Figs.~\ref{fig:wdot}--\ref{fig:muq} unchanged, only having an
effect within 2$R_{\rm p}$. Thus, if one wishes to proceed inward of 2$R_{\rm p}$, simply using
orders up to $\ell=6$ is insufficient as higher-order terms gain importance the closer one
proceeds to the primary. Though we stated earlier that the number of terms required can rapidly
become unwieldy, one can approximate their strength. For an arbitrary order $\ell > 2$, the
term within the square brackets of (\ref{eq:adotboth}) is approximately
\begin{equation}
0.8\left(\frac{a}{R_{\rm p}}\right)^{-2\left(\ell-2\right)}\left({\rm sign}\left(\omega_{\rm p}-n\right)+\left(\frac{R_{\rm s}}{R_{\rm p}}\right)^{2\ell-3}\frac{\mu_{\rm p} Q_{\rm p}}{\mu_{\rm s} Q_{\rm s}}{\rm sign}\left(\omega_{\rm s}-n\right)\right),
\label{eq:xterm}
\end{equation}
\noindent
allowing additional terms to be included without explicit calculation of the Love numbers
$k_{\ell, \rm p}$ or manipulation of the Legendre polynomials. Similar terms follow for the
changes in spin rates. One must keep in mind that the approximation in~(\ref{eq:xterm}) is only
valid so long as the small angle approximation holds
$\left(\sin\,m\delta\simeq m\delta\propto Q^{-1}\right)$ with $m\le\ell$, which requires $Q>10$ to
retain 1\% accuracy at $m=6$ and larger $Q$ as $m$ increases\footnote{We have applied (\ref{eq:Q}) to
estimate the value of $Q$ required. In general, the small angle approximation holds to within 1\% for
$Q_{\ell m p q} \sim 4$ or greater.} (e.g., $Q>20$ for $m=10$). Also, having separations of
less than 2$R_{\rm p}$ requires smaller secondaries, since contact occurs at a separation of
$(1+R_{\rm s}/R_{\rm p})\,R_{\rm p}$, which reduces the contribution of the secondary due to dependencies
upon the size ratio, in addition to demanding consideration of the Roche limit for the system (see
Section~\ref{sec:roche}).
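The quality of the 0.8 prefactor in (\ref{eq:xterm}) can be judged directly against the exact coefficients in (\ref{eq:adotboth}):

```python
# Exact l = 3..6 coefficients in (adotboth) versus the 0.8 of the approximate
# term (xterm); all lie within about 8% of 0.8.
coeffs = {3: 19 / 22, 4: 380 / 459, 5: 475 / 584, 6: 133 / 165}
for l, c in coeffs.items():
    print(l, c, abs(c - 0.8) / 0.8)
```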
\section{Comparison to Measurement Errors}
Take, for example, 66391 (1999 KW4), the best-studied of the near-Earth binary
systems~\citep{ostr06,sche06s}. Even with exhaustive analysis of radar imagery, production of
three-dimensional shape models of both components, and investigation of the system dynamics, physical
parameters of the system are not known with extreme precision. The densities of the primary and
secondary components are known to approximately 12\% and 25\%, respectively. The uncertainty in the
density of the primary alone can cause an error of more than 30\% in $\dot{\omega}_{\rm p}$,
$\dot{\omega}_{\rm s}$, and $\dot{a}/R_{\rm p}$ according to (\ref{eq:wpall}--\ref{eq:adotboth}), more
than the close-orbit correction causes in Figs.~\ref{fig:wdot} and~\ref{fig:adot}. The higher
estimated density of the secondary in the 1999 KW4 system of 2.81 g/cm$^{3}$, compared to 1.97 g/cm$^{3}$
for the primary, directly affects the mass ratio $\kappa$ applied in the equations of tidal evolution as
one typically assumes similar densities for the components. Ignoring the density uncertainties, this
difference in component densities alone causes a 43\% change in $\kappa$ that, in turn, affects
$\dot{\omega}_{\rm p}$ by a factor of two and $\dot{\omega}_{\rm s}$ and $\dot{a}/R_{\rm p}$ by
approximately 40\% as well, again, a larger effect than the close-orbit correction to tidal evolution.
The calculated value of $\mu Q$ in~(\ref{eq:muq}) is affected by density and mass ratio uncertainties
in the same way as $\dot{a}/R_{\rm p}$. Furthermore, uncertainties in densities and the dependence
of the mass ratio $\kappa$ on density differences between the components apply at all separations
unlike the close-orbit correction, which falls off quickly with increasing separation.
One must also consider the effect of the initial separation of the components at the onset of tidal
evolution, a property that is not known for individual systems, but can be estimated from simulations
of binary formation mechanisms [e.g.,~\citet{wals06,wals08yorp}] and given a lower bound by
the contact limit at $\left(1+R_{\rm s}/R_{\rm p}\right)\,R_{\rm p}$. Assuming evolution over the
same timescale, if the system had an initial separation $a_{\rm i}$ instead of 2$R_{\rm p}$, the
effect on $\mu Q$ calculated with classical $\ell=2$ tides raised only on the primary is
\begin{equation}
\frac{\mu Q_{\rm i}}{\mu Q_{2}}~=~\frac{1-\left(\frac{2}{a_{\rm f}/R_{\rm p}}\right)^{13/2}}{1-\left(a_{\rm i}/a_{\rm f}\right)^{13/2}}.
\label{eq:initsep}
\end{equation}
\noindent
For a final separation $a_{\rm f}$ from 3$R_{\rm p}$ to 10$R_{\rm p}$, unless the actual initial
separation $a_{\rm i}$ is within 10\% of the final separation ($>$0.9$a_{\rm f}$), the value of $\mu Q$
is affected by less than a factor of two by assuming an initial separation of 2$R_{\rm p}$. Using up
to $\ell=6$ and allowing for tides raised on the secondary with any size ratio do not cause a significant
difference in this result.
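A quick numerical reading of (\ref{eq:initsep}) for an illustrative final separation:

```python
# Sensitivity of muQ to the assumed initial separation, eq. (initsep), for
# classical l = 2 tides raised on the primary only; a_f = 5 R_p is illustrative.
def muq_initial_ratio(a_i, a_f):  # separations in units of R_p
    return (1 - (2 / a_f) ** 6.5) / (1 - (a_i / a_f) ** 6.5)

a_f = 5.0
print(muq_initial_ratio(3.0, a_f))        # only a few percent change
print(muq_initial_ratio(0.9 * a_f, a_f))  # roughly a factor of 2 at 0.9 a_f
```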
A similar result is found for the dependence on the final separation of the components, which one
typically takes to be the current separation. If $a_{\rm f}$ is the final (current) separation,
then changing the separation to $a_{\rm f^{\,\prime}}$ due to, say, a measurement error causes the
calculated $\mu Q$ value for tidal evolution from 2$R_{\rm p}$ to change as
\begin{equation}
\frac{\mu Q_{\rm f^{\,\prime}}}{\mu Q_{\rm f}}~=~\frac{1-\left(\frac{2}{a_{\rm f}/R_{\rm p}}\right)^{13/2}}{\left(a_{\rm f^{\,\prime}}/a_{\rm f}\right)^{13/2}-\left(\frac{2}{a_{\rm f}/R_{\rm p}}\right)^{13/2}}.
\label{eq:finalsep}
\end{equation}
\noindent
We find $\mu Q$ is affected by less than a factor of two if the final (current) separation is known
within 10\%. From the dependence on the initial and final separations, it is clear that the tidal
evolution near the final separation dominates over the early evolution where the close-orbit correction
is necessary. In fact, if instead of calculating $\mu Q$, one considers the time taken to tidally
evolve to a final separation $a_{\rm f} \ge 4R_{\rm p}$ (by assuming a value of $\mu Q$ instead of an age),
the evolution of the separation from 0.9$a_{\rm f}$ to $a_{\rm f}$ takes roughly the same amount of time
as the evolution from $a_{\rm i}\le 2R_{\rm p}$ to 0.9$a_{\rm f}$. Thus, precisely when the close-orbit
correction is most prominent is also when the system requires the least amount of time to evolve, which
causes the mild effect of the close-orbit correction found in Figs.~\ref{fig:at} and~\ref{fig:muq}.
Returning to a concrete example, for the 1999 KW4 system, using the equivalent spherical radius of
the primary shape model, the separation of the components $a/R_{\rm p}$ is known to 3\% as
3.87$\pm$0.12~\citep{ostr06}. By~(\ref{eq:finalsep}), this small uncertainty can result in a roughly
20\% error in the calculated $\mu Q$, more than twice the effect of the close-orbit correction in
Fig.~\ref{fig:muq} at 3.87$R_{\rm p}$. Together with the dependence of the $\mu Q$ calculation on
the density values for the components, the accuracy of measurements of physical parameters in the
1999 KW4 system is more important than accounting for the proximity of the components to one another.
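Plugging the quoted 1999 KW4 numbers into Eq.~(\ref{eq:finalsep}) reproduces this estimate; the sketch below (function name is ours) shifts the measured separation by $\pm$3\% and recovers a roughly 20\% change in the inferred $\mu Q$:

```python
# Numerical check of Eq. (finalsep) using the 1999 KW4 separation
# a_f = 3.87 R_p, known to about 3% (Ostro et al. 2006).
def muq_ratio_final(a_f_over_Rp, a_fp_over_af):
    """mu*Q_f' / mu*Q_f when the final separation is rescaled by a_fp_over_af."""
    base = (2.0 / a_f_over_Rp) ** 6.5        # exponent 13/2
    return (1.0 - base) / (a_fp_over_af ** 6.5 - base)

# A 3% error in the separation shifts the inferred mu*Q by roughly 20%.
for frac in (0.97, 1.03):
    assert 0.15 < abs(muq_ratio_final(3.87, frac) - 1.0) < 0.25
```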
\section{Discussion}
\label{sec:disc}
We have derived the equations of tidal evolution to arbitrary order in the Legendre polynomial expansion
of the separation between two spherical bodies in a circular and equatorial mutual orbit allowing for
accurate representation of evolution within five primary radii. Equations written in terms of the Love
number $k_{\ell}$ are applicable to any binary system, while equations where the Love number has been
evaluated have assumed the bodies involved have rigidities that dominate their self-gravitational stress
(characteristic of bodies less than roughly 200~km in radius). Because higher-order terms cause tidal
evolution to proceed faster, choosing to ignore them produces upper limits on tidal evolution timescales
and lower limits on material properties in terms of the product of rigidity and the tidal dissipation
function. However, we have shown that the correction for close orbits has only a minor integrated effect
on outward tidal evolution and the calculation of material properties, comparable to or less than the
effect of uncertainties in measurable properties such as density, mass ratio, and semimajor axis (scaled
to the radius of the primary component). In the case of outward evolution, the binary system evolves
rapidly through the range of separations where the close-orbit correction is strongest, so one can safely
ignore the correction to obtain order-of-magnitude estimates of timescales and material properties using
the classical equations for tidal evolution. Accounting for higher orders is more applicable to studying,
famously in the case of Phobos, observed secular accelerations and the infall of a secondary to the surface
of its primary where the higher-order terms instead gain strength.
Though we have presented the expansion of the gravitational potential and the resulting equations of tidal
evolution in the context of two asteroids in mutual orbit, the essence of this work could be generalized
for use in the determination of the Roche limit and the study of close flybys. The use of a higher-order
expansion of the gravitational potential in terms of Legendre polynomials is warranted whenever the
separation of two bodies is within five times the radius of one of the bodies\footnote{The potential felt
by the primary requires higher-order terms with $\ell>2$ if the separation is less than 5$R_{\rm p}$;
the potential felt by the secondary requires higher-order terms with $\ell>2$ if the separation is less than
5$R_{\rm s}$.} (see Table~\ref{tab:legendre}). Historically, in the context of disruption of a body at the
Roche limit or due to a close flyby of a larger body~\citep{srid92,rich98,hols06,hols08,shar06,shar09},
stresses are only considered in the much smaller secondary while the primary is assumed to be rigid. For
small secondaries, the cohesionless Roche limit of 1.5--2$R_{\rm p}$ is much larger than 5$R_{\rm s}$ such
that higher-order terms in the potential expansion are not necessary. However, as larger secondaries are
considered ($R_{\rm s}/R_{\rm p}>0.1$), higher-order terms in the gravitational potential will further stress
the secondary near the Roche limit. Also, with components of increasingly similar size, the assumption of a
rigid primary is not appropriate; the tidal stress on the primary will deform it from a spherical shape and
produce an external potential as in Section~\ref{sec:extpotl} that will in turn further stress the secondary.
If the components are not spin-locked, tidal torques will also play a role in stressing the secondary. Thus,
if evaluating the Roche limit for components of similar size and/or components that are not spin-locked,
one must consider the description presented here. For disruption during a close flyby, or simply modification
of the spin state of the passing body~\citep{sche01,sche00,sche04}, one must consider the proximity of the
flyby in terms of the expansion of the gravitational potential and whether or not tidal bulges can be raised
on the components that would produce torques capable of further altering the spin state of either component.
It is also important to remember that the higher-order theory presented here has implicitly assumed initially
spherical bodies. Extension of this work from spheres to ellipsoids or to arbitrary shapes would affect the
mutual gravitational potential, linear and angular momentum balance, and orbital equations as described
by~\citet{sche09} and~\citet{shar10}. Once the shape is made nonspherical in the absence of a tidal
potential, the system is subject to a ``direct'' torque that naturally occurs from the changing gravitational
pull felt by the orbiting component due to the nonspherical shape of the other component. Accounting for the
tidal potential introduces the ``indirect'' torque described here due to the deformation of one component
by the gravitational presence of the other component. Because the amplitude of the tidal bulge on asteroids,
the parameter $\lambda$ in this work, can be very small due to its direct dependence on the ratio of
self-gravitational stress to rigidity, its direct dependence on the mass ratio, and its inverse dependence
on the separation raised to the third (or higher) power, natural deviations from a spherical shape may exceed
the amplitude of the tidal bulge. However, one must recall that the direct torques due to a nonspherical
shape will change direction as the body rotates under the orbiting component, tending to cancel the pre- and
post-encounter effects of the torque, as opposed to the indirect torque, which acts in a consistent direction so
long as the bulge always leads or lags the orbiting component. It may be important to consider direct torques
due to natural departures from a spherical shape via the use of shape models: oblate or prolate spheroids,
triaxial ellipsoids, or vertex models such as those made for the components of the 1999 KW4 binary system and
other asteroids.
\section*{Acknowledgements}
The authors are indebted to the two referees whose detailed reviews and insightful suggestions improved the
clarity and quality of the manuscript. The authors are especially grateful to Michael Efroimsky for many
discussions on the finer points of tidal theory and celestial mechanics. This work was supported by NASA
Planetary Astronomy grants NNG04GN31G and NNX07AK68G to Jean-Luc Margot.
\bibliographystyle{icarus}
Q: Override variable in subclass in Kotlin

I have this super class:
abstract class Node(rawId: String) {
open val id: String
init {
id = Base64.toBase64(this.javaClass.simpleName + "_" + rawId)
}
}
And this subclass that extends Node:
data class Vendor (
override var id: String,
val name: String,
val description: String,
val products: List<Product>?
): Node(id)
When I initialize the Vendor class like this:
new Vendor(vendor.getId(), vendor.getGroup().getName(), description, products);
I can see the init block in Node fire as expected. However, when I get the id from the Vendor object, it is the rawId and not the encoded id.
So I am a bit confused about the initialization order/logic in Kotlin classes. I want the encoding code to be common across all subclasses. Is there a better way to do it?
A: The problem is that you are overriding the id field in the subclass, so it always retains the rawId value.
Since the base class already has an id field that must hold the encoded value, you don't need to override it in the subclass. Pass the rawId to the Node class from your Vendor class and let the base class take care of initializing the id value. You can write your abstract class as
abstract class Node(rawId: String) {
val id: String = Base64.toBase64(this.javaClass.simpleName + "_" + rawId)
}
and then define your subclass as
data class Vendor (
val rawId: String,
val name: String,
val description: String,
val products: List<Product>?
): Node(rawId)
Then with
Vendor newVendor = new Vendor(vendor.getId(), vendor.getGroup().getName(), description, products);
newVendor.getId() // would be the encoded id as you expect
since Vendor is a subclass of Node, the id field is also available to the Vendor object with the encoded value.
The love of a family is life's greatest blessing – come and get a Family Photo Engraved Necklace to make your memories last a lifetime! Upload your best photo and we will engrave it on a disc pendant with fancy details; you can also add up to 60 characters on the back of the pendant. This amazing necklace is made of sterling silver; you can't miss it!
Sydney Royal Competitions
Key dates and Schedule
Honouring the traditions of local agricultural shows and produce markets around Australia, the Sydney Royal Fine Food Show celebrates heritage, quality, provenance and innovation.
Why should I enter?
Meet the Judges, supporters and find the answers to frequently asked questions about this competition here.
Sydney Royal Competitions encourage and reward excellence, with the aim to support a viable and prosperous future for our agricultural communities.
President's Medal
Since 2006, the Royal Agricultural Society of NSW has presented the ultimate award in agricultural excellence, The President's Medal. It is unique in that the Medal is not awarded solely on taste; the Medal recognises a producer's overall financial, social and environmental integrity through the entire production cycle from gate to plate. It draws from Sydney Royal Champions throughout the year, examining and celebrating truly inspirational, innovative agricultural food and beverage achievers.
\section{Introduction}
Mastermind is a classic board game invented in 1970 by the Israeli telecommunications expert Mordechai Meirowitz, and its study can be traced back to the early work of Erd\H{o}s and R\'enyi \cite{erdos1963} in 1963.
In this game, there are two players. The first player, called {\it codemaker}, privately creates a secret consisting of a sequence of colors from a given set. The goal of the
second player, called {\it codebreaker}, is to determine this secret in as few guesses as possible.
After each guess, the codemaker provides a certain number of black and white pegs indicating
how close the guess is to the real secret. The game is over when the codebreaker has made a guess identical to the secret.
As mentioned in \cite{Anders2020}, before the commercial board game version of Mastermind was released in 1971, variations of this game have been played earlier under other names, such as the
pen-and-paper based games of Bulls and Cows, and Jotto. The game has been played on
TV as a game show in multiple countries under the name of Lingo. Recently, a similar
web-based game has gained much attention under the name of Wordle.
From the 1960s up to now,
Mastermind has attracted a lot of attention, not only from the public as a popular game, but also from the academic community as a scientific issue, especially in the fields of mathematics and computer science (e.g., a partial list of references \cite{erdos1963,knuth1976,chvatal1983,Goodrich2009black,doerr2016,Jiang2019a}). Mastermind has been shown to have a close relation to information theory and graph theory \cite{Metric2007,Jiang2019a}. Also, it has been used to study other problems, such as serving as a benchmark problem for intelligent algorithms (genetic and evolutionary algorithms) \cite{Kalisker2003}, understanding the intrinsic difficulty of a problem for heuristics \cite{Droste2006}, simulating pair attacks on genomic data \cite{Goodrich2009a}, and cracking bank ciphers \cite{Focardi2010}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.35\textwidth]{mastermind}
\caption{Mastermind game with four pegs and six colors. The image comes from Ref.\cite{doerr2016} for academic purposes.}
\label{figMastermind}
\end{figure}
In the original commercial version, as shown in Figure \ref{figMastermind}, there are four pegs (positions), each of which could be selected from a set of six colors.
The first player secretly chooses a color combination of four pegs. The goal of the second player is to identify the secret in as few guesses as possible. In each round, he submits an arbitrary color combination of length $4$ to the first player and receives two numbers (i.e., $BW_s(x)$, formally defined later) indicating how similar the guess is to the secret. In 1977, Knuth \cite{knuth1976} proved that $5$ queries are necessary and sufficient for a deterministic algorithm to identify the secret.
In 1983, Chv{\'a}tal \cite{chvatal1983} first studied the generalized version of Mastermind, i.e., $n$ positions and $k$ colors, which will be the topic of this paper. In the following, we first give a formal definition of the generalized version.
\subsection{Mastermind}
Let $[k]=\{0,1,\dots, k-1\}$. The Mastermind game with $n$ positions and $k$ colors is formally described as follows. At the start of the game, the codemaker chooses a secret string $s\in [k]^n$. In each round, the codebreaker guesses a string $x\in[k]^n$ and the codemaker replies with $B_s(x)$ or $BW_s(x)$. The codebreaker ``wins'' the game if he gets the secret string $s$, with the goal being to win the game by using as few queries to $B_s(x)$ or $BW_s(x)$ as possible. When we mention ``complexity'' in this paper, it always means the number of queries used by the codebreaker.
\begin{itemize}
\item[(a)] \textbf{Black-peg query}: A black-peg query means an invocation to the function $B_s$ associated with $s\in [k]^n$ that returns $B_s(x) =|\{i \in \{1,2,\dots, n\}: s_i = x_i \}|$ for any $x\in [k]^n$ indicating the number of positions where $s$ and $x$ coincide.
\item[(b)] \textbf{Black-white-peg query}: Similarly, a black-white-peg query means an invocation to the function $BW_s$ that returns $BW_s(x)=\{B_s(x),W_s(x)\}$ for any $x\in [k]^n$, with $ W_s(x) = \max_{\sigma \in P_n} |\{i \in \{1,2,\dots, n\}: s_i = x_{\sigma(i)} \}| -B_s(x)$ indicating the number of right colors but
being in the wrong position, where $P_n$ denotes the set of all permutations of the set $\{1,2,\dots,n\}$.
\end{itemize}
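To make the two query types concrete, here is a classical reference implementation (our own code, not part of any algorithm in this paper); it uses the standard identity that $B_s(x)+W_s(x)$ equals the sum over colors $c$ of the minimum of the multiplicities of $c$ in $s$ and in $x$, which avoids maximizing over permutations:

```python
from collections import Counter

def black(s, x):
    """Black-peg query B_s(x): number of positions where s and x agree."""
    return sum(si == xi for si, xi in zip(s, x))

def black_white(s, x):
    """Black-white-peg query BW_s(x) = (B_s(x), W_s(x))."""
    cs, cx = Counter(s), Counter(x)
    # B + W equals the total color overlap, counted with multiplicity.
    overlap = sum(min(cs[c], cx[c]) for c in cs)
    b = black(s, x)
    return b, overlap - b

assert black_white([0, 0, 1, 1], [0, 1, 1, 2]) == (2, 1)
```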
Mastermind has sparked a flurry of research, with scholars studying different variants of Mastermind. The components of Mastermind variants are as follows\cite{chvatal1983,Berger2018}:
\begin{itemize}
\item {\bf The color number $k$ and the position number $n$.} For example, the original version considered by Knuth \cite{knuth1976} has $k=6$ and $n=4$. When $k=2$, the problem reduces to identifying a binary string. The degree to which people understand the complexity of Mastermind depends on the relationship between $k$ and $n$. Thus, the following cases were usually considered separately in the literature: $k=n$, $k<n^{1-\epsilon}$ with $\epsilon > 0$, and $k>n$. On the contrary, we will show that all these cases can be addressed in a uniform way in the quantum situation.
\item {\bf The types of query information.} When only the black-peg query $B_s(x)$ is allowed, the game is called black-peg Mastermind. When the black-white-peg query $BW_s(x)$ is allowed, it is called black-white-peg Mastermind. From the definitions, it can be seen that the black-white-peg query offers more information than the black-peg query, which has been verified in classical computing, since black-white-peg Mastermind has a lower complexity than the corresponding black-peg version as shown in \cite{doerr2016,Anders2020,Anders2022}. Surprisingly, we will show that the capabilities of the black-white-peg query and the black-peg query are the same in the quantum situation.
\item {\bf The query strategy.}
Depending on the strategy of how an algorithm makes a query, algorithms can be divided into two kinds: adaptive and non-adaptive. In the adaptive strategy, the queries are made sequentially one by one, and the next query can depend on the previous queries and the answers. In the non-adaptive strategy, all the query strings must be supplied in parallel at once, and then the secret is determined according to the returned answers without submitting any additional queries. Mastermind in the non-adaptive case was also called static Mastermind \cite{Goddard2003}.
\item {\bf Whether errors are allowed.} Deterministic algorithms output results without errors. Randomized (probabilistic) and quantum algorithms usually output results with bound error. Sometimes, one can de-randomize a randomized algorithm, obtaining a deterministic one. Also, {\it exact } quantum algorithms that output results with certainty have received much attention. For example, the Deutsch-Jozsa algorithm \cite{deutsch1992rapid} and the Bernstein-Vazirani algorithm\cite{Bernstein1997} are all exact. Simon's algorithm can also be improved to be exact \cite{BrassardH97}. In this paper, the quantum algorithms constructed for Mastermind are exact.
\item {\bf Whether repeated colors are allowed.} {Unless otherwise specified, the Mastermind game we are talking about in this paper allows color repetition, that is, both $s$ and $x$ are allowed to have the same color in different positions. However, it is worth mentioning that there are some papers considering Mastermind without color repetition \cite{Ouali2018,Glazik2021,Larcher2022}.}
\end{itemize}
\subsection{Our Results}
While there has been a long and interesting exploration of classical strategies for playing Mastermind, a natural question is:
What is the quantum complexity of Mastermind? Furthermore, can we construct quantum algorithms for playing Mastermind more efficiently?
In this paper, we conduct a systematic study of the above problems, obtaining a full characterization of the quantum complexity and optimal quantum algorithms for Mastermind in both non-adaptive and adaptive settings. Our results (indicated by the word ``Quantum'') and a comparison with the latest results in classical computing are summarized in Tables~\ref{tableNonadaptive} and~\ref{tableAdaptive}.
\begin{table}[htb]
\setlength{\belowcaptionskip}{10pt}
\begin{center}
\caption{Results for black-peg Mastermind in the non-adaptive setting. It is worth pointing out that the quantum lower bound $\Omega(k)$ holds for both the black-peg and black-white-peg Mastermind, and the quantum algorithm constructed by us achieving this bound uses only black-peg queries. The classical bound for $k\leq n$ also holds for the two types of queries.}
\label{tableNonadaptive}
\begin{tabular}{|c|c|c|}
\hline \textbf{} & $ k \leq n $ & \textbf{$k > n$} \\
\hline \makecell{ Deterministic \\ Randomized } & \makecell{$\Theta(\frac{n \log k }{\max\{\log(n/k), 1\}})$ \\ \cite{chvatal1983,doerr2016} } &
\makecell{$\Omega(n\log k) \sim O(k \log k)$ \\ \cite{doerr2016,Berger2018} }\\
\hline Quantum (Theorem \ref{theorem:Nonadapt_klogk}, Fact \ref{fact:1}) & \multicolumn{2}{c|}{$\Omega(k) \sim O(k \log k)$} \\
\hline
Quantum (Theorem \ref{theorem:Nonadapt_OK},Fact \ref{fact:1}) & \multicolumn{2}{c|}{$\Theta(k)$} \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htb]
\setlength{\belowcaptionskip}{10pt}
\begin{center}
\caption{Results for black-peg and black-white-peg Mastermind in the adaptive setting.}
\label{tableAdaptive}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline \textbf{} & black-peg Mastermind & black-White-peg Mastermind \\
\hline \makecell{ Deterministic \cite{Anders2022} \\ Randomized \cite{Anders2020} } & $\Theta(n\frac{\log k}{\log n} +k)$ & $\Theta(n\frac{\log k}{\log n} + \frac{k}{n})$ \\
\hline Quantum (Theorem \ref{theorem:adaptive}, Fact \ref{fact:1}) & \multicolumn{2}{c|}{$\Theta (\sqrt{k})$} \\
\hline
\end{tabular}}
\end{center}
\end{table}
More specifically, our main results are as follows. \begin{itemize}
\item [(i)]{\bf Non-adaptive setting.} The quantum complexity is proved to be $\Theta(k)$ in this setting. First, it is not difficult to prove the lower bound $\Omega(k)$ for any non-adaptive quantum algorithm for Mastermind, no matter whether black-peg queries or black-white-peg queries are allowed (Fact \ref{fact:1}). Then our focus is to construct efficient quantum algorithms. Two non-adaptive quantum algorithms are constructed for Mastermind with $n$ positions and $k$ colors that use only black-peg queries and return the secret with certainty: an $O(k\log k)$-complexity algorithm (Theorem \ref{theorem:Nonadapt_klogk}) and an $O(k)$-complexity algorithm (Theorem \ref{theorem:Nonadapt_OK}). In addition, an algorithm with $O(1)$ queries is constructed for the case $k=2$ (Theorem \ref{theoremNonk2n}). The algorithm-design techniques in the three algorithms differ substantially, and may be helpful for solving other problems.
\item [(ii)] {\bf Adaptive setting.} The quantum complexity is proved to be $\Theta(\sqrt{k})$ in this setting. Similarly, it is easy to prove the lower
bound $\Omega(\sqrt{k})$ for any adaptive quantum algorithm for Mastermind, no matter whether
black-peg queries or black-white-peg queries are allowed (Fact \ref{fact:2}).
Then an optimal adaptive quantum algorithm is constructed for Mastermind with $n$ positions and $k$ colors that uses $O(\sqrt{k})$ black-peg queries and returns the secret with certainty.
\end{itemize}
From the above results, one sees some interesting and nontrivial differences between quantum and classical strategies for playing Mastermind.
\begin{enumerate}
\item {\it The codebreaker wins significantly more on quantum computers than on classical computers.} One can see from Tables \ref{tableNonadaptive} and \ref{tableAdaptive} that quantum algorithms always have a substantial speedup over their classical counterparts in both non-adaptive and adaptive settings. (1) In the non-adaptive setting, our quantum algorithm needs only $O(k)$ black-peg queries, independent of $n$. Contrarily, when $k \leq n$ the classical complexity is $\Theta(\frac{n \log k }{\max\{\log(n/k), 1\}})$, monotonically increasing with respect to $n$, and when $k > n$ the current best classical algorithm with $O(k\log k)$-complexity is still worse than our quantum one. In particular, Mastermind with $k=2$ reduces to a binary string identification problem for which our quantum algorithm can find the string in $O(1)$ queries, whereas any classical algorithm requires $\Theta(\frac{n}{\log n}) $ queries for $n\geq 4$. (2) In the adaptive setting, our quantum algorithm needs only $O(\sqrt{k})$ black-peg queries. Contrarily, the classical complexity is $\Theta(n\frac{\log k}{\log n} +k)$ (or $\Theta(n\frac{\log k}{\log n} + \frac{k}{n})$), which is $O(\frac{n}{\log n})$ when $n$ is prominent and is $O(k)$ when $k$ is prominent.
\item {\it The structure of Mastermind can be seen to different degrees in the quantum and classical situations.}
In the field of classical computing, the understanding of
Mastermind, developed through a long exploration since the 1960s, has gradually become clear, except for the black-peg Mastermind game with $k>n$. In sharp contrast, in this paper we give a complete characterization of the quantum complexity of Mastermind and obtain optimal quantum algorithms. In addition, classical algorithms are complex, whereas quantum algorithms are more compact. What is the reason behind this contrast?
\item {\it The black-peg query is quantumly equivalent to the black-white-peg query, but NOT classically. } As shown in Table \ref{tableAdaptive}, the black-white-peg query provides more information than the black-peg query in the classical situation, as it can reduce the complexity. However, this never holds in the quantum situation, because quantum algorithms using only black-peg queries can achieve the lower bound $\Omega(k)$ in the non-adaptive setting and $\Omega(\sqrt{k})$ in the adaptive setting.
\item {\it Allowing error is {\bf provably} no help for improving efficiency in the quantum situation, while it is {\bf very likely} no help in the classical situation. } Note that the lower bounds $\Omega(k)$ and $\Omega(\sqrt{k})$ in the non-adaptive and adaptive settings, respectively, are all achieved by exact quantum algorithms. Thus, allowing errors for quantum algorithms brings no improvement in efficiency. In the classical situation, one can see from Tables \ref{tableNonadaptive} and \ref{tableAdaptive} that it is almost assured that errors do not help efficiency, except for the case $k>n$ in Table \ref{tableNonadaptive}, since we currently do not know whether the deterministic and randomized algorithms achieve the same lower bound.
\item {\it There exists a complexity gap between the non-adaptive and adaptive settings in both the quantum and classical situations.} The quantum gap is unambiguously $\Theta(k)$ versus $\Theta(\sqrt{k})$, while the classical gap varies with the relation between $k$ and $n$.
\end{enumerate}
\subsection{Our Techniques}
The ideas behind our main algorithms are as follows.
\textbf{Algorithm \ref{nonadaptive}} with complexity $O(k \log k)$ is to apply the generalized version of the Bernstein-Vazirani algorithm\cite{Bernstein1997} which can determine the secret string using just one query if provided an inner-product oracle. Therefore, the problem reduces to how to use black-peg queries as few as possible to implement the inner-product oracle used in the Bernstein-Vazirani algorithm.
\textbf{Algorithm \ref{algorithm:nonadaptive2}} with complexity $O(k )$ is based on a crucial observation: If there is a subroutine returning the result $s^{(0, c)}$ that indicates the positions where $s$ takes the color $0$ or $c$, then the secret string $s$ can be deduced after running the subroutine $k-1$ times for $c\in \{1,2,\dots, k-1\}$.
We prove that there is a quantum algorithm that implements the above subroutine using $O(1)$ black-peg queries, which can be regarded as $n$ synchronous executions of the Deutsch algorithm \cite{deutsch1985quantum,deutsch1992rapid}.
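The classical reconstruction step behind this observation can be sketched as follows (code and names are ours; the quantum part, producing each $s^{(0,c)}$ with $O(1)$ black-peg queries, is not modeled here). For $k\ge 3$, a position holds color $0$ exactly when every one of the $k-1$ indicators flags it; otherwise exactly one indicator flags it:

```python
def reconstruct(indicators, n, k):
    """Recover s from indicators[c][i] == 1 iff s[i] in {0, c}, c = 1..k-1.

    Assumes k >= 3 (for k = 2 the single indicator flags every position).
    """
    s = []
    for i in range(n):
        hits = [c for c in range(1, k) if indicators[c][i]]
        # flagged by all k-1 indicators -> color 0; otherwise the unique hit
        s.append(0 if len(hits) == k - 1 else hits[0])
    return s

secret = [2, 0, 1, 2]
k, n = 3, 4
ind = {c: [1 if v in (0, c) else 0 for v in secret] for c in range(1, k)}
assert reconstruct(ind, n, k) == secret
```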
\textbf{Algorithm \ref{algorithm:adaptive}} is adaptive and has complexity $O(\sqrt{k})$. The idea is to apply $n$ Grover searches \cite{Grover1996} synchronously on $n$ secret characters even if the black-peg oracle is provided as a whole instead of $n$ independent oracles. Note that the proportions of target states in the $n$ synchronous Grover searches are all $\frac{1}{k}$ known in advance. Thus, we can apply the exact Grover search\cite{ Brassard2002,long2001grover, hoyer2000arbitrary} to make the algorithm error-free; however, more careful consideration is required on how to implement the general oracle used in the exact Grover search.
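As a rough consistency check on the query count (our arithmetic, not a simulation of the algorithm): with the marked fraction fixed at $1/k$ in each of the $n$ synchronous searches, the standard Grover iteration count is about $\frac{\pi}{4}\sqrt{k}$:

```python
import math

def grover_iterations(k):
    """Standard Grover iteration count for a marked fraction of exactly 1/k."""
    theta = math.asin(1.0 / math.sqrt(k))    # rotation angle per iteration
    return round(math.pi / (4.0 * theta) - 0.5)

# The count grows as O(sqrt(k)), never exceeding ceil(pi/4 * sqrt(k)).
for k in (4, 16, 64, 256):
    assert grover_iterations(k) <= math.ceil(math.pi / 4.0 * math.sqrt(k))
```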
\subsection{Related Work}
Some results about the classical complexity and algorithms for Mastermind are summarized in the following, without covering all but some work related to our study.
\noindent\textbf{Non-adaptive complexity for black-peg Mastermind.}
In 1983, Chv{\'a}tal \cite{chvatal1983} first studied black-peg Mastermind in the non-adaptive setting, proving that when $k<n^{1-\epsilon}$ with $\epsilon > 0$, $(2+\epsilon) n \frac{1+2 \log k}{\log (n/k)}$ queries are sufficient to determine any secret string, matching the information-theoretic lower bound $\frac{n \log k}{\log C_{n+2}^2}$ up to a constant factor. Thirty years later, Doerr, Doerr, Sp{\"o}hel and Thomas, in a breakthrough paper published in the Journal of the ACM \cite{doerr2016} (appearing first in SODA 2013), proved that the non-adaptive query complexity is $\Theta (n \log k / \max\{\log(n/k), 1\})$ for $k \leq n $, which extends Chv{\'a}tal's result. For $k>n$, the best upper bound is $O(k \log k)$ \cite{doerr2016,Berger2018}, which still has a gap away from the lower bound $\Omega(n\log k)$ \cite{doerr2016}.
The non-adaptive complexity of black-peg Mastermind is closely related to two problems. The coin-weighing problem with a spring scale by Shapiro and Fine in 1960 \cite{Shapiro1960} is equivalent to black-peg Mastermind with two colors. The minimum number of queries for black-peg Mastermind with $n$ positions and $k$ colors in the non-adaptive setting is equivalent to the metric dimension of the Hamming graph \cite{Metric2007}. From this perspective, Jiang and Polyanskii \cite{Jiang2019a} recently showed that the minimum number of queries is $(2+o(1))\frac{n\log k}{\log n}$ for any constant $k$.
For $k \leq n $, the non-adaptive complexity of black-white-peg Mastermind is the same as black-peg Mastermind, since the non-adaptive strategy using only black-peg queries has reached the entropy lower bound. Contrarily, for $k>n$ it is still not clear whether the non-adaptive strategy with black-white-peg queries can reduce currently the best bound $O(k\log k)$ achieved by
the one with only black-peg queries \cite{doerr2016,Berger2018}.
\noindent\textbf{Adaptive complexity for black-peg Mastermind.} Chv{\'a}tal \cite{chvatal1983} gave a deterministic adaptive algorithm using $2(n\lceil \log n \rceil -2^{\lceil \log n \rceil }+1)$ guesses for $k=n$.
Subsequently, an algorithm with $n\lceil \log n \rceil + \lceil (2-1/k)n\rceil +k$ queries was proposed by Goodrich \cite{Goodrich2009black} for any parameters $n$ and $k$. This was further improved by J\"ager and Peczarski \cite{Jager2011} to $n \lceil \log n \rceil - n + k + 1$ for the case $k > n$ and $n \lceil \log k \rceil + k$ for the case $k \le n$. For $k=n$, it is worth noting that there is a gap $\log n$ between the upper bound $O(n \log n)$ in the above results and the entropy lower bound $\Omega(n)$. This gap was reduced to $\log \log n$
by Doerr, Doerr, Sp{\"o}hel and Thomas \cite{doerr2016}. They gave the first separation between the adaptive and non-adaptive strategies in the case of $k=n$. Until recently, Martinsson and Su \cite{Anders2020} presented for the first time a randomized algorithm with query complexity $O(n)$, closing the gap with the lower bound $\Omega(n)$, and proved that the randomized complexity of black-peg Mastermind is $\Theta(n\frac{\log k}{\log n} +k)$ for any $n$ and $k$ based on the results of \cite{chvatal1983} and \cite{doerr2016}.
In 2022, Su \cite{Anders2022} achieved the same deterministic complexity utilizing a general query game framework.
\noindent\textbf{Adaptive complexity for black-white-peg Mastermind.}
An upper bound of adaptive complexity of black-white-peg Mastermind was shown to be $2n\log k +4n$ for $n\leq k \leq n^2$ and
$\left \lceil k/n \right \rceil+ 2n\log k +4n$ for $k\geq n$ by Chv{\'a}tal \cite{chvatal1983}. For $k\geq n$, it was improved to $2n\lceil \log n \rceil +2n +\lceil k/n \rceil +2$ by Chen, Cunha, and Homer \cite{Chen1996}. Also, Doerr, Doerr, Sp{\"o}hel and Thomas \cite{doerr2016} proved that $\Omega(n\log \log n + \frac{k}{n})$ queries are necessary to determine any secret for $k\geq n$.
Recently, Refs.\cite{Anders2020,Anders2022} proved that the randomized and deterministic complexities are both $\Theta(n\frac{\log k}{\log n} + \frac{k}{n})$ for any $n$ and $k$.
\noindent\textbf{Quantum.}
As mentioned before, from the 1970s up to now,
Mastermind has attracted a lot of attention, not only from the public as a popular game, but also from the academic community as a subject of scientific study.
Surprisingly, to the best of our knowledge, there has been no publication on the quantum complexity/algorithms of Mastermind except Ref.~\cite{hunziker2002quantum}. In fact, Ref.~\cite{hunziker2002quantum} was not devoted directly to Mastermind, since neither the word ``Mastermind'' nor the word ``game'' appears there.
Hunziker and Meyer \cite{hunziker2002quantum} considered the problem of identifying a base-$k$ string $a$ given an oracle $h_a$ which returns $h_a(x)=\operatorname{dist}(x, a)\bmod r$ with $r=\max \{2,6-k\}$, i.e., the Hamming distance between the query string $x$ and the solution $a$ modulo $r$. This problem is similar to the black-peg Mastermind game but with a slightly different oracle.
For the convenience of readers, we include in Appendix \ref{appendix:A} the Algorithm C for $k>4$ proposed in \cite{hunziker2002quantum}, which succeeds with probability $\frac{1}{2} + \epsilon$ when $n < -k\ln(\frac{1}{2} + \epsilon)$. Hunziker and Meyer \cite{hunziker2002quantum} claimed that the algorithm can be adjusted to an exact version of Grover's algorithm by the methods in \cite{long2001grover, hoyer2000arbitrary}, but this is not true, as explained below.
Note that when $k>4$, we have $h_{a}(x) = ({n - \sum_{i = 1}^n \delta_{a_ix_i}}) \bmod 2$, and the quantum oracle works as $O_{h_a} | x \rangle | b \rangle = | x \rangle | b \oplus h_a(x) \rangle $ where $\oplus$ denotes the bitwise XOR. We explain in detail how the $O_{h_a}$ oracle works in the algorithm in Eqs. \eqref{equation:meyer1}--\eqref{equation:meyer4}. The trick employed there is phase kickback, as shown in Eq. \eqref{equation:meyer1} and Eq. \eqref{equation:meyer2}: a fixed phase $-1$ is attached to $| x_i \rangle$ when $x_i$ is equal to $a_i$. It should be pointed out that Eq. \eqref{equation:meyer3.5} holds because $
(-1)^{ l} = (-1)^{ l\bmod 2}$ for any $0\leq l\leq n$, but it fails if we replace $-1$ with $ e^{i\phi}$ for general $\phi$, since $
e^{i\phi l} = e^{i\phi (l\bmod 2)}$ no longer holds.
However, in the exact Grover search \cite{long2001grover, hoyer2000arbitrary} it is necessary to realize a general phase $e^{i\phi}$. That is why the algorithm given in \cite{hunziker2002quantum} cannot be adapted to be exact by the methods in \cite{long2001grover, hoyer2000arbitrary}.
\begin{align}
O_{h_a} (|x \rangle \otimes \frac{1}{\sqrt{2}}(| 0 \rangle - | 1 \rangle))
\label{equation:meyer1}& = | x \rangle \otimes \frac{1}{\sqrt{2}}(| 0 \oplus h_a(x) \rangle - | 1 \oplus h_a(x) \rangle)\\
\label{equation:meyer2}& = (-1)^{h_a(x)} | x \rangle \otimes \frac{1}{\sqrt{2}}(| 0 \rangle - | 1 \rangle)\\
\label{equation:meyer3}
& = (-1)^{(n - \sum_{i = 1}^n \delta_{a_ix_i}) \bmod 2} | x \rangle \otimes \frac{1}{\sqrt{2}}(| 0 \rangle - | 1 \rangle)\\
\label{equation:meyer3.5}
& = (-1)^{n - \sum_{i = 1}^n \delta_{a_ix_i}} | x \rangle \otimes \frac{1}{\sqrt{2}}(| 0 \rangle - | 1 \rangle)\\
\label{equation:meyer4}
& = (-1)^n\mathop{\bigotimes}_{i = 1}^{n} (-1)^{\delta_{a_ix_i}}|x_i\rangle \otimes \frac{1}{\sqrt{2}}(| 0 \rangle - | 1 \rangle)
\end{align}
\section{Preliminaries}
Some notation and notions used throughout this paper are introduced here, whereas others will be defined when they appear for the first time.
$C^d$ denotes a $d$-dimensional Hilbert space. $\oplus_m$ stands for the operation of modulo-$m$ addition. $|A|$ is the cardinality of a set $A$. For a positive integer $k$, we denote $\{0, 1, 2, \cdots, k - 1\}$ by $[k]$.
For a function $F: A\rightarrow [m]$, we will use the same notation $F$ to denote the quantum implementation of $F$, usually called a {\it quantum oracle}, which works as $F\ket{x}\ket{y}=\ket{x}\ket{y\oplus_m F(x)}$ for $x\in A$ and $y\in [m]$. $\delta_{ij}$ indicates whether $i$ equals $j$: $$\delta_{ij} =
\begin{cases}
& 1, ~~~i=j, \\
& 0, ~~~i\neq j.
\end{cases} $$
The quantum Fourier transform on a $d$-dimensional Hilbert space, denoted by $QFT_d $, is defined by
\begin{equation}
\label{qft}
QFT_d\ket{l}= \frac{1}{\sqrt{d}} \sum _{j=0}^{d-1} \omega^{lj}\ket{j},
\end{equation}
and the inverse $QFT_d$, denoted by $QFT_d^\dagger $, is defined by
\begin{equation}
QFT_d^\dagger \ket{l}= \frac{1}{\sqrt{d}} \sum _{j=0}^{d-1} \omega^{-lj}\ket{j}
\end{equation}
with $\omega=e^{2\pi i /d}$ and $l\in\{0,1,\cdots, d-1\}$.
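As a quick numerical sanity check of these definitions, the matrix of $QFT_d$ can be built and verified in a few lines; the sketch below uses plain \texttt{numpy}, and the function name is illustrative rather than from the paper.

```python
import numpy as np

def qft_matrix(d):
    """Matrix of QFT_d: entry (j, l) is omega^{lj} / sqrt(d), omega = e^{2*pi*i/d}."""
    omega = np.exp(2j * np.pi / d)
    return omega ** np.outer(np.arange(d), np.arange(d)) / np.sqrt(d)

d = 5
F = qft_matrix(d)
# QFT_d is unitary, so the inverse transform is the conjugate transpose.
assert np.allclose(F.conj().T @ F, np.eye(d))
# Column l of F is exactly QFT_d|l> = (1/sqrt(d)) * sum_j omega^{lj} |j>.
l = 3
assert np.allclose(F[:, l], np.exp(2j * np.pi * l * np.arange(d) / d) / np.sqrt(d))
```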
\section{Non-adaptive Quantum Algorithms}
In this section, we consider non-adaptive quantum algorithms for Mastermind. First, it is easy to obtain a lower bound on the quantum complexity in the non-adaptive setting.
\begin{fact}\label{fact:1}
For the Mastermind game with $n$ positions and $k$ colors, any non-adaptive quantum algorithm must require $\Omega(k)$ black-peg or black-white-peg queries.
\end{fact}
\begin{proof} When $n = 1$, one can see that a black-white-peg query is equivalent to a black-peg query, and the problem reduces to the unstructured search problem: searching for one color among $k$ colors. A non-adaptive quantum lower bound of $\Omega(k)$ for this unstructured search problem was shown in \cite{Koiran2010}.
\end{proof}
We will design an optimal non-adaptive quantum algorithm achieving the lower bound $\Omega(k)$, but this is not accomplished in one step; our line of thought is recorded in the following subsections. First, an $O(1)$-complexity non-adaptive quantum algorithm is constructed for the case $k = 2$ in Section \ref{subsec: twocolor}. Then we turn to the general case of $n$ positions and $k$ colors in Section \ref{subsec: colork}, presenting a non-adaptive quantum algorithm with complexity $O(k\log k)$.
Finally, an optimal non-adaptive quantum algorithm with complexity $O(k)$ is constructed for the general case in Section \ref{subsec: optcolork}. The algorithm-design techniques in the three algorithms differ substantially, and may be helpful for solving other problems.
\subsection{$O(1)$ Quantum Algorithm for Two Colors}\label{subsec: twocolor}
For Mastermind with $n$ positions and two colors, our result is the following theorem.
\begin{theorem}
\label{theoremNonk2n}
There is a non-adaptive quantum algorithm for the Mastermind game with $n$ positions and $2$ colors that uses $O(1)$ black-peg queries and returns the secret string with certainty.
\end{theorem}
\begin{proof} First, for $s\in [2]^n$ we define a new function $BV_s$ as: $BV_s(x)= \sum_{i} s_i \cdot x_i \bmod 2$ for any $x\in [2]^n$.
It is well known that
any computation that can be performed on a classical computer can be performed with the same efficiency on a quantum computer \cite{NielsenChuang2000}. Therefore, it follows from Lemma \ref{BVblack} that the quantum oracle of $BV_s$ can be constructed by using $O(1)$ queries to the quantum oracle of $B_s$. Then, by the Bernstein-Vazirani algorithm \cite{Bernstein1997}, one can identify the secret string $s$ with certainty using one query to $BV_s$.
\end{proof}
\begin{lemma}\label{BVblack}
Given $s\in [2]^n$ and $x\in
[2]^n$, $BV_s(x)$ can be computed by using $2$ black-peg queries.
\end{lemma}
\begin{proof}
In the following, we use $\{i: P(i)\}$ to denote the subset of $\{1,2, \dots, n \}$ whose elements satisfy the property $P(i)$. Let $a_{00}=|\{i: x_i = 0, s_i = 0\}|$, $a_{01}=|\{i: x_i = 0, s_i = 1\}|$, $a_{10}=|\{i: x_i = 1, s_i = 0\}|$, and $a_{11}=|\{i: x_i = 1, s_i = 1\}|$.
Let $n_0=B_s(0^n)$ where $0^n$ denotes the all $0$ string. Then we have $n_0 = |\{i: s_i = 0\}| = a_{00} + a_{10}$. Let $n_x=B_s(x)$. Then there is $n_x = |\{ i: x_i = s_i\}| = a_{00} + a_{11}$.
It is obvious that $|x| = |\{i: x_i = 1\}| = a_{10} + a_{11}$. Since $n_0 + n_x - |x| = 2a_{00}$, we have
\begin{equation*}
\begin{split}
s \cdot x
& = |\{i: x_i = s_i = 1\}| = a_{11} = n_x - a_{00} \\
& = n_x - (n_0 + n_x - |x|) / 2.
\end{split}
\end{equation*}
Hence, $BV_s(x)=\left(n_x - (n_0 + n_x - |x|) / 2\right) \bmod 2$. This completes the proof.
\end{proof}
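The two-query counting argument above is easy to check classically. The sketch below (with illustrative function names, not taken from the paper) recovers $BV_s(x)$ from only the black-peg values $n_0$ and $n_x$:

```python
from itertools import product

def black_peg(s, y):
    """B_s(y): number of positions where the guess y matches the secret s."""
    return sum(si == yi for si, yi in zip(s, y))

def bv_from_black_peg(s, x):
    """BV_s(x) = s.x mod 2, using only the two queries B_s(0^n) and B_s(x)."""
    n0 = black_peg(s, [0] * len(s))        # query 1: the all-zero string
    nx = black_peg(s, x)                   # query 2: the string x itself
    a00 = (n0 + nx - sum(x)) // 2          # |{i : x_i = 0, s_i = 0}|
    return (nx - a00) % 2                  # a11 mod 2

# Exhaustive check against the definition for n = 4.
for s in product([0, 1], repeat=4):
    for x in product([0, 1], repeat=4):
        assert bv_from_black_peg(s, x) == sum(si * xi for si, xi in zip(s, x)) % 2
```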
\subsection{ $O(k\log k)$ Non-adaptive Quantum Algorithm} \label{subsec: colork}
In this subsection, we study Mastermind with $n$ positions and $k$ colors.
First, two functions will be employed, as described below:
\begin{itemize}
\item $IPK_s$, associated with $s \in [k]^n$, is defined by $IPK_s(x)= \sum_{i} s_i \cdot x_i \bmod k$ for any $x \in [k]^n$.
\item $IPT_s$, associated with $s \in [k]^n$, is defined by $IPT_s(x)= \sum_{i} s_i \cdot x_i \bmod k$ for any $x \in [2]^n$.
\end{itemize}
\begin{algorithm}[htp]
\caption{A non-adaptive quantum algorithm for Mastermind with $n$ positions and $ k$ colors}
\label{algorithm:test}\label{nonadaptive}
\LinesNumbered
\SetKwInOut{KWProcedure}{Procedure}
\SetKwInput{Runtime}{Runtime}
\KwIn {A black-peg oracle $B_s$ for $s\in [k]^n$ such that $B_s| x \rangle |b\rangle = |x\rangle |b\oplus_{n+1} B_s(x)\rangle$. }
\KwOut {The secret string $s$.}
\Runtime{$O(k \log k)$ queries to $B_s$. Succeeds with certainty.}
\KWProcedure{}
Prepare the initial state $\ket{\Phi_0}=\ket{0}^{\otimes n}\ket{k-1}$, where $\ket{0}$ and $\ket{k-1}$ are basis states in a $k$-dimensional Hilbert space.
Apply quantum Fourier transform $QFT_k ^{\otimes n+1}$.
Apply the quantum oracle of $IPK_s$ that calls the black-peg oracle $B_s$ $O(k \log k)$ times in parallel.
Apply inverse quantum Fourier transform $(QFT_k^\dagger) ^{\otimes n+1}$.
Measure the first $n$ registers in the computational basis.
\end{algorithm}
Now one of our main results is the following theorem.
\begin{theorem}\label{theorem:Nonadapt_klogk}
There is a non-adaptive quantum algorithm for the Mastermind game with $n$ positions and $ k$ colors that uses $O(k \log k)$ black-peg queries and returns the secret string with certainty.
\end{theorem}
\begin{proof}
Our algorithm is described in Algorithm \ref{nonadaptive}. The main idea is to use a generalized version of the Bernstein-Vazirani algorithm \cite{Bernstein1997}, calling $IPK_s$ once to find $s$; we further show that $IPK_s$ can be constructed by calling the black-peg function $B_s$ $O(k \log k)$ times in parallel. The process of Algorithm \ref{nonadaptive} is depicted in Figure \ref{fig:nonadapt1}. Now assume that $IPK_s$ is accessible. The state in Algorithm \ref{nonadaptive} evolves as follows.
\begin{figure}[htbp]
\centering\includegraphics[width=12cm]{algorithm1.pdf}
\caption{The circuit diagram of Algorithm \ref{nonadaptive} is depicted above the dashed line. The idea of how to construct $IPK_s$ is shown below the dashed line, where $A \stackrel{O(t)}{\longleftarrow} B$ means that $A$ can be implemented by $O(t)$ copies of $B$.}
\label{fig:nonadapt1}
\end{figure}
First, we prepare the initial state $\ket{\Phi_0}=\ket{0}^{\otimes n}\ket{k-1}$, where there are $n+1$ registers and each one is associated with a $k$-dimensional Hilbert space.
Second, after quantum Fourier transform $QFT_k ^{\otimes n+1}$, the initial state is changed to
\begin{equation*}
\ket{\Phi_1} = \frac{1}{\sqrt{k^n}} \sum _{x\in [k]^n } \ket{x}\ket{\phi},
\end{equation*}
where
\begin{equation*}
\ket{\phi} = \frac{1}{\sqrt{k}} \sum _{j=0}^{k-1} \omega^{k-j}\ket{j}
\end{equation*}
with $\omega=e^{2\pi i /k}$.
At the third step, apply the quantum oracle of $IPK_s$, which acts as
\begin{equation*}
IPK_s\ket{x}\ket{y} = \ket{x}\ket{(IPK_s(x) + y) \bmod k}.
\end{equation*}
Then, the state evolves to
\begin{align*}
\ket{\Phi_2}&=IPK_s\ket{\Phi_1}\\
&= \frac{1}{\sqrt{k^n}} \sum _{x\in [k]^n } \omega^{IPK_s(x)} \ket{x}\ket{\phi}~~~~(\text {by Lemma}~ \ref{IPKS})\\
&=\frac{1}{\sqrt{k^n}} \sum _{x\in [k]^n } e^{\frac{2\pi i \left(\sum_{j=1}^{j=n} s_j \cdot x_j\right) \bmod k}{k}}\ket{x}\ket{\phi}\\
&= \frac{1}{\sqrt{k^n}} \sum _{x\in [k]^n } e^{\frac{2\pi i \sum_{j=1}^{j=n} s_j \cdot x_j }{k}}\ket{x}\ket{\phi}\\
&= \frac{1}{\sqrt{k^n}} \sum _{x=x_1\dots x_n\in [k]^n } \prod_{j=1}^{j=n} e^{\frac{2\pi i s_j \cdot x_j }{k}}\ket{x_1\dots x_n}\ket{\phi}\\
&= \frac{1}{\sqrt{k}}\sum_{x_1=0}^{k-1} e^{\frac{2\pi i s_1 \cdot x_1 }{k}}\ket{x_1} \otimes \dots \otimes \frac{1}{\sqrt{k}} \sum_{x_n=0}^{k-1} e^{\frac{2\pi i s_n \cdot x_n }{k}}\ket{x_n}\ket{\phi}.
\end{align*}
At the fourth step, after applying the inverse quantum Fourier transform $(QFT_k^\dagger) ^{\otimes n+1}$, we get the state
\begin{equation*}
\ket{\Phi_3}=\ket{s_1 s_2\dots s_n}\ket{k-1}.
\end{equation*}
Finally, the secret string $s=s_1s_2\dots s_n$ can be obtained with certainty after measuring the first $n$ registers in the computational basis.
In the above procedure, the $IPK_s$ oracle is queried once and can be constructed with $O(k \log k)$ queries to the black-peg function $B_s$ based on Lemma \ref{lemmaIPT}
and Lemma \ref{lemmaIPK}. Hence, the complexity of Algorithm \ref{nonadaptive} with respect to $B_s$ is $O(k \log k)$. Furthermore, as shown in Lemma \ref{lemmaIPT}
and Lemma \ref{lemmaIPK}, all the $O(k \log k)$ queries are applied in parallel, and thus the algorithm is non-adaptive.\end{proof}
\begin{lemma}\label{IPKS} Let $IPK_s\ket{x}\ket{y} = \ket{x}\ket{(IPK_s(x) + y) \bmod k}$. Then, for $$\ket{\phi} = \frac{1}{\sqrt{k}} \sum _{j=0}^{k-1} \omega^{k-j}\ket{j}$$ with $\omega=e^{2\pi i /k}$, we have
\begin{equation*}
IPK_s\ket{x}\ket{\phi} = \omega ^{IPK_s(x)}\ket{x}\ket{\phi}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $m_j=(IPK_s(x)+j) \bmod k$ for $j=0, 1,\cdots, k-1$. Then $IPK_s(x)+j-m_j=t_jk$ for some integer $t_j$, that is,
\begin{equation*}
j=t_jk+m_j-IPK_s(x).
\end{equation*}
Then we have
\begin{align}
IPK_s\ket{x}\ket{\phi}&=\frac{1}{\sqrt{k}} \sum _{j=0}^{k-1} \omega^{k-j}IPK_s\ket{x}\ket{j}\\
&=\frac{1}{\sqrt{k}} \sum _{j=0}^{k-1} \omega^{k-j}\ket{x}\ket{(IPK_s(x)+j) \bmod k}\\
&=\frac{1}{\sqrt{k}}\sum _{j=0}^{k-1} \omega^{k-(t_jk+m_j-IPK_s(x))}\ket{x}\ket{m_j} \label{eq1}\\
&=\omega^{IPK_s(x)}\ket{x}\frac{1}{\sqrt{k}}\sum _{j=0}^{k-1} \omega^{k-m_j}\ket{m_j} \label{eqj}\\
&=\omega^{IPK_s(x)}\ket{x}\frac{1}{\sqrt{k}}\sum _{l=0}^{k-1} \omega^{k-l}\ket{l}\\
&=\omega^{IPK_s(x)}\ket{x}\ket{\phi},
\end{align}
where in Eq. (\ref{eq1}) we use $\omega^{t_jk}=1$ for integer $t_j$, and in Eq. (\ref{eqj}) we use the fact that when $j$ traverses all the values in $\{0, 1, \cdots, k-1\}$, so does $m_j$.
\end{proof}
\begin{lemma}\label{lemmaIPK}
Given $s,x \in [k]^n$, $IPK_s(x)$ can be computed by calling $IPT_s$ $ \lceil \log (k) \rceil $ times in parallel.
\end{lemma}
\begin{proof}
Given $s = s_1s_2 \dots s_n \in [k]^n$ and $x = x_1x_2 \dots x_n \in [k]^n$, let $m= \lceil \log k \rceil$, and let $x_i(1) x_i(2) \dots x_{i}(m)$ denote the binary representation of $x_i$, so that $x_i = \sum_{j=1}^{m} 2^{j-1} x_i(j)$ with $x_i(j) \in \{0, 1\}$. Then
\begin{equation*}
\begin{split}
IPK_s(x)&= \sum_{i = 1}^{n } s_i * x_i \bmod k\\
& = \sum_{i = 1} ^{n } \sum_{j = 1} ^{m } 2^{j-1} x_i(j) * s_i \bmod k \\
& = \sum_{j = 1} ^{m } 2^{j-1} (\sum_{i = 1} ^{n} x_i(j) * s_i) \bmod k \\
& = \sum_{j = 1}^{m } 2^{j-1} ( \sum_{i = 1} ^{n} x_i(j) * s_i \bmod k) \bmod k\\
& = \sum_{j = 1}^{m } 2^{j-1} IPT_s(x(j) ) \bmod k,
\end{split}
\end{equation*}
where $x(j) = x_1(j) x_2(j) ... x_{n}(j)$.
\end{proof}
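The bit-plane decomposition in the proof above can be mirrored classically; the sketch below (illustrative names, assuming an ideal $IPT_s$ subroutine) reassembles $IPK_s(x)$ from $\lceil \log k \rceil$ values of $IPT_s$:

```python
import random

def ipt(s, xbits, k):
    """IPT_s on a 0/1 string: sum of s_i over positions with bit 1, mod k."""
    return sum(si for si, b in zip(s, xbits) if b) % k

def ipk_via_ipt(s, x, k):
    """IPK_s(x) = sum_i s_i * x_i mod k, from ceil(log2 k) parallel IPT values."""
    m = max(1, (k - 1).bit_length())       # bits needed for one digit in [k]
    total = 0
    for j in range(m):
        xj = [(xi >> j) & 1 for xi in x]   # the bit plane x(j+1) of x
        total += (1 << j) * ipt(s, xj, k)
    return total % k

random.seed(0)
k, n = 5, 8
s = [random.randrange(k) for _ in range(n)]
x = [random.randrange(k) for _ in range(n)]
assert ipk_via_ipt(s, x, k) == sum(si * xi for si, xi in zip(s, x)) % k
```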
\begin{lemma}\label{lemmaIPT}
Given $s\in [k]^n$ and $x\in [2]^n$, $IPT_s(x)$ can be computed by using $k$ black-peg queries to $B_s$ in parallel.
\end{lemma}
\begin{proof}
We now describe how to compute $IPT_s(x)=\sum_{i} s_i \cdot x_i \bmod k $ using black-peg queries $B_s$.
Given $x = x_1x_2 \dots x_n \in [2]^n$, we define $k$ strings $y^{c} \in [k]^n$ for $c = 0, 1, ..., k - 1$ as follows:
\begin{equation*}
y_i^{c}=
\begin{cases}
& c, ~~~x_i=1, \\
& 0, ~~~x_i=0.
\end{cases}
\end{equation*}
Feed the black-peg function $B_s$ with $y^{c}$, and record the results as
\begin{equation*}
n_c = B_s(y^{c}) = |\{ i \in \{1,2,\dots, n\}: s_i = y_i^{c} \}|.
\end{equation*}
Let
\begin{align*}
&V = \{ i \in \{1, 2, \dots, n\}: x_i = 1\},\\
&v_c = |\{i \in V: s_i = c\}|,\ c = 0, 1, ..., k - 1, \\
&a = |\{i \in \{1,2,\dots, n\} - V: s_i = 0 \}|.
\end{align*}
For $c = 0, 1, ..., k - 1$, we obviously have
\begin{align}
&n_c = v_c + a, \label{njvja}\\
&\sum_{c \in [k]} v_c = |V|.\label{vjV}
\end{align}
Combining Eqs.~\eqref{njvja} and \eqref{vjV}, we have
\begin{equation*}
\sum_{c \in [k]} n_c = |V| + k * a.
\end{equation*}
Hence, we get
\begin{equation*}
v_c = n_c - \frac{\sum_{c \in [k]} n_c - |V| }{k}
\end{equation*}
for $c = 0, 1, ..., k - 1$.
That is, we can use the query results $n_c$ to compute $v_c$ for $c = 0, 1, ..., k - 1$. Now we are ready to compute $IPT_s(x)$:
\begin{equation*}
IPT_s(x)= \sum_{i \in V} s_i \bmod k = \sum_{c \in [k]} c\cdot v_c \bmod k.
\end{equation*}
As a result, $IPT_s(x)$ can be computed by $k$ black-peg queries in parallel.
\end{proof}
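The linear system solved in the proof above can likewise be checked classically. The sketch below (illustrative names) recovers $IPT_s(x)$ from the $k$ non-adaptive queries $y^0, \dots, y^{k-1}$:

```python
import random

def black_peg(s, y):
    """B_s(y): number of positions where the guess y matches the secret s."""
    return sum(si == yi for si, yi in zip(s, y))

def ipt_via_black_peg(s, x, k):
    """IPT_s(x) for x in {0,1}^n, from k queries that can be issued in parallel."""
    V = sum(x)                                   # |V|, the number of ones in x
    ns = [black_peg(s, [c if xi else 0 for xi in x]) for c in range(k)]
    a = (sum(ns) - V) // k                       # zeros of s outside V
    vs = [nc - a for nc in ns]                   # v_c = |{i in V : s_i = c}|
    return sum(c * vc for c, vc in enumerate(vs)) % k

random.seed(1)
k, n = 6, 10
s = [random.randrange(k) for _ in range(n)]
x = [random.randrange(2) for _ in range(n)]
assert ipt_via_black_peg(s, x, k) == sum(si * xi for si, xi in zip(s, x)) % k
```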
\subsection{$O(k)$ Non-adaptive Quantum Algorithm}
\label{subsec: optcolork}
In this subsection, we give an optimal non-adaptive quantum algorithm matching the lower bound $\Omega(k)$.
First, given two colors $c_1, c_2 \in [k]$ with $c_1 \neq c_2 $ and a string $s\in [k]^n$, we define a function ${B_s^{(c_1, c_2)}}$ as
$${B_s^{(c_1, c_2)}}(x) = \sum_{i=1}^n (\delta_{x_i 0}\delta_{s_i c_1} + \delta_{x_i 1}\delta_{s_i c_2})$$
where $x\in [k]^n$; this counts the positions $i$ at which $s_i$ takes the color $c_1$ when $x_i = 0$, or the color $c_2$ when $x_i = 1$. Its quantum oracle works as
$$ {B_s^{(c_1, c_2)}} | x \rangle | b \rangle = | x \rangle | b \oplus_{n+1} {B_s^{(c_1, c_2)}}(x) \rangle. $$
\begin{algorithm}[htp]
\SetKwFunction{FindTwoColorPosition}{FindTwoColorPosition}
\caption{A non-adaptive quantum algorithm for Mastermind with $n$ positions and $ k \geq 3$ colors using $O(k)$ black-peg queries}
\label{algorithm:nonadaptive2}
\LinesNumbered
\SetKwInOut{KWProcedure}{Procedure}
\SetKwInput{Runtime}{Runtime}
\KwIn {A black-peg oracle $B_s$ for $s\in [k]^n$ such that $B_s| x \rangle |b\rangle = |x\rangle |b\oplus_{n+1} B_s(x)\rangle$. }
\KwOut {The secret string $s$.}
\Runtime{$O(k)$ queries to $B_s$. Succeeds with certainty.}
\KWProcedure{}
\For{c = 1 to k-1}{
Call \textbf{FindTwoColorPosition} with $B_s$ and the pair $( 0 , c )$ as input, and get the result $s^{(0, c)}$ that indicates the positions where $s$ takes the color $0$ or $c$.
}
Set $s^{(0)} = s^{(0, 1)} \wedge s^{(0, 2)}$ ~~~// Here $ k \geq 3$ is required\;
\For{c = 1 to k - 1}{
Set $s^{(c)} = s^{(0, c)} \oplus s^{(0)} $
}
Output the secret string $s = \sum_{c = 0}^{k-1} s^{(c)} * c $
\end{algorithm}
Given a string $x \in [k]^n$ and $c \in [k]$, the product of $x$ and $c$ is defined by
$ x * c= (x_1 * c)(x_2*c)\cdots (x_n*c).$
Given two strings $x, y \in [k]^n$, the sum of $x$ and $y$, denoted by $x+y$, is an $n$-length string whose $i$th position is $ (x_i + y_i)\bmod k$ for $i = 1, 2, \cdots, n$.
Now one main result is given below.
\begin{theorem}\label{theorem:Nonadapt_OK}
There is a non-adaptive quantum algorithm for the Mastermind game with $n$ positions and $ k$ colors that uses $O(k)$ black-peg queries and returns the secret string with certainty.
\end{theorem}
\begin{proof}
As shown in Algorithm \ref{algorithm:nonadaptive2}, the idea is to first apply the subroutine \textbf{FindTwoColorPosition} to get the result $s^{(0, c)}$ for $c\in \{1,2,\dots, k-1\}$ that indicates the positions where $s$ takes the color $0$ or $c$; the secret string $s$ can then be deduced from these $k-1$ results. Furthermore, we will prove that there is a quantum algorithm implementing the above subroutine using $O(1)$ black-peg queries, which can be regarded as $n$ synchronous executions of the Deutsch algorithm \cite{deutsch1985quantum,deutsch1992rapid}. The subroutine \textbf{FindTwoColorPosition} is described in Algorithm \ref{algorithm:find2color}. In the following, we give an analysis of correctness and complexity for Algorithm \ref{algorithm:nonadaptive2}, whereas the analysis for Algorithm \ref{algorithm:find2color} is deferred to Lemma \ref{lemmaFind2colorposition}.
\begin{table}[htp]
\begin{center}
\caption{Key symbols}
\label{tablesymbol}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
\textbf{Symbol} & \textbf{Meaning} & \makecell[c]{\textbf{Example}:\\ $s=01230123$}\\
\hline
$s^{(c)}\in \{0, 1\}^n$ &
\makecell[l]{ $s^{(c)}$ satisfies $s^{(c)}_i=
\left\{
\begin{array}{lr}
1, ~~~s_i = c, \\
0, ~~~\text{otherwise},\end{array}
\right.$\\
which indicates the positions where $s$ takes the color $c$.}
& \makecell[c]{ $ c = 1 $ \\ $s^{(1)} = 01000100$}\\
\hline
$s^{(c_1, c_2)}\in \{0, 1\}^n$ &
\makecell[l]{
$s^{(c_1, c_2)}$ satisfies $s^{(c_1, c_2)}_i=
\left\{
\begin{array}{lr}
1, ~~~s_i = c_1~\text{or}~ c_2, \\
0, ~~~\text{otherwise},
\end{array}
\right.$ \\ which indicates the positions where $s$ takes the color $c_1$ or $c_2$. }
& \makecell[c]{ $ c_1 = 1, c_2 = 3 $ \\
$ s^{(1, 3)} = 01010101 $ }\\
\hline
\end{tabular}}
\end{center}
\end{table}
From the first to the third step, we apply \textbf{FindTwoColorPosition} (Algorithm \ref{algorithm:find2color}) to the color pair $(0, c)$ for $c = 1, 2, \cdots, k - 1$, and then get the result $s^{(0, c)} \in \{0, 1\}^n$ that, as shown in Table \ref{tablesymbol}, indicates the positions where $s$ takes the color $0$ or $c$. For example, let the secret string $s$ be $``0 1 2 3 0 1 2 3"$ for $n = 8$ and $k = 4$. Then $s^{(0, 1)} = 11001100$ indicates that $s$ takes the color $0$ or $1$ in the positions $\{1, 2, 5, 6\}$, and $s^{(0, 2)}=10101010$ indicates that $s$ takes the color $0$ or $2$ in the positions $\{1, 3, 5, 7\}.$
We now say a bit more about the symbols in Table \ref{tablesymbol}. For pairwise distinct colors $c$, $c_1$, $c_2$, it is not difficult to verify
\begin{align*}
& s^{(c)} = s^{(c, c_1) } \wedge s^{(c, c_2)}, \\
& s^{(c)} = s^{(c_1, c)} \oplus s^{(c_1)},
\end{align*}
where $\wedge$ denotes the bitwise AND and $\oplus$ denotes the bitwise XOR.
At the fourth step, we get
\begin{align*}
s^{(0)} = s^{(0, 1)} \wedge s^{(0, 2)}
\end{align*}
which indicates the positions where $s$ takes the color $0$. In the example mentioned before,
we have $s^{(0)} = 10001000$ which means that $s$ takes the color $0$ in these positions: $\{1, 5\}$.
From the fifth to the seventh step, we get
\begin{align*}
s^{(c)} = s^{(0, c)} \oplus s^{(0)}
\end{align*}
for $c = 1, 2, \cdots k - 1$, which indicates the positions where $s$ takes the color $c$.
In the example mentioned before,
we have $s^{(1)} = 01000100$ and $s^{(2)} = 00100010$ which indicate that $s$ takes the color $1$ in the positions $\{2, 6\}$ and the color $2$ in $\{3, 7\}$, respectively.
Finally, we get the secret string $s = \sum_{c = 0}^{k-1} s^{(c)} * c $.
Since there are $k - 1$ calls to \textbf{FindTwoColorPosition} and each call consumes $O(1)$ black-peg queries by Lemma \ref{lemmaFind2colorposition}, the complexity of Algorithm \ref{algorithm:nonadaptive2} with respect to $B_s$ is $O(k)$, which concludes the proof.
\end{proof}
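The classical post-processing in Algorithm \ref{algorithm:nonadaptive2} (the bitwise AND, the XORs, and the weighted sum) can be sketched as follows, with \texttt{two\_color\_positions} standing in for an ideal call to the subroutine (the function names are illustrative):

```python
def two_color_positions(s, c1, c2):
    """Ideal output s^{(c1, c2)}: 1 at the positions where s_i is c1 or c2."""
    return [1 if si in (c1, c2) else 0 for si in s]

def reconstruct_secret(s, k):
    """Combine the k-1 strings s^{(0, c)} into the secret, as in Algorithm 2."""
    s0c = {c: two_color_positions(s, 0, c) for c in range(1, k)}
    s0 = [a & b for a, b in zip(s0c[1], s0c[2])]     # s^{(0)}; needs k >= 3
    out = [0] * len(s)
    for c in range(1, k):
        sc = [a ^ b for a, b in zip(s0c[c], s0)]     # s^{(c)}
        out = [o + bit * c for o, bit in zip(out, sc)]
    return out

assert reconstruct_secret([0, 1, 2, 3, 0, 1, 2, 3], 4) == [0, 1, 2, 3, 0, 1, 2, 3]
```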
\begin{algorithm}[htb]
\SetKwFunction{FindTwoColorPosition}{FindTwoColorPosition}
\SetKwInOut{KWProcedure}{Procedure}
\SetKwInput{Runtime}{Runtime}
\caption{FindTwoColorPosition}
\label{algorithm:find2color}
\LinesNumbered
\KwIn {A black-peg oracle $B_s$ for $s\in [k]^n$ such that $B_s| x \rangle |b\rangle = |x\rangle |b\oplus_{n+1} B_s(x)\rangle$; a color pair $(c_1, c_2)$ with $c_1, c_2 \in [k]$ and $c_1 \neq c_2$.}
\KwOut {A string $s^{(c_1, c_2)} \in \{0, 1\}^n$ satisfying
$s^{(c_1, c_2)}_i=
\begin{cases}
& 1, ~~~s_i = c_1 ~\text{or}~ c_2, \\
& 0, ~~~\text{otherwise}.
\end{cases}$}
\Runtime{$O(1)$ queries to $B_s$. Succeeds with certainty.}
\KWProcedure{}
Prepare the initial state $\ket{\Phi_0}=| 0 \rangle ^ {\otimes n} \ket{0} \in (C^2)^{\otimes n} \otimes C^{n + 1}$;
Apply the unitary transformation $H ^{\otimes n} \otimes I $ to $\ket{\Phi_0}$.
Apply the unitary transformation $B_s^{(c_1, c_2)\dagger} (I \otimes D(\pi)) B_s^{(c_1, c_2)}$, which calls the black-peg oracle $B_s$ $O(1)$ times in parallel, where $D(\pi) = \sum_{i = 0}^{n} (-1)^i |i \rangle \langle i | $.
Apply the unitary transformation $H ^{\otimes n} \otimes I$.
Measure the first $n$ registers in the computational basis.
\end{algorithm}
\begin{lemma}\label{lemmaFind2colorposition}
Given a black-peg oracle $B_s$ with $s \in [k]^n$ and two colors $c_1, c_2 \in [k]$ with $c_1 \neq c_2$, the quantum algorithm \textbf{FindTwoColorPosition} (Algorithm \ref{algorithm:find2color}) will output with certainty a string $s^{(c_1, c_2)} \in \{0, 1\}^n$ satisfying
$$s^{(c_1, c_2)}_i=
\begin{cases}
& 1, ~~~s_i = c_1 ~\text{or}~ c_2, \\
& 0, ~~~\text{otherwise},
\end{cases}$$
for $i\in\{1, 2, \cdots, n\}$, and the algorithm consumes $O(1)$ queries to the black-peg oracle $B_s$.
\end{lemma}
\begin{figure}[htbp]
\centering\includegraphics[width=14cm]{nonadapt2.pdf}
\caption{The circuit diagram of Algorithm \ref{algorithm:find2color}}
\label{fig:nonadapt}
\end{figure}
\begin{proof}[Proof of Lemma \ref{lemmaFind2colorposition}]
The process of Algorithm \ref{algorithm:find2color} is depicted in Figure \ref{fig:nonadapt}, from which one may find that Algorithm \ref{algorithm:find2color} can be regarded as $n$ synchronous executions of the Deutsch algorithm \cite{deutsch1985quantum,deutsch1992rapid} which computes $f(0) \oplus f(1)$ with just one query to Boolean function $f : \{0, 1\} \rightarrow \{0, 1\}$.
At the first step, we prepare the initial state $$\ket{\Phi_0}=| 0 \rangle ^ {\otimes n} \ket{0} \in (C^2)^{\otimes n} \otimes C^{n + 1}.$$
At the second step, applying the unitary operator $H ^{\otimes n} \otimes I $ to $\ket{\Phi_0}$, we get
\begin{equation*}
| \Phi_1 \rangle = (H ^{\otimes n} \otimes I) | \Phi_0 \rangle = \frac{1}{\sqrt{2^n}} \sum_{x = 0}^{2^n-1} | x \rangle \otimes | 0 \rangle.
\end{equation*}
At the third step, recall that ${B_s^{(c_1, c_2)}}(x) = \sum_{i=1}^n (\delta_{x_i 0}\delta_{s_i c_1} + \delta_{x_i 1}\delta_{s_i c_2})$ and ${B_s^{(c_1, c_2)}} | x \rangle | 0 \rangle = | x \rangle | {B_s^{(c_1, c_2)}}(x) \rangle$. Then after applying the unitary transformation $B_s^{(c_1, c_2)\dagger} (I \otimes D(\pi)) B_s^{(c_1, c_2)}$, we have
\begin{align*}
| \Phi_2 \rangle
& = B_s^{(c_1, c_2)\dagger} (I \otimes D(\pi)) B_s^{(c_1, c_2)} | \Phi_1 \rangle \\
& = B_s^{(c_1, c_2)\dagger} (I \otimes D(\pi)) \frac{1}{\sqrt{2^n}} \sum_{x = 0}^{2^n-1} | x \rangle \otimes | {B_s^{(c_1, c_2)}}(x) \rangle \\
& = B_s^{(c_1, c_2)\dagger} \frac{1}{\sqrt{2^n}} \sum_{x = 0}^{2^n-1} (-1)^{{B_s^{(c_1, c_2)}}(x)}| x \rangle \otimes | {B_s^{(c_1, c_2)}}(x) \rangle \\
& = \frac{1}{\sqrt{2^n}} \sum_{x = 0}^{2^n-1} (-1)^{{B_s^{(c_1, c_2)}}(x)}| x \rangle \otimes | 0 \rangle \\
& = \frac{1}{\sqrt{2^n}} \sum_{x = 0}^{2^n-1} (-1)^{\sum_{i=1}^n (\delta_{x_i 0}\delta_{s_i c_1} + \delta_{x_i 1}\delta_{s_i c_2})}| x \rangle \otimes | 0 \rangle \\
\label{eq:bsc2} & = \frac{1}{\sqrt{2^n}} \bigotimes_{i = 1}^{n} [(-1)^{\delta_{s_i c_1}} | 0 \rangle + (-1)^{\delta_{s_i c_2}} | 1 \rangle] \otimes | 0 \rangle
\end{align*}
where $D(\pi) = \sum_{i = 0}^{n} (-1)^i |i \rangle \langle i | $.
At the fourth step, applying $H ^{\otimes n} \otimes I$ to $| \Phi_2 \rangle$, we get
\begin{align}
| \Phi_3 \rangle
& = (H ^{\otimes n} \otimes I) |\Phi_2 \rangle \\
& = \bigotimes_{i = 1}^{n} (-1)^{\delta_{s_i c_1}} | \delta_{s_i c_1} \oplus \delta_{s_i c_2} \rangle \otimes | 0 \rangle\\
& = \bigotimes_{i = 1}^{n} (-1)^{\delta_{s_i c_1}} | \delta_{s_i c_1} \vee \delta_{s_i c_2} \rangle \otimes | 0 \rangle \label{oplus2vee}
\end{align}
where Eq. \eqref{oplus2vee} holds because $\delta_{s_i c_1}$ and $\delta_{s_i c_2}$ cannot both be $1$.
Finally, by measuring the first $n$ registers, the algorithm outputs with certainty the string $x$ satisfying $x_i = 1$ if $s_i = c_1$ or $s_i = c_2$, and $x_i = 0$ otherwise.
Note that the algorithm uses two queries to ${B_s^{(c_1, c_2)}}$, each of which, as will be proved in Lemma \ref{lemmaBsc}, can be realized with one query to $B_s$. Thus, the complexity of Algorithm \ref{algorithm:find2color} with respect to $B_s$ is $O(1)$, which completes the proof of Lemma \ref{lemmaFind2colorposition}.
\end{proof}
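A small statevector simulation of the circuit just analyzed reproduces this behavior; in the sketch below (plain \texttt{numpy}, illustrative names) the oracle value is computed classically and applied as the phase $(-1)^{B_s^{(c_1,c_2)}(x)}$:

```python
import numpy as np
from itertools import product

def find_two_color_positions(s, c1, c2):
    """Simulate the n parallel Deutsch circuits; returns s^{(c1, c2)}."""
    n = len(s)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    psi = Hn[:, 0].copy()                       # H^{(x)n}|0..0>: uniform superposition
    for idx, x in enumerate(product([0, 1], repeat=n)):
        b = sum((xi == 0 and si == c1) or (xi == 1 and si == c2)
                for xi, si in zip(x, s))        # B_s^{(c1,c2)}(x)
        psi[idx] *= (-1) ** b                   # oracle + D(pi): phase kickback
    psi = Hn @ psi                              # final layer of Hadamards
    outcome = int(np.argmax(np.abs(psi)))       # the measurement is deterministic
    return [int(bit) for bit in format(outcome, f"0{n}b")]

assert find_two_color_positions([0, 1, 2, 3, 0, 1, 2, 3], 1, 3) == [0, 1, 0, 1, 0, 1, 0, 1]
```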
\begin{lemma}\label{lemmaBsc}
Given $s\in [k]^n$, $x \in [2]^n$, two colors $c_1, c_2 \in [k], c_1 \neq c_2$, ${B_s^{(c_1, c_2)}}(x)$ can be computed using one black-peg query to ${B_s}$.
\end{lemma}
\begin{proof}
Given a secret string $s = s_1s_2 \dots s_n \in [k]^n$ and $x = x_1x_2 \dots x_n \in [2]^n$, we now describe how to compute ${B_s^{(c_1, c_2)}}(x) = \sum_{i=1}^n (\delta_{x_i 0}\delta_{s_i c_1} + \delta_{x_i 1}\delta_{s_i c_2})$ by using $B_s$.
First we define a string $y^x \in [k]^n$ as
\begin{equation*}
y^x_i =
\begin{cases}
& c_1, ~~~x_i=0, \\
& c_2, ~~~x_i=1.
\end{cases}
\end{equation*}
Let $V = \{ i \in \{1, 2, \dots, n\}: x_i = 0\}$.
Feeding the black-peg function $B_s$ with $y^x$, we get
\begin{align*}
B_s(y^x)
& = |\{ i \in V: s_i = c_1 \} \cup \{ i \in \{1, 2, \cdots, n \} - V: s_i = c_2 \} | \\
& = \sum_{i=1}^n (\delta_{x_i 0}\delta_{s_i c_1} + \delta_{x_i 1}\delta_{s_i c_2}) \\
& = {B_s^{(c_1, c_2)}}(x).
\end{align*}
As a result, ${B_s^{(c_1, c_2)}}(x) $ can be computed by calling the black-peg function $B_s$ once.
\end{proof}
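Classically, the single-query construction above reads as follows (illustrative names):

```python
from itertools import product

def black_peg(s, y):
    """B_s(y): number of positions where the guess y matches the secret s."""
    return sum(si == yi for si, yi in zip(s, y))

def b_two_color(s, x, c1, c2):
    """B_s^{(c1, c2)}(x) via the single query y^x of the lemma."""
    y = [c1 if xi == 0 else c2 for xi in x]    # c1 where x_i = 0, c2 where x_i = 1
    return black_peg(s, y)

# Exhaustive check against the definition for small n.
k, n, c1, c2 = 4, 3, 0, 2
for s in product(range(k), repeat=n):
    for x in product([0, 1], repeat=n):
        direct = sum((xi == 0 and si == c1) or (xi == 1 and si == c2)
                     for xi, si in zip(x, s))
        assert b_two_color(s, x, c1, c2) == direct
```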
\section{Adaptive Quantum Algorithm}
In this section, we discuss adaptive quantum algorithms for Mastermind.
First, it is easy to obtain a lower bound on the quantum complexity in the adaptive setting.
\begin{fact} \label{fact:2}
For the Mastermind game with $n$ positions and $k$ colors, any adaptive quantum algorithm must require $\Omega(\sqrt{k})$ black-peg or black-white-peg queries.
\end{fact}
\begin{proof} When $n = 1$, one can see that a black-white-peg query is equivalent to a black-peg query, and the problem reduces to the unstructured search problem: searching for one color among $k$ colors, whose adaptive quantum lower bound is well known to be $\Omega(\sqrt{k})$.
\end{proof}
Here we will present an optimal adaptive quantum algorithm achieving the lower bound $\Omega(\sqrt{k})$.
\begin{theorem}
\label{theorem:adaptive}
There is an adaptive quantum algorithm for the Mastermind game with $n$ positions and $k$ colors that uses $O(\sqrt{k})$ black-peg queries and returns the secret string with certainty.
\end{theorem}
\begin{proof}
The adaptive algorithm is presented in Algorithm \ref{algorithm:adaptive}, whose key idea is to apply $n$ Grover searches synchronously on the $n$ secret characters. It is well known that Grover's algorithm can be adjusted to an exact version that finds the target state with certainty, provided the proportion of target states, which is $\frac{1}{k}$ in our setting, is known in advance. The process of Algorithm \ref{algorithm:adaptive} is depicted in Figure \ref{fig3}.
\begin{algorithm}[htb]
\SetKwInput{Runtime}{Runtime}
\SetKwInOut{KWProcedure}{Procedure}
\caption{An adaptive quantum algorithm for Mastermind with $n$ positions and $ k$ colors}
\label{algorithm:adaptive}
\LinesNumbered
\KwIn {A {black-peg} oracle $B_s$ for $s\in [k]^n$ such that $B_s| x \rangle |b\rangle = |x\rangle |b\oplus_{n+1} B_s(x)\rangle$}
\KwOut {The secret string $s$}
\Runtime {$O(\sqrt{k})$ queries to $B_s$. Succeeds with certainty.}
\KWProcedure{}
Prepare the initial state $\ket{\Phi_0}=\ket{0}^{\otimes n}\ket{0} \in (C^k)^{\otimes n} \otimes C^{n + 1}$; \label{adaptivestep1}
Set the number of iterations $T = \lceil \frac{\pi}{4 \arcsin(\sqrt{\frac{1}{k}})} - \frac{1}{2} \rceil $ and the rotation angle $\phi = 2 \arcsin(\frac{\sin(\frac{\pi}{4T + 2})}{\sin(\theta)})$, with $\theta = \arcsin(\sqrt{\frac{1}{k}})$. \label{adaptivestep2}
Apply the unitary transformation $QFT_k ^{\otimes n} \otimes I$ to $\ket{\Phi_0}$.
\For{i = 1 to T} {\label{adaptivestep4}
Apply the unitary operator $O_s(\phi)$, where $O_s(\phi) = B_s^{\dagger}(I \otimes D(\phi)) B_s$, $D(\phi) = \sum_{j = 0}^{n} e^{ij\phi} | j \rangle \langle j |$.
Apply the unitary operator $S_0(\phi)$, where $S_0(\phi) = ( QFT_k (I + (e^{i\phi} - 1) \ket{0} \langle 0 | ) QFT_k ^{\dagger}) ^{\otimes n} \otimes I $. \label{adaptivestep5}
}
Measure the first $n$ registers in the computational basis.
\end{algorithm}
\begin{figure}[htbp]
\centering\includegraphics[width=16cm]{adapt.pdf}
\caption{The circuit diagram of Algorithm \ref{algorithm:adaptive}}
\label{fig3}
\end{figure}
At the first step, we prepare the initial state $$\ket{\Phi_0} = \ket{0}^{\otimes n}\ket{0} \in (C^k)^{\otimes n} \otimes C^{n + 1},$$ where $(C^k)^{\otimes n}$ is associated with the query registers used to store the query string $x$ and $ C^{n+1}$ is associated with the auxiliary register used to store the query result $B_s(x)$. In addition, we need to set some parameters for the exact Grover search. There are several approaches to exact Grover search \cite{ Brassard2002,hoyer2000arbitrary,long2001grover}. Here we use the approach proposed in \cite{long2001grover}, whose parameters, namely the number of iterations $T$ and the rotation angle $\phi$, are given below:
\begin{equation*}
\begin{split}
& T = \lceil \frac{\pi}{4 \arcsin(\sqrt{\frac{1}{k}})} - \frac{1}{2} \rceil, \\
& \phi = 2 \arcsin(\frac{\sin(\frac{\pi}{4T + 2})}{\sin(\theta)})
\end{split}
\label{equation:grover_param}
\end{equation*}
with $\theta = \arcsin(\sqrt{\frac{1}{k}})$.
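As a quick sanity check, these parameters can be evaluated numerically. The Python sketch below is ours (not part of the paper's algorithm); it follows Long's zero-failure construction, in which the phase-matching angle is determined by the iteration count $T$:

```python
import math

def grover_params(k):
    """Iteration count T and rotation angle phi for the exact Grover
    search over k items (sketch of Long's zero-failure construction)."""
    theta = math.asin(math.sqrt(1.0 / k))          # sin(theta) = 1/sqrt(k)
    T = math.ceil(math.pi / (4 * theta) - 0.5)     # number of iterations
    phi = 2 * math.asin(math.sin(math.pi / (4 * T + 2)) / math.sin(theta))
    return T, phi

print(grover_params(2))    # T = 1, phi = pi/2
print(grover_params(16))   # T = 3; T grows like O(sqrt(k))
```

For $k=2$ this reproduces the familiar result that a single iteration with phase $\pi/2$ searches two items exactly.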
At the second step, apply the unitary transformation $QFT_k ^{\otimes n} \otimes I$ to $\ket{\Phi_0}$ to create the uniform superposition state
$$| \Phi_1 \rangle = (QFT_k ^{\otimes n} \otimes I) \ket{\Phi_0} =\frac{1}{\sqrt{k^n}}\sum_{x\in[k]^n}|x \rangle| 0 \rangle= \bigotimes ^n_{i=1}\left(\frac{1}{\sqrt{k}}\sum_{x_i = 0}^{k - 1}|x_i \rangle\right) \otimes | 0 \rangle.$$
From the third to the sixth step, apply $T$ Grover iteration operators $\left(S_0(\phi)O_s(\phi)\right)^T$ to $| \Phi_1 \rangle$, where
\begin{align*}
S_0(\phi) &= ( QFT_k (I + (e^{i\phi} - 1) \ket{0} \langle 0 | ) QFT_k ^{\dagger}) ^{\otimes n} \otimes I, \\
O_s(\phi) &= B_s^{\dagger}(I \otimes D(\phi)) B_s,
\end{align*}
with
\begin{equation*}
D(\phi) =
\begin{bmatrix}
e^{i 0 \phi} & 0 & 0 & \cdots & 0 \\
0 & e^{i 1 \phi} & 0 & \cdots & 0 \\
0 & 0 & e^{i 2 \phi} & \cdots & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 0 & e^{i n \phi}\\
\end{bmatrix}.
\label{equation:D}
\end{equation*}
Thus, after the sixth step we get
\begin{align}
| \Phi_2 \rangle&=(S_0(\phi)O_s(\phi))^T| \Phi_1 \rangle\\
&=\bigotimes ^n_{i=1}\left(\left(S'_0(\phi)Q_{s_i}(\phi)\right)^T \frac{1}{\sqrt{k}}\sum_{x_i = 0}^{k - 1}|x_i \rangle\right) \otimes | 0 \rangle \label{pG}\\
&=\ket{s_1s_2\cdots s_n}| 0 \rangle,\label{s_result}
\end{align}
where $ S'_0(\phi) = QFT_k (I + (e^{i\phi} - 1) \ket{0} \langle 0 | ) QFT_k ^{\dagger}$ and $Q_{s_i}(\phi)$ is defined as
\begin{align}
Q_{s_i}(\phi)\ket{x_i}= e^{i\phi \delta_{s_ix_i}}\ket{x_i}\label{Qsi}
\end{align}
which determines whether $x_i$ equals $s_i$. We will explain in more detail later why Eq. (\ref{pG}) holds, based on Lemma \ref{lemma:Qs}; for now, assume that it does. Then one sees that $\left(S'_0(\phi)Q_{s_i}(\phi)\right)^T \frac{1}{\sqrt{k}}\sum_{x_i = 0}^{k - 1}|x_i \rangle$ is precisely the exact version of Grover's algorithm for identifying an $x_i$ such that $x_i=s_i$. Since the proportion of target states in each of the $n$ synchronous Grover searches is $1/k$, the number of iterations and the rotation angle are the same for every search.
As a result, we get Eq. (\ref{s_result}), and then the algorithm outputs the secret string $s$ with certainty by measuring the first $n$ registers.
The number of iterations of the Grover operator $S_0(\phi)O_s(\phi)$ is $T = \lceil \frac{\pi}{4 \arcsin(\sqrt{\frac{1}{k}})} - \frac{1}{2} \rceil = O(\sqrt{k})$, and thus the number of queries to $B_s$ is $O(\sqrt{k})$, which concludes the proof of Theorem \ref{theorem:adaptive}.
\end{proof}
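The per-position routine can also be checked numerically. The Python sketch below (our illustration, not part of the proof) simulates one $k$-dimensional register: the oracle multiplies the marked amplitude by $e^{i\phi}$, and the diffusion acts as $S'_0(\phi) = I + (e^{i\phi}-1)\ket{u}\bra{u}$, where $\ket{u} = QFT_k\ket{0}$ is the uniform state. The success probability comes out equal to $1$, up to rounding, for every $k$:

```python
import numpy as np

def exact_grover(k, s):
    """Simulate the exact single-register Grover search used per position."""
    theta = np.arcsin(np.sqrt(1.0 / k))
    T = int(np.ceil(np.pi / (4 * theta) - 0.5))
    phi = 2 * np.arcsin(np.sin(np.pi / (4 * T + 2)) / np.sin(theta))
    u = np.full(k, 1 / np.sqrt(k), dtype=complex)   # |u> = QFT_k |0>
    psi = u.copy()
    for _ in range(T):
        psi[s] *= np.exp(1j * phi)                  # oracle Q_s(phi)
        psi += (np.exp(1j * phi) - 1) * u * np.vdot(u, psi)  # diffusion S'_0(phi)
    return abs(psi[s]) ** 2

for k in (2, 3, 5, 16, 100):
    print(k, exact_grover(k, s=1))   # success probability ~ 1 for each k
```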
Now we explain Eq. (\ref{pG}), which says that the unitary operator $(S_0(\phi)O_s(\phi))^T$ acts as $n$ synchronous Grover searches on the $n$ secret characters. First, $S_0(\phi)$ is the generalized diffusion operator of Grover's algorithm, $S'_0(\phi)=QFT_k (I + (e^{i\phi} - 1) \ket{0} \langle 0 | ) QFT_k ^{\dagger}$, applied to the $n$ $k$-dimensional spaces in parallel. Second, we examine the effect of $O_s(\phi) = B_s^{\dagger}(I \otimes D(\phi)) B_s$, which is depicted in Figure \ref{fig2}.
Recall that the black-peg oracle $B_s$ works as $B_s| x \rangle |b\rangle = |x\rangle |b\oplus_{n+1} B_s(x)\rangle$, where $|x \rangle \in (C^k)^{\otimes n}$, $| b \rangle \in C^{n + 1}$.
Then we have
\begin{lemma}\label{lemma:Qs}
Let $O_s(\phi) = B_s^{\dagger}(I \otimes D(\phi)) B_s$. Then \begin{align*}
O_s(\phi) |x\rangle |0\rangle= \left(\mathop{\bigotimes}_{i = 1}^{n} e^{i\phi \delta_{s_ix_i}} | x_i \rangle\right) |0 \rangle
\end{align*}
for $s=s_1s_2\cdots s_n\in [k]^n, x=x_1x_2\cdots x_n \in [k]^n$.
\end{lemma}
\begin{proof} By direct calculation, we have
\begin{align*}
O_s(\phi) |x\rangle |0\rangle &=B_s^{\dagger}(I \otimes D(\phi)) B_s |x\rangle |0\rangle \\
&=B_s^{\dagger}(I \otimes D(\phi))|x\rangle |B_s(x)\rangle \\
&= e^{i\phi B_s(x)} B_s^{\dagger} |x\rangle|B_s(x) \rangle \\
&= e^{i\phi B_s(x)} |x \rangle |0 \rangle.
\label{equation:2}
\end{align*}
Note that $B_s(x) = \sum_{i = 1}^{n} \delta_{s_ix_i}$.
Thus we have
\begin{align}
O_s(\phi) |x\rangle|0\rangle & = e^{i\phi \sum_{i = 1}^{n} \delta_{s_ix_i}} |x_1x_2 \cdots x_n \rangle |0 \rangle \\
& = \left(\mathop{\bigotimes}_{i = 1}^{n} e^{i\phi \delta_{s_ix_i}} | x_i \rangle\right) |0 \rangle \label{eq-22}
\\
& = \left(\mathop{\bigotimes}_{i = 1}^{n} Q_{s_i}(\phi)\ket{x_i}\right)|0\rangle, \label{eq-23}
\end{align}
where Eq. (\ref{eq-23}) follows from substituting Eq. (\ref{Qsi}) into Eq. (\ref{eq-22}).
\end{proof}
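For small $n$ and $k$, the lemma can also be verified by brute force: build $B_s$ as a permutation matrix on the basis $\ket{x}\ket{b}$, form $O_s(\phi) = B_s^{\dagger}(I \otimes D(\phi))B_s$, and check the phase $e^{i\phi B_s(x)}$ on each $\ket{x}\ket{0}$. The helper below is our own illustrative Python, with hypothetical names:

```python
import numpy as np
from itertools import product

def lemma_check(n=2, k=3, phi=0.7, s=(1, 2)):
    """Brute-force check of the lemma for small n, k."""
    dim_x, dim_b = k**n, n + 1
    xs = list(product(range(k), repeat=n))
    pegs = lambda x: sum(int(xi == si) for xi, si in zip(x, s))
    # Oracle B_s as a permutation: |x>|b> -> |x>|b + B_s(x) mod (n+1)>
    B = np.zeros((dim_x * dim_b, dim_x * dim_b))
    for ix, x in enumerate(xs):
        for b in range(dim_b):
            B[ix * dim_b + (b + pegs(x)) % dim_b, ix * dim_b + b] = 1.0
    D = np.diag(np.exp(1j * phi * np.arange(dim_b)))
    O = B.conj().T @ np.kron(np.eye(dim_x), D) @ B
    for ix, x in enumerate(xs):
        v = np.zeros(dim_x * dim_b, dtype=complex)
        v[ix * dim_b] = 1.0                      # the state |x>|0>
        assert np.allclose(O @ v, np.exp(1j * phi * pegs(x)) * v)
    return True
```

Running `lemma_check()` confirms that each basis state $\ket{x}\ket{0}$ merely picks up the phase $e^{i\phi B_s(x)}$, which factorizes over positions.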
\begin{figure}[htbp]
\centering\includegraphics[width=9cm]{Os.pdf}
\caption{A phase $e^{i\phi}$ is added to $| x_i \rangle$ when $x_i$ is equal to $s_i$}
\label{fig2}
\end{figure}
\section{Conclusions and Problems}
In this paper, we have systematically investigated the quantum complexity of Mastermind, obtaining optimal quantum algorithms that achieve the lower bounds in both the non-adaptive and adaptive settings. Substantial quantum speedups over classical play have thus been established for Mastermind.
In the following we list some problems that may be worthy of further consideration.
\noindent\textbf{ Problem 1: What is the quantum complexity of Mastermind without color repetition?} There is a variant of Mastermind where color repetition is prohibited in both the secret string $s$ and the query string $x$. In particular, when $k=n$, this variant is called {\it Permutation} Mastermind. Similar to the case with color repetition, the classical complexity of Permutation Mastermind in the non-adaptive setting is also $\Theta(n\log n)$ \cite{Glazik2021,Larcher2022}.
The classical complexity of Permutation Mastermind in the adaptive setting leaves an $O(\log n)$ gap between the lower bound $\Omega (n)$ and the upper bound $O(n \log n)$ \cite{Ouali2018}.
Our quantum algorithms do not seem directly applicable to this variant.
\noindent\textbf{Problem 2: How to strengthen the robustness of the adaptive quantum algorithms while keeping the complexity $O(\sqrt{k})$?} Algorithm \ref{algorithm:adaptive} applies $n$ exact Grover searches synchronously on the $n$ secret characters. Although we used the exact Grover search, noise is almost inevitable on quantum computers. If the probability of error at each position is $\delta > 0$ due to quantum noise,
the success probability of winning the game after executing the algorithm is $(1-\delta)^n$, which decreases exponentially with $n$. Hence, a problem worth studying is to develop quantum algorithms which can effectively resist noise perturbations.
\bibliographystyle{alpha}
Leaving Your Body to Crime-Solving Science
Canada's first "body farm" is a research facility where experts will hone their crime-solving knowledge, and people are lining up to donate their remains to the effort
If you've ever pondered donating your body to science, you might be one of the many people interested in a new possibility. Set to open this spring in Bécancour, Quebec, near Trois-Rivières, the Secure Site for Research in Thanatology is already being flooded with offers from prospective donors.
Researchers from the University of Montreal and the University of Quebec at Trois-Rivières will be using the bodies to study human decomposition—what happens to a human body left in a field or a car or buried in a shallow grave—to improve police investigations. Bodies will be exposed to a variety of elements and temperatures and will remain in the same spot for years.
Among the researchers' goals is finding better methods for verifying how long a person has been dead.
Estimates are generally based on the life cycles of the insects consuming the body, but Canada's cold climate means that there are month-long stretches in a year when there are no insects. The Quebec site is the perfect spot to learn more.
"Some of the research is about advancing our ability to search and recover victims, and to do that much more rapidly," Shari Forbes, the director of the project, told Yahoo Canada. Scientists will use the site not only to train canine units, but also to explore whether drones can be used to locate missing remains.
The researchers will also be working to create new methods for identifying bodies. "We typically rely on fingerprints, DNA, and teeth, but often after many years, that's not available to identify the victim," Forbes said.
Only researchers and police services, including canine units, will be able to visit the site.
Due to provincial regulations, the site is accepting residents from Quebec only, but other sites are being planned for Ontario and western Canada.
Quebecers who are interested in donating their remains can e-mail Forbes at Shari.Forbes@uqtr.ca.
\section{Introduction}
The action of a quantum field theory is determined by the symmetries of the physical system it is meant to model. In the Wilsonian framework of renormalization, for example, one includes in principle all possible terms consistent with the symmetries of the theory. In the context of gauge theories, the most important symmetries are Poincaré invariance and the gauge symmetry. On the lattice the gauge symmetry is typically implemented exactly, while Poincaré invariance is broken to the subgroup of lattice translations and space-time hypercubic rotations. One then expects to recover the full Poincaré group in the continuum limit.
In the typical framework of gauge theories, one may consider different choices of action with the gauge symmetry and lattice Poincaré symmetry, all of which converge to the \textit{same} continuum limit. In the present work we consider an example of a new type of gauge theory - in particular, a $\mathrm{U}(1)$ gauge theory in three dimensions - which, despite preserving both the lattice gauge symmetry and the lattice Poincaré invariance, has a different continuum limit than the standard Wilson-type $\mathrm{U}(1)$ gauge theory. In fact, by formulating the standard $\mathrm{U}(1)$ gauge theory in the Hamiltonian formalism, we see that it is natural to include an extra parameter in the theory, in this case an angle $\theta$. This extension, which in a mathematical sense corresponds to a \textit{self-adjoint extension} of the electric field operator in the Hamiltonian formalism, would be far from obvious from a Euclidean action perspective. In this sense, our work expands the current framework of lattice gauge theories.
In these proceedings, we first formulate the new type of $\mathrm{U}(1)$ gauge theory and explain how the $\theta$ parameter is introduced. In the $\theta=0$ case, it reduces to the standard $\mathrm{U}(1)$ gauge theory, for which we briefly review the main results in the three-dimensional case. We specialize to three dimensions for simplicity and then consider the case $\theta=\pi$, which is the only value of $\theta$ which preserves the symmetries of the $\theta=0$ theory; in particular, for $\theta \not\in \{0, \pi\}$, charge conjugation is explicitly broken. Similarly to the standard $\mathrm{U}(1)$ gauge theory, the $\theta=\pi$ theory may be written in terms of a dual height model, which we simulate numerically. The dual model is free of the sign problem that appears in the path integral formulation of the $\theta=\pi$ theory. We investigate appropriate order parameters for the breaking of the relevant symmetries, and we find evidence that the $\theta=\pi$ theory has a broken ${\mathbb{Z}}_2$ symmetry down to the continuum limit, a feature which is absent from the standard $\mathrm{U}(1)$ theory.
\section{The standard compact Abelian gauge theory in 3D}\label{sec:usual u(1)}
In this section we briefly review some results, as well as the Hamiltonian formulation, for the standard compact $\mathrm{U}(1)$ gauge theory in three dimensions.
\subsection{Action formulation}
The standard compact Wilson-type $\mathrm{U}(1)$ gauge theory in 3D is well-understood both analytically and numerically \cite{GopfMack}. In the Villain formulation, its partition function is given by
\begin{equation}
\label{eq:usual u(1) partition function}
Z = \pqty{\prod_{l \in \mathrm{links}} \int_{-\pi}^\pi d\theta_l} \pqty{\prod_{p \in \mathrm{plaq}} \sum_{n_p = -\infty}^{+\infty}} \exp{\bqty{ -\frac{1}{2e^2} \sum_p ((d\theta)_p - 2\pi n_p)^2}} \ ,
\end{equation}
where $e^2$ is a dimensionless coupling, $l$ are lattice links and $p$ are plaquettes. Labelling the ordered links in the plaquette $p$ from $1$ to $4$, we have $(d\theta)_p = \theta_1 + \theta_2 - \theta_3 - \theta_4$. Many features of the theory can be understood analytically by dualization. The partition function eq.\eqref{eq:usual u(1) partition function} can be rewritten in terms of new field variables $h_x \in {\mathbb{Z}}$ which live on the sites $x$ of the dual lattice. The end result is that, up to some constant prefactors,
\begin{equation}
\label{eq:unstaggered height model}
Z = \pqty{\prod_{x \in \mathrm{sites}} \sum_{h_x=-\infty}^{+\infty}}\exp{\bqty{-\frac{e^2}{2} \sum_{\expval{xy}} (h_x-h_y)^2}} \ ,
\end{equation}
where $h_x \in {\mathbb{Z}}$ are integer-valued scalar fields on the dual lattice, and $\expval{xy}$ denotes nearest neighbours. It is precisely the integer nature of the fields which makes the theory non-trivial. If $h_x \in {\mathbb{R}}$, this would simply be the action of a massless free real scalar field.
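For illustration, the height model eq.\eqref{eq:unstaggered height model} can be simulated with a simple local Metropolis update. This is a minimal sketch of our own (all names are ours); the numerical studies of such models typically use the more efficient cluster algorithms discussed later:

```python
import numpy as np

def metropolis_sweep(h, e2, rng):
    """One Metropolis sweep of the height model
    S = (e^2/2) * sum_<xy> (h_x - h_y)^2 on a periodic L^3 lattice."""
    L = h.shape[0]
    for _ in range(h.size):
        x, y, z = rng.integers(0, L, size=3)
        old = h[x, y, z]
        new = old + rng.choice((-1, 1))          # proposal keeps h integer
        nn = (h[(x + 1) % L, y, z] + h[(x - 1) % L, y, z]
              + h[x, (y + 1) % L, z] + h[x, (y - 1) % L, z]
              + h[x, y, (z + 1) % L] + h[x, y, (z - 1) % L])
        # change in action from the 6 bonds touching site (x, y, z)
        dS = 0.5 * e2 * (6 * (new**2 - old**2) - 2 * (new - old) * nn)
        if dS <= 0 or rng.random() < np.exp(-dS):
            h[x, y, z] = new

rng = np.random.default_rng(0)
h = np.zeros((8, 8, 8), dtype=int)               # integer height variables
for _ in range(20):
    metropolis_sweep(h, e2=1.0, rng=rng)
```

Seeding the generator makes runs reproducible; the height field stays integer-valued by construction.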
Through the analysis of the height model eq.\eqref{eq:unstaggered height model}, it has been shown analytically that 3D $\mathrm{U}(1)$ gauge theory is confining at all couplings, with a mass gap $m$ which scales as \cite{GopfMack}
\begin{equation}
\label{eq:mass gap}
m^2 = \frac{8\pi^2}{e^2} \exp{\bqty{-2\pi^2 c/e^2}} \ ,
\end{equation}
in lattice units, where $c \approx 0.2527$, while the string tension scales as
\begin{equation}
\sigma = \frac{\widetilde{c}}{4\pi^2} m e^2
\end{equation}
for some constant $\widetilde{c}$, again in lattice units. These results have found numerical confirmation \cite{TepAthen, CasPan}. The continuum limit is achieved as $e^2 \to 0$. Perhaps surprisingly, the resulting continuum theory is a free scalar field of mass $m$ \cite{GopfMack}.
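In lattice units these scaling formulas are easy to evaluate. The short Python sketch below is ours; since $\widetilde{c}$ is an undetermined constant, it is set to $1$ purely for illustration. The gap closes as $e^2 \to 0$, signalling the continuum limit:

```python
import math

C = 0.2527  # the constant c appearing in the mass-gap formula

def mass_gap(e2):
    """Mass gap m in lattice units: m^2 = (8 pi^2 / e^2) exp(-2 pi^2 c / e^2)."""
    return math.sqrt(8 * math.pi**2 / e2 * math.exp(-2 * math.pi**2 * C / e2))

def string_tension(e2, c_tilde=1.0):
    """sigma = (c_tilde / 4 pi^2) m e^2, with c_tilde set to 1 here
    purely for illustration (its true value is not fixed by the formula)."""
    return c_tilde / (4 * math.pi**2) * mass_gap(e2) * e2

for e2 in (0.5, 1.0, 2.0):
    print(e2, mass_gap(e2))   # the gap shrinks rapidly as e^2 decreases
```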
\subsection{Hamiltonian formulation}
In the Hamiltonian formulation of $\mathrm{U}(1)$ gauge theory, time is continuous while space is discretized into a square lattice \cite{KogSuss}. The temporal gauge $A_0=0$ is chosen. Classically, one assigns a $\mathrm{U}(1)$ variable to each lattice link. Thus, quantising, the Hilbert space on each link is given by the set of square-integrable functions on $\mathrm{U}(1)$ and the overall Hilbert space is therefore
\begin{equation}
\mathcal{H} = \bigotimes_{l \in \mathrm{links}} L^2(\mathrm{U}(1))_l \ .
\end{equation}
The Hamiltonian of the theory can be obtained via the transfer matrix and is given by
\begin{equation}
H = \frac{e^2}{2} \sum_{l \in \mathrm{links}} E_l^2 + \frac{1}{2e^2}\sum_{p \in \mathrm{plaq}} B^2(U_p) \ ,
\end{equation}
where $U_p$ is the usual plaquette variable and the electric field operator $E_l$ is given on each link by $E_l = -i\pdv{}{\varphi_l}$, where $U_l = \exp{(i\varphi_l)}$ is the $\mathrm{U}(1)$ link variable. Thus one may interpret $\varphi_l$ as an angular position operator on the circle $\mathrm{U}(1)$, with $E_l$ its canonically conjugate angular momentum. The magnetic term $B^2$ may be chosen in different equivalent ways, similarly to the freedom available in the choice of action. For example, $B^2$ could be either the Wilson or Villain term for the standard $\mathrm{U}(1)$ gauge theory. There is an additional constraint on the Hilbert space, in that only those states which satisfy the Gauss' law constraint
\begin{equation}
\label{eq:gauss law}
G_x \ket{\psi} = 0 \ , \quad\quad\quad\quad G_x = \sum_{i} \pqty{E_{x,i}-E_{x-i,i}}
\end{equation}
are to be considered as physical states.
\section{A new type of Abelian gauge theory}
In the previous section we have seen that, in the Hamiltonian formulation, on each lattice link the Hilbert space is formed by the square-integrable functions on $\mathrm{U}(1)$. Each such wavefunction $\psi \in L^2(\mathrm{U}(1))$ therefore satisfies
\begin{equation}
\label{eq:periodic bcs}
\psi(2\pi)=\psi(0), \quad \quad \quad \quad \int_0^{2\pi} d\varphi \abs{\psi(\varphi)}^2 < \infty \ .
\end{equation}
In other words, on each link the wavefunctions are required to come back to themselves after going around $2\pi$ in gauge field space. This, however, is not the most general choice that is consistent with the symmetries of the theory. One may, in fact, require that the wavefunction only come back to itself up to a twist,
\begin{equation}
\label{eq:twisted periodic bcs}
\psi(2\pi)=e^{i\theta}\psi(0) \ .
\end{equation}
This choice is consistent with the gauge symmetry, so that the resulting theory still has a $\mathrm{U}(1)$ gauge symmetry. In particular, the choice eq.\eqref{eq:twisted periodic bcs} is still compatible with the Gauss law eq.\eqref{eq:gauss law}. Another way of understanding the choice eq.\eqref{eq:twisted periodic bcs} is through the mathematical concept of \textit{self-adjoint extensions} \cite{ReedSimon, Gieres}. In the standard Wilson-type gauge theory, the electric field $E=-i\partial_\varphi$ is an operator on the single-link Hilbert space. In particular, in order to make sense of the Hamiltonian and the Gauss' law constraint, the electric field must be \textit{self-adjoint}, so that it has a complete orthonormal basis of eigenfunctions and real eigenvalues. The choice eq.\eqref{eq:twisted periodic bcs} is in fact the most general choice consistent with the requirement that $E=-i\partial_\varphi$ be self-adjoint.
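Concretely, under the twisted boundary condition eq.\eqref{eq:twisted periodic bcs} the self-adjoint electric field $E=-i\partial_\varphi$ has eigenfunctions $e^{i(n+\theta/2\pi)\varphi}$ with eigenvalues $n+\theta/2\pi$, i.e. the integer spectrum of the standard theory shifted by $\theta/2\pi$, matching the shift $E_l \to E_l + \tfrac{\theta}{2\pi}$ discussed below. A small numerical check (our own Python sketch):

```python
import numpy as np

theta = np.pi  # the charge-conjugation-symmetric choice of twist
for n in range(-3, 4):
    lam = n + theta / (2 * np.pi)                 # eigenvalue of E = -i d/dphi
    psi = lambda phi, lam=lam: np.exp(1j * lam * phi)
    # twisted periodicity psi(2 pi) = e^{i theta} psi(0)
    assert np.isclose(psi(2 * np.pi), np.exp(1j * theta) * psi(0))
    # eigenvalue check via a central finite difference
    eps = 1e-6
    d = (psi(1.0 + eps) - psi(1.0 - eps)) / (2 * eps)
    assert np.isclose(-1j * d, lam * psi(1.0), atol=1e-5)

print("spectrum:", [n + theta / (2 * np.pi) for n in range(-3, 4)])
```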
\begin{wrapfigure}{r}{0.4\textwidth}
\centering
\begin{tikzpicture}
\draw[step=1.0,black] (0.5,0.5) grid (4.5,3.5);
\foreach \partial in {(2,1),(2,3),(1,2),(3,2), (4,1), (4,3)}
\fill \partial circle(.10);
\foreach \partial in {(1,1),(1,3),(2,2),(3,1), (3,3), (4,2)}
{
\fill[white] \partial circle(.10);
\draw \partial circle(.10);
}
\end{tikzpicture}
\caption{A two dimensional slice of the lattice, depicting the staggered integer (black dots) and half-integer (white dots) height variables.}
\label{fig:staggered lattice}
\end{wrapfigure}
Up until now, we have modified only the domain of definition of the electric field operator, i.e. the Hilbert space of the theory. In fact, requiring that the wavefunctions satisfy eq.\eqref{eq:twisted periodic bcs} in this case is equivalent, by a basis change, to having wavefunctions which satisfy eq.\eqref{eq:periodic bcs} with a modified electric field operator $E_l \to E_l +\tfrac{\theta}{2\pi}$. Starting from the Villain form of the magnetic term $B^2$ of the Hamiltonian, we then modify it, similarly to the electric field, in such a way as to preserve both the structure of small fluctuations and the overall Euclidean lattice cubic symmetry in the partition function. Since we aim to understand the resulting theory numerically, we obtain the partition function from the Hamiltonian. We then dualize the partition function, as we have seen in Section \ref{sec:usual u(1)} for the standard $\mathrm{U}(1)$ theory. We note that for $\theta \neq 0$, the original partition function has a sign problem which is absent from the dualized partition function expressed in terms of the dual variables.
Among the possible values of $\theta$, only the $\theta = \pi$ theory preserves all the symmetries of the $\theta = 0$ theory; in particular, for $\theta \not\in \{ 0, \pi \}$ charge conjugation is explicitly broken. We therefore focus on the $\theta=\pi$ case, for which the dualized partition function is that of a height model,
\begin{equation}
\label{eq:staggered height model}
Z = \pqty{\prod_{x \mathrm{even}} \sum_{h_x \in {\mathbb{Z}}}} \pqty{\prod_{x \mathrm{odd}} \sum_{h_x \in {\mathbb{Z}} + 1/2}}\exp{\bqty{-\frac{e^2}{2} \sum_{\expval{xy}} (h_x-h_y)^2}} \ ,
\end{equation}
where each lattice site at position $\vec x = (x_0, x_1, x_2)$ in lattice units is said to be even or odd according to the parity of $x_0+x_1+x_2$. The height variables $h_x$ are integer on even lattice sites and half-integer on odd lattice sites. The staggering is isotropic in space-time; each integer variable is surrounded by half-integer nearest neighbours in all directions, and vice versa. This should be contrasted with the case of quantum link models, where a similar dualization can be performed but the staggering only occurs in the space directions \cite{QuantumLink}. An example of a staggered lattice in two dimensions is shown in figure \ref{fig:staggered lattice}; in our case, however, we work in three dimensions and the staggering is in all directions. The partition function eq.\eqref{eq:staggered height model} should be contrasted with the dual partition function for the standard $\mathrm{U}(1)$ theory, eq.\eqref{eq:unstaggered height model}; the two are identical except for the nature of the height variables, which for the latter are all integer-valued.
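The staggering is straightforward to set up in a simulation. In the sketch below (our own Python illustration), heights are integer on even sites and half-integer on odd sites, and local moves $h_x \to h_x \pm 1$ preserve each site's (half-)integer class, so the same update machinery as in the unstaggered model carries over:

```python
import numpy as np

L = 8  # an even L, so periodic boundaries respect the staggering
x, y, z = np.indices((L, L, L))
parity = (x + y + z) % 2
# integer heights on even sites, half-integer heights on odd sites
h = np.zeros((L, L, L)) + 0.5 * parity
# a local move h -> h +/- 1 preserves each site's (half-)integer class
assert np.all(np.isclose((2 * h) % 2, parity))
```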
\section{Symmetries and order parameters}
The theory described by the partition function eq.\eqref{eq:staggered height model}, i.e. the dualized $\theta=\pi$ 3D $\mathrm{U}(1)$ gauge theory, has several symmetries:
\begin{enumerate}
\item \textit{Global ${\mathbb{Z}}$-invariance}: $h_x \to h_x + c$ where $c$ is any constant integer. This preserves both the action and the integer and half-integer nature of the height variables.
\item \textit{${\mathbb{Z}}_2$ charge conjugation $C$}: $h_x \to -h_x$. Again this preserves both the action and the nature of the height variables.
\item \textit{${\mathbb{Z}}_2$ shift symmetry $S$}: $h_x \to h_{x+\mu}+\frac12$ where $\mu$ is any direction. Since shifting by one lattice spacing in any direction replaces integer height variables with half-integers and vice-versa, we add a half-integer shift in order to restore their nature.
\item \textit{Translations by an even number of lattice spacings}. These preserve the nature of the height variables and therefore do not require any further staggering.
\item \textit{Euclidean cubic symmetry}. The action is isotropic with respect to lattice rotations and reflections in any of the three directions.
\end{enumerate}
The global ${\mathbb{Z}}$-invariance should be thought of as a redundancy in our description of the system, whereby we are free to set the overall height of the system \cite{GopfMack}. We will therefore require that all observables be invariant under the global ${\mathbb{Z}}$-symmetry.
We now construct order parameters for the possible breaking of the $S$ and $C$ symmetries. We consider first the observable
\begin{equation}
O_{CS} = \sum_{x} (-1)^x h_x = \sum_{x\,\mathrm{even}} h_x - \sum_{x\,\mathrm{odd}} h_x \ ,
\end{equation}
which is sensitive to both $S$ and $C$, under either of which it changes sign. We note that $O_{CS}$ is indeed invariant under the global ${\mathbb{Z}}$-symmetry, and moreover, it is a sum of local observables.
An observable which is sensitive to $S$ but not $C$ is more difficult to construct, since it must also be ${\mathbb{Z}}$-invariant and a sum of local terms. We choose the observable $O_S$ to be
\begin{equation}
O_S = \sum_{c \in \mathrm{cubes}} \sum_{x \in c} (-1)^x (h_x - \bar{h}_c)^2 \ ,
\end{equation}
where the sum is first over all unit cubes in the lattice and then over all sites $x$ belonging to cube $c$ with the relevant parity. The cube average $\bar{h}_c$ is defined as
\begin{equation}
\bar{h}_c = \frac{1}{8} \sum_{x \in c} h_x \ .
\end{equation}
The observable $O_S$ is invariant under charge conjugation $C$ but changes sign under the shift symmetry $S$. We note that a theory whose phase diagram was characterized by order parameters for the breaking of charge conjugation and one-site shift symmetry was already considered in the context of quantum link models \cite{QuantumLink}.
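For reference, both order parameters are straightforward to measure on a configuration. The Python below is our own illustrative implementation (direct, unoptimized loops); the sign properties under $C$ and $S$ can be checked directly on a test configuration:

```python
import numpy as np

def O_CS(h):
    """sum_x (-1)^x h_x: odd under both C (h -> -h) and the shift S."""
    x, y, z = np.indices(h.shape)
    sign = 1 - 2 * ((x + y + z) % 2)
    return float(np.sum(sign * h))

def O_S(h):
    """Staggered squared deviations from unit-cube means:
    even under C, odd under S."""
    L = h.shape[0]
    dx, dy, dz = np.indices((2, 2, 2))
    corner_sign = 1 - 2 * ((dx + dy + dz) % 2)   # local corner parity sign
    total = 0.0
    for cx in range(L):
        for cy in range(L):
            for cz in range(L):
                cube = h[np.ix_([cx, (cx + 1) % L],
                                [cy, (cy + 1) % L],
                                [cz, (cz + 1) % L])]
                base = 1 - 2 * ((cx + cy + cz) % 2)  # parity of cube origin
                total += base * np.sum(corner_sign * (cube - cube.mean()) ** 2)
    return total

rng = np.random.default_rng(1)
x, y, z = np.indices((4, 4, 4))
parity = (x + y + z) % 2
h = rng.integers(0, 3, size=(4, 4, 4)) + 0.5 * parity   # a staggered config
print(O_CS(h), O_S(h))
```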
\section{Numerical simulation}
\begin{figure}
\centering
\includegraphics{O_S_hist_0.70_and_sweep.pdf}
\caption{Left: Histogram of the operator $O_S$ normalized by the volume $V$ at $e^2=0.70$. Right: Susceptibility of $O_S$ normalized by $V^2$ as a function of $e^2$ for several volumes, with a fit of the large-volume data to the form $A \exp{(-B / e^2)}$. The double-peaked structure at large volume (left) and scaling as $V^2$ (right) indicates spontaneous breaking of the $S$ symmetry.}
\label{fig:OS observable data}
\end{figure}
We simulated the staggered height model numerically using a multi-cluster algorithm \cite{Evertz}. We investigated a range of couplings from $e^2 = 0.3$ up to $e^2 = 2.0$ on $L^3$ lattices from $L=32$ up to $L=256$. We computed the observables $O_S$ and $O_{CS}$ and show the relevant data in figures \ref{fig:OS observable data} and \ref{fig:OCS observable data}. We expect the continuum limit to emerge as $e^2 \to 0$, as in the case of the standard $\mathrm{U}(1)$ gauge theory.
\begin{figure}
\centering
\includegraphics{O_CS_hist_0.70_and_sweep.pdf}
\caption{Left: Histogram of the operator $O_{CS}$ normalized by the volume at $e^2=0.70$. Right: Susceptibility of $O_{CS}$ normalized by $V^2$ as a function of $e^2$ for several volumes. A single histogram peak at large volumes (left) and scaling smaller than $V^2$ (right) indicates an unbroken $CS$ symmetry.}
\label{fig:OCS observable data}
\end{figure}
Focusing on figure \ref{fig:OS observable data}, we see that the susceptibility of $O_S$, appropriately normalized by the volume squared, is essentially volume-independent for a wide range of couplings $e^2$, indicating that the $S$ symmetry is spontaneously broken. This is confirmed by the histogram, which shows two clearly defined peaks which become sharper with increasing volume. As the coupling becomes smaller, finite volume effects cause the $S$ symmetry to be restored. However, we see that this happens for decreasing values of $e^2$ as the volume is increased, which indicates that this is likely a finite-volume effect. Moreover, the function $f(e^2)=A \exp{(-B/e^2)}$, inspired by eq.\eqref{eq:mass gap}, provides an excellent fit to the data and would imply that the symmetry is broken for all couplings. We interpret the data to mean that the $S$ symmetry (shift by one lattice spacing) is broken for all values of the coupling $e^2$. It is important to note that translations by an even number of lattice spacings remain unbroken, so that we still expect a translationally invariant theory in the continuum limit.
The data for the $O_{CS}$ observable, on the other hand, is shown in figure \ref{fig:OCS observable data}. In this case the susceptibility normalized by $V^2$ decreases with the volume, which indicates that at least one of $S$ or $C$ is unbroken. The decrease with volume is less apparent in the central region of the right-hand panel of figure \ref{fig:OCS observable data}, but the histogram, taken at $e^2=0.70$ (in this region), shows that while two peaks seem to form at small volumes, they merge around zero at larger volumes. We therefore interpret the data to mean that the $C$ symmetry remains unbroken for all couplings.
Overall, we therefore see evidence that the continuum limit of the staggered model, obtained as $e^2 \to 0$, has a spontaneously broken ${\mathbb{Z}}_2$ symmetry, whereas the corresponding symmetry remains unbroken in the standard Wilson-type $\mathrm{U}(1)$ gauge theory. It is conceivable that the broken shift symmetry may manifest itself in terms of the internal degrees of freedom of the continuum theory. It would be interesting to obtain an effective description of the $\theta=\pi$ theory in the continuum and compare with the $\theta=0$ case, which, as we have seen in Section \ref{sec:usual u(1)}, is described in the continuum by a massive free scalar.
\section{Conclusions}
We have shown that $\mathrm{U}(1)$ gauge theory may be extended by the addition of an extra non-perturbative parameter $\theta$, in a way which preserves both the gauge symmetry and the Euclidean hypercubic symmetry. We considered the theory in three dimensions for $\theta=\pi$ and dualized it to obtain a height model free of the sign problem which affects the original theory. We find evidence that the $\theta=\pi$ theory has a broken ${\mathbb{Z}}_2$ symmetry which is absent from the standard $\mathrm{U}(1)$ gauge theory, and therefore describes a different continuum limit.
We are currently performing additional numerical work to compute other quantities for the staggered height model, such as the mass gap and the string tension. It would also be worthwhile to understand the universality class corresponding to the continuum limit of the $\theta=\pi$ theory, and understand the nature of the effective theory that describes it.
Other interesting extensions of the present work involve studying the theory in different dimensions (for example in 4D) or at values of $\theta \not\in \{0, \pi\}$ and, perhaps more importantly, understanding the theoretical framework and performing numerical simulations in the non-Abelian case.
Meredith Teasley Photography is a wedding and family photographer based in Nolensville, Tennessee, serving the Nashville area. This southern photographer is an ideal choice for clients looking for timeless, stunning wedding photography that leaves a legacy. Meredith is a passionate photographer who loves people and considers it a privilege to document your love story with timeless and gorgeous images of your wedding day.
Meredith did an amazing job with our wedding photos and everyone loved working with her! She has a wonderful eye and is just a lovely person. She also turned around the photos pretty quickly. I definitely recommend her!
Meredith is such a lovely and gifted person that puts everyone at ease to bring out the best of each personality and captures the authenticity of the moment. Working with Meredith you don't just get pictures, you get a Family Treasure.
Meredith did a wonderful job photographing my daughter's wedding in June. The photographs are spectacular! She was very discreet, but still able to get wonderful photos of every aspect of the wedding. I would hire her again without hesitation! | {
"redpajama_set_name": "RedPajamaC4"
} | 9,752 |
London & Rotterdam – November 26 2013 – Unilever and the Cambridge Programme for Sustainability Leadership (CPSL) today announced the seven finalists for the inaugural Unilever Sustainable Living Young Entrepreneurs Awards 2013. The international awards programme is designed to inspire young people around the world to tackle environmental, social and health issues. The competition is for anyone aged 30 years or under, and the awards were looking for inspiring practical, tangible solutions to help make sustainable living commonplace.
Out of 510 entries from 90 countries, seven finalists were selected. The candidates submitted scalable and sustainable solutions in the form of products, services or applications that enable changes in practices or behaviours in, for example: sanitation and hygiene; water scarcity; greenhouse gases; waste; sustainable agriculture; and helping smallholder farmers.
The finalists will take part in a four week development programme followed by an accelerator workshop in Cambridge, UK, at which expert help and professional guidance will be provided to help them develop their ideas. This will be followed by a pitch to a panel of judges in London, comprising entrepreneurs and leaders from business and sustainability. The winner and finalists will attend a prestigious dinner in London on 30 January 2014 at which the HRH The Prince of Wales Prize will be presented.
The Prize winner will receive €50,000 in financial support and individually tailored mentoring, while the six other finalists will each receive €10,000 in financial support and mentoring. Four runners-up will also receive an online development programme to help them further develop their ideas.
In 2013, the Cambridge Programme for Sustainability Leadership (CPSL) celebrates its 25th anniversary of working with leaders on the critical global challenges faced by business and society.
CPSL contributes to the University's mission and leadership position in the field of sustainability via a mix of executive programmes, business platforms and strategic engagements, informed by world-class thinking and research. Its leadership network consists of more than 5,000 alumni from leading global organisations and an expert team of Fellows, Senior Associates and staff.
HRH The Prince of Wales is the patron of CPSL, which is a member of The Prince's Charities, a group of not-for-profit organisations of which His Royal Highness is President.
9/11 remains identified 17 years after attacks that changed America
By Natalie Dreier, Cox Media Group National Content Desk
It has been 17 years since the September 11 attacks, but the work of identifying all those who perished that day is not done.
The New York Times reported that another unknown victim of the attacks now has been identified.
Mark Desire, the assistant director of forensic biology at the New York City Medical Examiner's office, announced that a bone fragment that the office analyzed belonged to Scott Michael Johnson. Johnson was a 26-year-old financial worker.
Remains of Montclair native killed on 9/11 identified nearly 17 years after WTC attacks https://t.co/dtxI3DeReY pic.twitter.com/z5Zh9cCVVw
— NJ.com (@njdotcom) July 25, 2018
Desire told the Times that his team worked on identifying the fragment half a dozen times since it was found in what was left of the World Trade Center.
Finally, they were able to get enough DNA from the sample to make an identification, making Johnson the 1,642nd victim named, NJ.com reported. The attack on New York killed 2,753 people.
About 40 percent of those who were killed in the attacks have not had any remains identified, CBS News reported.
The medical examiner's office had been trying to identify more than 22,000 remains, the Times reported.
Johnson was the first person identified in nearly a year. The previous person's name was withheld at the family's request, the Times reported.
When Johnson's parents learned that their son had finally been identified, nearly two decades after the attacks, it took them right back to the day of the attack.
"You get pulled right back into it and it also means there's finality. Somehow I always thought he would just walk up and say, 'Here I am. I had amnesia,'" Ann Johnson, Scott's mother, told the Times.
Scott's father, Tom, said he's not comforted knowing that his son has been finally identified. He said he appreciated the work the medical examiner's office has done, but the pain itself never goes away.
"He was one of the kindest people that anyone around him had ever known. The pain of losing someone like that was tremendous," Tom Johnson told the Times.
Saudia explores 777X order with Boeing under pressure
Ruta Burbaite
Saudi Arabia's flag carrier, Saudi Arabian Airlines, is reportedly looking into a potential order for Boeing's 777X wide-body jets at a time when the U.S. plane maker is experiencing pressure in production and orders.
The Saudi carrier's talks with Boeing were first reported by Reuters on July 9, 2018, citing "three sources familiar with the matter". The airline's director-general Saleh bin Nasser al-Jasser had previously told the news agency that a wide-body jet order would be considered in 2018.
However, it is not clear when the airline, also known as Saudia, would reach a deal with the U.S. plane maker or how many jets it would order. Neither Saudia nor Boeing have released any official statements yet.
As to the jets the Saudi carrier could be interested in: the 350-to-375-seat 777-8 variant is valued at $360.5 million at list price and the 400-to-425-seat 777-9 variant at $388.7 million, according to Boeing, which would bring the potential purchase of these airliners to a sizeable sum for the airline.
However, it seems that Saudia is a very likely customer for the 777X Family jets, as the airline operates a mixed fleet of 148 Airbus and Boeing aircraft, including the 777 jets (nine 777-200ERs and 33 777-300ERs) as well as 13 787-9 Dreamliners.
Boeing feels the pressure
Saudia's potential order could alleviate production pressure for the U.S. plane maker as it is now unlikely that the order for 80 jets by IranAir will be fulfilled, since it was placed before the U.S. withdrew from a nuclear deal with Iran and suggested re-imposing sanctions on the country, Reuters reports.
Saudia's interest also comes at a time when the solidity of existing 777X orders from the Gulf are clouded by doubts, particularly from Etihad Airways, which is currently restructuring after accumulating large losses in the past two years. The UAE carrier's order for 25 777X jets, which dates back to November 2013, remains unfulfilled in Boeing's orders and deliveries book.
And finally, the wide-body 777X is an upgrade to Boeing's successful 777 and 787 Dreamliner families, with a longer composite wing with folding tips, a stretched and updated fuselage, and new 100,000lb-thrust (445kN) General Electric GE9X turbofan engines.
But with the arrival of the 777X Family, Boeing is under pressure to bridge the gap between the new jets and the end of production of the 777-300ER and 777 Freighter. That is why the 777-9 model is due to enter service in December 2019, earlier than the previously reported 2020 date.
The 777-8 (229 ft) would succeed the ultra-long-range 777-200LR (209 ft 1 in), and compete with Airbus A350-1000. The 777-9 (252 ft) would in turn succeed the 777-300ER (242 ft 4 in) and would be longer than the previous longest airliner, the 747-8 (250 ft 2 in).
Instabilité des équations de Schrödinger

Abstract: In this work we study different instability phenomena for nonlinear Schrödinger equations. In the first part we show a phase decoherence mechanism for the semiclassical Gross-Pitaevskii equation in dimension 3. This geometrical phenomenon occurs because the harmonic potential allows the construction of stationary solutions of the equation which concentrate on circles of R^3. In the second part, we obtain a geometric instability result for the cubic NLS on a Riemannian surface. We assume that this surface admits a stable and nondegenerate periodic geodesic. Then, with a WKB method, we construct nonlinear quasimodes and obtain approximate solutions to the equation for times such that instability occurs. Thus we generalize results of Burq-Gérard-Tzvetkov for the sphere. In the last part, we consider supercritical Schrödinger equations on a Riemannian manifold of dimension d. Using nonlinear geometric optics in an analytic framework, we show a mechanism of loss of derivatives in Sobolev spaces and an instability in the energy space.

https://tel.archives-ouvertes.fr/tel-00265284

Laurent Thomann. Instabilité des équations de Schrödinger. Mathématiques [math]. Université Paris Sud - Paris XI, 2007. Français. ⟨tel-00265284⟩
\section{Introduction}
The Standard Model (SM) of particle physics is very successful in
describing the interactions of the elementary particles, except
possibly neutrinos. Although it is regarded as a good low-energy
effective theory, the SM has many theoretical problems. Its gauge
symmetry group is the direct product of three groups $SU(3)\times
SU(2)\times U(1)$ and the corresponding gauge couplings are
unrelated. It does not explain the three family structure of quarks
and leptons, and their masses are fixed by arbitrary Yukawa couplings,
with neutrinos being prevented from having mass. The Higgs sector,
responsible for the symmetry breaking and for the fermion
masses, has not been tested experimentally and the mass of the Higgs
boson is unstable under radiative corrections.
In supersymmetry (SUSY) \cite{SUSY} the Higgs boson mass is stabilized
under radiative corrections because the loops containing standard
particles are partially canceled by the contributions from loops
containing SUSY particles. If to the Minimal
Supersymmetric Standard Model (MSSM) \cite{MSSM} we add the notion of Grand
Unified Theory (GUT), then we find that the three gauge couplings
approximately unify at a certain scale $M_{GUT}$ \cite{GUT}. Indeed,
measurements of the gauge couplings at the CERN $e^+e^-$ collider LEP
and neutral current data \cite{PDG} are in much better agreement with
the MSSM--GUT with the SUSY scale $M_{SUSY}\raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 1$ TeV
\cite{gaugeUnif}, as compared with the SM.
Besides achieving gauge coupling unification \cite{gaugUnifRecent},
GUT theories also reduce the number of free parameters in the Yukawa
sector. For example, in $SU(5)$ models, the bottom quark and the tau
lepton Yukawa couplings are equal at the unification scale, and the
predicted ratio $m_b/m_{\tau}$ at the weak scale agrees with
experiments. Furthermore, a relation between the top quark mass and
$\tan\beta=v_u/v_d$, the ratio between the vacuum expectation values of the
two Higgs doublets is predicted. Two solutions are possible,
characterized by low and high values of $\tan\beta$ \cite{YukUnif}.
In models with larger groups, such as $SO(10)$ and $E_6$, both the top
and bottom Yukawa couplings are unified with the tau Yukawa at the
unification scale \cite{YukUniThree}. In this case, only the large
$\tan\beta$ solution survives.
In this talk we describe some recent results \cite{YukUnifBRpV},
that show that the minimal extension of the MSSM--GUT
\cite{epsrad} in which R--Parity Violation (RPV) is introduced via a
bilinear term in the MSSM superpotential \cite{e3others,chaHiggsEps},
allows $b$-$\tau$ Yukawa unification for any value of
$\tan\beta$.
We also analyze the $t$-$b$-$\tau$ Yukawa unification and find that it is
easier to achieve than in the MSSM, occurring in a slightly wider high
$\tan\beta$ region. We also address the question of the compatibility
between the predicted and measured value for $\alpha_s(M_Z)$ in the
MSSM and in the bilinear RPV model.
\section{Description of the Model}
The superpotential $W$ is given by \cite{e3others,chaHiggsEps}
\begin{eqnarray}
W&\hskip -1mm=\hskip -1mm&
\varepsilon_{ab}\!\left[
h_U^{ij}\widehat Q_i^a\widehat U_j\widehat H_u^b
\!+\! h_D^{ij}\widehat Q_i^b\widehat D_j\widehat H_d^a
\!+\! h_E^{ij}\widehat L_i^b\widehat R_j\widehat H_d^a \right.\cr
\vb{18}
&\hskip -6mm&
\left. \hskip 1cm
-\mu\widehat H_d^a\widehat H_u^b
+\epsilon_i\widehat L_i^a\widehat H_u^b\right]
\label{superpotential}
\end{eqnarray}
where $i,j=1,2,3$ are generation indices, $a,b=1,2$ are $SU(2)$
indices. This superpotential is motivated by models of spontaneous
breaking of R--Parity~\cite{SBRpV}. Here R--Parity and lepton number
are explicitly violated by the last term in Eq.~(\ref{superpotential}).
The set of soft SUSY
breaking terms are
\begin{eqnarray}
V_{soft}&\hskip -1mm=\hskip -1mm&
M_Q^{ij2}\widetilde Q^{a*}_i\widetilde Q^a_j+M_U^{ij2}
\widetilde U^*_i\widetilde U_j+M_D^{ij2}\widetilde D^*_i
\widetilde D_j \cr
\vb{18}
&\hskip -2mm&\hskip -5mm
+M_L^{ij2}\widetilde L^{a*}_i\widetilde L^a_j
+M_R^{ij2}\widetilde R^*_i\widetilde R_j+m_{H_d}^2 H^{a*}_d H^a_d\cr
\vb{18}
&\hskip -2mm&\hskip -5mm
+m_{H_u}^2 H^{a*}_u H^a_u
- \left[\ifmath{{\textstyle{1 \over 2}}} \sum M_i\lambda_i\lambda_i+h.c.\right]\cr
\vb{18}
&\hskip -2mm&\hskip -5mm
+\varepsilon_{ab}\left[
A_U^{ij}\widetilde Q^a_i\widetilde U_j H_u^b
+A_D^{ij}\widetilde Q^b_i\widetilde D_j H_d^a\right.\cr
\vb{18}
&\hskip -2mm& \hskip -5mm\left.
+A_E^{ij}\widetilde L^b_i\widetilde R_j H_d^a
\!-\!B\mu H_d^a H_u^b\!+\!B_i\epsilon_i\widetilde L^a_i H_u^b\right]
\,.
\end{eqnarray}
The bilinear
R-parity violating term {\sl cannot} be eliminated by superfield
redefinition. The
reason~\cite{marco} is that the bottom Yukawa coupling, usually neglected,
plays a crucial role in splitting
the soft-breaking parameters $B$ and $B_i$ as well as the scalar
masses $m_{H_d}^2$ and $M_L^{2}$, assumed to be equal at the
unification scale.
\noindent
The electroweak symmetry is broken when the two Higgs doublets $H_d$
and $H_u$ and the sneutrinos acquire vacuum expectation values:
\begin{eqnarray}
H_d&=&{{{1\over{\sqrt{2}}}[\chi^0_d+v_d+i\varphi^0_d]}\choose{
H^-_d}} \\
H_u&=&{{H^+_u}\choose{{1\over{\sqrt{2}}}[\chi^0_u+v_u+
i\varphi^0_u]}}\\
L_i&=&{{{1\over{\sqrt{2}}}
[\tilde\nu^R_{i}+v_i+i\tilde\nu^I_{i}]}\choose{\tilde\ell^{i}}}
\end{eqnarray}
The gauge bosons $W$ and $Z$ acquire masses
\begin{equation}
m_W^2=\ifmath{{\textstyle{1 \over 4}}} g^2v^2 \quad ; \quad m_Z^2=\ifmath{{\textstyle{1 \over 4}}}(g^2+g'^2)v^2
\end{equation}
where
\begin{equation}
v^2\equiv v_d^2+v_u^2+v_1^2+v_2^2+v_3^2=(246 \; {\rm GeV})^2
\end{equation}
We introduce the
following notation in spherical coordinates:
\begin{eqnarray}
v_d&=&v\sin\theta_1\sin\theta_2\sin\theta_3\cos\beta\cr
v_u&=&v\sin\theta_1\sin\theta_2\sin\theta_3\sin\beta\cr
v_1&=&v\sin\theta_1\sin\theta_2\cos\theta_3\cr
v_2&=&v\sin\theta_1\cos\theta_2\cr
v_3&=&v\cos\theta_1\nonumber
\end{eqnarray}
which preserves the MSSM
definition $\tan\beta=v_u/v_d$. The angles $\theta_i$
are equal to $\pi/2$ in the MSSM limit.
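\noindent
As a quick check, setting $\theta_1=\theta_2=\theta_3=\pi/2$ in this
parametrization gives the familiar MSSM form
\begin{equation}
v_d=v\cos\beta \ , \quad v_u=v\sin\beta \ , \quad v_1=v_2=v_3=0 \,,
\end{equation}
so that the identity $v^2=v_d^2+v_u^2$ is recovered.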
\noindent
The full scalar potential may be written as
\begin{equation}
V_{total} = \sum_i \left| { \partial W \over \partial z_i} \right|^2
+ V_D + V_{soft} + V_{RC}
\end{equation}
where $z_i$ denotes any one of the scalar fields in the
theory, $V_D$ are the usual $D$-terms, $V_{soft}$ the SUSY soft
breaking terms, and $V_{RC}$ are the
one-loop radiative corrections.
\noindent
In writing $V_{RC}$ we use the diagrammatic method and find
the minimization conditions by correcting to one--loop the tadpole
equations.
This me\-thod has advantages with respect to the effective potential when
we calculate the one--loop corrected scalar masses. The scalar
potential contains linear terms,
\begin{equation}
V_{linear}=t_d\sigma^0_d+t_u\sigma^0_u+t_i\tilde\nu^R_{i}
\equiv t_{\alpha}\sigma^0_{\alpha}\,,
\end{equation}
where we have introduced the notation
\begin{equation}
\sigma^0_{\alpha}=(\sigma^0_d,\sigma^0_u,\nu^R_1,\nu^R_2,\nu^R_3)
\end{equation}
and $\alpha=d,u,1,2,3$. The one loop tadpoles are
\begin{eqnarray}
t_{\alpha}&=&t^0_{\alpha} -\delta t^{\overline{MS}}_{\alpha}
+T_{\alpha}(Q)\cr
\vb{22}
&=&t^0_{\alpha} +T^{\overline{MS}} _{\alpha}(Q)
\label{tadpoles}
\end{eqnarray}
where $T^{\overline{MS}} _{\alpha}(Q)\equiv -\delta t^{\overline{MS}}_{\alpha}
+T_{\alpha}(Q)$ are the finite one--loop tadpoles.
\noindent
In the following we will consider the one generation version of this
model, where only $\epsilon_3\not=0$. Then $v_1=v_2=0$ if
$\epsilon_1=\epsilon_2=0$.
\section{Main Features}
The $\epsilon$--model is a one- (three-) parameter generalization of
the MSSM.
It can be thought of as an effective model
exhibiting the most important features of the SBRP--model~\cite{SBRpV}
at the weak
scale.
The mass matrices, charged and neutral currents, are similar to the
SBRP--model if we identify
\begin{equation}
\epsilon \equiv v_R h_{\nu}
\end{equation}
The R--Parity violating
parameters $\epsilon_3$ and $v_3$ violate tau--lepton number, inducing
a non-zero $\nu_{\tau}$ mass
$m_{\nu_{\tau}}\propto (\mu v_3+\epsilon_3v_d)^2$,
which arises due to mixing between the weak eigenstate $\nu_{\tau}$ and
the neutralinos.
The $\nu_e$ and $\nu_{\mu}$
remain massless in first approximation. They acquire
masses from supersymmetric loops \cite{ralf,numass} that are typically
smaller than the tree level mass.
The model has the MSSM as a limit. This is illustrated in
Figure~\ref{fig1}, which shows the ratio of the lightest CP-even Higgs
boson mass $m_h$ in the $\epsilon$--model to its MSSM value as a function of
$v_3$.
Many other results concerning this model and the implications for
physics at the accelerators can be found in ref.~\cite{e3others,chaHiggsEps}.
\FIGURE[t]{\epsfig{file=ratio_v3.ps,width=6.5cm}
\caption{Ratio of the
lightest CP-even Higgs boson mass $m_h$ in the
$\epsilon$--model and in the MSSM as a function of
$v_3$.}
\label{fig1}}
\section{Radiative Breaking}
\subsection{Radiative Breaking in the $\epsilon$ model: The Minimal Case}
At $Q = M_{GUT}$ we assume the standard minimal supergravity
unifications assumptions,
\begin{eqnarray}
&&A_t = A_b = A_{\tau} \equiv A \:, \cr
&&\vb{24}
B=B_2=A-1 \:, \cr
&&\vb{24}
m_{H_1}^2 = m_{H_2}^2 = M_{L}^2 = M_{R}^2 = m_0^2 \:, \cr
&&\vb{24}
M_{Q}^2 =M_{U}^2 = M_{D}^2 = m_0^2 \:, \cr
&&\vb{24}
M_3 = M_2 = M_1 = M_{1/2}
\end{eqnarray}
In order to determine the values of the Yukawa couplings and of the
soft breaking scalar masses at low energies we first run the RGE's from
the unification scale $M_{GUT} \sim 10^{16}$ GeV down to the weak
scale.
We randomly choose values at the unification scale
for the parameters of the theory:
\begin{equation}
\begin{array}{ccccc}
10^{-2} & \leq &{h^2_t}_{GUT} / 4\pi & \leq&1 \cr
10^{-5} & \leq &{h^2_b}_{GUT} / 4\pi & \leq&1 \cr
-3&\leq&A/m_0&\leq&3 \cr
0&\leq&\mu^2_{GUT}/m_0^2&\leq&10 \cr
0&\leq&M_{1/2}/m_0&\leq&5 \cr
10^{-2} &\leq& {\epsilon^2_3}_{GUT}/m_0^2 &\leq& 10\cr
\end{array}
\end{equation}
\noindent
The value of ${h^2_{\tau}}_{GUT}/ 4 \pi$ is defined in such a way
that we get the $\tau$ lepton mass correctly.
The charginos mix with the tau
lepton through a mass matrix given by
\begin{equation}
{\bf M_C}=\left[\matrix{
M & {\textstyle{1\over{\sqrt{2}}}}gv_u & 0 \cr
{\textstyle{1\over{\sqrt{2}}}}gv_d & \mu &
-{\textstyle{1\over{\sqrt{2}}}}h_{\tau}v_3 \cr
{\textstyle{1\over{\sqrt{2}}}}gv_3 & -\epsilon_3 &
{\textstyle{1\over{\sqrt{2}}}}h_{\tau}v_d}\nonumber
\right]
\end{equation}
Imposing that one of
the eigenvalues reproduces the observed tau mass $m_{\tau}$, $h_{\tau}$
can be solved exactly as \cite{chaHiggsEps}
\begin{equation}
h_{\tau}^2={{2m_{\tau}^2}\over{v_d^2}}\left[
{{1+\delta_1}\over{1+\delta_2}}
\right]\nonumber
\end{equation}
where the $\delta_i\,$, $i=1,2$, depend on $m_{\tau}$, on the SUSY
parameters $M,\mu,\tan\beta$ and on the R-Pari\-ty violating parameters
$\epsilon_3$ and $v_3$.
It can be shown \cite{chaHiggsEps} that
\begin{equation}
\lim_{\epsilon_3 \rightarrow 0} \delta_i = 0
\end{equation}
\noindent
After running the RGE we have a
complete set of parameters, Yukawa couplings and soft-breaking masses
$m^2_i(RGE)$ to study the minimization. To do this we use the
following method \cite{epsrad}:
\begin{enumerate}
\item
We start with random values for $h_t$ and $h_b$ at $M_{GUT}$.
The value of $h_{\tau}$ at $M_{GUT}$
is fixed in order to get the correct $\tau$ mass.
\item
The value of $v_d$ is determined from $m_{b}=h_b v_d/ \sqrt{2}$ for
$m_{b}=2.8$ GeV (running $b$ mass at $m_Z$).
\item
The value of $v_u$ is determined from $m_{t}=h_t v_u/ \sqrt{2}$ for
$m_{t}=176 \pm 5$ GeV. If
\begin{equation}
\hskip -0.5mm
v_d^2+v_u^2 \!>\! v^2=\frac{4}{g^2}\, m^2_W = (246 \hbox{ GeV})^2
\end{equation}
then we go back and choose another starting point.
The value of $v_3$ is then obtained from
\begin{equation}
v_3=\pm\, \sqrt{\frac{4}{g^2}\, m^2_W -v_d^2 -v_u^2}
\end{equation}
\end{enumerate}
\noindent
We see that the freedom in $h_{t}$ and $h_{b}$ at $M_{GUT}$ can be
translated into the freedom in the mixing angles $\beta$ and
$\theta$. Comparing, at this point, with the MSSM we have one extra
parameter $\theta$. We will discuss this in more detail below. In
the MSSM we would have $\theta=\pi/2$.
After doing this, for each point in parameter space, we solve the extremum
equations for the soft breaking
masses, which we now call $m^2_i$ ($i=H_1,H_2,L$).
Then we calculate numerically the eigenvalues for the real and
imaginary part of the neutral scalar mass-squared matrix. If they are
all positive, except for the Goldstone boson, the point is a good one.
If not, we go back to the next random value.
As before, we end up
with a set of solutions for which
the $m^2_i$ obtained from the minimization
of the potential differ from those obtained from the RGE, which we
call $m^2_i(RGE)$.
Our goal is to find solutions that obey
\begin{equation}
m^2_i=m^2_i(RGE) \quad \forall i
\end{equation}
To do that we define a function
\begin{equation}
\eta= \max_i \left( \frac{m^2_i}{m^2_i(RGE)},\frac{m^2_i(RGE)}{m^2_i}
\right)
We see that we always have
\begin{equation}
\eta \ge 1 \,,
\end{equation}
with equality if and only if $m^2_i=m^2_i(RGE)$ for all $i$, and we use
{\tt MINUIT} to minimize $\eta$. We have shown \cite{epsrad}
that it is easy to get solutions for this problem.
Before we finish this section
let us discuss the counting of free
parameters. In the minimal N=1 supergravity unified
version of the MSSM this is shown in Table~\ref{table1}. The counting
for the $\epsilon$--model is presented in Table~\ref{table2}.
Finally, we note that in either case, the sign of the mixing parameter
$\mu$ is physical and has to be taken into account.
\TABLE{
\begin{tabular}{ccc}\hline
Parameters
\hskip -8pt&\hskip -8pt
Conditions
\hskip -8pt&\hskip -8pt
Free Parameters\hskip -8pt \cr \hline
$h_t$, $h_b$, $h_{\tau}$
\hskip -8pt&\hskip -8pt
$m_W$, $m_t$
\hskip -8pt&\hskip -8pt $\tan \beta$ \cr
$v_d$, $v_u$,$M_{1/2}$
\hskip -8pt&\hskip -8pt
$m_b$, $m_{\tau}$
\hskip -8pt&\hskip -8pt 2 Extra \cr
$m_0$, $A$, $\mu$
\hskip -8pt&\hskip -8pt
$t_i=0$, $i=1,2$
\hskip -8pt& \hskip -8pt({\it e.g.} $m_h$, $m_A$)\cr \hline
Total = 9\hskip -8pt&\hskip -8ptTotal = 6 \hskip -8pt&\hskip -8pt
Total = 3\cr\hline
\end{tabular}
\caption{Counting of free parameters in N=1 supergravity MSSM.}
\label{table1}}
\TABLE{
\begin{tabular}{ccc}\hline
Parameters \hskip -8pt & \hskip -8pt
Conditions \hskip -8pt & \hskip -8pt Free Parameters \cr \hline
$h_t$, $h_b$, $h_{\tau}$
\hskip -8pt & \hskip -8pt$m_W$, $m_t$ \hskip -8pt & \hskip -8pt
$\tan\beta$, $\epsilon_i$ \cr
$v_d$, $v_u$, $M_{1/2}$
\hskip -8pt & \hskip -8pt$m_b$, $m_{\tau}$ \hskip -8pt & \hskip -8pt \cr
$m_0$,$A$, $\mu$
\hskip -8pt & \hskip -8pt$t_i=0$\hskip -8pt & \hskip -8pt 2 Extra \cr
$v_i$, $\epsilon_i$
\hskip -8pt & \hskip -8pt($i=1,\ldots,5$)
\hskip -8pt & \hskip -8pt ({\it e.g.} $m_h$, $m_A$)\cr \hline
Total = 15\hskip -8pt & \hskip -8ptTotal = 9
\hskip -8pt & \hskip -8ptTotal = 6\cr\hline
\end{tabular}
\caption{Counting of free parameters in our model.}
\label{table2}}
\subsection{Yukawa Unification in the $\epsilon$ model: I Motivation}
There is a strong motivation to consider GUT theories where {\it
both} gauge and Yukawa unification can be achieved. This is because,
besides achieving gauge coupling unification,
GUT theories can also reduce the number of free parameters in the Yukawa
sector and this is normally a desirable feature. The situation with
respect to GUT theories that embed the MSSM can be summarized as
follows \cite{YukUnif,YukUniThree}:
\begin{itemize}
\item
In $SU(5)$ models, $h_b=h_{\tau}$ at $M_{GUT}$. The
predicted ratio $m_b/m_{\tau}$ at $M_{WEAK}$ agrees with
experiments.
\bigskip
\item
A relation between $m_{top}$ and $\tan\beta$ is predicted.
Two solutions are possible: low and high $\tan\beta$ .
\bigskip
\item
In $SO(10)$ and $E_6$ models $h_t=h_b=h_{\tau}$ at $M_{GUT}$.
In this case, only the large $\tan\beta$ solution survives.
\bigskip
\item
Recent global fits of low energy data (the lightest Higgs
mass and $B(b\rightarrow s\gamma)$) to the MSSM
show that it is hard to reconcile these constraints
with the large $\tan\beta$ solution. The low $\tan\beta$ solution
with $\mu<0$ is also disfavored.
\end{itemize}
In the following sections we will
show \cite{YukUnifBRpV}
that the $\epsilon$--model allows $b-\tau$ Yukawa unification for
any value of $\tan\beta$ while satisfying perturbativity of the
couplings. We also find $t-b-\tau$ Yukawa unification
easier to achieve than in the MSSM, occurring in a
wider high $\tan\beta$ region.
\subsection{Yukawa Unification in the $\epsilon$ model: II The Method}
As before $h_{\tau}$ can be solved exactly
\begin{equation}
h_{\tau}^2={{2m_{\tau}^2}\over{v_d^2}}\left[
{{1+\delta_1}\over{1+\delta_2}}
\right]
\end{equation}
where the $\delta_i\,$, $i=1,2$, depend on $m_{\tau}$, on the SUSY
parameters $M,\mu,\tan\!\beta$ and on the R-parity violating parameters
$\epsilon_3$ and $v_3$.
Also $h_t $ and $h_b$
are related to $m_t$ and $m_b$
\begin{equation}
m_t = h_t \frac{v}{\sqrt2} \sin \beta \sin \theta\,, \: \: \: \: \:
m_b = h_b \frac{v}{\sqrt2} \cos \beta \sin \theta
\end{equation}
where
\begin{equation}
v=2m_W/g \ ; \ \tan\beta=v_u/v_d \ ; \
\cos\theta=v_3/v
\end{equation}
\noindent
In our approach we divide the evolution in three ranges:
\begin{enumerate}
\item
$m_{Z} \rightarrow m_t$\\
We use running fermion masses and gauge
couplings.
\item
$m_t \rightarrow M_{SUSY}$ \\
We use the two-loop SM RGE's including the quartic Higgs coupling $\lambda$.
\item
$M_{SUSY} \rightarrow M_{GUT}$\\
We use the two-loop RGE's.
\end{enumerate}
\noindent
Using a top $\rightarrow$ bottom
approach we randomly vary the unification scale
$M_{GUT}$ and the unified coupling $\alpha_{GUT}$ looking for
solutions compatible with the low energy data \cite{LEPinternal}
\begin{eqnarray}
&&\alpha^{-1}_{em}(m_Z) = 128.896 \pm0.090\cr
&&\vb{18}
\sin^2\theta_w(m_Z) =
0.2322 \pm 0.0010\cr
&&\vb{18}
\alpha_s(m_Z)=0.118 \pm 0.003
\end{eqnarray}
We get a region centered around
\begin{equation}
M_{GUT} \approx
2.3 \times10^{16}\ {\rm GeV} \ ; \
\alpha_{GUT}^{-1} \approx 24.5
\end{equation}
\noindent
Next we use a bottom $\rightarrow$ top approach to
study the unification of Yukawa couplings using two-loop
RGEs. We take \cite{LEPinternal}
\begin{eqnarray}
&&m_W = 80.41 \pm 0.09\ GeV\cr
&&\vb{18}
m_{\tau}=1777.0 \pm 0.3 \ MeV \cr
&&\vb{18}
m_b(m_b) = 4.1\ \hbox{to}\ 4.5\ GeV
\end{eqnarray}
We calculate the running masses
\begin{eqnarray}
m_{\tau}(m_t) &=& \eta_{\tau}^{-1} m_{\tau}(m_{\tau})\cr
\vb{18}
m_b(m_t)&=&\eta_b^{-1}m_b(m_b)
\end{eqnarray}
where $\eta_{\tau}$ and $\eta_b$
include three--loop order QCD and one--loop order QED \cite{alf3}.
At the scale $Q=m_t$ we keep as a free parameter the running top quark
mass $m_t(m_t)$ and vary randomly the SM quartic Higgs coupling
$\lambda$.
In solving the RG equations we take the following boundary conditions:
\begin{enumerate}
\item
At scale $Q=m_t$
\begin{equation}
\lambda_i^2(m_t)=2m_i^2(m_t)/v^2 \ ; \
i=t,b,\tau
\end{equation}
\item
At scale $Q=M_{SUSY}$
\begin{eqnarray}
\lambda_t(M_{SUSY}^-)&=&h_t (M_{SUSY}^+) \sin\beta\sin\theta \cr
\vb{18}
\lambda_b(M_{SUSY}^-)&=&h_b (M_{SUSY}^+) \cos\beta\sin\theta \cr
\vb{18}
\lambda_{\tau}(M_{SUSY}^-)&=&h_{\tau} (M_{SUSY}^+)
\cos\beta\sin\theta\cr
&&\vb{20}
\times
\sqrt{{1+\delta_2}\over{1+\delta_1}}
\end{eqnarray}
where $h_i$ denote the Yukawa couplings of our model and $\lambda_i$
those of the SM.
The boundary condition for the quartic Higgs coupling is
\begin{eqnarray}
\lambda(M_{SUSY}^-) &\hskip -2mm=\hskip -2mm&
\ifmath{{\textstyle{1 \over 4}}}\!
\Big[g^2(M_{SUSY}^+)\!+\!g'^2(M_{SUSY}^+)\Big] \cr
&&\vb{18}
(\cos2\beta\sin^2\theta+
\cos^2\theta)^2
\end{eqnarray}
The MSSM limit is obtained setting $\theta \to \pi/2$ i.e. $v_3=0$.
\end{enumerate}
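\noindent
As a consistency check, setting $\theta=\pi/2$ in the last boundary
condition reproduces the well known tree--level MSSM matching of the
quartic Higgs coupling,
\begin{equation}
\lambda(M_{SUSY}^-)=\ifmath{{\textstyle{1 \over 4}}}
\left[g^2(M_{SUSY}^+)+g'^2(M_{SUSY}^+)\right]\cos^2 2\beta \,.
\end{equation}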
\noindent
Before we close this section we give some details of the calculation.
At the scale $Q=M_{SUSY}$ we vary randomly the SUSY parameters $M$,
$\mu$ and $\tan\beta$, as well as the R--Parity violating parameter
$\epsilon_3$.
The parameter $v_3=v\cos\theta$ is calculated from the boundary conditions.
Since $\lambda$ (or equivalently the SM Higgs
mass $m_H^2=2\lambda v^2$) is varied randomly, in practice we also scan
over $\theta$.
This way, we consider all possible initial conditions
for the RGEs at $Q=M_{SUSY}$, and evolve them up to the unification
scale $Q=M_{GUT}$.
The solutions that satisfy $b-\tau$ unification
are kept.
\subsection{Yukawa Unification in the $\epsilon$ model: III Results
and Discussion}
\FIGURE[t]{\epsfig{file=aretop.eps,width=7cm}
\caption{Top quark mass as a function of $\tan\beta$ for different
values of the R--Parity violating parameter $v_3$. Bottom quark and
tau lepton Yukawa couplings are unified at $M_{GUT}$. The horizontal
lines correspond to the $1\sigma$ experimental $m_t$
determination. Points with $t-b-\tau$ unification lie in the diagonal
band at high $\tan\beta$ values. We have taken $M_{SUSY}=m_t$.}
\label{fig2}}
The results are summarized in Figure~\ref{fig2}, where we display the
top quark mass as a function of $\tan\beta$ for different values of the
R--Parity violating parameter $v_3$, imposing bottom--tau Yukawa
unification at $M_{GUT}$ and taking $M_{SUSY}=m_t$.
The dependence of our results on $\alpha_s$ and $m_b$
is totally analogous to what happens in
the MSSM. The upper bound on $\tan\beta$, which
is $\tan\beta \lesssim 61$ for $\alpha_s=0.118$, increases with $\alpha_s$
and becomes $\tan\beta \lesssim 63$ (59) for $\alpha_s=0.122$ (0.114).
The top mass value for which unification is achieved for any
$\tan\beta$ value within the perturbative region increases with
$\alpha_s$, as in the MSSM.
As for the dependence on $m_b$, if we consider $m_b(m_b)=4.1$ (4.5)
GeV, then the upper bound becomes $\tan\beta \lesssim 64$ (58). In addition,
the MSSM region is narrower (wider) at high $\tan\beta$ compared with the
$m_b(m_b)=4.3$ GeV case.
The line at high $\tan\beta$ values corresponds
to points where $t-b-\tau$ unification is achieved. Since the region
with $|v_3|<5$ GeV overlaps with the MSSM region, it follows that
$t-b-\tau$ unification is possible in this model for values of $|v_3|$
up to about 5 GeV, rather than the 50 GeV or so allowed when only
bottom--tau unification is imposed.
\section{On $\alpha_3(M_Z)$ versus $\sin^2 \theta_W (M_Z)$}
Recent studies \cite{MSSMalfas} of gauge coupling
unification in the context of minimal
R--Parity conserving supergravity (SUGRA) agree that using
the experimental values for the electromagnetic coupling and the weak
mixing angle, the prediction $\alpha_s(M_Z) \simeq 0.129 \pm 0.010$ is
about $2\sigma$ larger than the most recent world average
$\alpha_s(M_Z)^{\rm W.A.}=0.1189 \pm 0.0015$ \cite{alphasWA}.
We have re-considered the $\alpha_s$ prediction in the context of the
model with bilinear breaking of R--Parity. We have shown
\cite{RPValfas}, that in this simplest SUGRA R--Parity breaking model,
with the same particle content as the MSSM, there appears an
additional negative contribution to $\alpha_s$, which can bring the
theoretical prediction closer to the experimental world average. This
additional contribution comes from two--loop b--quark Yukawa effects
on the renormalization group equations for $\alpha_s$. Moreover we
have shown that this contribution is typically correlated to the
tau--neutrino mass which is induced by R--Parity breaking and which
controls the R-Parity violating effects. We found that it is possible
to get a 5\% effect on $\alpha_s(M_Z)$ even for light $\nu_{\tau}$
masses. The results are summarized in Figure~\ref{fig3} where we
present the situation for the MSSM and in Figure~\ref{fig4} where the
results for the bilinear R--Parity breaking model are shown.
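The size of the $\alpha_s$ prediction can be illustrated with a purely one-loop computation in Python. This sketch is not the two-loop analysis of the text: it ignores thresholds and the two-loop Yukawa contributions discussed above (which is precisely why it lands below the quoted $\sim 0.129$ two-loop value), and the inputs $1/\alpha_{em}(M_Z)=127.9$, $\sin^2\theta_W=0.2312$ and the GUT-normalized MSSM coefficients $b_i$ are indicative assumptions.

```python
import math

# One-loop MSSM beta-function coefficients (b1 in GUT normalization)
b1, b2, b3 = 33.0/5.0, 1.0, -3.0
MZ = 91.19
inv_alpha_em, sin2w = 127.9, 0.2312

inv_a1 = (3.0/5.0) * inv_alpha_em * (1 - sin2w)
inv_a2 = inv_alpha_em * sin2w

# 1/a_i(mu) = 1/a_i(MZ) - (b_i/2pi) log(mu/MZ); solve for alpha_1 = alpha_2
t_gut = 2 * math.pi * (inv_a1 - inv_a2) / (b1 - b2)
M_GUT = MZ * math.exp(t_gut)
inv_a_gut = inv_a2 - b2 * t_gut / (2 * math.pi)

# run alpha_3 back down from the unification scale to MZ
inv_a3_MZ = inv_a_gut + b3 * t_gut / (2 * math.pi)
alpha3 = 1 / inv_a3_MZ
print(f"M_GUT ~ {M_GUT:.2e} GeV, one-loop alpha_s(MZ) ~ {alpha3:.3f}")
```

With these inputs one finds $M_{GUT}\sim 2\times 10^{16}$ GeV and $\alpha_s(M_Z)\approx 0.117$ at one loop; the two-loop effects discussed in the text shift this prediction upward.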
\FIGURE[t]{\epsfig{file=mssm.eps,width=7cm}\caption{$\alpha_s(M_Z)$
versus $\hat s_Z$ for the MSSM.}\label{fig3}}
\FIGURE[t]{\epsfig{file=rmssm.eps,width=7cm}\caption{$\alpha_s(M_Z)$
versus $\hat s_Z$ for the bilinear $\not{\!\!\!\!R}_p$ model.}\label{fig4}}
\section{Conclusions}
The bilinear R--Parity model is a minimal extension of the MSSM with many
new features, among them the possibility of generating neutrino masses.
We have shown that these models can be embedded in
an $N=1$ SUGRA scenario, where the number of free parameters is reduced.
In these so--called {\it radiative breaking} scenarios we showed that
this model allows $b-\tau$ Yukawa unification for
any value of $\tan\beta$ while satisfying perturbativity of the
couplings. We also find the $t-b-\tau$ Yukawa unification
easier to achieve than in the MSSM, occurring in a
wider high $\tan\beta$ region.
By performing a full two--loop calculation~\cite{RPValfas}
we also have shown that in this model there appears an
additional negative contribution to $\alpha_s$, which can bring the
theoretical prediction closer to the experimental world average.
Although we have presented here only the one--generation example, the
above results also hold in the full three--generation case. In
that situation one obtains, at one loop,
non-zero masses for the two
lightest neutrinos, which is very interesting in the
context of solving the solar and atmospheric neutrino
problems~\cite{numass}.
\section*{Acknowledgements}
This work was supported in part by the TMR network grant
ERBFMRXCT960090 of the European Union.
<?php

namespace Salita\PacienteBundle;

use Symfony\Component\HttpKernel\Bundle\Bundle;

/**
 * Registers the Salita Paciente bundle with the Symfony kernel.
 * The class body is intentionally empty: the base Bundle class
 * provides all required behavior.
 */
class SalitaPacienteBundle extends Bundle
{
}
\section{Introduction}
Algebraic geometry is the study of solutions sets
to polynomial equations. Solutions that depend on
an infinitesimal parameter can be analyzed combinatorially using
min-plus algebra. This insight led to the development of
tropical algebraic geometry \cite{MS}.
While all algebraic varieties and their tropicalizations may be explored
at various levels of granularity, varieties that serve as moduli spaces
are usually studied at the highest level of abstraction.
This paper does exactly the opposite: we investigate and
tropicalize certain concrete moduli spaces,
mostly from the 19th century repertoire \cite{hunt}, by means
of their defining polynomials.
A first example, familiar to all algebraic geometers, is the
moduli space $\mathcal{M}_{0,n}$ of $n$ distinct points on the projective line $\mathbb{P}^1$.
We here regard $\mathcal{M}_{0,n}$ as a subvariety in a suitable torus.
Its tropicalization ${\rm trop}(\mathcal{M}_{0,n})$
is a simplicial fan of dimension $n-3$ whose points parametrize all
metric trees with $n$ labeled leaves. The cones distinguish different combinatorial types of metric trees.
The defining polynomials of this (tropical) variety are
the $\binom{n}{4}$ Pl\"ucker quadrics $p_{ij} p_{k\ell} - p_{ik} p_{j\ell} + p_{i\ell} p_{jk}$.
These quadrics are the $4 \times 4$-subpfaffians of
a skew-symmetric $n \times n$-matrix, and they form
a {\em tropical basis} for $\mathcal{M}_{0,n}$.
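The subpfaffian description is easy to verify symbolically. The following Python snippet (an illustrative check, not part of the original text) confirms that the determinant of a generic $4 \times 4$ skew-symmetric matrix is the square of the three-term Pl\"ucker quadric, i.e.\ that the quadric is its Pfaffian.

```python
import sympy as sp

# Generic 4x4 skew-symmetric matrix with upper-triangular entries p_ij
p = {(i, j): sp.Symbol(f'p{i}{j}') for i in range(1, 5) for j in range(i + 1, 5)}
M = sp.zeros(4, 4)
for (i, j), s in p.items():
    M[i - 1, j - 1], M[j - 1, i - 1] = s, -s

# The Pluecker quadric p12 p34 - p13 p24 + p14 p23 is the Pfaffian of M:
pfaffian = p[(1, 2)]*p[(3, 4)] - p[(1, 3)]*p[(2, 4)] + p[(1, 4)]*p[(2, 3)]
assert sp.expand(M.det() - pfaffian**2) == 0
print("det(M) = Pf(M)^2 verified")
```

For $n > 4$ the $\binom{n}{4}$ quadrics arise the same way, one for each choice of four of the $n$ row/column indices.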
The {\em tropical compactification} defined by this fan
is the moduli space $\overline{\mathcal{M}}_{0,n}$ of $n$-pointed stable rational curves.
The picture for $n=5$ is delightful:
the tropical surface ${\rm trop}(\mathcal{M}_{0,5})$ is the cone
over the {\em Petersen graph}, with vertices labeled by the
$10$ Pl\"ucker coordinates $p_{ij}$ as in Figure~\ref{fig:petersen}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.3\textwidth]{peterson}
\end{center}
\vspace{-0.3in}
\caption{The Petersen graph represents the tropicalization of $\mathcal{M}_{0,5}$.}
\label{fig:petersen}
\end{figure}
A related example is the universal family $\mathcal{A}(5)$ over the modular curve $X(5)$.
The relevant combinatorics goes back to Felix Klein and his
famous 1884 lectures on the icosahedron \cite{klein}.
Following Fisher \cite{Fis}, the surface $\mathcal{A}(5)$ sits in
$\mathbb{P}^1 \times \mathbb{P}^4$ and has the Pfaffian representation
\begin{equation}
\label{eq:pfaff5}
{\rm rank} \begin{bmatrix}
0 & -a_1x_1 & -a_2 x_2 & a_2 x_3 & a_1 x_4 \\
a_1 x_1 & 0 & -a_1 x_3 & -a_2 x_4 & a_2 x_0 \\
a_2 x_2 & a_1 x_3 & 0 & -a_1 x_0 & -a_2 x_1 \\
-a_2 x_3 & a_2 x_4 & a_1 x_0 & 0 & -a_1 x_2 \\
-a_1 x_4 & -a_2 x_0 & a_2 x_1 & a_1 x_2 & 0
\end{bmatrix} \, \leq \,\, 2 .
\end{equation}
The base of this family is $ \mathbb{P}^1$ with coordinates $(a_1:a_2)$.
The tropical surface ${\rm trop} (\mathcal{A}(5))$ is
a fan in $\mathbb{TP}^1 \times \mathbb{TP}^4$, which is
combinatorially the Petersen graph in Figure~\ref{fig:petersen}.
The central fiber,
over the vertex of $\mathbb{TP}^1$
given by ${\rm val}(a_1) = {\rm val}(a_2)$,
is the $1$-dimensional fan with rays
$e_0,e_1,e_2,e_3,e_4$. These
correspond to the edges
34-25, 12-35, 45-13, 23-14, 15-24.
For ${\rm val}(a_1) {<} {\rm val}(a_2)$,
the fiber is given by the pentagon 12-34-15-23-45-12
with these rays attached. For
${\rm val}(a_1) {>} {\rm val}(a_2)$, it is the pentagram
35-14-25-13-24-35 with the five rays.
Each of the edges has multiplicity $5$.
The map from ${\rm trop} (\mathcal{A}(5))$ onto $\mathbb{TP}^1$
is visualized in Figure~\ref{fig:pentagonpentagram}.
\begin{figure}[h]
\begin{center}
\vspace{-0.1in}
\includegraphics[width=0.5\textwidth]{trop_a5}
\end{center}
\vspace{-0.35in}
\caption{The universal family of tropical elliptic normal curves of degree $5$.}
\label{fig:pentagonpentagram}
\end{figure}
The discriminant of our family $\mathcal{A}(5) \rightarrow \mathbb{P}^1$ is the binary form
\begin{equation}
\label{eq:binary12}
a_1^{11} a_2 \,- \,11 a_1^6 a_2^6 \,-\,a_1 a_2^{11},
\end{equation}
whose $12$ zeros represent Klein's icosahedron.
The modular curve $X(5)$ is $\mathbb{P}^1$ minus these $12$ points.
For each $(a_1:a_2) \in X(5)$, the condition (\ref{eq:pfaff5})
defines an elliptic normal curve in $\mathbb{P}^4$.
Throughout this paper we work
over an algebraically closed field $K$ of characteristic $0$.
Our notation and conventions
regarding tropical geometry follow \cite{MS}. For simplicity of exposition,
we identify the tropical projective space $\mathbb{TP}^n$ with its open part
$\mathbb{R}^{n+1} /\mathbb{R} (1,1,\ldots,1)$.
The adjective ``classical'' in our title has two meanings.
{\em Classical as opposed to tropical} refers to moduli spaces that
are defined over fields, the usual setting of algebraic geometry.
The foundations for tropicalizing such schemes and stacks are currently being
developed, notably in the work of
Abramovich {\it et al.}~\cite{ACP} and
Baker {\it et al.}~\cite{BPR} (see \cite{abramovich} for a survey). These rest on the
connection to nonarchimedean geometry.
{\em Classical as opposed to modern} refers to moduli spaces
that were known in the 19th century. We focus here on the varieties
featured in Hunt's book \cite{hunt}, notably the
Segre cubic, the Igusa quartic, the Burkhardt quartic,
and their universal~families. We shall also
revisit the work on tropical del Pezzo surfaces by
Hacking {\it et al.} in \cite{HKT} and explain how this relates to the
tropical G\"opel variety of \cite[\S 9]{RSSS}.
Each of our moduli spaces admits a high-dimensional symmetric embedding of the form
\begin{equation}
\label{eq:twomaps} \mathbb{P}^d \,\, \buildrel{{\rm linear}}\over{\hookrightarrow}\,\, \mathbb{P}^m\,\,
\buildrel{{\rm monomial}} \over{\dashrightarrow} \,\, \mathbb{P}^n.
\end{equation}
The coordinates of the first map are the
linear forms defining the $m{+}1$ hyperplanes in a
complex reflection arrangement $\mathcal{H}$ in $\mathbb{P}^d$,
while the coordinates of the second map are monomials
that encode the symplectic geometry of a finite vector space.
The relevant combinatorics rests on the representation theory
developed in \cite{GS, GSW}.
Each of our moduli spaces is written as the image of a map
(\ref{eq:twomaps}) whose coordinates are monomials in linear forms,
and hence the formula in~\cite[Theorem 3.1]{DFS} expresses
its tropicalization using the matroid structure of $\mathcal{H}$.
Our warm-up example, the modular curve $X(5)$,
fits the pattern (\ref{eq:twomaps}) for $d=1, m=11$ and $n=5$.
Its arrangement $\mathcal{H} \subset \mathbb{P}^1$ is
the set of $12$ zeros of (\ref{eq:binary12}), but now identified with the
complex reflection arrangement ${\rm G}_{16}$ as in
\cite[\S 2.2]{GS}. If we factor
(\ref{eq:binary12}) into six quadrics,
$$ \bigl(a_1 a_2\bigr) \cdot \prod_{i=1}^5 \bigl((\gamma^{5-i} a_1+(\gamma{+}\gamma^4) a_2) (\gamma^ia_1+(\gamma^2{+}\gamma^3)a_2)\bigr) ,$$
where $\gamma$ is a primitive fifth root of unity, then these
define the coordinates of $ \,\mathbb{P}^{11}
\buildrel{{\rm monomial}} \over{\dashrightarrow} \mathbb{P}^5. $
The image is a quadric in a plane in $\mathbb{P}^5$,
and $X(5)$ is now its intersection with the torus $\mathbb{G}_m^5$.
The symmetry group ${\rm G}_{16}$ acts on $\mathbb{P}^5$
by permuting the six homogeneous coordinates.
The tropical modular curve ${\rm trop}(X(5))$
is the standard one-dimensional fan in $\mathbb{TP}^5$,
with multiplicity five and pentagonal fibers as above. But now
the full symmetry group acts on
the surface $\mathcal{A}(5) \subset \mathbb{P}^5 \times \mathbb{P}^4$
and the corresponding tropical surface by permuting coordinates.
\smallskip
We next discuss the organization of this paper.
In Section~\ref{sec:segrecubic} we study the Segre cubic and the Igusa quartic, in their symmetric embeddings into $\mathbb{P}^{14}$ and
$\mathbb{P}^9$, respectively. We show that the corresponding
tropical variety is the space of phylogenetic trees on six taxa,
and we determine the universal family of tropical Kummer surfaces over that base.
In Section~\ref{sec:burkhardt} we study the Burkhardt quartic in its symmetric embedding
in $\mathbb{P}^{39}$, and, over that base, we compute the universal
family of abelian surfaces in $\mathbb{P}^8 $ along with
their associated tricanonical curves of genus~$2$.
In Section~\ref{sec:tropicalization} we compute the Bergman fan
of the complex reflection arrangement $\mathrm{G}_{32}$
and from this we derive the tropical Burkhardt quartic in $\mathbb{TP}^{39}$.
The corresponding tropical compactification is shown to coincide with
the Igusa desingularization of the Baily--Borel--Satake compactification of $\mathcal{A}_2(3)$.
In Section~\ref{sec:genus2moduli} we relate our findings to the
abstract tropical moduli spaces of \cite{BMV, Cha}.
Figure \ref{table-tropical-moduli} depicts the resulting correspondence between
trees on six taxa, metric graphs of genus 2, and cones in the
tropical Burkhardt quartic.
In Section~\ref{sec:delpezzo} we study the reflection arrangements of types
$\mathrm{E}_6$ and $\mathrm{E}_7$, and we show how they lead to the
tropical moduli spaces of marked del Pezzo surfaces
constructed by Hacking, Keel and Tevelev \cite{HKT}.
For $\mathrm{E}_7$ we recover the tropical G\"opel variety of \cite[\S 9]{RSSS}.
This is a six-dimensional fan which serves as the universal family of tropical cubic surfaces.
\subsection*{Acknowledgements}
Steven Sam was supported by a Miller Research Fellowship
at UC Berkeley.
Qingchun Ren and Bernd Sturmfels were supported by the
National Science Foundation (DMS-0968882) and DARPA (HR0011-12-1-0011).
We thank Florian Block, Dustin Cartwright, Melody Chan, Diane Maclagan, Sam Payne
and Jenia Tevelev for helpful discussions.
We are especially grateful to Gus Schrader for his contributions to
the material in Section~\ref{sec:burkhardt}.
\section{Segre Cubic, Igusa Quartic, and Kummer Surfaces}
\label{sec:segrecubic}
The moduli spaces in this section are based on
the hyperplane arrangement in $\mathbb{P}^4$ associated
with the reflection representation of the symmetric group
$\Sigma_6$. It consists of the $15$ hyperplanes
\begin{equation}
\label{eq:xixj} \qquad x_i - x_j \,\, = \,\, 0 \quad \qquad (1 \leq i< j \leq 6).
\end{equation}
Here $\mathbb{P}^4$ is the projectivization of the
$5$-dimensional vector space $K^6/K(1,1,1,1,1,1)$.
The $15$ linear forms in (\ref{eq:xixj}) define the map
$\, \mathbb{P}^4 \buildrel{{\rm linear}}\over{\hookrightarrow} \mathbb{P}^{14} \,$
whose image is the $4$-dimensional subspace
${\rm Cyc}_4$ of $\mathbb{P}^{14}$ that is defined by the linear equations
$z_{ij} - z_{ik} + z_{jk} = 0$
for $1 \leq i < j < k \leq 6$.
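A quick rank computation confirms that these trinomials cut out a copy of $\mathbb{P}^4$. The Python check below (illustrative, not part of the original text) builds the $20 \times 15$ coefficient matrix of the equations $z_{ij} - z_{ik} + z_{jk} = 0$ and verifies that its kernel is $5$-dimensional.

```python
import itertools
import sympy as sp

pairs = list(itertools.combinations(range(1, 7), 2))
col = {pq: n for n, pq in enumerate(pairs)}

# one row per triple i < j < k: the equation z_ij - z_ik + z_jk = 0
rows = []
for i, j, k in itertools.combinations(range(1, 7), 3):
    row = [0] * 15
    row[col[(i, j)]], row[col[(i, k)]], row[col[(j, k)]] = 1, -1, 1
    rows.append(row)

A = sp.Matrix(rows)           # 20 equations in 15 unknowns
assert A.shape == (20, 15)
assert A.rank() == 10         # kernel dimension 15 - 10 = 5
print("projective dimension of Cyc_4:", 15 - A.rank() - 1)
```

The kernel is exactly the image of the linear map $x \mapsto (x_i - x_j)_{i<j}$, since $(x_i-x_j)-(x_i-x_k)+(x_j-x_k)=0$.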
The corresponding tropical linear space ${\rm trop}({\rm Cyc}_4)$, with the coarsest fan structure, is isomorphic to both the moduli space of equidistant (rooted) phylogenetic trees with $6$ leaves and the moduli space of (unrooted) phylogenetic trees with $7$ leaves.
The former was studied by Ardila and Klivans in \cite[\S 4]{AK}.
They develop the correspondence between ultrametrics and equidistant
phylogenetic trees in \cite[Theorem 3]{AK}. The latter is a tropicalization of the Grassmannian $ {\rm Gr}(2,7)$ as described in \cite[\S 4]{SS}.
From the combinatorial description given there one derives the face numbers below:
\begin{lemma}
The tropical linear space ${\rm trop}({\rm Cyc}_4)$
is the space of ultrametrics on $6$ elements,
or, equivalently, the space of equidistant phylogenetic trees on $6$ taxa.
It is a fan over a three-dimensional simplicial complex
with
$56$ vertices,
$ 490$ edges,
$ 1260$ triangles
and $945$ tetrahedra.
\end{lemma}
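The tree counts appearing here and in Theorem~\ref{thm:tropsegrecubic} follow from standard double-factorial formulas. The Python snippet below (an illustrative cross-check, not part of the original text) recovers the $945$ rooted and $105$ unrooted trivalent trees on $6$ taxa, the $56$ rays of the Bergman fan, and the $25 = 15 + 10$ rays of the image fan.

```python
from math import comb

def double_factorial_odd(m):
    """Product m*(m-2)*...*1 for odd m >= 1."""
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

n = 6
rooted = double_factorial_odd(2*n - 3)     # rooted binary trees on n taxa
unrooted = double_factorial_odd(2*n - 5)   # unrooted trivalent trees on n taxa

# rays E_sigma of trop(Cyc_4): subsets sigma with 2 <= |sigma| <= 5
rays_bergman = sum(comb(6, k) for k in range(2, 6))
# rays of the image fan: 15 pairs plus 10 triples modulo complementation
rays_image = comb(6, 2) + comb(6, 3) // 2

print(rooted, unrooted, rays_bergman, rays_image)
```

These match the $945$ tetrahedra of the Lemma and the $25$ vertices and $105$ triangles of Theorem~\ref{thm:tropsegrecubic}.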
We now define our two modular threefolds by way of
a monomial map from $\mathbb{P}^{14}$ to another space
$\mathbb{P}^n$. The homogeneous coordinates on that $\mathbb{P}^n$
will be denoted $m_0, m_1, \ldots,m _n$,
so as to highlight that they can be identified with
certain modular forms, known as
theta constants.
The {\em Segre cubic} $\mathcal{S}$ is the closure of the image of ${\rm Cyc}_4$
under $\,\mathbb{P}^{14} \buildrel{{\rm monomial}} \over{\dashrightarrow} \mathbb{P}^{14}\,$ given by
\begin{equation}
\label{eq:segremap}
\begin{matrix}
(z_{12} z_{34} z_{56} : z_{12} z_{35} z_{46} : z_{12} z_{36} z_{45} : z_{13} z_{24} z_{56} : z_{13} z_{25} z_{46} : z_{13} z_{26} z_{45} : z_{14} z_{23} z_{56} :\\
z_{14} z_{25} z_{36} : z_{14} z_{26} z_{35} : z_{15} z_{23} z_{46} : z_{15} z_{24} z_{36} : z_{15} z_{26} z_{34} : z_{16} z_{23} z_{45} : z_{16} z_{24} z_{35} : z_{16} z_{25} z_{34}).
\end{matrix}
\end{equation}
The prime ideal of $\mathcal{S}$ is generated by $10$ linear trinomials, like $m_0 - m_1 + m_2$, that
come from Pl\"ucker relations among the $ x_i - x_j$, and one cubic binomial such as
$\,m_0 m_7 m_{12}-m_2 m_6 m_{14}$. For a graphical representation of this ideal we refer to
Howard {\em et al.}~\cite[(1.2)]{HMSV}: for the connection, note that the monomials in \eqref{eq:segremap} naturally correspond to perfect matchings of a set of size $6$, which are the colored graphs in \cite{HMSV}.
To see that this is the same as the classical definition of the Segre cubic, the reader can jump ahead to \eqref{eq:segre_rst} and \eqref{eq:from_m_to_rst}.
The {\em Igusa quartic} $\mathcal{I}$ is the closure of the image of ${\rm Cyc}_4$ under
$\mathbb{P}^{14} \buildrel{{\rm monomial}} \over{\dashrightarrow} \mathbb{P}^9$ given~by
\begin{small}
$$
\label{eq:igusamap}
\begin{matrix} (
z_{12} z_{13} z_{23} z_{45} z_{46} z_{56} \!:\!
z_{12} z_{14} z_{24} z_{35} z_{36} z_{56} \!:\!
z_{12} z_{15} z_{25} z_{34} z_{36} z_{46} \!:\!
z_{12} z_{16} z_{26} z_{34} z_{35} z_{45}\!:\!
z_{13} z_{14} z_{34} z_{25} z_{26} z_{56}: \\
z_{13} z_{15} z_{35} z_{24} z_{26} z_{46}\! : \!
z_{13} z_{16} z_{36} z_{24} z_{25} z_{45}\! : \!
z_{14} z_{15} z_{45} z_{23} z_{26} z_{36} \! : \!
z_{14} z_{16} z_{46} z_{23} z_{25} z_{35}\! : \!
z_{15} z_{16} z_{56} z_{23} z_{24} z_{34} )
\end{matrix}
$$
\end{small}
The prime ideal of $\mathcal{I}$ is generated by the five linear forms in
the column vector
\begin{equation}
\label{eq:5by5}
\begin{pmatrix}
0 & m_0 & m_1 & m_2 & m_3 \\
m_0& 0& m_4& m_5& m_6 \\
m_1& m_4& 0& m_7& m_8 \\
m_2& m_5& m_7& 0& m_9 \\
m_3& m_6& m_8& m_9& 0
\end{pmatrix}
\cdot
\begin{pmatrix} \phantom{-}1 \,\\ -1 \,\\
\phantom{-} 1\, \\ -1 \,\\ \phantom{-}1\, \end{pmatrix}
\end{equation}
together with any of the $4 \times 4$-minors of the
symmetric $5 \times 5$-matrix in (\ref{eq:5by5}).
The linear forms (\ref{eq:5by5}) come from
Pl\"ucker relations of degree $(1,1,1,1,1,1)$
on ${\rm Gr}(3,6)$. We note that $m_0,\ldots,m_9$ can be written
in terms of theta functions by Thomae's theorem \cite[\S VIII.5]{DO}.
To see that this is the usual Igusa quartic, one can calculate the projective dual of the quartic hypersurface we have just described and verify that it is a cubic hypersurface whose singular locus consists of $10$
nodes. The Segre cubic is the unique cubic in $\mathbb{P}^4$ with $10$ nodes.
A key ingredient in the study of modular varieties is the symplectic combinatorics of finite vector spaces. Here we consider the binary space $\mathbb{F}_2^4$ with the symplectic form
\begin{equation}
\label{eq:innerproduct}
\langle x,y \rangle \,\, = \,\, x_1 y_3 + x_2 y_4 - x_3 y_1 - x_4 y_2 .
\end{equation}
We fix the following bijection between the $15$ hyperplanes
(\ref{eq:xixj})
and the vectors in $\mathbb{F}_2^4 \backslash \{0\}$:
\begin{equation}
\label{eq:bijection15}
\!\!
\begin{matrix}
z_{12} & z_{13} & z_{14} & z_{15} & z_{16} & z_{23} & z_{24} & z_{25} & z_{26} & z_{34} &
z_{35} & z_{36} & z_{45} & z_{46} & z_{56} \\
\!\! u_{0001} \! & \!\! u_{1100} \! & \!\! u_{1110} \! & \!\!
u_{0101} \! & \!\! u_{0110} \! & \!\! u_{1101} \! & \! u_{1111} \! & \! u_{0100} \! &
\! u_{0111} \! & \! u_{0010}
\! & \! u_{1001} \! & \! u_{1010} \! &\! u_{1011} \! & \! u_{1000} \! & \! u_{0011}
\end{matrix}
\end{equation}
This bijection has the property that two
vectors in $\mathbb{F}_2^4 \backslash \{0\}$
are perpendicular with respect to (\ref{eq:innerproduct})
if and only if the corresponding elements of the root system
$\mathrm{A}_5$ are perpendicular. Combinatorially, this means that
the two pairs of indices are disjoint.
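This property of the bijection can be verified exhaustively. The Python check below (illustrative, not part of the original text) encodes the table (\ref{eq:bijection15}) and confirms, for all $\binom{15}{2} = 105$ pairs, that two index pairs are disjoint exactly when the corresponding vectors pair to zero under (\ref{eq:innerproduct}) over $\mathbb{F}_2$.

```python
from itertools import combinations

# the bijection z_ij -> u_x from (eq:bijection15)
u = {
    (1,2): '0001', (1,3): '1100', (1,4): '1110', (1,5): '0101', (1,6): '0110',
    (2,3): '1101', (2,4): '1111', (2,5): '0100', (2,6): '0111', (3,4): '0010',
    (3,5): '1001', (3,6): '1010', (4,5): '1011', (4,6): '1000', (5,6): '0011',
}

def symp(x, y):
    """Symplectic form x1*y3 + x2*y4 - x3*y1 - x4*y2, reduced mod 2."""
    x, y = [int(c) for c in x], [int(c) for c in y]
    return (x[0]*y[2] + x[1]*y[3] + x[2]*y[0] + x[3]*y[1]) % 2

for p, q in combinations(u, 2):
    disjoint = not set(p) & set(q)
    assert disjoint == (symp(u[p], u[q]) == 0)
print("all 105 pairs: disjoint <=> symplectically perpendicular")
```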
There are precisely $35$ two-dimensional subspaces $L$ in
$\mathbb{F}_2^4$. Of these planes $L$, precisely $15$ are isotropic,
which means that $L = L^\perp$. The other $20$ planes naturally
come in pairs $\{L, L^\perp\}$. Each plane
is a triple in $\mathbb{F}_2^4 \backslash \{0\}$
and we write it as a cubic monomial
$z_{ij} z_{k\ell} z_{mn}$.
Under this identification, the parametrization (\ref{eq:segremap}) of the Segre cubic $\mathcal{S}$ is given by the $15$ isotropic planes $L$, while that of the Igusa quartic $\mathcal{I}$ is given by the $10$ pairs $L \cdot L^\perp$ of non-isotropic planes in $\mathbb{F}_2^4$.
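The counts $35 = 15 + 20$ are small enough to enumerate directly. The Python snippet below (an illustrative check, not part of the original text) lists all two-dimensional subspaces of $\mathbb{F}_2^4$ and counts how many are isotropic for the form (\ref{eq:innerproduct}).

```python
from itertools import combinations, product

def symp(x, y):
    """Symplectic form x1*y3 + x2*y4 - x3*y1 - x4*y2, reduced mod 2."""
    return (x[0]*y[2] + x[1]*y[3] + x[2]*y[0] + x[3]*y[1]) % 2

vecs = [v for v in product((0, 1), repeat=4) if any(v)]  # 15 nonzero vectors

# each plane is determined by its 3 nonzero vectors {a, b, a+b}
planes = set()
for a, b in combinations(vecs, 2):
    c = tuple((ai + bi) % 2 for ai, bi in zip(a, b))
    planes.add(frozenset({a, b, c}))

isotropic = [L for L in planes
             if all(symp(x, y) == 0 for x, y in combinations(L, 2))]
print(len(planes), "planes in total,", len(isotropic), "of them isotropic")
```

The $20$ remaining planes pair up into the $10$ pairs $\{L, L^\perp\}$ appearing in the parametrization of the Igusa quartic.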
The symplectic group $ \mathbf{Sp}_4(\mathbb{F}_2)$
consists of all linear automorphisms of $\mathbb{F}_2^4$ that
preserve the symplectic form (\ref{eq:innerproduct}). As an abstract group it
is isomorphic to the symmetric group on six letters:
\begin{align} \label{eqn:level2isom}
\mathbf{Sp}_4(\mathbb{F}_2) \,\,\cong \,\, \Sigma_6.
\end{align}
This group isomorphism is made explicit by the bijection
(\ref{eq:bijection15}).
Let $\mathcal{M}_2(2)$ denote the moduli space of
smooth curves of genus $2$ with a level $ 2$ structure.
In light of the isomorphism (\ref{eqn:level2isom}), a level $2$
structure on a genus $2$ curve $C$ is an ordering of its six Weierstrass points,
and this corresponds to the choice of six labeled points
on the projective line $\mathbb{P}^1$. The latter choices are parametrized
by the moduli space $\mathcal{M}_{0,6}$.
In what follows,
we consider the {\em open Segre cubic}
$\,\mathcal{S}^{\circ} = \mathcal{S} \backslash \{m_0 m_1 \cdots m_{14} =0\}\,$
inside the torus $\mathbb{G}_m^{14} \subset \mathbb{P}^{14}\,$ and
the {\em open Igusa quartic}
$\,\mathcal{I}^{\circ} = \mathcal{I} \backslash \{m_0 m_1 \cdots m_9 =0 \}\,$
inside the torus $\mathbb{G}_m^{9} \subset \mathbb{P}^9$.
\begin{proposition}
\label{prop:identification}
We have the following identification of
three-dimensional moduli spaces:
\begin{equation}
\label{eq:SIMM}
\mathcal{S}^{\circ} \,= \,\mathcal{I}^{\circ} \,=\,
\mathcal{M}_2(2) = \mathcal{M}_{0,6}.
\end{equation}
\end{proposition}
\begin{proof}
We already argued the last equation.
The first equation is the isomorphism between
the open sets $D$ and $D'$ in the proof of
\cite[Theorem 3.3.11]{hunt}. A nice way to
see this isomorphism is that the kernels
of our two monomial maps coincide (Lemma~\ref{lem:kernel}).
The middle equation follows from the
last part of \cite[Theorem 3.3.8]{hunt},
which concerns the Kummer functor $\mathbf{K}_2$.
For more information on the modular interpretations
of $\mathcal{S}$ and $\mathcal{I}$ see \cite[\S VIII]{DO}.
\end{proof}
The Kummer surface associated to a point
in $\mathcal{I}^{\circ}$ is the intersection of the Igusa quartic
$\mathcal{I}$ with the tangent space at that point,
by \cite[Theorem 3.3.8]{hunt}.
We find it convenient to express that Kummer surface
in terms of the corresponding point in $\mathcal{S}^{\circ}$.
Following Dolgachev and Ortland \cite[\S IX.5, Proposition~6]{DO},
we write the defining equation of the Segre cubic
$\mathcal{S}$ as
\begin{equation}
\label{eq:segre_rst}
16 r^3 - 4 r (s_{01}^2+s_{10}^2+s_{11}^2) + 4 s_{01} s_{10} s_{11} + r t^2 \,\,= \,\, 0.
\end{equation}
The embedding of the $\mathbb{P}^4$
with coordinates $(r \!:\!s_{01}\!:\!s_{10}\!:\!s_{11}\!:\!t) $ into our $\mathbb{P}^{14}$ can be written as
\begin{equation}
\label{eq:from_m_to_rst}
\begin{matrix}
r= m_0 ,\qquad
s_{01} = 2m_0 - 4m_1 ,\qquad
s_{10} = 2m_0 - 4m_3, \qquad \\ \qquad
s_{11} = 4m_4 - 2m_0 - 4m_7 ,\qquad
t = 8(m_1+ m_3 - m_0 - m_4 - m_7).
\end{matrix}
\end{equation}
This does not pick out a $\Sigma_6$-equivariant embedding of the space spanned by $r$ and the $s_{ij}$ inside the permutation representation of the $m_i$, but it has the advantage of giving short expressions.
Fixing Schr\"odinger coordinates
$(x_{00}\!:\! x_{01 \!}\!:x_{10}\!: \! x_{11})$ on $\mathbb{P}^3$, the
Kummer surface is now given~by
\begin{equation}
\label{eq:kummer}
\begin{matrix}
r (x_{00}^4+x_{01}^4+x_{10}^4+x_{11}^4)
+ s_{01} (x_{00}^2 x_{01}^2+x_{10}^2 x_{11}^2)
+ s_{10} (x_{00}^2 x_{10}^2+x_{01}^2 x_{11}^2) \\
+ s_{11} (x_{00}^2 x_{11}^2 + x_{01}^2 x_{10}^2)
+ t (x_{00} x_{01} x_{10} x_{11}) \quad = \quad 0.
\end{matrix}
\end{equation}
This equation is the determinant of the
$5 \times 5$-matrix in
\cite[Example 1.1]{RSSS}. Its lower
$4 \times 4$-minors satisfy (\ref{eq:segre_rst}).
Our notation is consistent with that for the Coble quartic in \cite[(2.13)]{RSSS}.
We now come to the tropicalization
of our three-dimensional moduli spaces.
We write $e_{12}, e_{13}, \ldots, e_{56}$ for
the unit vectors in $\mathbb{TP}^{14} =
\mathbb{R}^{15}/\mathbb{R} (1,1,\ldots,1)$.
These correspond to our coordinates
$z_{12}, z_{13}, \ldots, z_{56}$ on the
$\mathbb{P}^{14}$ which contains ${\rm Cyc}_4 \simeq \mathbb{P}^4$.
The $56$ rays of the Bergman fan ${\rm trop}({\rm Cyc}_4)$
are indexed by proper subsets $\sigma \subsetneqq \{1,2,3,4,5,6\}$
with $|\sigma| \geq 2$. They are
$$ E_\sigma \,\, \, = \,\, \sum_{\{i,j\} \subseteq \sigma} e_{ij}. $$
Cones in ${\rm trop}({\rm Cyc}_4)$ are spanned by
collections of $E_\sigma$ whose indices $\sigma$ are nested or disjoint.
Let $A_{\rm segre}$ denote the $15 \times 15$-matrix with entries in $\{0,1\}$ that represents the tropicalization of the monomial map (\ref{eq:segremap}). The columns of $A_{\rm segre}$ are indexed by (\ref{eq:bijection15}). The rows of $A_{\rm segre}$ are indexed by tripartitions of $\{1,2,\ldots,6\}$,
or by isotropic planes in $\mathbb{F}_2^4$.
An entry is $1$ if the pair that indexes the column
appears in the tripartition that indexes the row,
or, equivalently, if the line of $\mathbb{F}_2^4$
that indexes the column is contained in the plane that indexes the row.
Note that each row and each column of $A_{\rm segre}$ has
precisely three nonzero entries.
We similarly define the $10 \times 15$-matrix $A_{\rm igusa}$
with entries in $\{0,1\}$ that represents the monomial
map for the Igusa quartic. Its rows have six nonzero
entries and its columns have four nonzero entries.
The column labels of $A_{\rm igusa}$ are the same
as those of $A_{\rm segre}$. The rows
are now labeled by bipartitions of $\{1,2,\ldots,6\}$, or by pairs of non-isotropic planes in $\mathbb{F}_2^4$.
\begin{lemma} \label{lem:kernel}
The matrices $A_{\rm segre}$ and $A_{\rm igusa}$ have the same kernel.
This kernel is the $5$-dimensional subspace spanned by the vectors $E_\sigma - E_{\sigma^c}$ where $\sigma$ runs over triples in $\{1,2,\ldots,6\}$.
\end{lemma}
This lemma can be proved by a direct computation.
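One such direct computation can be sketched in Python (an illustrative verification, not part of the original text): build the two $0/1$ incidence matrices from perfect matchings and triple splits of $\{1,\ldots,6\}$, and compare ranks.

```python
from itertools import combinations
import sympy as sp

pairs = list(combinations(range(1, 7), 2))

def matchings(elts):
    """All perfect matchings of the sorted list elts."""
    if not elts:
        return [[]]
    a, rest = elts[0], elts[1:]
    out = []
    for b in rest:
        sub = [e for e in rest if e != b]
        out += [[(a, b)] + m for m in matchings(sub)]
    return out

# rows of A_segre: the 15 perfect matchings (tripartitions into pairs)
A_segre = sp.Matrix([[1 if pq in set(m) else 0 for pq in pairs]
                     for m in matchings(list(range(1, 7)))])

# rows of A_igusa: the 10 splits of {1,...,6} into two triples
trips = [t for t in combinations(range(1, 7), 3) if 1 in t]
A_igusa = sp.Matrix([[1 if set(pq) <= set(t)
                      or set(pq) <= set(range(1, 7)) - set(t) else 0
                      for pq in pairs] for t in trips])

assert A_segre.shape == (15, 15) and A_igusa.shape == (10, 15)
assert A_segre.rank() == A_igusa.rank() == 10   # both kernels are 5-dimensional
stacked = A_segre.col_join(A_igusa)
assert stacked.rank() == 10                     # hence the kernels coincide
print("A_segre and A_igusa share a 5-dimensional kernel")
```

Since both kernels have dimension $5$ and the stacked matrix still has rank $10$, their intersection is $5$-dimensional, so the two kernels agree.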
The multiplicative version of this fact implies the
identity $\,\mathcal{S}^{\circ} = \mathcal{I}^{\circ}\,$ as seen in
Proposition \ref{prop:identification}. We have the following result.
\begin{theorem} \label{thm:tropsegrecubic}
The tropical Segre cubic ${\rm trop}(\mathcal{S}) $ in $\mathbb{TP}^{14} $
is the image of ${\rm trop}({\rm Cyc}_4)$ under the linear map
$A_{\rm segre}$. The tropical Igusa quartic ${\rm trop}(\mathcal{I}) $ in $\mathbb{TP}^{9} $
is the image of ${\rm trop}({\rm Cyc}_4)$ under the linear map
$A_{\rm igusa}$. These two $3$-dimensional fans are affinely isomorphic to each other,
but all maximal cones of ${\rm trop}(\mathcal{I}) $ come with multiplicity two.
The underlying simplicial complex has
$25$ vertices, $105$ edges and $105$ triangles. This
is the tree space ${\rm trop}(\mathcal{M}_{0,6})$.
\end{theorem}
\begin{proof}
The fact that we can compute the tropicalization of
the image of a linear space under a monomial map
by just applying the tropicalized monomial map
$A_\bullet$ to the
Bergman fan is \cite[Theorem 3.1]{DFS}.
The fact that the two tropical threefolds are
affinely isomorphic follows immediately from Lemma~\ref{lem:kernel}.
To analyze the combinatorics of this common image fan,
we set $E_\sigma $ to be the zero vector
when $\sigma = \{i\}$ is a singleton.
With this convention, we have
$$ A_{\rm segre} E_\sigma = A_{\rm segre} E_{\sigma^c} \quad
\hbox{and} \quad
A_{\rm igusa} E_\sigma = A_{\rm igusa} E_{\sigma^c}
$$
for all proper subsets $\sigma$ of $\{1,2,\ldots,6\}$.
We conclude that the $56 = 15 + 20 + 15 + 6$ rays of the
Bergman fan ${\rm trop}({\rm Cyc}_4)$ get mapped to
$25 = 15 + 10$ distinct rays in the image fan.
The cones in ${\rm trop}({\rm Cyc}_4)$
correspond to equidistant trees, that is,
\underline{rooted} metric trees on six taxa. Combinatorially,
our map corresponds to removing the root from the tree, so the
cones in the image fan correspond to
\underline{unrooted} metric trees on six taxa.
Specifically, each of the $945$ maximal cones of
${\rm trop}({\rm Cyc}_4)$ either has one ray
$E_{\{i,j,k,\ell,m\}}$ that gets mapped to zero,
or it has two rays $E_\sigma$
and $E_{\sigma^c}$ that become identified.
Therefore its image is three-dimensional.
Our map takes the $945$ simplicial cones of dimension $4$ in
${\rm trop}({\rm Cyc}_4)$
onto the $105$ simplicial cones of dimension $3$,
one for each unrooted tree. The fibers involve precisely nine
cones because each trivalent tree on six taxa has
nine edges, each a potential root location.
Combinatorially, nine rooted trivalent trees map to the same
unrooted tree.
It remains to analyze the multiplicity of each maximal cone in the image.
The $105$ maximal cones in ${\rm trop}(\mathcal{S})$
all have multiplicity one, while the corresponding cones
in ${\rm trop}(\mathcal{I})$ have multiplicity two.
We first found this by a direct calculation using the software ${\tt gfan}$ \cite{jensen},
starting from the homogeneous ideals of $\mathcal{S}$ and ${\mathcal{I}}$ described above.
It can also be seen by examining the images of the rays $E_\tau$
under each matrix $A_\bullet$ modulo the line spanned by the
vector $(1,1,\ldots,1)$.
Each of the $15$ vectors $A_{\rm igusa} E_{ij}$ is the
sum of four unit vectors in $\mathbb{TP}^9$, while
the $10$ vectors $A_{\rm igusa} E_{ijk}$
are the ten unit vectors multiplied by the factor $2$.
\end{proof}
We next discuss the tropicalization of the universal family
of Kummer surfaces over $\mathcal{S}^{\circ}$.
This is the hypersurface in $\,\mathcal{S}^{\circ} \times \mathbb{P}^3$
defined by the equation (\ref{eq:kummer}).
The tropicalization of this hypersurface
is a five-dimensional fan whose fibers over
the tree space $\,{\rm trop}(\mathcal{S})\,$
are the tropical Kummer surfaces in
$\mathbb{TP}^3$.
We computed this fan from the equations using
{\tt gfan} \cite{jensen}.
\begin{proposition}
The tropicalization of the universal Kummer surface in the coordinates $((m_0:m_1:\dotsb{}:m_{14}),(x_{00}:x_{01}:x_{10}:x_{11}))$ is a $5$-dimensional polyhedral fan
in $\mathbb{TP}^{14} \times \mathbb{TP}^3$. This fan has $56$ rays and $1536$ maximal cones,
and its f-vector is $\,(56, 499 ,1738, 2685, 1536)$.
\end{proposition}
Instances of tropical Kummer surfaces can be obtained by slicing the above fan with fixed values of the $15$ tropical $m$ coordinates. Figure~\ref{fig:kummer-snowflake} shows the tropicalization of a Kummer surface
over a snowflake tree (Type (7) in Table \ref{table-tropical-moduli}).
It consists of $30$ two-dimensional polyhedra, $24$ unbounded and
$6$ bounded. The latter $6$ form the facets of a parallelepiped.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{type_aaa}
\caption{Tropicalization of a Kummer surface over a snowflake tree.}
\label{fig:kummer-snowflake}
\end{figure}
Figure~\ref{fig:kummer-caterpillar} shows a tropical Kummer surface over
a caterpillar tree (Type (6) in Table \ref{table-tropical-moduli}).
It consists of $33$ two-dimensional polyhedra, $24$ unbounded and
$9$ bounded. The latter $9$ polygons form a subdivision of a flat octagon.
These two pictures were drawn using {\tt polymake} \cite{GJ}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.48]{type_aab}
\caption{Tropicalization of a Kummer surface over a caterpillar tree.}
\label{fig:kummer-caterpillar}
\end{figure}
On each Kummer surface we could now
identify a tree that represents the bicanonical image of the associated
genus $2$ curve. Classically, one obtains a double conic with six distinguished points
by intersecting with any of the planes in the $16_6$ configuration
\cite[(1.2)]{RSSS}.
The tropical variety described in Theorem \ref{thm:tropsegrecubic}
defines the {\em tropical compactification} $\overline{\mathcal{S}}$ of the
Segre cubic $\mathcal{S}$. By definition, the
threefold $\overline{\mathcal{S}}$ is the closure
of $\mathcal{S}^{\circ}$ in the toric variety determined by the
given fan structure on ${\rm trop}(\mathcal{S})$.
For details, see Tevelev's article \cite{Tev}.
This tropical compactification of our moduli space (\ref{eq:SIMM})
is intrinsic. To see this, we recall that the {\em intrinsic torus}
of a very affine variety $X \subset \mathbb{G}_m^n$
is the torus whose character lattice is the finitely generated
multiplicative free abelian group $K[X]^*/K^*$.
The following lemma can be used to find the intrinsic torus for
each of the very affine varieties in this paper.
\begin{lemma} \label{lem:intrinsictorusimage}
Let $m \colon T_1 \to T_2$ be a monomial map of tori and $U \subset T_1$
a subvariety embedded in its intrinsic torus. Then $\overline{m(U)} \subset m(T_1)$ is the embedding of $\overline{m(U)}$ in its intrinsic torus.
\end{lemma}
\begin{proof}
Choose identifications $K[T_1] = K[x_1^{\pm}, \dots, x_r^{\pm}]$ and $K[T_2] = K[y_1^{\pm}, \dots, y_s^{\pm}]$. By assumption, the pullback $m^*(y_i)$ is a monomial in the $x_j$, which we call $z_i$. We have an injection
of rings $m^* \colon K[\overline{m(U)}] \subset K[U]$, and hence we get an induced injection of groups $\phi \colon K[\overline{m(U)}]^* / K^* \subset K[U]^* / K^*$. Since $K[\overline{m(U)}]$ is generated by the $y_i$, we conclude that $m^*(K[\overline{m(U)}])$ is contained in the subalgebra $K[z_1^{\pm}, \dots, z_s^{\pm}] \subset K[U]$. Pick $f \in K[\overline{m(U)}]^* / K^*$. Since $U$ is embedded in its intrinsic torus,
we have $\phi(f) = z_1^{d_1} \cdots z_s^{d_s}$ for some $d_i \in \mathbb{Z}$. So $\phi(y_1^{d_1} \cdots y_s^{d_s}) = \phi(f)$ and since $\phi$ is injective, we conclude that $f = y_1^{d_1} \cdots y_s^{d_s}$.
\end{proof}
The embedding of the Segre cubic $\mathcal{S}$ into
the $9$-dimensional toric variety given by (\ref{eq:segremap})
satisfies the hypotheses of Lemma \ref{lem:intrinsictorusimage}.
Indeed, $\mathcal{S}^{\circ}$
is the image of the complement of a hyperplane arrangement
under a monomial map, and, by \cite[\S 4]{Tev},
the intrinsic torus of an essential arrangement of
$n$ hyperplanes in $\mathbb{P}^r$ is $\mathbb{G}_m^{n-1}$.
The same argument works for all
moduli spaces studied in this paper.
That the ambient torus $\mathbb{G}_m^9$ is intrinsic for
the open Segre cubic $\mathcal{S}^{\circ} $
can also be seen from the fact that the
$15$ boundary divisors $\mathcal{S} \cap \{m_i=0\}$ are irreducible.
Indeed, by \cite[\S 3.2.1]{hunt},
they are projective planes $\mathbb{P}^2$.
Each of the ten singular points of $\mathcal{S}$ lies
on six of these planes,
so each boundary plane contains four singular points.
From the combinatorial description above we infer the following summary of the situation.
\begin{corollary}
The tropical compactification of the open Segre cubic $\mathcal{S}^{\circ}$,
and hence of the other
moduli spaces in \eqref{eq:SIMM}, is the
Deligne--Mumford compactification $\overline{\mathcal{M}}_{0,6}$.
This threefold is the blow-up of the $10$ singular points of $\mathcal{S}$, or of the $15$ singular lines of the Igusa quartic~$\mathcal{I}$.
\end{corollary}
The second sentence is Theorem 3.3.11 in Hunt's book \cite{hunt}.
The first is a special case of \cite[Theorem 1.11]{HKT}.
Our rationale for giving a detailed equational derivation of the familiar manifold $\overline{\mathcal{M}}_{0,6}$ is that it sets the stage for our primary example in the next section.
\section{Burkhardt Quartic and Abelian Surfaces}
\label{sec:burkhardt}
The Burkhardt quartic is a rational quartic threefold in $\mathbb{P}^4$. It can be characterized as the unique quartic hypersurface in $\mathbb{P}^4$ with the maximal number $45$ of nodal singular points
\cite{DJSV}.
It compactifies the moduli space $\mathcal{M}_2(3)$
of genus 2 curves with level 3 structure \cite{DL,FSM,GS,hunt}. We identify $\mathcal{M}_2(3)$ with a subvariety of $\mathcal{A}_2(3)$, the moduli space of principally polarized abelian surfaces with level 3 structure, by sending a
smooth curve to its Jacobian.
All constructions in this section can be
carried out over any field $K$
of characteristic other than $2$ or $3$, provided
$K$ contains a primitive third root of unity $\omega$.
In the tropical context, $K$ will be a field with a valuation.
For details on arithmetic issues see Elkies' paper \cite{Elk}.
We realize the Burkhardt quartic
as the image of a rational map that is given as a composition
$\, \mathbb{P}^3 \,\, \buildrel{{\rm linear}}\over{\hookrightarrow}\,\, \mathbb{P}^{39}\,\,
\buildrel{{\rm monomial}} \over{\dashrightarrow} \,\, \mathbb{P}^{39}$.
We choose coordinates
$(c_0:c_1:c_2:c_3)$ on $\mathbb{P}^3$
and coordinates $(m_0:m_1:\cdots :m_{39})$ on the rightmost $\mathbb{P}^{39}$.
The $40$ homogeneous coordinates
$u_{ijk\ell}$ on the middle $\mathbb{P}^{39}$
are indexed by the lines through the origin in the finite vector space
$\mathbb{F}_3^4$. Each line is given
by the vector whose leftmost nonzero coordinate is $1$.
The linear map $\mathbb{P}^3 \hookrightarrow \mathbb{P}^{39}$ is defined as follows,
where $\omega = \frac{1}{2} (-1 + \sqrt{-3})$ is a third root of unity:
$$
\begin{matrix}
u_{0001} = c_1 {+} c_2 {+} c_3 &
u_{0010} = c_2 {-} c_3 {+} c_0 &
u_{0011} = c_3 {+} c_0 {-} c_1 &
u_{0012} = c_0 {+} c_1 {-} c_2 \\
u_{0100} = \sqrt{-3} \cdot c_1 &
u_{0101} = c_1 {+} \omega^2 c_2 {+} \omega^2 c_3 &
u_{0102} = c_1 {+} \omega c_2 {+} \omega c_3 &
u_{0110} = c_2 {-} \omega c_3 {+} \omega^2 c_0 \\
u_{0111} = c_3 {+} c_0 {-} \omega c_1 &
u_{0112} = c_0 {+} \omega^2 c_1 {-} c_2 &
u_{0120} = c_2 {-} \omega^2 c_3 {+} \omega c_0 &
u_{0121} = c_0 {+} \omega c_1 {-} c_2 \\
u_{0122} = c_3 {+} c_0 {-} \omega^2 c_1 &
u_{1000} = \sqrt{-3} \cdot c_0 &
u_{1001} = c_1 {+} \omega c_2 {+} \omega^2 c_3 &
u_{1002} = c_1 {+} \omega^2 c_2 {+} \omega c_3 \\
u_{1010} = c_2 {-} c_3 {+} \omega c_0 &
u_{1011} = c_3 {+} \omega c_0 {-} c_1 &
u_{1012} = c_0 {+} \omega^2 c_1 {-} \omega^2 c_2 &
u_{1020} = c_2 {-} c_3 {+} \omega^2 c_0 \\
u_{1021} = c_0 {+} c_1 \omega {-} \omega c_2 &
u_{1022} = c_3 {+} \omega^2 c_0 {-} c_1 &
u_{1100} = \sqrt{-3} \cdot c_3 &
u_{1101} = c_1 {+} c_2 {+} \omega c_3 \\
u_{1102} = c_1 {+} c_2 {+} c_3 \omega^2 &
u_{1110} = c_2 {-} \omega c_3 {+} c_0 &
u_{1111} = c_3 {+} \omega c_0 {-} \omega c_1 &
u_{1112} = c_0 {+} \omega c_1 {-} \omega^2 c_2 \\
u_{1120} = c_2 {-} \omega^2 c_3 {+} c_0 &
u_{1121} = c_0 {+} \omega^2 c_1 {-} \omega c_2 &
u_{1122} = c_3 {+} c_0 \omega^2 {-} \omega^2 c_1 &
u_{1200} = \sqrt{-3} \cdot c_2 \\
u_{1201} = c_1 {+} \omega^2 c_2 {+} c_3 &
u_{1202} = c_1 {+} \omega c_2 {+} c_3 &
u_{1210} = c_2 {-} \omega^2 c_3 {+} \omega^2 c_0 &
u_{1211} = c_3 {+} \omega c_0 {-} \omega^2 c_1 \\
u_{1212} = c_0 {+} c_1 {-} \omega^2 c_2 &
u_{1220} = c_2 {-} \omega c_3 {+} \omega c_0 &
u_{1221} = c_0 {+} c_1 {-} \omega c_2 &
u_{1222} = c_3 {+} \omega^2 c_0 {-} \omega c_1
\end{matrix}
$$
These $40$ linear forms cut out the hyperplanes
of the complex reflection arrangement $\mathrm{G}_{32}$.
We refer to the book by Hunt \cite[\S 5]{hunt}
for a discussion of this arrangement
and its importance for modular Siegel threefolds.
Our first map $\mathbb{P}^3 \hookrightarrow \mathbb{P}^{39}$ realizes the arrangement $\mathrm{G}_{32}$
as the restriction of the $40$ coordinate planes in
$\mathbb{P}^{39}$ to a certain $3$-dimensional linear subspace.
The monomial
map $\mathbb{P}^{39} \dashrightarrow \mathbb{P}^{39}$
is defined outside the hyperplane arrangement
$ \{\prod u_{ijk\ell} = 0\}$ which corresponds to $\mathrm{G}_{32}$.
It is given by the following $40$ monomials of degree four:
$$
\begin{matrix}
m_0 = u_{0001} u_{0010} u_{0011} u_{0012} & &
m_1 = u_{0001} u_{1000} u_{1001} u_{1002} & &
m_2 = u_{0001} u_{1010} u_{1011} u_{1012} \\
m_3 = u_{0001} u_{1020} u_{1021} u_{1022} & &
m_4 = u_{0010} u_{0100} u_{0110} u_{0120} & &
m_5 = u_{0010} u_{0101} u_{0111} u_{0121} \\
m_6 = u_{0010} u_{0102} u_{0112} u_{0122} & &
m_7 = u_{0011} u_{1200} u_{1211} u_{1222} & &
m_8 = u_{0011} u_{1201} u_{1212} u_{1220} \\
m_9 = u_{0011} u_{1202} u_{1210} u_{1221} & &
m_{10} = u_{0012} u_{1100} u_{1112} u_{1121} & &
m_{11} = u_{0012} u_{1101} u_{1110} u_{1122} \\
m_{12} = u_{0012} u_{1102} u_{1111} u_{1120} & &
m_{13} = u_{0100} u_{1000} u_{1100} u_{1200} & &
m_{14} = u_{0100} u_{1010} u_{1110} u_{1210} \\
m_{15} = u_{0100} u_{1020} u_{1120} u_{1220} & &
m_{16} = u_{0101} u_{1000} u_{1101} u_{1202} & &
m_{17} = u_{0101} u_{1010} u_{1111} u_{1212} \\
m_{18} = u_{0101} u_{1020} u_{1121} u_{1222} & &
m_{19} = u_{0102} u_{1000} u_{1102} u_{1201} & &
m_{20} = u_{0102} u_{1010} u_{1112} u_{1211} \\
m_{21} = u_{0102} u_{1020} u_{1122} u_{1221} & &
m_{22} = u_{0110} u_{1001} u_{1111} u_{1221} & &
m_{23} = u_{0110} u_{1011} u_{1121} u_{1201} \\
m_{24} = u_{0110} u_{1021} u_{1101} u_{1211} & &
m_{25} = u_{0111} u_{1001} u_{1112} u_{1220} & &
m_{26} = u_{0111} u_{1011} u_{1122} u_{1200} \\
m_{27} = u_{0111} u_{1021} u_{1102} u_{1210} & &
m_{28} = u_{0112} u_{1001} u_{1110} u_{1222} & &
m_{29} = u_{0112} u_{1011} u_{1120} u_{1202} \\
\end{matrix} $$ $$ \begin{matrix}
m_{30} = u_{0112} u_{1021} u_{1100} u_{1212} & &
m_{31} = u_{0120} u_{1002} u_{1122} u_{1212} & &
m_{32} = u_{0120} u_{1012} u_{1102} u_{1222} \\
m_{33} = u_{0120} u_{1022} u_{1112} u_{1202} & &
m_{34} = u_{0121} u_{1002} u_{1120} u_{1211} & &
m_{35} = u_{0121} u_{1012} u_{1100} u_{1221} \\
m_{36} = u_{0121} u_{1022} u_{1110} u_{1201} & &
m_{37} = u_{0122} u_{1002} u_{1121} u_{1210} & &
m_{38} = u_{0122} u_{1012} u_{1101} u_{1220} \\
m_{39} = u_{0122} u_{1022} u_{1111} u_{1200} .
\end{matrix}
$$
The combinatorics behind this list is as follows.
The $40$ monomials represent the $40$ isotropic
planes in the space $\mathbb{F}_3^4$, with respect to the
symplectic inner product (\ref{eq:innerproduct}).
The linear inclusion $\mathbb{P}^3 \hookrightarrow \mathbb{P}^{39}$
has the property that two linearly independent vectors $x,y$ in $\mathbb{F}_3^4$
satisfy $\langle x,y \rangle = 0$ if and only if
the corresponding linear forms $u_x$ and $u_y$
are perpendicular in the root system $\mathrm{G}_{32}$,
using the usual Hermitian inner product
(when considered over $\mathbb{C}$).
Let $\mathcal{B}$ denote the Burkhardt quartic in $\mathbb{P}^{39}$,
that is, the closure of the image of the map above.
Its homogeneous prime ideal $I_{\mathcal{B}}$
is minimally generated by one quartic and
a $35$-dimensional space of linear forms in
$K[m_0,m_1,\ldots,m_{39}]$.
That space has a natural generating set consisting of $160 = 4 \cdot 40$ linear trinomials.
Namely, the four coordinates $m_\bullet $ that share a common parameter $u_{ijk\ell}$
span a two-dimensional space modulo $I_\mathcal{B}$. For instance,
the first four coordinates share the parameter $u_{0001}$,
and they satisfy the following linear trinomials:
\begin{equation}
\label{eq:linrel}
\begin{matrix}
m_0+\omega^2 m_1-\omega m_2 &= &
m_0-\omega m_1-\omega^2 m_3 &= & & & \\
m_0+\omega^2 m_2+\omega m_3 &=&
m_1+\omega m_2 -\omega^2 m_3 &=& 0 .
\end{matrix}
\end{equation}
These relations are constructed as follows:
each of the $40$ roots $u_{ijk\ell}$ appears as a factor in
precisely four of the coordinates $m_\bullet$, and these four coordinates
span a two-dimensional space over $K$, giving four trinomials for each root.
The $160$ linear trinomials (\ref{eq:linrel}) cut out a $4$-dimensional linear subspace of $\mathbb{P}^{39}$. We fix the following system of coordinates, analogous to (\ref{eq:from_m_to_rst}), on that linear subspace $\mathbb{P}^4$ of $\mathbb{P}^{39}$:
\begin{equation}
\label{eq:burkpara}
\begin{matrix}
r & = & 3 c_0 c_1 c_2 c_3 & = & m_{13}/3 \\
s_{01} & = & -c_0(c_1^3+c_2^3+c_3^3) & = & (\sqrt{-3} \cdot m_1 - m_{13})/3 \\
s_{10} & = & \,\,c_1(c_0^3+c_2^3-c_3^3) & = & (-\sqrt{-3} \cdot m_4 - m_{13})/3 \\
s_{11} & = &\,\, c_2(c_0^3-c_1^3+c_3^3) & = & (-\sqrt{-3}\cdot m_7 - m_{13})/3 \\
s_{12} & = & \,\,c_3(c_0^3+c_1^3-c_2^3) & = & (-\sqrt{-3} \cdot m_{10} - m_{13})/3
\end{matrix}
\end{equation}
The polynomial that defines the Burkhardt quartic $\mathcal{B} \subset
\mathbb{P}^4$ is now written as
\begin{equation}
\label{eq:burkhardtclassic}
r (r^3+s_{01}^3+s_{10}^3+s_{11}^3+s_{12}^3)\,+\,3 s_{01} s_{10} s_{11} s_{12}
\,\,\, = \,\,\, 0.
\end{equation}
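The parametrization (\ref{eq:burkpara}) satisfies (\ref{eq:burkhardtclassic}) identically in the $c_i$. This can be confirmed by exact integer arithmetic at random sample points; a minimal sketch in Python:

```python
import random

def burkhardt_at(c0, c1, c2, c3):
    """Evaluate the Burkhardt quartic at the point (r : s01 : s10 : s11 : s12)
    obtained from (c0 : c1 : c2 : c3) via the parametrization above."""
    r   = 3 * c0 * c1 * c2 * c3
    s01 = -c0 * (c1**3 + c2**3 + c3**3)
    s10 =  c1 * (c0**3 + c2**3 - c3**3)
    s11 =  c2 * (c0**3 - c1**3 + c3**3)
    s12 =  c3 * (c0**3 + c1**3 - c2**3)
    return (r * (r**3 + s01**3 + s10**3 + s11**3 + s12**3)
            + 3 * s01 * s10 * s11 * s12)

# Exact integer arithmetic, so these are genuine zeros of the quartic,
# not floating-point artifacts.
for _ in range(100):
    assert burkhardt_at(*[random.randint(-20, 20) for _ in range(4)]) == 0
```

Since the image of the parametrization is dense in $\mathcal{B}$, vanishing at random integer points is exactly what one expects; a symbolic expansion gives the identity in full.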
The Burkhardt quartic has $45$ isolated singular points.
For example, one of the singular points is $
(r: s_{01}:s_{10}:s_{11}:s_{12}) =
(0:0:0:1:1)$. In the $m$-coordinates, this point is
\begin{equation}
\label{eq:singpoint}
\begin{matrix}
(0 : 0 : 0 : 0 : 0 : 0 : 0 : -\omega^2 : -\omega : 1 : \omega^2 : -1 : \omega : 0 : 0 : 0 : 0 : 0 :\\
0 : 0 : 0 : 0 :
-\omega^2 :
-1 : -\omega : -1 : -\omega^2 : \omega^2 : -\omega : \omega^2 :
\\
\omega^2 : \omega^2 : 1 : \omega : 1 : \omega^2 : -\omega^2 : \omega : -\omega^2 : -\omega^2 )
\end{matrix}
\end{equation}
At each singular point,
precisely $16$ of the $40$ $m$-coordinates vanish.
Each hyperplane $m_\bullet = 0$ intersects the Burkhardt quartic $\mathcal{B}$
in a tetrahedron of four planes, known as {\em Jacobi planes},
which contains $18$ of the $45$ singular points, in a configuration
that is depicted in \cite[Figure 5.3(b)]{hunt}.
The relevant combinatorics will be explained when tropicalizing in Section~\ref{sec:tropicalization}.
The closure of the image of the monomial map
$\,\mathbb{P}^{39} \dashrightarrow \mathbb{P}^{39}, \, u \mapsto m \,$
is a toric variety $\mathcal{T}$. Writing
$\mathbb{P}^4$ for the linear subspace
defined by the $160$ trinomials like (\ref{eq:linrel}), we have
\begin{equation}
\label{eq:burkinters}
\mathcal{B} \,\,\, = \,\,\, \mathcal{T} \,\cap \,\mathbb{P}^4 \quad \subset \quad \mathbb{P}^{39}.
\end{equation}
Thus we have realized the Burkhardt quartic as a linear section
of the toric variety $\mathcal{T}$, and it makes sense to
explore the combinatorial properties of $\mathcal{T}$.
Let $A$ denote the $40 \times 40$ matrix
representing our monomial map $u\mapsto m$.
The columns of $A$ are indexed by the $u_{ijk\ell}$,
and hence by the lines in $\mathbb{F}_3^4$.
The rows of $A$ are indexed by the $m_\bullet$,
and hence by the isotropic planes in $\mathbb{F}_3^4$.
The matrix $A$ is the $0$-$1$ matrix that encodes incidences
of lines and isotropic planes. Each row and each column
has exactly four entries $1$, and the other entries are $0$.
The matrix $A$ has rank $25$, and we computed its
{\em Markov basis} using the software {\tt 4ti2} \cite{4ti2}.
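The combinatorics of $A$ can be verified from scratch. The sketch below assumes the symplectic form $\langle x,y\rangle = x_0y_2 + x_1y_3 - x_2y_0 - x_3y_1$ on $\mathbb{F}_3^4$; any nondegenerate alternating form is equivalent to this one, so the incidence counts and the rank stated above do not depend on the choice:

```python
from fractions import Fraction
from itertools import product

def canon(v):
    """Canonical representative of the line through v: leftmost nonzero entry 1."""
    lead = next(x for x in v if x % 3 != 0)
    inv = 1 if lead % 3 == 1 else 2          # inverse of lead in F_3
    return tuple((inv * x) % 3 for x in v)

vectors = [v for v in product(range(3), repeat=4) if any(v)]
lines = sorted({canon(v) for v in vectors})   # 40 lines in F_3^4

def symp(x, y):                               # assumed symplectic form
    return (x[0]*y[2] + x[1]*y[3] - x[2]*y[0] - x[3]*y[1]) % 3

def span(v, w):
    """Set of lines inside the plane spanned by v and w."""
    pts = {tuple((a*v[i] + b*w[i]) % 3 for i in range(4))
           for a in range(3) for b in range(3)}
    return frozenset(canon(p) for p in pts if any(p))

planes = set()
for v in vectors:
    for w in vectors:
        W = span(v, w)                        # 4 lines iff v, w independent
        if len(W) == 4 and all(symp(x, y) == 0 for x in W for y in W):
            planes.add(W)                     # 40 isotropic planes

# 0-1 incidence matrix: rows = isotropic planes, columns = lines.
A = [[1 if l in W else 0 for l in lines] for W in planes]
assert len(lines) == 40 and len(planes) == 40
assert all(sum(row) == 4 for row in A)        # each plane contains 4 lines
assert all(sum(col) == 4 for col in zip(*A))  # each line lies on 4 planes

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for j in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][j]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][j]:
                f = M[i][j] / M[r][j]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

assert rank(A) == 25
```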
\begin{proposition}
\label{prop:toricvarT}
\begin{compactenum}[\rm (a)]
\item The projective toric variety $\mathcal{T}$ has dimension $24$.
\item Its prime ideal is minimally generated by $5136$ binomials, namely
$ 216$ binomials of degree $5$,
$270$ of degree $6$,
$4410$ of degree $8$,
and $240$ of degree $12$.
\item The Burkhardt quartic is the scheme-theoretic intersection in (\ref{eq:burkinters}).
This intersection is not ideal-theoretic, since there is no quartic relation on $\mathcal{T}$ that
could specialize to (\ref{eq:burkhardtclassic}).
\item The $24$-dimensional polytope of $\mathcal{T}$, which
is the convex hull of the $40$ rows of $A$, has
precisely $13144$ facets.
\end{compactenum}
\end{proposition}
\begin{proof}
(a) follows from the fact that ${\rm rank}(A)=25$. The statements in (b) and (c) follow from our {\tt 4ti2} calculation.
The facets in (d) were computed using the software {\tt polymake} \cite{GJ}.
The scheme-theoretic intersection in (c) can be verified by taking the following five
among the $216$ quintic binomials that vanish on $\mathcal{T}$:
\[
\begin{matrix}
m_0 m_{13} m_{22} m_{33} m_{37} - m_1 m_4 m_9 m_{10} m_{39} & &
m_0 m_{14} m_{23} m_{33} m_{35} - m_2 m_4 m_9 m_{10} m_{36} \\
m_0 m_{16} m_{25} m_{35} m_{37} - m_1 m_5 m_9 m_{10} m_{38} & &
m_0 m_{17} m_{26} m_{36} m_{38} - m_2 m_5 m_8 m_{11} m_{39} \\
m_9 m_{11} m_{13} m_{18} m_{20} - m_7 m_{10} m_{14} m_{16} m_{21} & &
\end{matrix}
\]
Each of these quintic binomials factors on $\mathbb{P}^4$
as the Burkhardt quartic (\ref{eq:burkinters}) times a linear form, and
these five linear forms
generate the irrelevant maximal ideal
$\langle r ,s_{01},s_{10}, s_{11}, s_{12} \rangle $.
\end{proof}
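That the five quintic binomials in the proof really vanish on $\mathcal{T}$ can be checked purely combinatorially: after substituting the defining $u$-monomials for the $m_\bullet$, both terms of each binomial become the same degree-$20$ monomial in the $u$-coordinates. A sketch in Python, with each relevant $m_\bullet$ encoded by its four $u$-indices as read off from the table above:

```python
# Each m is a product of four u-coordinates, encoded by their indices.
m = {
    0:  ["0001", "0010", "0011", "0012"],  1:  ["0001", "1000", "1001", "1002"],
    2:  ["0001", "1010", "1011", "1012"],  4:  ["0010", "0100", "0110", "0120"],
    5:  ["0010", "0101", "0111", "0121"],  7:  ["0011", "1200", "1211", "1222"],
    8:  ["0011", "1201", "1212", "1220"],  9:  ["0011", "1202", "1210", "1221"],
    10: ["0012", "1100", "1112", "1121"],  11: ["0012", "1101", "1110", "1122"],
    13: ["0100", "1000", "1100", "1200"],  14: ["0100", "1010", "1110", "1210"],
    16: ["0101", "1000", "1101", "1202"],  17: ["0101", "1010", "1111", "1212"],
    18: ["0101", "1020", "1121", "1222"],  20: ["0102", "1010", "1112", "1211"],
    21: ["0102", "1020", "1122", "1221"],  22: ["0110", "1001", "1111", "1221"],
    23: ["0110", "1011", "1121", "1201"],  25: ["0111", "1001", "1112", "1220"],
    26: ["0111", "1011", "1122", "1200"],  33: ["0120", "1022", "1112", "1202"],
    35: ["0121", "1012", "1100", "1221"],  36: ["0121", "1022", "1110", "1201"],
    37: ["0122", "1002", "1121", "1210"],  38: ["0122", "1012", "1101", "1220"],
    39: ["0122", "1022", "1111", "1200"],
}

binomials = [                      # the five quintic binomials from the proof
    (( 0, 13, 22, 33, 37), ( 1,  4,  9, 10, 39)),
    (( 0, 14, 23, 33, 35), ( 2,  4,  9, 10, 36)),
    (( 0, 16, 25, 35, 37), ( 1,  5,  9, 10, 38)),
    (( 0, 17, 26, 36, 38), ( 2,  5,  8, 11, 39)),
    (( 9, 11, 13, 18, 20), ( 7, 10, 14, 16, 21)),
]

def u_monomial(ms):
    """Multiset of u-indices in the product of the given m-coordinates."""
    return sorted(idx for i in ms for idx in m[i])

# Both sides of each binomial give the same degree-20 u-monomial,
# so the binomial vanishes identically on the toric variety T.
for left, right in binomials:
    assert u_monomial(left) == u_monomial(right)
```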
We next explain the connection to abelian surfaces. Consider
the {\em open Burkhardt quartic}
\[
\mathcal{B}^{\circ} = \mathcal{B} \setminus \{\prod m_i = 0\} \subset \mathbb{P}^{39}.
\]
In its modular interpretation (\cite{FSM}, \cite[\S 3.1]{GS}, \cite[Lemma 5.7.1]{hunt}), this threefold is the moduli space
$\mathcal{M}_2(3)$ of smooth genus $2$ curves with level $3$ structure.
With every point $(r:s_{01}:s_{10}:s_{11}:s_{12}) \in \mathcal{B}^{\circ}$
we associate an abelian surface (which is a Jacobian) following \cite[\S 3.2]{GS}.
The ambient space for this family of abelian surfaces is
the projective space $\mathbb{P}^8$ whose coordinates
\[
(x_{00}:x_{01}: x_{02}: x_{10}:x_{11}: x_{12}: x_{20}:x_{21}:x_{22})
\]
are indexed by $\mathbb{F}_3^2$.
The following five polynomials represent all the
affine subspaces of $\mathbb{F}_3^2$:
\begin{align*}
f &\,=\, x_{00}^3+x_{01}^3+x_{02}^3+x_{10}^3+x_{11}^3+x_{12}^3+x_{20}^3+x_{21}^3+ x_{22}^3,\\
g_{01} &\,=\, 3(x_{00}x_{01}x_{02}+x_{10}x_{11}x_{12}+x_{20}x_{21}x_{22}),\\
g_{10} &\,=\, 3(x_{00}x_{10}x_{20}+x_{01}x_{11}x_{21}+x_{02}x_{12}x_{22}),\\
g_{11} &\,=\, 3(x_{00}x_{11}x_{22}+x_{01}x_{12}x_{20}+x_{10}x_{21}x_{02}),\\
g_{12} &\,=\, 3(x_{00}x_{12}x_{21}+x_{01}x_{10}x_{22}+x_{02}x_{11}x_{20}).
\end{align*}
Our abelian surface is the singular locus of the {\em Coble cubic} $\{C = 0\}$ in $\mathbb{P}^8$,
which is given~by
\begin{equation*}
C\,\,=\,\,rf+s_{01}g_{01}+s_{10}g_{10}+s_{11}g_{11}+s_{12}g_{12}.
\end{equation*}
\begin{theorem} \label{thm:abelian93}
The singular locus of the Coble cubic
of any point in $\mathcal{B}^{\circ}$ is an abelian surface $S$
of degree $18$ in $\mathbb{P}^8$.
This equips $S$ with an indecomposable polarization of type $(3,3)$.
The prime ideal of $S$ is minimally generated by
$9$ quadrics and $3$ cubics.
The theta divisor on $S$ is a
tricanonical curve of genus $2$, and this is obtained by
intersecting $S$ with the $\mathbb{P}^4$ defined~by
\begin{equation}
\label{eq:ranktrica}
{\rm rank}
\begin{pmatrix}
x_{00} & x_{01} {+} x_{02} & x_{10} {+} x_{20}
& x_{11} {+} x_{22} & x_{12} {+} x_{21} \\
r & s_{01} & s_{10} & s_{11} & s_{12}
\end{pmatrix}
\,\,\, \leq \,\,\, 1.
\end{equation}
\end{theorem}
\begin{proof}
The first statement is classical (see \cite[\S 10.7]{BL}). We shall explain it below using theta functions.
The fact about ideal generators is due
to Gunji \cite[Theorem 8.3]{Gun}.
The representation (\ref{eq:ranktrica})
of the curve whose Jacobian is $S$ is derived from
\cite[Theorem 3.14(d)]{GS}.
\end{proof}
We now discuss the complex analytic view of our story.
Recall (e.g.~from \cite[\S 8.1]{BL})
that a principally polarized abelian surface
over $\mathbb{C}$ is given analytically as
$S_\tau = \mathbb{C}^2/(\mathbb{Z}^2 + \tau \mathbb{Z}^2)$,
where $\tau$ is a complex symmetric
$2\times{}2$-matrix whose imaginary part is positive definite.
The set of such matrices is the {\em Siegel upper half space} $\mathfrak{H}_2$.
Fix the $4 \times 4$ matrix $J = \begin{bmatrix} 0 \! & \! -\mathrm{Id}_2 \\ \mathrm{Id}_2 \! &
\! 0 \end{bmatrix}$.
Let $\mathrm{Sp}_4(\mathbb{Z})$ be the group of $4 \times 4$ integer-valued matrices $\gamma$ such that $\gamma J \gamma^T = J$. This acts on $\mathfrak{H}_2$ via
\begin{align}
\label{eq:actiononh2}
\begin{bmatrix}
A & B\\
C & D
\end{bmatrix}
\cdot{}\tau{} \,\,\,= \,\,(A\tau{}+B)(C\tau{}+D)^{-1},
\end{align}
where $A,B,C,D$ are $2\times 2$ matrices, and this descends to an action of $\mathrm{PSp}_4(\mathbb{Z})$ on $\mathfrak{H}_2$. The natural map $\mathrm{PSp}_4(\mathbb{Z}) \to \mathrm{PSp}_4(\mathbb{F}_3)$
takes the residue class modulo $3$ of each matrix entry. Let $\Gamma_2(3)$ denote the kernel of this map.
Two matrices in the same $\mathrm{PSp}_4(\mathbb{Z})$-orbit determine isomorphic principally polarized abelian surfaces, while matrices in the same $\Gamma_2(3)$-orbit determine isomorphic abelian surfaces together with a level $3$ structure. Hence $\mathfrak{H}_2/\mathrm{PSp}_4(\mathbb{Z})$ is the moduli space
$\mathcal{A}_2$ of principally polarized abelian surfaces, and $\mathfrak{H}_2/\Gamma_2(3)$ is the moduli space
$\mathcal{A}_2(3)$ of principally polarized abelian surfaces with level $3$ structure.
The finite group $\mathrm{PSp}_4(\mathbb{F}_3)$ is a simple group of order $25920$ and it acts naturally on $\mathfrak{H}_2/\Gamma_2(3)$.
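The action (\ref{eq:actiononh2}) is easy to exercise numerically. The following sketch hand-rolls $2 \times 2$ complex blocks (all function names are ours); it checks that the formula defines a left action, $\gamma_1\cdot(\gamma_2\cdot\tau) = (\gamma_1\gamma_2)\cdot\tau$, and that a sample image is again symmetric with positive definite imaginary part:

```python
def mmul(M, N):
    """Product of two 2x2 complex matrices given as nested tuples."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def madd(M, N):
    return tuple(tuple(M[i][j] + N[i][j] for j in range(2)) for i in range(2))

def minv(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((M[1][1] / det, -M[0][1] / det),
            (-M[1][0] / det, M[0][0] / det))

I2, Z2 = ((1, 0), (0, 1)), ((0, 0), (0, 0))

def act(gamma, tau):
    """The action (A*tau + B)(C*tau + D)^(-1); gamma = (A, B, C, D)."""
    A, B, C, D = gamma
    return mmul(madd(mmul(A, tau), B), minv(madd(mmul(C, tau), D)))

def compose(g1, g2):
    """Block multiplication of two 4x4 matrices written as 2x2 blocks."""
    A1, B1, C1, D1 = g1
    A2, B2, C2, D2 = g2
    return (madd(mmul(A1, A2), mmul(B1, C2)), madd(mmul(A1, B2), mmul(B1, D2)),
            madd(mmul(C1, A2), mmul(D1, C2)), madd(mmul(C1, B2), mmul(D1, D2)))

J = (Z2, ((-1, 0), (0, -1)), I2, Z2)        # the matrix J fixed above
T = (I2, ((1, 1), (1, 0)), Z2, I2)          # tau -> tau + B, B symmetric

tau = ((2j, 0.5 + 0.1j), (0.5 + 0.1j, 3j))  # sample point of the Siegel space

# Left action: gamma1.(gamma2.tau) = (gamma1*gamma2).tau
lhs, rhs = act(J, act(T, tau)), act(compose(J, T), tau)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# The image stays in H_2: symmetric, with positive definite imaginary part.
s = act(J, tau)
assert abs(s[0][1] - s[1][0]) < 1e-12
d = s[0][0].imag * s[1][1].imag - s[0][1].imag * s[1][0].imag
assert s[0][0].imag > 0 and d > 0
```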
The {\it third-order theta function} with characteristic $\sigma
\in \frac{1}{3} \mathbb{Z}^2/\mathbb{Z}^2$ is defined as
\begin{align*}
\Theta{}_3[\sigma{}](\tau{},z) &\,=\,\, \,\mathrm{exp}(3\pi{}i\sigma{}^T\tau{}\sigma{}+6\pi{}i\sigma{}^Tz)
\cdot \theta{}(3\tau{},3z+3\tau{}\sigma{}) \\*
&\,=\, \sum_{n\in{}\mathbb{Z}^2}\mathrm{exp}\bigl(3\pi{}i(n{+}\sigma{})^T\tau{}(n{+}\sigma{})
+6\pi{}i(n+\sigma{})^T z \bigr).
\end{align*}
Here $\theta{}$ is the classical Riemann theta function.
For a fixed matrix $\tau{} \in \mathfrak{H}_2$, the nine third-order theta functions on $\mathbb{C}^2$ give precisely our embedding of the abelian surface $S_\tau$ into~$\mathbb{P}^8$:
$$
S_\tau \hookrightarrow \mathbb{P}^8, \qquad
z \mapsto (\Theta{}_3[\sigma](\tau,z))_{\sigma{}\in \frac{1}{3} \mathbb{Z}^2/\mathbb{Z}^2}.
$$
Adopting the notation in \cite[\S 2]{RSSS}, for any $(j,k) \in \{0,1,2\}^2$, we abbreviate
$$
u_{jk} \,=\, \Theta{}_3[(\frac{j}{3},\frac{k}{3})](\tau{},0) \quad \hbox{and} \quad
x_{jk} \,=\, \Theta{}_3[(\frac{j}{3},\frac{k}{3})](\tau{},z).
$$
The nine {\em theta constants} $u_{jk}$ satisfy
$\,u_{01} = u_{02}, \,u_{10} = u_{20}, \, u_{11} = u_{22}$, and
$u_{12}= u_{21}$.
For that reason, we need only five theta constants $u_{00},u_{01},u_{10},u_{11},u_{12}$,
which we take as homogeneous coordinates on $\mathbb{P}^4$. These five
coordinates satisfy one homogeneous equation:
\begin{lemma} \label{lem:Siegel}
The closure of the image of the map $\,\mathfrak{H}_2 \rightarrow \mathbb{P}^4\,$ given by the
five theta constants is an irreducible hypersurface $\,\mathcal{H}$
of degree $10$. Its defining polynomial is the determinant of
$$ U \, = \,
\begin{bmatrix}
u_{00}^2 & u_{01}^2 & u_{10}^2 & u_{11}^2 & u_{12}^2\\
u_{01}^2 & u_{00}u_{01} & u_{11}u_{12} & u_{10}u_{12} & u_{10}u_{11}\\
u_{10}^2 & u_{11}u_{12} & u_{00}u_{10} & u_{01}u_{12} & u_{01}u_{11}\\
u_{11}^2 & u_{10}u_{12} & u_{01}u_{12} & u_{00}u_{11} & u_{01}u_{10}\\
u_{12}^2 & u_{10}u_{11} & u_{01}u_{11} & u_{01}u_{10} & u_{00}u_{12}
\end{bmatrix}.
$$
\end{lemma}
\begin{proof}
This determinant appears in \cite[(10)]{DL}, \cite[p.~252]{FSM}, and \cite[\S 2.2]{morikawa}.
\end{proof}
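Both the symmetries of the theta constants and the vanishing of ${\rm det}(U)$ on their image can be tested numerically by truncating the series defining $\Theta_3$. A sketch in Python; the sample point $\tau$ and the truncation bound $N$ are our arbitrary choices:

```python
import cmath
from itertools import permutations, product

def theta3_const(sigma, tau, N=6):
    """Third-order theta constant Theta_3[sigma](tau, 0),
    truncated to lattice points n with |n_i| <= N."""
    total = 0
    for n0, n1 in product(range(-N, N + 1), repeat=2):
        v0, v1 = n0 + sigma[0], n1 + sigma[1]
        q = v0*v0*tau[0][0] + 2*v0*v1*tau[0][1] + v1*v1*tau[1][1]
        total += cmath.exp(3j * cmath.pi * q)
    return total

def det5(M):
    """Determinant of a 5x5 matrix by permutation expansion."""
    total = 0
    for p in permutations(range(5)):
        sign = 1
        for a in range(5):
            for b in range(a + 1, 5):
                if p[a] > p[b]:
                    sign = -sign
        term = sign
        for i in range(5):
            term *= M[i][p[i]]
        total += term
    return total

tau = ((1.2j, 0.1 + 0.2j), (0.1 + 0.2j, 1.5j))  # sample point of H_2
u = {(j, k): theta3_const((j / 3, k / 3), tau)
     for j in range(3) for k in range(3)}

# The nine theta constants satisfy u_{jk} = u_{-j,-k}:
assert abs(u[0, 1] - u[0, 2]) < 1e-9
assert abs(u[1, 0] - u[2, 0]) < 1e-9
assert abs(u[1, 1] - u[2, 2]) < 1e-9
assert abs(u[1, 2] - u[2, 1]) < 1e-9

u00, u01, u10, u11, u12 = u[0, 0], u[0, 1], u[1, 0], u[1, 1], u[1, 2]
U = [[u00**2,  u01**2,  u10**2,  u11**2,  u12**2],
     [u01**2, u00*u01, u11*u12, u10*u12, u10*u11],
     [u10**2, u11*u12, u00*u10, u01*u12, u01*u11],
     [u11**2, u10*u12, u01*u12, u00*u11, u01*u10],
     [u12**2, u10*u11, u01*u11, u01*u10, u00*u12]]
assert abs(det5(U)) < 1e-10   # the degree-10 relation of the lemma holds
```

Note that the entry of $U$ in row $\sigma$ and column $\rho$ is $u_{\sigma+\rho}\,u_{\sigma-\rho}$, with indices read in $\frac{1}{3}\mathbb{Z}^2/\mathbb{Z}^2$; this explains the symmetry of $U$.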
At this point, we have left the complex analytic world
and we are back over a more general field $K$.
The natural map
$\mathcal{H} \dashrightarrow \mathcal{B}$ is
10-to-1 and it is given explicitly by $4 \times 4$-minors of $U$.
\begin{corollary}
\label{cor:coblecubic}
Over the Hessian $\mathcal{H}$ of the Burkhardt quartic, the Coble cubic is written as
\begin{equation}
\label{eq:CobleU}
C \,\,\, = \,\,\, {\rm det}
\begin{bmatrix}
f(\mathbf{x}) & g_{01}(\mathbf{x}) & g_{10}(\mathbf{x}) & g_{11}(\mathbf{x}) & g_{12}(\mathbf{x}) \\
u_{01}^2 & u_{00}u_{01} & u_{11}u_{12} & u_{10}u_{12} & u_{10}u_{11}\\
u_{10}^2 & u_{11}u_{12} & u_{00}u_{10} & u_{01}u_{12} & u_{01}u_{11}\\
u_{11}^2 & u_{10}u_{12} & u_{01}u_{12} & u_{00}u_{11} & u_{01}u_{10}\\
u_{12}^2 & u_{10}u_{11} & u_{01}u_{11} & u_{01}u_{10} & u_{00}u_{12}
\end{bmatrix}.
\end{equation}
For $K=\mathbb{C}$, this expresses
$r,s_{01},s_{10},s_{11}, s_{12}$
as modular forms in terms of theta constants.
\end{corollary}
We note that the 10-to-1 map $\mathcal{H} \dashrightarrow \mathcal{B}$
is analogous to the 64-to-1 map in \cite[(7.1)]{RSSS}
from the Satake hypersurface onto the
G\"opel variety. The formula for the Coble cubic in
Corollary \ref{cor:coblecubic} is analogous to the expression
for the Coble quartic in \cite[Theorem 7.1]{RSSS}.
In this section we have now introduced four variants of a universal abelian surface.
Each of these is a five-dimensional projective variety.
Our universal abelian surfaces reside
\begin{compactenum}[\indent (a)]
\item in $\mathbb{P}^3 \times \mathbb{P}^8$ with coordinates $({\bf c}, {\bf x})$,
\item in $\mathcal{B} \times \mathbb{P}^8 \subset \mathbb{P}^4 \times \mathbb{P}^8$ with
coordinates $((r:s_{ij}), {\bf x})$,
\item in
$\mathcal{B} \times \mathbb{P}^8 \subset \mathbb{P}^{39} \times \mathbb{P}^8$ with
coordinates $({\bf m}, {\bf x})$,
\item in $\mathcal{H} \times \mathbb{P}^8 \subset \mathbb{P}^4 \times \mathbb{P}^8$ with
coordinates $({\bf u}, {\bf x})$.
\end{compactenum}
A natural commutative algebra problem is to identify explicit minimal generators for
the bihomogeneous prime ideals of each of these universal abelian surfaces.
For instance, consider case (d).
The ideal contains the polynomial ${\rm det}(U)$
of bidegree $(10,0)$
and eight polynomials of bidegree $(8,2)$,
namely the partial derivatives of $C$
with respect to the $x_{ij}$.
However, these nine do not suffice.
For instance, we have ten linearly independent ideal generators of bidegree
$(3,3)$, namely the $2 \times 2$-minors of the $2 \times 5$-matrix
$$
\begin{bmatrix}
f(\mathbf{x}) & g_{01}(\mathbf{x}) & g_{10}(\mathbf{x}) & g_{11}(\mathbf{x}) & g_{12}(\mathbf{x}) \\
f(\mathbf{u}) & g_{01}(\mathbf{u}) & g_{10}(\mathbf{u}) & g_{11}(\mathbf{u}) & g_{12}(\mathbf{u})
\end{bmatrix}.
$$
These equations have been verified numerically using {\tt Sage} \cite{sage}.
For a fixed general point ${\bf u} \in \mathcal{H}$, these $2 \times 2$-minors
give Gunji's three cubics that were mentioned in Theorem \ref{thm:abelian93}.
For the case (a) here is a concrete conjecture concerning the desired prime ideal.
\begin{conjecture} \label{conj:ninetythree}
The prime ideal of the universal abelian surface in $\mathbb{P}^3 \times \mathbb{P}^8$ is
minimally generated by $93$ polynomials, namely $\,9$
polynomials of bidegree $(4,2)$ and $\,84$ of bidegree $(3,3)$.
\end{conjecture}
The $84$ polynomials of bidegree $(3,3)$ are obtained as the $6 \times 6$-subpfaffians of the matrix
\begin{align} \label{eqn:skew9matrix}
\begin{bmatrix}
0 & -c_0 x_{02} & c_0 x_{01} & -c_1 x_{20} & -c_2 x_{22} & -c_3 x_{21} & c_1 x_{10} & c_3 x_{12} & c_2 x_{11} \\
c_0 x_{02} & 0 & -c_0 x_{00} & -c_3 x_{22} & -c_1 x_{21} & -c_2 x_{20} & c_2 x_{12} & c_1 x_{11} & c_3 x_{10} \\
-c_0 x_{01} & c_0 x_{00} & 0 & -c_2 x_{21} & -c_3 x_{20} & -c_1 x_{22} & c_3 x_{11} & c_2 x_{10} & c_1 x_{12} \\
c_1 x_{20} & c_3 x_{22} & c_2 x_{21} & 0 & -c_0 x_{12} & c_0 x_{11} & -c_1 x_{00} & -c_2 x_{02} & -c_3 x_{01} \\
c_2 x_{22} & c_1 x_{21} & c_3 x_{20} & c_0 x_{12} & 0 & -c_0 x_{10} & -c_3 x_{02} & -c_1 x_{01} & -c_2 x_{00} \\
c_3 x_{21} & c_2 x_{20} & c_1 x_{22} & -c_0 x_{11} & c_0 x_{10} & 0 & -c_2 x_{01} & -c_3 x_{00} & -c_1 x_{02} \\
-c_1 x_{10} & -c_2 x_{12} & -c_3 x_{11} & c_1 x_{00} & c_3 x_{02} & c_2 x_{01} & 0 & -c_0 x_{22} & c_0 x_{21} \\
-c_3 x_{12} & -c_1 x_{11} & -c_2 x_{10} & c_2 x_{02} & c_1 x_{01} & c_3 x_{00} & c_0 x_{22} & 0 & -c_0 x_{20} \\
-c_2 x_{11} & -c_3 x_{10} & -c_1 x_{12} & c_3 x_{01} & c_2 x_{00} & c_1 x_{02} & -c_0 x_{21} & c_0 x_{20} & 0
\end{bmatrix}.
\end{align}
This skew-symmetric $9 \times 9$-matrix was derived by Gruson and Sam \cite[\S 3.2]{GS},
building on the construction in \cite{GSW},
and it is analogous to the elliptic normal curve in (\ref{eq:pfaff5}).
The nine principal $8 \times 8$-subpfaffians of (\ref{eqn:skew9matrix})
are $x_{00} C, x_{01} C, \ldots, x_{22} C$, where $C$ is the
Coble cubic, now regarded as a polynomial in
$({\bf c}, {\bf x})$ of bidegree $(4,3)$.
Conjecture \ref{conj:ninetythree} is analogous to
\cite[Conjecture 8.1]{RSSS}.
The nine polynomials of bidegree $(4,2)$
are $\partial C/\partial x_{00}, \partial C/\partial x_{01},\ldots,\partial C/\partial x_{22}$.
\smallskip
In the remainder of this section we recall the symmetry groups that act on
our varieties. First there is the complex reflection group
denoted by $\mathrm{G}_{32}$ in the classification of Shephard and Todd \cite{ST}.
The group $\mathrm{G}_{32}$ is a subgroup of order $155520$ in $\mathrm{GL}_4(K)$.
Precisely $80$ of its elements are {\it complex reflections}
of order $3$. As a linear transformation on $K^4$, each such complex reflection has a triple eigenvalue $1$ and a single eigenvalue $\omega^{\pm 1} = \frac{1}{2} (-1 \pm \sqrt{-3})$.
The center of $\mathrm{G}_{32}$ is isomorphic to the cyclic group $\mathbb{Z}/6$. In our coordinates
$c_0,c_1,c_2,c_3$, the elements of the center
are scalar multiplications by $6$th roots of unity. Therefore, this gives an action by
$\mathrm{G}_{32}/(\mathbb{Z}/6)$ on the hyperplane arrangement $\mathrm{G}_{32}$ in $\mathbb{P}^3$.
In fact, we have
\begin{equation}
\label{eq:groupiso}
\frac{\mathrm{G}_{32}}{\mathbb{Z}/6} \,\,
\simeq \,\,\mathrm{PSp}_4(\mathbb{F}_3) .
\end{equation}
The linear map
$\mathbb{P}^3 \hookrightarrow \mathbb{P}^{39}$,
$c \mapsto u$, respects the isomorphism (\ref{eq:groupiso}).
The group acts on the $c$-coordinates by the reflections on $K^4$,
and it permutes the coordinates $u_{ijk\ell}$ via its action on the
lines through the origin in $\mathbb{F}_3^4$. Of course, the
group $\mathrm{PSp}_4(\mathbb{F}_3) $ also permutes the
$40$ isotropic planes in $\mathbb{P}^{39}$, and this action is compatible with our
monomial map $\,\mathbb{P}^{39} \dashrightarrow \mathbb{P}^{39}$.
\section{Tropicalizing the Burkhardt Quartic}
\label{sec:tropicalization}
Our goal is to understand the relationship between
classical and tropical moduli spaces for curves of genus two. To this end,
in this section, we study the tropicalization of the Burkhardt
quartic $\mathcal{B}$. This is a $3$-dimensional fan ${\rm trop}(\mathcal{B})$ in
the tropical projective torus~$\mathbb{TP}^{39}$. We shall see that
the {\em tropical compactification} of $\mathcal{B}^{\circ}$
equals the Igusa compactification of~$\mathcal{A}_2(3)$.
The variety $\mathcal{B}$ is the closure of the image of the composition
$\,\mathbb{P}^3 \hookrightarrow \mathbb{P}^{39} \dashrightarrow \mathbb{P}^{39}\,$
of the linear map given by the arrangement $\mathrm{G}_{32}$ and the monomial map given by the $40 {\times} 40$ matrix $A$ that records incidences
of isotropic planes and lines in $\mathbb{F}_3^4$. To be precise, recall that the source $\mathbb{P}^{39}$ has coordinates $e_\ell$ indexed by lines $\ell \subset \mathbb{F}_3^4$, the
target $\mathbb{P}^{39}$ has coordinates $e_W$
indexed by isotropic planes $W \subset \mathbb{F}_3^4$,
and the linear map $A$ is defined by $A(e_\ell) = \sum_{W \supset \ell} e_W$.
This implies the representation
\begin{equation}
\label{eq:tropBerg}
{\rm trop}(\mathcal{B}) \,\, = \,\,
A \cdot{\rm Berg}(\mathrm{G}_{32}) \quad \subset \quad \mathbb{TP}^{39}
\end{equation}
of our tropical threefold as the image under $A$
of the {\em Bergman fan} of the matroid of $\mathrm{G}_{32}$.
By this we mean the unique coarsest fan structure
on the tropical linear space given by the
rank $4$ matroid on the $40 $ hyperplanes of $\mathrm{G}_{32}$.
This Bergman fan is simplicial, as suggested by
the general theory of \cite{ARW}.
We computed its cones using
the software {\tt TropLi} due to Rinc\'on~\cite{Rin}.
\begin{lemma}
The Bergman complex of the rank $4$ matroid of the complex root system
$\mathrm{G}_{32}$ has $170$ vertices, $1800$ edges
and $3360$ triangles, so its reduced Euler characteristic equals $1729$.
The rays and cones of the corresponding Bergman fan
$\,{\rm Berg}(\mathrm{G}_{32}) \subset \mathbb{TP}^{39}$
are described below.
\end{lemma}
This number is the {\em M\"obius number}
of the matroid, which can also be computed
as the product of the {\em exponents} $n_i$ in \cite[Table 2]{OS}
of the complex reflection group~$\mathrm{G}_{32}$:
$$ 1 \cdot 7 \cdot 13 \cdot 19 \,\,=\,\,
1729 \,\, = \,\, 3360 - 1800 + 170 -1 . $$
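Both expressions for this number are elementary to check; the following lines (a trivial sanity check of the arithmetic, not part of the original text) confirm them.

```python
# The Moebius number of the matroid of G_32, computed two ways.
exponents = [1, 7, 13, 19]              # exponents of G_32 from [OS, Table 2]
product = 1
for n in exponents:
    product *= n

vertices, edges, triangles = 170, 1800, 3360   # f-vector of the Bergman complex
euler = triangles - edges + vertices - 1       # reduced Euler characteristic

assert product == euler == 1729
```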
See \cite[(9.2)]{RSSS} for
the corresponding formula
for the Weyl group of $\mathrm{E}_7$
(and genus $3$ curves).
We now discuss the combinatorics of ${\rm Berg}(\mathrm{G}_{32})$.
The space $\mathbb{TP}^{39} = \mathbb{R}^{40}/\mathbb{R}(1,1,\ldots,1)$
is spanned by unit vectors $e_{0001}, e_{0010}, \ldots, e_{1222}$
that are labeled by the $40$ lines in $\mathbb{F}_3^4$ as before.
The $170$ rays of the Bergman fan correspond to
the {\em connected flats} of the matroid of $\mathrm{G}_{32}$,
and these come in three symmetry classes,
according to the rank of the connected~flat:
\begin{enumerate}
\item[(a)] $40$ Bergman rays of rank $1$.
These are spanned by the unit vectors $e_{0001},e_{0010},\ldots,e_{1222}$.
\item[(b)] $90$ Bergman rays of rank $2$, such as
$e_{0001}+e_{0100}+e_{0101}+e_{0102}$, which
represents
$\,\{ c_1,
c_1{+}c_2{+}c_3,
c_1 {+} \omega c_2 {+} \omega c_3,
c_1{+}\omega^2 c_2{+}\omega^2 c_3\}$.
These are the non-isotropic planes in
$\mathbb{F}_3^4$.
\item[(\"a)] $40$ Bergman rays of rank $3$, such as
$$\,e_{0001}+e_{0010}+e_{0011}+e_{1100}+e_{1101}+e_{1102}
+e_{1110}+e_{1111}+e_{1112}+e_{1120}+e_{1121}+e_{1122} .$$
These correspond to the Hesse pencils in $\mathrm{G}_{32}$,
and to the hyperplanes in $\mathbb{F}_3^4$.
Note that the $12$ indices above are perpendicular to
$(0,0,1,2)$ in the symplectic inner product.
\end{enumerate}
The $3360$ triangles of the Bergman complex of $\mathrm{G}_{32}$ also come in
three symmetry classes:
\begin{enumerate}
\item[(aa\"a)] Two orthogonal lines (a) together with a hyperplane (\"a) that
contains them both. This gives $480$ triangles because each hyperplane
contains $12$ orthogonal pairs.
\item[(ab\"a)] A flag consisting of a line (a) contained in a non-isotropic plane (b)
contained in a hyperplane (\"a). There are $1440 $ such triangles
since each of the $90$ planes has
$4 \cdot 4$~choices.
\item[(aab)] Two orthogonal lines (a) together with a non-isotropic plane (b).
The plane contains one of the lines and is orthogonal to the other one.
The count is also $1440$.
\end{enumerate}
The $1800$ edges of the Bergman complex come in five symmetry classes:
there are $240$ edges (aa) given by pairs of orthogonal lines,
$360$ edges (ab) given by lines in non-isotropic planes,
$480$ edges (a\"a) given by lines in hyperplanes,
$360$ edges (b\"a) given by non-isotropic planes in hyperplanes,
and $360$ edges (ab${}^\perp$) obtained by dualizing the previous pairs (b\"a).
Our calculations establish the following statement:
\begin{proposition} \label{prop:CP}
The Bergman complex coincides with the nested set complex for the matroid of $\mathrm{G}_{32}$. In particular, the tropical compactification of the complement of the hyperplane arrangement $\mathrm{G}_{32}$ coincides with the wonderful compactification of de Concini--Procesi \cite{dCP}.
\end{proposition}
See \cite{feichtner} for the relation between tropical compactifications and wonderful compactifications.
We expect that Proposition~\ref{prop:CP} holds for any finite complex reflection group, but we have not attempted to prove this.
The wonderful compactification is obtained by blowing up the irreducible flats
of lowest dimension, then blowing up the strict transforms of the irreducible flats of next lowest dimension, etc. In our case, the smallest irreducible flats are $40$ points,
corresponding to the
Bergman rays (a) and to Family 6 in \cite[Table 1]{GS}.
This first blow-up $\widehat{\mathbb{P}^3}$ is the closure of the graph of the map $\mathbb{P}^3 \dashrightarrow \mathcal{B}$, by \cite[Proposition 3.25]{GS}.
The next smallest irreducible flats are the strict transforms of 90 $\mathbb{P}^1$'s,
corresponding to the Bergman rays (b) and to Family 4 in \cite[Table 1]{GS}.
After that, the only remaining irreducible flats are $40$ hyperplanes,
corresponding to the Bergman rays (\"a) and to Family 2 in \cite[Table 1]{GS}.
Hence the wonderful compactification $\widetilde{\mathbb{P}^3}$ is obtained by blowing up these 90 $\mathbb{P}^1$'s in $\widehat{\mathbb{P}^3}$. The 90 exceptional divisors of $\widetilde{\mathbb{P}^3} \to \widehat{\mathbb{P}^3}$ get contracted to the 45 nodes of $\mathcal{B}$, so we can lift the map $\widetilde{\mathbb{P}^3} \to \mathcal{B}$ to a map $\widetilde{\mathbb{P}^3} \to \widetilde{\mathcal{B}}$,
where $ \widetilde{\mathcal{B}}$
denotes the blow-up of the Burkhardt quartic at its $45$ singular~points.
The hyperplane arrangement complement
$\mathbb{P}^3 \cap \mathbb{G}_m^{39}$ is naturally identified with the moduli space $\mathcal{M}_2(3)^-$ of smooth genus 2 curves with level 3 structure and the choice of a Weierstrass point (or equivalently, the choice of an
odd theta characteristic). See \cite{bolognesi} for more about $\mathcal{M}_2(3)^-$. Hence we have the following commutative diagram
\begin{equation} \label{cd:oddtheta}
\begin{gathered}
\xymatrix{ \mathcal{M}_2(3)^- \ar@{^{(}->}[r] \ar[d] \,&\,
\widetilde{\mathbb{P}^3}
\ar[d] \ar[r] \,&\, \widehat{\mathbb{P}^3} \ar[d] \\
\mathcal{M}_2(3) \ar@{^{(}->}[r] \,&\, \widetilde{\mathcal{B}} \ar[r] \,&\, \mathcal{B} \,&\, } \end{gathered}
\end{equation}
where the vertical maps are generically finite of degree 6, the right
horizontal maps are blow-ups, and the moduli spaces $\mathcal{M}_2(3)^-$ and $\mathcal{M}_2(3)$ are
realized as very affine varieties.
We now compute the tropical Burkhardt quartic
(\ref{eq:tropBerg}), by applying the linear map $A$
to the Bergman fan of $\mathrm{G}_{32}$. Note that the image lands in
the tropicalization ${\rm trop}(\mathcal{T})$ of the toric
variety $\mathcal{T} \subset \mathbb{P}^{39}$.
We regard ${\rm trop}(\mathcal{T})$
as a $24$-dimensional linear subspace of
$\mathbb{TP}^{39}$.
\begin{theorem} The tropical Burkhardt quartic
${\rm trop}(\mathcal{B})$ is the fan over a $2$-dimensional
simplicial complex with $85$ vertices,
$600$ edges and $880$ triangles.
A census appears in Table~\ref{fig:tropburkorbits}.
\end{theorem}
\begin{proof}
Given the $\mathrm{G}_{32}$-symmetry, the following properties of the map $A$ can be verified on representatives of $\mathrm{G}_{32}$-orbits.
The linear map $A \colon \mathbb{TP}^{39} \rightarrow \mathbb{TP}^{39}$
has the property that the image of each vector (\"a) equals twice that
of the corresponding unit vector (a). For instance,
$$ A(e_{0001}{+}e_{0010}{+}e_{0011}{+}e_{1100}{+}e_{1101}
{+}e_{1102}{+}e_{1110}{+}e_{1111}{+}e_{1112}{+}e_{1120}
{+}e_{1121}{+}e_{1122}) \,= \,
2A e_{0012}. $$
Likewise, the $90$ vectors (b) come in natural pairs of
non-isotropic planes that are orthogonal complements.
The corresponding vectors have the same image under $A$. For instance,
\begin{equation}
\label{eq:planepair}
A(e_{0001}+e_{0100}+e_{0101}+e_{0102}) \,\,=\,\,
A(e_{0010}+e_{1000}+e_{1010}+e_{1020}).
\end{equation}
We refer to such a pair of orthogonal non-isotropic planes
as a {\em plane pair}. This explains the $85$ rays of
${\rm trop}(\mathcal{B})$, namely, they are the 40 lines $a$ and the
45 plane pairs $\{b,b^\perp\}$ in $\mathbb{F}_3^4$.
The image of each cone of ${\rm Berg}(\mathrm{G}_{32})$ under
$A$ is a simplicial cone of the same dimension.
There are no non-trivial intersections of image cones. The map
${\rm Berg}(\mathrm{G}_{32}) \rightarrow {\rm trop}(\mathcal{B})$
is a proper covering of fans. The 2-to-1 covering of the rays
induces a 3-to-1 or 4-to-1 covering on each higher-dimensional cone.
The precise combinatorics is summarized in Table~\ref{fig:tropburkorbits}.
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|}
\cline{1-5}
Dimension & Orbits in ${\rm Berg}(\mathrm{G}_{32})$ & The map $A$ & Orbit size in
${\rm trop}(\mathcal{B})$ & Cone type \\
\cline{1-5}
\multirow{3}{*}{$1$} & $40$ (a) & \multirow{2}{*}{$2$ to $1$} & \multirow{2}{*}{$40$} & \multirow{2}{*}{(a)} \\
\cline{2-2}
& $40$ (\"a) & & & \\
\cline{2-5}
& $90 $ (b) & $2$ to $1$ & $45$ & (b) \\
\cline{1-5}
\multirow{5}{*}{$2$} & $240$ (aa) & \multirow{2}{*}{$3$ to $1$} & \multirow{2}{*}{$240$} & \multirow{2}{*}{(aa)}\\
\cline{2-2}
& $480$ (a\"a)& & & \\
\cline{2-5}
& $360$ (ab) & \multirow{3}{*}{$3$ to $1$} & \multirow{3}{*}{$360$} & \multirow{3}{*}{(ab)}\\
\cline{2-2}
& $360$ (b\"a)& & & \\
\cline{2-2}
& $360$ (ab${}^\perp$)& & & \\
\cline{1-5}
\multirow{3}{*}{$3$} & $1440$ (aab) & \multirow{2}{*}{$4$ to $1$} & \multirow{2}{*}{$720$} & \multirow{2}{*}{(aab)} \\
\cline{2-2}
& $1440$ (ab\"a)& & & \\
\cline{2-5}
& $480$ (aa\"a) & $3$ to $1$ & $160$ & (aaa) \\
\cline{1-5}
\end{tabular}
\caption{Orbits of cones in the tropical Burkhardt quartic}
\label{fig:tropburkorbits}
\smallskip
\end{table}
The types of the cones are named by replacing
\"a with a, and b${}^\perp$ with b.
In total, there are $40$ vertices of type (a)
and $45$ vertices of type (b).
There are $240$ edges of type (aa),
corresponding to pairs of lines
$a \perp a'$, and $360$ edges of type (ab),
corresponding to inclusions $a \subset b$.
Finally, there are $160$ triangles of type (aaa) and
$720$ triangles of type (aab).
\end{proof}
\begin{remark} \rm
There is a bijection between the $45$ rays of type (b) in $\mathrm{trop}(\mathcal{B})$ and the $45$ singular points in $\mathcal{B}$. Namely, each vector of type (b) can be written such that $16$ of its coordinates are $1$ and the other coordinates are $0$.
These $16$ coordinates are exactly the $16$ zero coordinates in the corresponding singular point.
Note that the zero coordinates of the particular singular point
in (\ref{eq:singpoint}) form precisely the support of
the vector (\ref{eq:planepair}). The number $16 $
arises because each of the $45$ plane pairs $\{b,b^\perp\}$ determines
$16$ of the $40$ isotropic planes: take any
vector in $b$ and any vector in $b^\perp$, and these two will span an isotropic plane. \hfill \qed
\end{remark}
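The counts in this remark can also be checked by machine. The following sketch (again our own, with one fixed symplectic form that may differ from the paper's convention by a change of basis) verifies that the $90$ non-isotropic planes form $45$ orthogonal pairs, and that each pair spans exactly $16$ of the $40$ isotropic planes.

```python
import itertools

p = 3

def normalize(v):
    # line representative: first nonzero entry scaled to 1
    for c in v:
        if c % p:
            inv = pow(c, p - 2, p)
            return tuple((x * inv) % p for x in v)

def omega(x, y):
    # an assumed symplectic form on F_3^4; any nondegenerate choice works
    return (x[0]*y[2] - x[2]*y[0] + x[1]*y[3] - x[3]*y[1]) % p

vectors = [v for v in itertools.product(range(p), repeat=4) if any(v)]
lines = sorted(set(normalize(v) for v in vectors))

def span(u, v):
    return frozenset(normalize(tuple((a*u[i] + b*v[i]) % p for i in range(4)))
                     for a in range(p) for b in range(p) if a or b)

planes = set(span(u, v) for u, v in itertools.combinations(lines, 2))
iso = set(W for W in planes if omega(*sorted(W)[:2]) == 0)
noniso = [W for W in planes if W not in iso]

def perp(W):
    # the lines orthogonal to every line of the plane W
    u, v = sorted(W)[:2]
    return frozenset(l for l in lines if omega(l, u) == 0 and omega(l, v) == 0)

# the 90 non-isotropic planes fall into 45 orthogonal plane pairs {b, b-perp}
pairs = set(frozenset([W, perp(W)]) for W in noniso)
assert len(pairs) == 45

# each plane pair spans exactly 16 distinct isotropic planes
for pair in pairs:
    b, bp = tuple(pair)
    spanned = set(span(l, m) for l in b for m in bp)
    assert len(spanned) == 16 and spanned <= iso
```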
\smallskip
We next consider the {\em tropical compactification}
$\overline{\mathcal{B}}$
of the open Burkhardt quartic $\mathcal{B}^{\circ} = \mathcal{M}_2(3)$.
By definition, $\overline{\mathcal{B}}$ is the closure
of $\mathcal{B}^{\circ} \subset \mathbb{G}_m^{24}$ inside of the
toric variety defined by the fan ${\rm trop}(\mathcal{B})$.
This toric variety is smooth because
the rays of the two types of maximal cones (aaa) and (aab)
can be completed to a basis of the lattice
$\mathbb{Z}^{24}$ spanned by all $85$ rays.
\begin{proposition} \label{prop:tropB-SNC}
The tropical compactification $\overline{\mathcal{B}}$
is sch\"on in the sense of Tevelev \cite{Tev}.
The boundary $\overline{\mathcal{B}}\setminus \mathcal{B}^{\circ}$
is a normal crossing divisor consisting of $85$
irreducible smooth surfaces.
\end{proposition}
\begin{proof}
Since our fan on ${\rm trop}(\mathcal{B})$
defines a smooth toric variety, it suffices to show that
all initial varieties $V({\rm in}_v (I_\mathcal{B})) $
are smooth and connected in the torus $\mathbb{G}_m^{39}$ \cite[Lemma 2.7]{hacking}.
There are six symmetry classes of initial ideals
${\rm in}_v (I_\mathcal{B})$. Each of them is generated by
linear binomials and trinomials together with one non-linear
polynomial $f$, obtained from the quartic by possibly removing monomial factors.
We present representatives for the six classes.
The plane pair $ \{b,b^\perp\}$ appearing in three of the cases
is precisely the one displayed in (\ref{eq:planepair}).
\begin{itemize}
\item[(a)] For the vertex given by the line $(0,0,0,1)$ we take the weight vector
$ v = A e_{0001} = e_0+e_1+e_2+e_3$. Then
$f = m_0 m_4^3-3 m_0 m_4 m_7 m_{10}
+m_0 m_7^3+m_0 m_{10}^3-3 \sqrt{3}m_1 m_4m_7 m_{10} $.
This bihomogeneous polynomial defines a smooth surface in a quotient torus
$\mathbb{G}_m^3$.
\item[(b)] For the vertex $ \{b,b^\perp\}$ we take the vector (\ref{eq:planepair}).
This is the incidence vector of the zero coordinates in (\ref{eq:singpoint}), namely
$v = e_0+e_1+e_2+e_3+e_4+e_5+e_6+e_{13}+
e_{14}+e_{15}+e_{16}+e_{17}+e_{18}+e_{19}+e_{20}+e_{21}$.
The resulting non-linear polynomial is
$ f = - m_0 m_{13}+ m_1 m_4$.
\item[(aa)] For the edge given by the orthogonal lines
$(0,0,0,1)$ and $ (0,0,1,0)$ in $\mathbb{F}_3^4$, we take
$v = 2 e_0+e_1+e_2+e_3+e_4+e_5+e_6 $, and we get
$f = m_0 m_7^3+m_0 m_{10}^3-3 \sqrt{3} m_1 m_4 m_7 m_{10}$.
\item[(ab)] For the edge given by
$(0,0,0,1)$ and $\{b,b^\perp\}$, we take
$v = 2 e_0+2 e_1+2 e_2+2 e_3+e_4+e_5+e_6+e_{13}+e_{14}+e_{15}+e_{16}
+e_{17}+e_{18}+e_{19}+e_{20}+e_{21}$, and we get
$f = - m_0 m_{13}+m_1 m_4$.
\item[(aaa)] For the triangle given by
$(0,0,0,1)$, $(0,0,1,0)$ and $(0,0,1,1)$, we take
$v =
A(e_{0001} {+}e_{0010} $ $
{+} e_{0011}) =
3e_0{+}e_1{+}e_2{+}e_3{+}e_4{+}e_5{+}e_6{+}e_7{+}e_8{+}e_9$,
and we get $f =m_0 m_{10}^2- 3 \sqrt{3} m_1 m_4 m_7$.
\item[(aab)] For the triangle given by
$(0,0,0,1)$, $(0,0,1,0)$ and $\{b,b^\perp\}$, we take
$v = 3 e_0{+}2 e_1{+}2 e_2 + 2 e_3{+}2 e_4{+}2 e_5{+}2 e_6{+}e_{13}
{+}e_{14}{+}e_{15}{+}e_{16}{+}e_{17}{+}e_{18}{+}e_{19}{+}e_{20}{+}e_{21}$. Here,
$f {=} - m_{0} m_{13}{+}m_1 m_4$.
\end{itemize}
Note that the polynomials $f$ are the same in the cases (b), (ab) and (aab),
but the varieties $V({\rm in}_v (I_{\mathcal{B}})) $
are different because of the $35$ linear relations.
In cases (b) and (ab) we have both linear trinomials and linear binomials,
while in case (aab) they are all binomials. In all six cases
the hypersurface $\{f=0\}$ has no singular points
with all coordinates nonzero.
\end{proof}
Our final goal in this section is to equate the tropical compactification $\overline{\mathcal{B}}$
with the blown up Burkhardt quartic $\widetilde{\mathcal{B}}$.
By \cite[Theorem 5.7.2]{hunt}, we can identify $\widetilde{\mathcal{B}}$
with the Igusa desingularization of the Baily--Borel--Satake compactification $\mathcal{A}_2(3)^{\rm Sat}$ of $\mathcal{A}_2(3)$. The latter can be constructed as follows.
Let $\widehat{\mathcal{B}}$ be the projective dual variety to $\mathcal{B} \subset \mathbb{P}^4$.
The canonical birational map $\mathcal{B} \dashrightarrow \widehat{\mathcal{B}}$ is defined outside of the 45 nodes. Since $\mathcal{B}$ is a normal variety, this map factors through the normalization of $\widehat{\mathcal{B}}$, which can be identified with $\mathcal{A}_2(3)^{\rm Sat}$ by \cite[\S 4]{FSM1}. The closure of the graph of the birational map $\mathcal{B} \dashrightarrow \mathcal{A}_2(3)^{\rm Sat}$ is the blow-up of $\mathcal{B}$ at its indeterminacy locus, i.e., the 45 nodes.
Using \cite[Theorem 3.1]{vdG}, we may identify this with $\widetilde{\mathcal{B}}$. By symmetry, we could also view this as the closure of the image of the inverse birational map. This realizes $\widetilde{\mathcal{B}}$ as a blow-up of $\mathcal{A}_2(3)^{\rm Sat}$, and in particular, the map blows up the Satake boundary $\mathcal{A}_2(3)^{\rm Sat} \backslash \mathcal{A}_2(3)$ which has 40 components all isomorphic to $\mathcal{A}_1(3)^{\rm Sat} \cong \mathbb{P}^1$.
\begin{lemma} \label{lem:partialtrop-M23}
The moduli space $\mathcal{A}_2(3)$ coincides with
the partial compactification of $\mathcal{M}_2(3)$ given by the $1$-dimensional subfan of $\mathrm{trop}(\mathcal{B})$ that consists of the $45$ rays of type (b).
\end{lemma}
\begin{proof}
Let $M$ be the partial compactification in question.
Let $\widetilde{\mathbb{P}^3}$ be the wonderful compactification
for $\mathrm{G}_{32}$ as described above.
The preimage of the 45 rays of type (b) in ${\rm Berg}(\mathrm{G}_{32})$ consists of 90 rays, and the resulting partial tropical compactification $P$ of $\mathbb{P}^3 \setminus (40 \text{ hyperplanes})$ is the complement of the strict transforms of the reflection hyperplanes in $\widetilde{\mathbb{P}^3}$. In the map $P \to \mathcal{B}$, the 90 divisors are contracted to the 45 singular points (2 divisors to each point). We have a map $P \to M$ which maps the 90 divisors of $P$ to the 45 divisors of $M$, and hence the birational map $M \dashrightarrow \mathcal{B}$ (given by the identity map on $\mathcal{M}_2(3)$) extends to a regular map $M \to \mathcal{B}$ which contracts the 45 divisors to the 45 singular points.
By the universal property of blow-ups, there exists
a map $M \to \widetilde{\mathcal{B}}$ which takes each of the 45 divisors to one of the 45 exceptional divisors of the
blow-up $\widetilde{\mathcal{B}} \to \mathcal{B}$.
From our previous discussion, the image of the map $P \to \widetilde{\mathcal{B}}$ equals
$\mathcal{A}_2(3)$. Since this map factors through $M$, the image of the map $M \to \widetilde{\mathcal{B}}$ is also $\mathcal{A}_2(3)$. This map has finite fibers: it suffices to check this on the 45 divisors, where we can reduce to considering the map $P \to \widetilde{\mathcal{B}}$; in the map $\widetilde{\mathbb{P}^3} \to \widetilde{\mathcal{B}}$, the inverse image of an exceptional divisor is 2 disjoint copies of $\mathbb{P}^1 \times \mathbb{P}^1$ and any surjective endomorphism of $\mathbb{P}^1 \times \mathbb{P}^1$ has finite fibers. The map is birational and $\mathcal{A}_2(3)$ is smooth, so, by Zariski's Main Theorem \cite[\S III.9]{mumford}, the map is an isomorphism.
\end{proof}
\begin{theorem} \label{thm:IgusaBBS}
The intrinsic torus of $\mathcal{M}_2(3) = \mathcal{B}^{\circ}$ is the dense torus $\mathbb{G}_m^{24}$
of the toric variety $\mathcal{T}$ described in Proposition \ref{prop:toricvarT}.
The tropical compactification of $\mathcal{M}_2(3)$ provided by $\mathrm{trop}(\mathcal{B})$ is the Igusa desingularization $\widetilde{\mathcal{B}}$ of the Baily--Borel--Satake compactification $\mathcal{A}_2(3)^{\rm Sat}$ of $\mathcal{A}_2(3)$.
\end{theorem}
\begin{proof}
The first statement follows from Lemma~\ref{lem:intrinsictorusimage} and Proposition~\ref{prop:toricvarT}(a).
By Lemma~\ref{lem:partialtrop-M23}, $\overline{\mathcal{B}}$ is a compactification of $\mathcal{A}_2(3)$. The boundary of the compactification $\mathcal{M}_2(3) \subset \overline{\mathcal{B}}$ is a normal crossings divisor (Proposition~\ref{prop:tropB-SNC}), so the same is true for the boundary of $\mathcal{A}_2(3) \subset \overline{\mathcal{B}}$, and hence it is toroidal. So there exists a map $f \colon \overline{\mathcal{B}} \to \mathcal{A}_2(3)^{\rm Sat}$ that is the identity on $\mathcal{A}_2(3)$ \cite[Proposition III.15.4(3)]{borelji}. This map is unique and surjective.
From what we said above, the Satake boundary $\mathcal{A}_2(3)^{\rm Sat} \backslash \mathcal{A}_2(3)$ has
$40$ components all isomorphic to $\mathbb{P}^1$. Also, $\overline{\mathcal{B}} \backslash \mathcal{A}_2(3)$ consists of
$40$ divisors. Hence the map $f$ contracts the
$40$ divisors to these $\mathbb{P}^1$'s. By the universal property of blow-ups, there is a unique map $\tilde{f} \colon \overline{\mathcal{B}} \to \widetilde{\mathcal{B}}$ that commutes with the blow-up map. Then $\tilde{f}$ is birational and surjective. We know this map is an isomorphism on $\mathcal{A}_2(3)$, and the complements of this open subset in the domain and the target are unions of $\mathbb{P}^2$'s. Any surjective endomorphism of $\mathbb{P}^2$ has finite fibers, and hence $\tilde{f}$ is an isomorphism by Zariski's Main Theorem \cite[\S III.9]{mumford} since $\widetilde{\mathcal{B}}$ is smooth.
\end{proof}
\section{Moduli of Genus Two Curves}
\label{sec:genus2moduli}
The moduli space $\mathcal{M}_{g,n}^{\rm tr}$ of tropical curves of genus $g$ with $n$ marked points is a {\it stacky fan}. This was shown by
Brannetti, Melo and Viviani \cite{BMV} and Chan \cite{Cha}.
This space has been studied by many authors; see \cite{caporaso1, caporaso2} for a sample of results.
Here, a {\it tropical curve} is a triple $(\Gamma,w,\ell)$,
where $\Gamma =(V,E)$ is a connected graph,
$w$ is a weight function $V\to \mathbb{Z}_{\geq{}0}$,
and $\ell$ is a length function $E\to \mathbb{R}_{\geq{}0}$.
The {\it genus} of a tropical curve is the sum of the weights of all vertices plus the genus (i.e., the first Betti number) of the graph $\Gamma{}$.
In addition to identifications induced by graph automorphisms,
two tropical curves are isomorphic if one can be obtained from the other
by a sequence of the following operations and their inverses:
\begin{compactitem}
\item Removing a leaf of weight $0$, together with the only edge connected to it.
\item Removing a vertex of degree $2$ of weight $0$, and replacing the two edges connected to it with an edge whose length is the sum of the two old edges.
\item Removing an edge of length $0$, and combining the two vertices connected by that edge. The weight of the new vertex is the sum of the weights of the two old vertices.
\end{compactitem}
In this way, every tropical curve of genus $\geq{}2$ is uniquely represented by a {\it minimal skeleton},
i.e., a tropical curve with no weight-$0$ vertices of degree $\leq{}2$ and no edges of length $0$.
The moduli space of tropical curves with a fixed {\em combinatorial type} $(\Gamma{},w)$ is $\mathbb{R}_{>0}^{|E|}/\mathrm{Aut}(\Gamma{})$,
where the coordinates of $\mathbb{R}_{>0}^{|E|}$ are the lengths of the edges.
The cones for all combinatorial types are glued together to form $\mathcal{M}_g^{\rm tr}$.
The boundary of the cone of a combinatorial type $(\Gamma,w)$
corresponds to tropical curves with at least one edge of length $0$.
Contracting that edge gives a combinatorial type $(\Gamma',w')$.
Then, the cone for $(\Gamma{}',w')$ is glued along the boundary of the cone for $(\Gamma{},w)$ in the natural way.
More generally, a tropical curve with {\it marked points} is defined similarly,
but allowing rays connecting a vertex with leaves ``at infinity''.
The following construction maps curves over a valued field to tropical curves.
It is fundamental for \cite{ACP, BPR}.
Our description follows \cite[Lemma - Definition 2.2.4]{viviani}.
Let $R$ be a complete discrete valuation ring with maximal ideal $\mathfrak{m}$. Let $K$ be its field of fractions, $k = R / \mathfrak{m}$ its residue field, and $t \in R$ a uniformizing parameter.
Fix a genus $g$ curve $C$ with $n$ marked points over $K$.
The curve $C$ is a morphism ${\rm Spec}\ K \to \mathcal{M}_{g,n}$. Since the stack $\overline{\mathcal{M}}_{g,n}$ is proper (this is the content of the stable reduction theorem), there is a finite extension $K'$ of $K$ with discrete valuation ring $R'$ such that this morphism extends uniquely to a morphism ${\rm Spec}\ R' \to \overline{\mathcal{M}}_{g,n}$ (we call this a {\it stable model} of $C$). Here we renormalize the valuation on $R'$ so that its value group is $\mathbb{Z}$. Reducing modulo the maximal ideal $\mathfrak{m}'$ of $R'$ gives us a point ${\rm Spec}\ k \to \overline{\mathcal{M}}_{g,n}$. By definition, this is a {\em stable curve} $C_k$ over $k$. We remark that the stable model may not be unique, but the stable curve is unique.
Since such a stable curve has at worst nodal singularities, we can construct a dual graph as follows.
For each genus $h$ component of $C_k$, we draw a vertex of weight $h$.
For each node of $C_k$, we draw an edge between the two components that meet there (this might be a loop if the node comes from a self-intersection). If a component contains a marked point, then we attach a leaf at infinity to the corresponding vertex. The stable condition translates to the fact that the dual graph is a minimal skeleton as above. Finally, each node, when considered as a point in $C_{R'}$, is \'etale locally of the form $xy = t^\ell$ for some positive integer $\ell$. We then assign the length $\ell / d$ to the corresponding edge, where $d$ is the degree of the field extension $K \subset K'$. In this way, we have defined a function
\begin{equation}
\label{eq:classicaltotropical}
\mathcal{M}_{g,n}(K) \to \mathcal{M}_{g,n}^{\rm tr}.
\end{equation}
Here is a concrete illustration of this function for $g = 0$ and $n=4$.
\begin{example} \label{ex:stevens4points} \rm
Let $K = \mathbb{C}(\!(t)\!)$ and $R = \mathbb{C}[\![t]\!]$ and consider the four points in $\mathbb{P}^1_K$ given by
$(1\!:\!p(t)), (1\!:\!q(t)), (1\!:\!a), (1\!:\!b)$ where $a, b \in \mathbb{C}$ are generic and ${\rm val}(p(t)), {\rm val}(q(t)) > 0$. Let $x,y$ be the coordinates on $\mathbb{P}^1$. Naively, this gives us four points in $\mathbb{P}^1_R$, but it is not a stable model since $p(0) = q(0) = 0$ and so two points coincide in the special fiber.
The fix is to blow up the arithmetic surface $\mathbb{P}^1_R$ at the ideal $\langle y-p(t)x, y-q(t)x \rangle$. We embed this blow-up into $\mathbb{P}^1_R \times_R \mathbb{P}^1_R$, where the latter $\mathbb{P}^1_R$ has coordinates $w,z$, as the hypersurface given by $w(y-q(t)x) = z(y-p(t)x)$. The special fiber is the nodal curve given by $y(w-z) = 0$. We wish to understand the \'etale local equation for the node cut out by $y=w-z=0$. To do this, set $x=z=1$ and consider the defining equation $y(w-1) + p(t)-q(t)w=0$. Now substitute $w' = w-1$ and $y' = y-q(t)$ to get $y'w' + (p(t)-q(t)) = 0$. Hence the dual graph is a line segment of length ${\rm val}(p(t)-q(t))$
with both vertices having weight $0$.
\qed
\end{example}
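To make this example fully concrete, here is a small script (our own illustration; the polynomials $p, q$ below are hypothetical choices satisfying the hypotheses ${\rm val}(p), {\rm val}(q) > 0$) that computes the resulting edge length.

```python
def val(poly):
    # t-adic valuation of a polynomial given as {exponent: coefficient}
    support = [e for e, c in poly.items() if c != 0]
    return min(support) if support else float('inf')

# hypothetical instances of the example's data: p(t) = t and q(t) = t + 2t^3
pp = {1: 1}
qq = {1: 1, 3: 2}
assert 0 < val(pp) and 0 < val(qq)   # both points reduce to 0 in the special fiber

diff = {e: pp.get(e, 0) - qq.get(e, 0) for e in set(pp) | set(qq)}
assert val(diff) == 3                # the dual graph is a segment of length 3
```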
Evaluating the map (\ref{eq:classicaltotropical}) in general is a challenging
computer algebra problem:
how does one compute the metric graph from a smooth curve $C$ that
is given by explicit polynomial equations over $K$?
This section represents a contribution to this problem for curves of genus $2$.
As a warm-up for our study of genus 2 curves, let us first consider the genus 1 case.
\begin{example} \label{ex:bernds4points} \rm
An elliptic curve $C$ can be defined by giving
four points in $\mathbb{P}^1$. The curve is the double cover
of $\mathbb{P}^1$ branched at those four points.
This gives us a map $\mathcal{M}_{0,4} \rightarrow \mathcal{M}_1$,
which is well-defined over our field $K$.
The map is given explicitly
by the following formula for the $j$-invariant of $C$ in terms of the cross ratio
$\lambda{}$ of the four ramification points (see \cite[\S 3]{Tev2}):
\begin{equation}
\label{eq:jinv}
j \,\,=\,\, 256\frac{(\lambda{}^2-\lambda{}+1)^3}{\lambda{}^2(\lambda{}-1)^2}.
\end{equation}
We now pass to the tropicalization by constructing a commutative square
\begin{equation} \label{eqn:jinv-comm}
\begin{diagram}
\mathcal{M}_{0,4}(K) & \rTo & \mathcal{M}_{0,4}^{\rm tr} \\
\dTo & & \dTo \\
\mathcal{M}_1(K) & \rTo & \mathcal{M}_1^{\rm tr}
\end{diagram}
\end{equation}
The horizontal maps are instances of
(\ref{eq:classicaltotropical}), and
the left vertical map is (\ref{eq:jinv}).
Our task is to define the right vertical map.
The ingredients are
the trees and tropical curves in Table~\ref{table:genus1}:
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|}
\cline{1-2}
Tropical curve of genus $1$ & Tree with $4$ leaves\\
\cline{1-2}
\includegraphics{figure-tropical-curve-genus1-2} & \includegraphics{figure-tree-4leaves-2} \\
\cline{1-2}
\includegraphics{figure-tropical-curve-genus1-1} & \includegraphics{figure-tree-4leaves-1} \\
\cline{1-2}
\end{tabular}
\caption{Trees on four taxa and tropical curves of genus 1}
\label{table:genus1}
\end{table}
A point in $\mathcal{M}_{0,4}^{\rm tr}$ can be represented by
a phylogenetic tree
with taxa $1,2,3,4$. Writing $\nu_{ij}$ for half the
distance from leaf $i$ to leaf $j$ in that tree, the unique
interior edge has length
$$ \ell \,\, = \,\,
{\rm max} \bigl\{
\nu_{12}+\nu_{34} - \nu_{14} - \nu_{23},\,
\nu_{13}+\nu_{24} - \nu_{12} - \nu_{34},\,
\nu_{14}+\nu_{23} - \nu_{13} - \nu_{24} \bigr\}. $$
Suppose we represent a point in $\mathcal{M}_{0,4}$
by four scalars, $x_1,x_2,x_3, x_4 \in K$, as in Example~\ref{ex:stevens4points}.
Then its image in
$\mathcal{M}_{0,4}^{\rm tr}$ is the phylogenetic tree obtained by setting
\begin{equation}
\label{eq:nu} \nu_{ij} = - {\rm val} (x_i - x_j) .
\end{equation}
The square \eqref{eqn:jinv-comm} becomes commutative if the right vertical map takes trees with interior edge length $\ell > 0$ to the
cycle of length $2 \ell$, and it takes
the star tree $(\ell = 0)$
to the node marked $1$.
To see this, we recall that
the tropical curve contains a cycle
of length $- {\rm val}(j)$, where $j$ is the $j$-invariant.
This is a standard fact (see \cite[\S 7]{BPR})
from the theory of elliptic curves over $K$.
Suppose the four given points in $\mathbb{P}^1$
are $0,1, \infty,\lambda$, and that $\lambda$ and $0$ are neighbors in the tree. This means $\mathrm{val}(\lambda)>0$. As desired, the length of our cycle is
\begin{align*}
-\mathrm{val}(j) &= -3\mathrm{val}(\lambda^2-\lambda+1)+2\mathrm{val}(\lambda{})+2\mathrm{val}(\lambda{}-1) \,=\, 0+2\mathrm{val}(\lambda{})+0\,=\,
2 \mathrm{val}(\lambda{}).
\end{align*}
The other case, when $\lambda $ and $0$ are not neighbors in the tree,
follows from the fact that the rational function of $\lambda$ in (\ref{eq:jinv})
is invariant under permuting the four ramification points.
In this example, the one-dimensional fan $\mathcal{M}_{0,4}^{\rm tr}$ serves
as a moduli space for tropical elliptic curves. A variant where the fibers
are elliptic normal curves is shown in Figure~\ref{fig:pentagonpentagram}.
In both situations, all maximal cones
correspond to elliptic curves over $K$ with bad reduction.
\qed
\end{example}
Moving on to genus $2$ curves, we shall now focus on
the tropical spaces $\mathcal{M}_2^{\rm tr}$ and $\mathcal{M}_{0,6}^{\rm tr}$.
There are seven combinatorial types for genus $2$ tropical curves.
Their poset is shown in \cite[Figure 4]{Cha}.
The seven types are drawn in the second column of Table~\ref{table-tropical-moduli}.
The stacky fan $\mathcal{M}_2^{\rm tr}$ is the cone over the
two-dimensional cell complex shown in Figure~\ref{figure-tropical-moduli}.
Note the identifications.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.35\textwidth]{figure-tropical-moduli}
\end{center}
\vspace{-0.3in}
\caption{The moduli space of genus $2$ tropical curves}
\label{figure-tropical-moduli}
\end{figure}
The tropical moduli space $\mathcal{M}_{0,6}^{\rm tr}$ is the space
of phylogenetic trees on six taxa.
A concrete model, embedded in $\mathbb{TP}^{14}$,
is the $3$-dimensional fan ${\rm trop}(\mathcal{M}_{0,6})$
seen in Theorem \ref{thm:tropsegrecubic}.
Combinatorially, it agrees with the tropical Grassmannian ${\rm Gr}(2,6)$
as described in \cite[Example 4.1]{SS}, so its cones
correspond to trees with six leaves.
The fan $\mathcal{M}_{0,6}^{\rm tr}$ has one zero-dimensional cone of type (1),
$25 = 10+15 $ rays of types (2) and (3),
$105 = 60+45$ two-dimensional cones of types (4) and (5),
and $105 = 90 + 15$ three-dimensional cones of types (6) and (7).
The corresponding combinatorial types of trees are
depicted in the last column of Table~\ref{table-tropical-moduli}.
Table~\ref{table-tropical-moduli} shows that there is a combinatorial correspondence
between the types of cones of the tropical Burkhardt quartic
${\rm trop}(\mathcal{B}) $ in Table~\ref{fig:tropburkorbits} and the types of cones in
$\mathcal{M}_2^{\rm tr}$ and $\mathcal{M}_{0,6}^{\rm tr}$.
We seek to give a precise explanation of this correspondence
in terms of algebraic geometry. At the moment we can carry
this out for level $2$ but we do not yet have a proof for level $3$.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\cline{1-4}
Label & Tropical curve of genus $2$ & Burkhardt cone & Tree with $6$ leaves \\
\cline{1-4}
(1) & \includegraphics{figure1} & origin & \includegraphics{figure-tree-1} \\
\cline{1-4}
(2) & \includegraphics{figure2} & (b) & \includegraphics{figure-tree-2} \\
\cline{1-4}
(3) & \includegraphics{figure3} & (a) & \includegraphics{figure-tree-3} \\
\cline{1-4}
(4) & \includegraphics{figure4} & (ab) & \includegraphics{figure-tree-4} \\
\cline{1-4}
(5) & \includegraphics{figure5} & (aa) & \includegraphics{figure-tree-5} \\
\cline{1-4}
(6) & \includegraphics{figure6} & (aab) & \includegraphics{figure-tree-6} \\
\cline{1-4}
(7) & \includegraphics{figure7} & (aaa) & \includegraphics{figure-tree-7} \\
\cline{1-4}
\end{tabular}
\caption{Correspondence between tropical curves, cones of ${\rm trop}(\mathcal{B})$,
and metric trees.}
\label{table-tropical-moduli}
\end{table}
\begin{theorem} \label{thm:level2trop}
Let $K$ be a complete nonarchimedean field.
\begin{compactenum}[\rm (a)]
\item There is a commutative square
\begin{equation}
\begin{diagram}
\mathcal{M}_{0,6}(K) & \rTo & \mathcal{M}_{0,6}^{\rm tr} \\
\dTo & & \dTo \\
\mathcal{M}_2(K) & \rTo & \mathcal{M}_2^{\rm tr}
\end{diagram}
\end{equation}
The left vertical map sends $6$ points in $\mathbb{P}^1$ to the genus $2$ hyperelliptic curve with these ramification points. The horizontal maps send a curve (with or without marked points) to its tropical curve (with or without leaves at infinity). The right vertical map is a morphism of generalized cone complexes
relating the second and fourth columns of Table~\ref{table-tropical-moduli}.
\item The top horizontal map can be described in an alternative way: under the embedding of
$\mathcal{M}_{0,6} $ into $ \mathbb{P}^{14}$ given by \eqref{eq:segremap}, take the valuations of the $15$ coordinates $m_0,m_1,\ldots,m_{14}$.
\end{compactenum}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:level2trop}]
We start with (a). Let $C$ be a genus $2$ curve over $K$ and let $p_1, \dots, p_6 \in \mathbb{P}^1_K$ be the
branch points of the double cover $C \to \mathbb{P}^1$
induced from the canonical divisor. Let $R'$ be a discrete valuation ring over which a stable model of both $C$ and $(\mathbb{P}^1, p_1, \dots, p_6)$ can be defined and let $k$ be its residue field. The fact that the combinatorial types of dual graphs for $C$ and the marked curve $(\mathbb{P}^1, p_1, \dots, p_6)$ match up as in Table~\ref{table-tropical-moduli} is clear from the proof of \cite[Corollary 2.5]{avritzer} which constructs the stable $k$-curve of $C$ from that of $(\mathbb{P}^1, p_1, \dots, p_6)$. There is an obvious bijection of edges between the combinatorial types in all cases. We claim that the edge length coming from the \'etale neighborhood of nodal singularities is halved for curves of type (2) and is doubled for curves of type (3) from Table~\ref{table-tropical-moduli}: the description and proof for the other types can be reduced to these two cases.
First consider curves of type (3). Our stable genus $0$ curve consists of the union of two $\mathbb{P}^1$'s meeting in a point. One has $4$ marked points and the other has $2$ marked points. This arises from $6$ distinct points in $\mathbb{P}^1_{R'}$ such that exactly $2$ of them coincide after passing to the residue field. To build a stable model (cf. Example~\ref{ex:stevens4points}), we blow up the point of intersection in the special fiber of $\mathbb{P}^1_{R'}$ to get an arithmetic surface $\tilde{P}_{R'}$. Let $E$ be the double cover of the first $\mathbb{P}^1_k$ along the $4$ marked points, and let $E'$ be a copy of $\mathbb{P}^1_k$ mapping to the second $\mathbb{P}^1_k$ so that it is ramified over the $2$ marked points. Over the point of intersection, both $E$ and $E'$ are unramified, and we glue together the two preimages (there are two ways to do this, but the choice won't matter). Then $E \cup E'$ is a semistable (but not stable) curve which is the special fiber of an admissible double cover $C_{R'} \to \tilde{P}_{R'}$. Suppose that the node in the special fiber of $\tilde{P}_{R'}$ \'etale locally is $xy = t^\ell$. In a small neighborhood of this node, there are no ramification points.
Thus, a small neighborhood of each of these two points of intersection in $C_{R'}$ is isomorphic to a small neighborhood of the node in $\tilde{P}_{R'}$ and hence \'etale locally looks like $xy = t^{\ell}$. Finally, we have to contract $E'$ to a single point to get a stable curve. The result is that the two nodes become one which \'etale locally looks like $xy = t^{2\ell}$.
Now consider curves of type (2). Use the notation from the previous case. The semistable model $C_{R'}$ over ${\rm Spec}\ R'$ has a hyperelliptic involution whose quotient is the union of two $\mathbb{P}^1_{R'}$'s. At the node of $C_{R'}$, which locally looks like $R'[\![x,y]\!]/\langle xy-t^m \rangle$
for some $m$, the involution negates $x$ and $y$ since it preserves the two components of $C_{R'}$. The ring of invariants
is $R'[\![u,v]\!]/\langle uv-t^{2m} \rangle $ where $u{=}x^2$, $v{=}y^2$. This is the local picture for the nodal genus $0$~curve.
The result above can also be deduced from Caporaso's
general theory in \cite[\S 2]{Cap}.
For a combinatorial illustration of type (6) see Chan's Figure 1 in \cite{Cha2}.
The two leftmost and two rightmost edges in her upstairs graph have been contracted away.
What is left is a ``barbell" graph with five horizontal edges of lengths $a, a, b, c, c$, mapping harmonically to a downstairs graph of edge lengths $a, 2b, c$.
Here we see both of the stretching factors represented in different parts of this harmonic morphism:
a $2$-edge cycle of total length $a+a$ maps to an edge of length $a$, and a single edge of length $b$ maps to an edge of length
$ 2b$ downstairs.
\smallskip
Now we consider (b). We need to argue that the
internal edge lengths can be computed from the $15$ quantities
${\rm val}(m_i)$, in a manner that is consistent with the description above.
For genus 1 curves this is precisely the consistency between Examples~\ref{ex:stevens4points}
and \ref{ex:bernds4points}.
We explain this for the case of the snowflake tree (7).
Without loss of generality, we assume that
$\{1,2\}$, $\{3,4\}$ and $\{5,6\}$ are the
neighbors on the tree. If $\nu_{ij}$ is
half the distance between leaves $i$ and $j$,
computed from the six points as in (\ref{eq:nu}), then, for instance,
$\,{\rm val}(m_{13}) = -\nu_{16} - \nu_{24} - \nu_{35} $.
A direct calculation on the snowflake tree shows that the
three internal edge lengths are
\begin{equation}
\label{eq:internaledges}
{\rm val}( m_{2})-{\rm val}(m_{13}) ,\,
{\rm val}(m_{6})-{\rm val}(m_{13}), \,\,\hbox{and} \,\,\,
{\rm val}(m_{14})- {\rm val}(m_{13}) .
\end{equation}
The edge lengths of the tropical curve $\,$
\includegraphics[width=0.05\textwidth]{figure7} $\,$
are obtained by doubling these numbers.
\end{proof}
At present we do not know the level $3$ analogues to the stretching factors $1/2$ and $2$ we saw in the proof above. Such lattice issues will play a role for the natural map from the tropical Burkhardt quartic onto the tropical moduli space $\mathcal{M}_2^{\rm tr}$. We leave that for future research:
\begin{conjecture}
\label{conj:tropmod}
Let $K$ be a complete nonarchimedean field. There is a commutative square
\begin{equation}
\begin{diagram}
\mathcal{M}_2(3)(K) & \rTo & \mathrm{trop}(\mathcal{B}) \\
\dTo & & \dTo \\
\mathcal{M}_2(K) & \rTo & \mathcal{M}_2^{\rm tr}
\end{diagram}
\end{equation}
The left map is the forgetful map. The top map is taking valuations of the coordinates $\,m_0,\dotsc{},m_{39}$. The bottom map sends a curve to its tropical curve. The right map is a morphism of (stacky) fans that takes the third column of Table~\ref{table-tropical-moduli} to the second column.
\end{conjecture}
Here is one concrete way to evaluate the left vertical map $\mathcal{M}_2(3) \to \mathcal{M}_2$
over a field $K$. We can represent an element of $\mathcal{M}_2(3)$ by
a point $(r:s_{01}:s_{10}:s_{11}:s_{12}) \in \mathbb{P}^4_K$
that lies in the open Burkhardt quartic
$\mathcal{B}^{\circ}$. The corresponding abelian surface $S$ is
the singular locus of the Coble cubic in $\mathbb{P}^8_K$ by Theorem
\ref{thm:abelian93}. If we intersect the abelian surface $S$ with the
linear subspace $\mathbb{P}^4$ given by (\ref{eq:ranktrica}), then
the result is the desired genus $2$ curve $C \in \mathcal{M}_2(K)$.
The conjecture asks about the precise relationship between
the tropical curve constructed from $C$
and the valuations of our $40$
canonical coordinates $m_0,m_1, \ldots, m_{39}$
on $\mathcal{B}^{\circ} $ inside $ \mathbb{P}^{39}_K$.
\section{Marked Del Pezzo surfaces}
\label{sec:delpezzo}
This section is motivated by our desire to draw all combinatorial types of tropical cubic surfaces together with their $27$ lines (trees). These surfaces arise in fibers of the map from a six-dimensional fan to a four-dimensional fan. These tropical moduli spaces were characterized by Hacking, Keel and Tevelev in \cite{HKT}. We now rederive their fans from first principles.
Consider a reflection arrangement of type $\mathrm{E}_n$ for $n=6,7$. The complement of the hyperplanes is the moduli space of $n$ points in $\mathbb{P}^2$ in general position (no 2 coincide, no 3 are collinear, no 6 lie on a conic)
together with a cuspidal cubic through these points (none of which is the cusp). For $n=6$, there is a $1$-dimensional family of such curves (this family is the {\it parabolic curve} in \cite[Definition 3.2]{CGL}).
For $n=7$ there are $24$ choices. We can use maps (\ref{eq:twomaps}) that come from Macdonald representations to forget the data of the cuspidal~cubic.
Consider the case $n=6$. Six points on a cuspidal cubic in $\mathbb{P}^2$
are represented by a matrix
\begin{equation}
\label{eq:threebysix}
D \,\,\,= \,\,\,
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ d_1 & d_2 & d_3 & d_4 & d_5 & d_6 \\
d_1^3 & d_2^3 & d_3^3 & d_4^3 & d_5^3 & d_6^3 \end{pmatrix}.
\end{equation}
The maximal minors of this $3 \times 6$-matrix are denoted
\[
[ijk] \quad = \quad (d_i-d_j)(d_i-d_k)(d_j-d_k) (d_i+d_j+d_k)
\qquad {\rm for} \quad 1 \leq i < j < k \leq 6.
\]
We also abbreviate the condition for the six points to lie on a conic:
\[
[{\rm conic}] \, = \,
[134][156][235][246] - [135][146][234][256] \,\,= \,\,
(d_1+d_2+d_3+d_4+d_5+d_6) \!\! \prod_{1 \leq i < j \leq 6} \!\!\!\! (d_i-d_j) .
\]
The reflection arrangement of type $\mathrm{E}_6$ consists of the
$36 = \binom{6}{3}+\binom{6}{2}+1$ hyperplanes defined by
the linear forms in the products above.
We list the flats of this arrangement in Table~\ref{fig:e6flats}.
The bold numbers indicate irreducible flats.
Each flat corresponds to a root subsystem, but not conversely.
Root subsystems that are not parabolic, such as
$\mathrm{A}_2^{\times 3}$, do not come from flats.
\begin{table}[h]
\small
\begin{tabular}{l|l|l|l|l}
\# & Codim & Size & Root subsystem & Equations of a representative flat \\
\hline
{\bf 1} & 1 & 36 & $\mathrm{A}_1$ & $d_1-d_2$\\
\hline
{\bf 2} & 2 & 120 & $\mathrm{A}_2$ & $d_1+d_3+d_6, d_2+d_4+d_5$\\
3 & 2 & 270 & $\mathrm{A}_1 \times \mathrm{A}_1$ & $d_1+d_4+d_6, d_2+d_4+d_5$\\
\hline
{\bf 4} & 3 & 270 & $\mathrm{A}_3$ & $d_1+d_4+d_6, d_2+d_4+d_5, d_5-d_6$\\
5 & 3 & 720 & $\mathrm{A}_2 \times \mathrm{A}_1$ & $d_1+d_5+d_6, d_2+d_4+d_5, d_4-d_5$\\
6 & 3 & 540 & $\mathrm{A}_1^{\times 3}$ & $d_1+d_4+d_6, d_2+d_3+d_6, d_2+d_4+d_5$\\
\hline
{\bf 7} & 4 & 45 & $\mathrm{D}_4$ & $d_5-d_6, d_3-d_4, d_2+d_4+d_6, d_1+d_4+d_6$\\
{\bf 8} & 4 & 216 & $\mathrm{A}_4$ & $d_5-d_6, d_3+d_4+d_6, d_2+d_4+d_6, d_1+d_4+d_6$\\
9 & 4 & 540 & $\mathrm{A}_3 \times \mathrm{A}_1$ & $d_3+d_4+d_6, d_2+d_3+d_6, d_1+d_4+d_6, d_2+d_4+d_5$\\
10 & 4 & 120 & $\mathrm{A}_2 \times \mathrm{A}_2$ & $d_4-d_5, d_3+d_4+d_5, d_2+d_4+d_5, d_1+d_5+d_6$\\
11 & 4 & 1080 & $\mathrm{A}_2 \times \mathrm{A}_1^{\times 2}$ & $d_1+d_2+d_5,d_2+d_3+d_6,d_1+d_4+d_6,d_2+d_4+d_5$\\
\hline
{\bf 12} & 5 & 27 & $\mathrm{D}_5$ & $d_5-d_6, d_1-d_4, d_1-d_3, d_1-d_2, d_1+d_5+d_6$\\
{\bf 13} & 5 & 36 & $\mathrm{A}_5$ & $d_5+d_4+d_6, d_4-d_6, d_3-d_5, d_2-d_6, d_1-d_5$\\
14 & 5 & 216 & $\mathrm{A}_4 \times \mathrm{A}_1$ & $d_6, d_4, d_3-d_5, d_2+d_5, d_1$\\
15 & 5 & 360 & $\mathrm{A}_2^{\times 2} \times \mathrm{A}_1$ & $d_2+d_4+d_5,d_2-d_3,d_4-d_5,d_2+d_3+d_6,d_1+d_4+d_6$
\end{tabular}
\caption{The flats of the $\mathrm{E}_6$ reflection arrangement.}
\label{fig:e6flats}
\end{table}
The Bergman fan of ${\rm E}_6$ is the fan over the nested set complex \cite{ARW}, a $4$-dimensional simplicial complex whose vertices are the
$750 = 36\!+\!120\!+\!270\!+\!45\!+\!216\!+\!27\!+\!36$ irreducible~flats.
We define the {\em Yoshida variety} $\,\mathcal{Y}\,$ to be the closure of the image of the rational map
\begin{equation}
\label{eq:yoshidamap}
\mathbb{P}^5 \,\, \buildrel{{\rm linear}}\over{\hookrightarrow}\,\, \mathbb{P}^{35} \,\,
\buildrel{{\rm monomial}} \over{\dashrightarrow} \,\, \mathbb{P}^{39},
\end{equation}
where the monomial map is defined by the root subsystems of type $\mathrm{A}_2^{\times 3}$.
Our name for $\mathcal{Y}$ gives credit to Masaaki Yoshida's explicit computations in \cite{yoshida}. (Warning: there is a closely related variety $\mathcal{Y}$ studied in \cite[\S 3.5]{hunt}. This is not the same as our variety.)
Explicitly, as shown in \cite[Proposition 2.4]{CGL}, the map into $\mathbb{P}^{39}$ is defined by $30$ bracket monomials like
$[125][126][134][234][356][456]$ and $10$ bracket monomials like
$[{\rm conic}][123][456]$. We divide each of these $40$ expressions by
$\prod_{1 \leq i < j \leq 6} (d_i-d_j)$ to get a product of $9$ linear forms.
Thus the rational map (\ref{eq:yoshidamap}) is
given by $40$ polynomials of degree $9$ that factor into roots of ${\rm E}_6$.
The {\em tropical Yoshida variety} ${\rm trop}(\mathcal{Y})$ is the image
of the Bergman fan of ${\rm E}_6$ under the linear map $\mathbb{TP}^{35} \rightarrow \mathbb{TP}^{39}$ defined by the corresponding $40 \times 36$-matrix.
The Yoshida variety $\mathcal{Y}$ has $40$ singular points \cite[Theorem 5.7]{vG}.
Its open part $\mathcal{Y}^{\circ}$ is the moduli space of marked smooth cubic surfaces \cite[Theorem 3.1]{CGL}.
The blow-up of these points is {\em Naruki's cross ratio variety} $Y^6_{\rm lc}$ (following the notation of \cite{HKT}) from \cite{naruki}.
The situation is analogous to Theorem~\ref{thm:IgusaBBS}.
As defined, we consider ${\rm trop}(\mathcal{Y})$ only as a set, but there is a unique coarsest fan structure on this set.
This was shown in \cite{HKT}. It is the fan over a $3$-dimensional simplicial complex that was described
by Naruki \cite{naruki}. We call them the {\em Naruki fan} and {\em Naruki complex}, respectively.
The $76 = 36+40$ vertices correspond to the two types
of boundary divisors on $Y^6_{\rm lc}$: the $36$ divisors coming from the hyperplanes of $\mathrm{E}_6$ (type a) and the
$40$ exceptional divisors of the blow-up (type b). The types of intersections of these divisors are given in \cite[p.23]{naruki} and are listed in Table~\ref{table:naruki}. The divisors of type (a) correspond to root subsystems of type $\mathrm{A}_1$ and the divisors of type (b) correspond to root subsystems of type $\mathrm{A}_2^{\times 3}$. The Naruki complex is the nested set complex on these subsystems. Its face numbers are as follows:
\begin{table}[h]
\centering
\begin{tabular}{l|l}
type & number\\
\hline
(a) & 36 \\
(b) & 40 \\
\hline
(aa) & 270 \\
(ab) & 360 \\
\hline
(aaa) & 540 \\
(aab) & 1080 \\
\hline
(aaaa) & 135 \\
(aaab) & 1080
\end{tabular}
\caption{The Naruki complex has $76$ vertices, $630$ edges, $1620$ triangles and $1215$ tetrahedra.}
\label{table:naruki}
\end{table}
\begin{theorem} \label{thm:yoshida}
The Yoshida variety $\mathcal{Y}$ is the intersection in $\mathbb{P}^{39}$
of a $9$-dimensional linear space and a $15$-dimensional toric variety whose
dense torus $\mathbb{G}_m^{15}$ is the intrinsic torus of $\mathcal{Y}^{\circ}$.
The tropical compactification $\overline{\mathcal{Y}}$ of $\mathcal{Y}^{\circ}$ induced by the Naruki fan is
the cross ratio variety~$Y^6_{\rm lc}$.
\end{theorem}
The polytope of the toric variety has $2232$ facets.
Its prime ideal is minimally generated by $8922$ binomials, namely
$120$ of degree $3$,
$810$ of degree $4$,
$2592$ of degree $5$,
$2160$ of degree $6$, and
$3240$ of degree $8$.
These results, which mirror
parts (b) and (d) in Proposition \ref{prop:toricvarT},
were found using
{\tt polymake} \cite{GJ} and {\tt gfan} \cite{jensen}.
The prime ideal of $\mathcal{Y}$ is minimally generated by
$30$ of the binomial cubics together with $30$ linear forms.
A natural choice of such linear forms is
described in \cite[\S 3]{yoshida}. It
comes from $4$-term Pl\"ucker relations such as
$\,[123][456] - [124][356] + [125][346] + [126][345]$.
There are no linear trinomial relations on $\mathcal{Y}$.
The $750$ rays of the Bergman fan map into ${\rm trop}(\mathcal{Y})$ as follows.
Write $m$ for
the linear map $ \mathbb{TP}^{35} \rightarrow \mathbb{TP}^{39}$
and $F_i$ for the rays representing family $i$ of irreducible flats of
Table~\ref{fig:e6flats}.~Then:
\begin{align*}
m(F_1) = m(F_8) = m(F_{13})& \text{ has $36$ elements (a),}\\*
m(F_2)& \text{ has $40$ elements (b),}\\*
m(F_4)& \text{ has $270$ elements.}
\end{align*}
All other rays map to $0$ in $ \mathbb{TP}^{39}$.
Each element in $m(F_4)$ is the sum of two vectors from $m(F_1)$ which form a cone.
The image of the Bergman fan of $\mathrm{E}_6$ in $\mathbb{TP}^{39}$
is a fan with $346=36{+}40{+}270$ rays that subdivides the Naruki fan.
That fan structure on ${\rm trop}(\mathcal{Y})$ defines a modification of
the Naruki variety~$Y^6_{\rm lc}$.
Here is the finite geometry behind (\ref{eq:yoshidamap}). Let $V = \mathbb{F}_2^6$ with coordinates $x_1, \dots, x_6$. There are two conjugacy classes of nondegenerate quadratic forms on $V$. Fix the non-split form
\[
q(x)\,\, =\,\, x_1x_2 + x_3x_4 + x_5^2 + x_5x_6 + x_6^2.
\]
Then the Weyl group $W(\mathrm{E}_6)$ is the subgroup of $\mathrm{GL}_6(\mathbb{F}_2)$ that preserves this form. Using $q(x)$, we define
an orthogonal (in characteristic $2$,
this also means symplectic) form by
\[
\langle x, y \rangle \,\,=\,\, q(x+y) - q(x) - q(y).
\]
There is a natural bijection between the 36 positive roots of $\mathrm{E}_6$ and the vectors $x \in V$ with $q(x) \ne 0$. There are $120$ planes $W$ such that $q(x) \ne 0$ for all nonzero $x \in W$. These correspond to subsystems of type $\mathrm{A}_2$. The set of $120$ planes breaks up into $40$ triples of pairwise orthogonal planes. These $40$ triples correspond to the subsystems of type $\mathrm{A}_2^{\times 3}$.
\smallskip
We now come to the case $n=7$. The {\em G\"opel variety} $\mathcal{G}$ of
\cite{RSSS} is the closed image of a map
\begin{equation}
\label{eq:gopelmap}
\mathbb{P}^6 \,\, \buildrel{{\rm linear}}\over{\hookrightarrow}\,\, \mathbb{P}^{62} \,\,
\buildrel{{\rm monomial}} \over{\dashrightarrow} \,\, \mathbb{P}^{134}.
\end{equation}
The linear map is given by the $63$ hyperplanes in
the reflection arrangement $\mathrm{E}_7$, and the
monomial map by the $135$ root subsystems of type $\mathrm{A}_1^{\times 7}$.
The full list of all flats of the arrangement $\mathrm{E}_7$ appears in \cite[Table 2]{RSSS}.
In \cite[Corollary 9.2]{RSSS} we argued that the
{\em tropical G\"opel variety} ${\rm trop}(\mathcal{G})$ is the image of
the Bergman fan of $\mathrm{E}_7$ under the induced linear map
$\mathbb{TP}^{62} \rightarrow \mathbb{TP}^{134}$,
and we asked how ${\rm trop}(\mathcal{G})$ would be related to the fan for $Y^7_{\rm lc}$ in~\cite[\S 1.14]{HKT}.
We call that fan the {\em Sekiguchi fan}, after \cite{sekiguchi}.
The following theorem answers our question.
\begin{theorem} \label{thm:goepel}
The G\"opel variety $\mathcal{G}$ is the intersection in $\mathbb{P}^{134}$ of a $14$-dimensional linear space and a $35$-dimensional toric variety whose
dense torus $\mathbb{G}_m^{35}$ is the intrinsic torus of $\mathcal{G}^{\circ}$.
The tropical compactification $\overline{\mathcal{G}}$
of the open G\"opel variety $\mathcal{G}^{\circ}$ induced by the Sekiguchi fan is the Sekiguchi variety $Y^7_{\rm lc}$.
Hence, the Sekiguchi fan is the coarsest fan structure on ${\rm trop}(\mathcal{G})$.
\end{theorem}
The result about the linear space and the toric variety is \cite[Theorem 6.2]{RSSS}.
The determination of the intrinsic tori in Theorems \ref{thm:yoshida} and \ref{thm:goepel} is immediate from Lemma \ref{lem:intrinsictorusimage}. The last assertion follows from
the fact that the open G\"opel variety $\mathcal{G}^{\circ}$ is the moduli space of marked smooth del Pezzo surfaces of degree two.
For this see \cite[Theorem 3.1]{CGL}.
The Bergman fan of type $\mathrm{E}_7$ has $6091$ rays. They are listed in \cite[Table 2]{RSSS}.
The $6091$ rays map into $\mathrm{trop}(\mathcal{G})$ as follows.
Write $F_i$ for family $i$ in \cite[Table 2]{RSSS}. Then:
\begin{align*}
m(F_1) = m(F_{17}) = m(F_{25})& \text{ has $63$ elements,}\\*
m(F_2) = m(F_{15})& \text{ has $336$ elements,}\\*
m(F_4)& \text{ has $630$ elements,}\\*
m(F_{24})& \text{ has $36$ elements,}\\*
m(F_8)& \text{ has $2016$ elements,}\\*
m(F_9)& \text{ has $315$ elements,}\\*
m(F_{16})& \text{ has $1008$ elements.}
\end{align*}
Finally, $m$ sends $F_{26}$ to $0$ (a multiple of the all-ones vector). The fan on the first $4$ types of rays is the Sekiguchi fan as described in \cite[\S 1.14]{HKT}. The image of the Bergman fan of ${\rm E}_7$ is a refinement of the Sekiguchi fan, as follows:
\begin{compactitem}
\item Every ray in $m(F_8)$ is uniquely the sum of a ray in $m(F_2)$ and a ray in $m(F_{24})$. This is in the image of a cone of nested set type $\mathrm{A}_2 \subset \mathrm{A}_6$.
\item Every ray in $m(F_9)$ is uniquely the sum of three rays in $m(F_1)$. This is in the image of a cone of nested set type $\mathrm{A}_1^{\times 3}$.
\item Every ray in $m(F_{16})$ can be written uniquely as a positive sum of a ray in $m(F_1)$ and a ray in $m(F_{24})$. This is in the image of a cone of nested set type $\mathrm{A}_1 \subset \mathrm{A}_6$.
\end{compactitem}
\smallskip
The Sekiguchi fan on ${\rm trop}(\mathcal{G})$ is a fan over a $5$-dimensional simplicial complex with $1065 = 63+336+630+36$ vertices.
It has $9$ types of facets, corresponding to the $9$ tubings
shown in \cite[Figure 2, page 200]{HKT}.
The significance of the Naruki fan and the Sekiguchi fan lies in the commutative diagram in \cite[Lemma 5.4]{HKT}, which we restate here:
\begin{equation}
\begin{diagram}
\mathbb{P}^6 & \rDashto & \mathcal{G}^{\circ} \\
\dDashto & & \dTo \\
\mathbb{P}^5 & \rDashto & \mathcal{Y}^{\circ}
\end{diagram}
\end{equation}
The horizontal maps are those in (\ref{eq:gopelmap}) and (\ref{eq:yoshidamap}). The left vertical map is defined by dropping a coordinate. The tropicalization of the right vertical map $\mathcal{G}^{\circ} \rightarrow \mathcal{Y}^{\circ}$ is a linear projection
\begin{equation}
\label{eq:GtoT}
{\rm trop}(\mathcal{G}) \,\,\rightarrow \,\,{\rm trop}(\mathcal{Y})
\end{equation}
from the tropical G\"opel variety onto the tropical Yoshida variety.
We wish to explicitly determine this map on each cone of ${\rm trop}(\mathcal{G})$.
The point is that all tropicalized generic del Pezzo surfaces of degree $3$ appear in the fibers of (\ref{eq:GtoT}),
by the result about the universal family in \cite[Theorem 1.2]{HKT},
and our Theorems \ref{thm:yoshida} and \ref{thm:goepel}. At infinity, such a
del Pezzo surface
is glued from $27$ trees, which are exactly the tropical image of the $27$ lines on a cubic surface over $K$. Each tree has $10$ leaves, which come from the intersections of the $27$ lines. Thus, each tree represents a point of $\mathcal{M}_{0,10}(K)$.
Thus tropicalized del Pezzo surfaces of degree $3$ can be represented by
a {\em tree arrangement} in the sense of \cite[\S 4]{HJJS}.
One issue with the map \eqref{eq:GtoT} is that its
zero fiber is $3$-dimensional. Namely, it is the union of tropicalizations of
all constant coefficient cubic surfaces. The zero fiber has $27$ rays, one for each line on the cubic surface, and $45$ triangular cones, one for each triple of pairwise intersecting lines. This is the subtle issue of
{\em Eckardt points}, addressed by \cite[Theorem 1.19]{HKT}. Cubic surfaces with Eckardt points are special, for they contribute to the points in the interior of the $45$ triangular cones. Disallowing these removes
the interiors of the triangular cones, and we are left with a balanced two-dimensional fan.
This is the fan over a graph with $27$ vertices and $135$ edges,
representing generic constant coefficient cubic~surfaces.
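This is the intersection graph of the $27$ lines, which can be rebuilt combinatorially from the blow-up model: exceptional divisors $E_i$, conics $b_j$ through five of the six points, and lines $c_{ij}$ through pairs of points, with the classical incidence rules. A brute-force check of ours:

```python
from itertools import combinations

# the 27 lines on a cubic surface in the blow-up-of-6-points model
lines = ([('E', i) for i in range(1, 7)] +
         [('b', i) for i in range(1, 7)] +
         [('c', frozenset(p)) for p in combinations(range(1, 7), 2)])
assert len(lines) == 27

def meet(L, M):
    # classical incidence rules for the 27 lines
    (s, i), (t, j) = L, M
    if s == t == 'E' or s == t == 'b':
        return False                 # E_i, E_j disjoint; likewise b_i, b_j
    if {s, t} == {'E', 'b'}:
        return i != j                # E_i meets b_j iff i != j
    if s == t == 'c':
        return not (i & j)           # c_ij meets c_kl iff {i,j}, {k,l} disjoint
    pair = i if s == 'c' else j      # remaining case: a c_ij against E_k or b_k
    k = j if s == 'c' else i
    return k in pair                 # they meet iff k lies in {i, j}

edges = {frozenset(e) for e in combinations(lines, 2) if meet(*e)}
assert len(edges) == 135
assert all(sum(1 for M in lines if M != L and meet(L, M)) == 10 for L in lines)

tri = sum(1 for T in combinations(lines, 3)
          if all(frozenset(p) in edges for p in combinations(T, 2)))
assert tri == 45  # one triangle for each tritangent plane / triangular cone
```

The degree-$10$ assertion matches the earlier observation that each of the $27$ trees has $10$ leaves coming from intersections with the other lines.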
\smallskip
In this section, we developed some tools for the classification
of tropical cubic surfaces, namely as fibers of \eqref{eq:GtoT},
but we did not actually carry out this classification.
That problem will be solved in a forthcoming paper by
Qingchun Ren, Kristin Shaw and Bernd Sturmfels.
Linked Data Sniffer
===================
[](https://travis-ci.org/ldp4j/ldp4j)
[](https://raw.githubusercontent.com/nandana/ld-sniffer/master/LICENSE)
[](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
[](https://github.com/nandana/ld-sniffer/issues)
[](https://twitter.com/nandanamihindu)
Linked Data Sniffer is a tool for assessing the accessibility of Linked Data resources according to
the [metrics defined](http://delicias.dia.fi.upm.es/LDQM/index.php/Accessibility) in the
[Linked Data Quality Model](http://www.linkeddata.es/ontology/ldq#).
Linked Data Sniffer can be used either as a Web App or a command line tool.
## LD Sniffer Web App

Thanks [Idafen](https://github.com/idafensp) for the logo!
## LD Sniffer Docker Image
The LD Sniffer Web App is also available as a Docker image [(nandana/ld-sniffer)](https://hub.docker.com/r/nandana/ld-sniffer/) in Docker hub.
### Usage
$ docker run --name ld-sniffer -p 8080:8080 -it nandana/ld-sniffer:0.0.1
You should see the web app start up as shown below, and it will be available at the URL http://localhost:8080/

## LD Sniffer Command Line App
### Usage
Download the [ld-sniffer-0.0.1.jar](https://github.com/nandana/ld-sniffer/releases/tag/0.0.1) and use it as a standalone executable jar. You will need Java 8 as a prerequisite.
```
usage: java -jar ld-sniffer-0.0.1.jar [-h] [-md] [-ml <METRICS-FILE-PATH>] [-rdf] [-t <T-MINS>] [-tdb <TDB-DIR-PATH>] [-ul <URI-FILE-PATH>] -url <URL>
Assess a list of Linked Data resources using Linked Data Quality Model.
-h,--help Print this help message
-md,--metrics-definition Include the metric definitions in the results
-ml,--metricsList <METRICS-FILE-PATH> The path of the file containing the list of metrics to be calculated
-rdf,--rdf-output Output the RDF serialization of the results
-t,--timeout <T-MINS> Timeout (in minutes) for a single evaluation
-tdb,--tdb <TDB-DIR-PATH> The path of directory for Jena TDB files
-ul,--uriList <URI-FILE-PATH> The path of the file containing the urls of resources to be assessed
-url,--url <URL> URL of the resource to be assessed
Please report issues at https://github.com/nandana/ld-sniffer
```
## Evaluation results
* [Results of accessibility assessment of DBpedia resources](https://datahub.io/dataset/ldqm-dbpedia-2016)
* [Analysis of accessibility assessment of DBpedia resources](http://nandana.github.io/ld-sniffer/)
**Table of Contents**
Reason to Wed
Copyright
Blurb
Dedication
Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
Chapter Six
Chapter Seven
Chapter Eight
Chapter Nine
Chapter Ten
Chapter Eleven
Chapter Twelve
Chapter Thirteen
Chapter Fourteen
Chapter Fifteen
Chapter Sixteen
Chapter Seventeen
Chapter Eighteen
Epilogue
Rebel Hearts series
Book List
Meet the Author
Richard brought his free hand to her neck and stroked the backs of his fingers over her throat.
"Windermere," Esme whispered in an annoyed tone. "What do you think you're doing?"
"I think I'd like to make love to you, if you don't mind."
"I don't think that's wise," she warned, but didn't pull away.
"Don't make up your mind that you won't like something until you've at least tried it once." He laughed softly. "Isn't that what you told Lady Small to do earlier in the night?"
"Listening in?" she complained.
"I agree wholeheartedly." He'd never so much as kissed Esme before and he was desperate to taste her suddenly. Richard bowed his head slowly and set his lips to her neck. He nibbled her skin as she shuddered. "You don't have to say anything. Just let it happen."
**Heather Boyd**
**Reason to Wed**
_Distinguished Rogues_
_Book 7_
Heather-Boyd.com
Facebook
Copyright © 2015 by Heather Boyd
ISBN: 978-1-925239119
First Published, December 2015
Edited by Kelli Collins
Updated May 2017
All rights reserved. This book or any portion thereof may not be reproduced nor used in any manner whatsoever without the express written permission of the author except for use of brief quotations in a book review.
This ebook is licensed for your personal enjoyment only. This ebook may not be resold or given away to other people. If you are reading this book and did not purchase it, or it was not purchased for your use only, then please return it and purchase your own copy. Thank you for respecting the hard work of this author.
This book is a work of fiction. Names, characters, places and incidents are products of the author's imagination or are used fictitiously. Any resemblance to actual persons, living or dead, events or locales is entirely coincidental.
The best way to stay in touch is to be part of Heather's Readers List. Follow this link to receive new release alerts and more.
**Distinguished Rogues Series Reading Order**
Book 1: Chills (Jack and Constance)
Book 2: Broken (Giles and Lillian)
Book 3: Charity (Oscar and Agatha)
Book 4: An Accidental Affair (Merrick and Arabella)
Book 5: Keepsake (Kit and Miranda)
Book 6: An Improper Proposal (Martin and Iris)
Book 7: Reason to Wed (Richard and Esme)
Book 8: The Trouble with Love (coming soon)
_Richard Hill, the Earl of Windermere, might desperately require a wife and heir, but thoughts of duty fly from his mind when he rescues Esme, Lady Heathcote, from the embarrassment of a failed affair. They usually never agree about anything. He's never even kissed the vexing widow. But when the opportunity arises to whisk her away for a no-strings-attached rendezvous, Richard can't imagine a better way to spend a moonlit evening._
_Esme has never lacked for admirers, but having Lord Windermere's company goes a long way to ease the pain of losing her suddenly betrothed lover. And when Windermere suggests an affair, Esme is intrigued by the blazing-hot connection even while knowing their relationship has no future beyond his house party._
_But as with any temptation, it's a bargain they'll soon regret._
**Dedication**
For John—a man who challenges me, loves me and sticks with me through thick and thin. You are the reason I write.
I love you so much.
**Chapter One**
**E**very woman can appreciate the challenge of making a man do what she wants. Unfortunately for Lady Heathcote, Esme to her closest friends, her chances of success tonight seemed to have fled along with her lover. "Now where has he gone?"
She scanned the room in search of Mr. Albert Meriwether. However, it was becoming increasingly clear that inviting him to Lord Windermere's house party in Gloucestershire had been a colossal mistake on her part. She had his attention here even less than she had in London, and no one had even been stolen from or murdered. She should have broken with him when she'd sensed his repeated reluctance to take time away from his important work in the city.
"He's across the room, making good on his intention to win every lord in attendance over to his cause," Lady Ames warned.
_Again._
"I must say, if you had not mentioned your connection earlier today, I'd have had no idea you were such good friends," Lady Small whispered dramatically. "He's paid you less attention than our host and we all know you've been at odds with Windermere for an age."
Harriet met her gaze, her expression tinged with concern. "I've known of social climbers before of course, but never one so pointedly obvious as Meriwether."
"That wasn't why we came, Harriet," she complained softly to her friend and confidant.
Harriet squeezed her arm, full of sympathy. Esme had dragged Meriwether from London to reignite the spark of their affair before his obsession with his work snuffed it entirely. But he was determined to curry favor with the most influential lords in attendance. And if he couldn't gain their ears for long enough, he'd started being friendly toward their wives too.
She'd had more of his nonsense than she could tolerate, and turned away.
Esme left Harriet to her own devices and wandered the public rooms of Windermere Park on her own. The loveliest property she'd ever visited was home to the most arrogant man she'd ever met. She'd been surprised by the invitation to come this year, but never considered refusing after reading Windermere's most sincere apology for losing his temper with her. With his last lover, Lady Bartlett, being so proficient at amateur theatricals, it hadn't been surprising the woman had pulled the wool over his eyes, professing to a pregnancy that was just a myth. And it had been Esme's unfortunate sense of fair play that had prompted her to warn him that Lady Bartlett wasn't the least bit pregnant. His vitriol had fallen on her head first, of course, but at least he'd listened and not married the devious woman.
In the hall, she encountered Windermere's ancient butler, a kindly soul who'd served the family forever. Oswin was a sweet old man who never failed to treat her well, so she stopped to speak to him when so many others wouldn't bother. "Good evening, Oswin."
He nodded. "Might I be of service, Lady Heathcote?"
She took in the lean to his posture and his tired expression and smiled. "Yes, you can go and sit down and let young Pip run around in your stead for the rest of tonight."
Pip was the newest footman employed here, but Esme was confident the young man wouldn't mind the extra work or the experience.
"It's my pleasure to serve the family," Oswin replied with his usual dignified loyalty.
"As you wish." She'd let the matter slide but privately thought a man his age should already be training his replacement. If she were mistress of this house, she'd have begun long ago. A long house party like this could send him to his bed from sheer exhaustion, and then where would the family be?
She glanced back inside the drawing room once more.
Lingering by the hearth, Meriwether laughed with Windermere's guests, most of them part of her extended circle of friends too. She considered each man in turn...their intelligence, their reputations. Their chances of being won over to Meriwether's cause to formalize protection for the wealthiest homeowners with a private, trained guard. Her host, Lord Windermere, and his younger brother Lord Avery Hill were among them, and both were extremely shrewd gentlemen. They would have the greatest influence on the others if Meriwether won their support this week.
As far as causes went, Meriwether was entitled to his opinion that such a service was needed. But the truly needy of London were most at risk from robbers and couldn't afford to pay for their own private guard. Truth to tell, she was finding it hard to support Meriwether's ambitions as completely as she once did. She'd also come to suspect their affair had become a way to gain entry into the upper ten thousand by association, a means to an end for him.
However disappointed she might feel about that and his motives, Esme would never allow herself to depend on a man for her entire happiness. If she wasn't involved with Meriwether, there was always someone handsome to fantasize about and encourage into her bed down the road. Over the years of her widowhood, she'd never lacked for male companionship.
She nodded to Lord Avery Hill and Miles Hammond as they strolled past. The glow of appreciation in both men's eyes practically shouted their interest and soothed away her hesitation to break with Meriwether. She'd easily find someone who wanted to be with _her_.
Lord Avery Hill moved toward Harriet and she smiled with understanding. The pair had been lovers on and off for years, and it seemed this year would be no different.
Mr. Miles Hammond, however, was another matter entirely. He had been a friend of hers since the final days of her largely unhappy marriage, and his inclusion in the house party guest list confounded her. He was not a particular favorite of their host, or even of his brother, yet all had seemed to be in quite a genial mood with each other since the party began. She'd have to find out why Hammond had been included.
She glanced about those gathered for tonight's ball. Champagne was being passed around freely and everyone seemed happy and entirely agreeable to enjoying the party atmosphere to the fullest. Parties such as these were opportunities to mingle and conduct discreet liaisons without expectation of deeper, longer-term connections in many cases. It was all very civilized. As the quartet hired to play tonight tuned their instruments, she smiled. She might find her host a trifle wearying, but she could ignore the little irritations in Windermere's company, given her expectation of every other pleasure.
She moved away from the hall as new guests were welcomed by Oswin and turned her gaze on Windermere. Couldn't he see his nearest neighbors had arrived and needed to be introduced to the first-time guests?
But no, he remained watching Meriwether talk, a slight frown on his face.
After a long moment, Lord Windermere cast a questioning glance her way, catching her watching him. Unwilling to be ruffled by his scrutiny, Esme stared back. Good God, those cornflower-blue eyes of his would render a lesser woman immobile if she were unprepared. Esme knew Windermere well though, well enough not to be affected by his handsome face. He knew he was attractive, too; he thought far too well of his appeal for her taste, and she sometimes stared at him overlong just to make him a tiny bit uncomfortable.
His grin faded slowly as she held his gaze and then his glance cut to those gathered about him and back to her, a question now in his eyes. Esme hid a smile, tipped her head in the direction of the hall, waiting for Windermere to catch on to why she stared at him so pointedly. It certainly wasn't for his looks alone.
He shook his head, as if clearing his mind of a thought, and hurried off to do his duty as host, leaving her laughing at his befuddlement.
The man needed a wife sooner rather than later to manage himself and his home affairs better. Someone to point him in the right direction from time to time, or even daily.
She turned back to her quarry only to be disappointed yet again. Meriwether was headed in the opposite direction. He snagged two glasses of champagne, glancing over his shoulder once or twice, as he navigated the crowd and slipped into the hall.
How sweet. Perhaps she'd misunderstood his preoccupation and he was arranging a private rendezvous for them both beyond the ballroom. Esme didn't require the fuss of a perfect seduction, but his hands on her body would be very fine tonight.
She moved toward him but again lost sight of him. Esme drew in a deep breath in frustration. It wasn't the first time the man had vanished so completely since they'd arrived two days ago.
The hallway beyond the drawing room was filled to bursting with chattering guests and she moved smoothly through them, nodding and speaking occasionally to some. While she admired the elegance and comforts to be found in Lord Windermere's home, she kept her eye out for Meriwether. She turned into the library, but the room was startlingly empty.
"Looking for me?" Windermere asked as he came to stand near. His gaze raked her from head to toe in the most gauche way.
_Arrogant and presumptuous._ "Hardly. You should pay more attention to your guests and the health of your servants."
Instead of taking the hint that she wasn't in the mood to talk, he caught her hand and raised it to his lips. His blue eyes danced with amusement. "I do love when you're friendly. How have you been, Esme?"
She scowled at him and withdrew her hand to her side. "I've not given you leave to use my first name and I am not of a mood to spar with you. Go back to your other guests for amusement and send your butler to his bed. Anyone can see he's on the verge of collapse tonight."
"I already banished Oswin to rest." He laughed suddenly. "Young Pip has assumed his duties until Collins comes up."
"Just as well," she replied, thankful for such sensible decisions at last.
"Only you would ever dare tell me what to do in my own home. I wanted to thank you for coming," Windermere murmured. "But to convince everyone we're not at odds, you will have to talk to me occasionally with a little less acid in your tone."
"We've spoken as much as needed to quell any gossip." She smiled at him. "Or was it your wish to have me chivvy you out of your mopes too?"
"I will say again you were right." Windermere sighed and raked a hand through his dark, wavy hair. "You're enjoying rubbing my nose in that business with Lady Bartlett, aren't you?"
"Perhaps." She smothered a laugh. He hadn't wanted to believe he was being used until it was almost too late to extract himself from the connection. "You were so indignant that day, and after venting your pique at me, you charged down the street—on foot of all things, my man and your horse trailing after. I laughed for at least a whole day afterward. But I am sorry you were let down."
He inhaled sharply, his jaw clenching before he relaxed and shook his head. "No, you're not. You're positively gloating that you were proved right about her."
She allowed herself the briefest smirk. "You should learn to listen to good advice when you hear it, even if it comes from a direction you don't care for. At first I tried every subtle method I could imagine to make you really look at her figure and behavior. She wanted to trap you and almost did. An adventuress of her poor standard is not suitable to be your countess."
"I believe you wholeheartedly." He leaned closer, bracing one hand on the doorframe beside her head so she was partially trapped by his body. "In fact, I'm considering leaving the matter of who should be in that position in your capable hands."
She stared at him in shock. "You'd let me choose your wife for you?"
"Well, perhaps not a wife." He grinned and his attention dropped to her bust. "But I'm open to hearing your suggestion for my next lover. I seem to have the worst luck in that area and you seem to have developed an interest in those I take to my bed."
Esme laughed at his absurd suggestion and ignored the overwhelming urge to unbutton her gown for him. She did not lead a man on while involved with another, even if that _other_ was leading her on a merry chase tonight. "You hardly need advice on that. Any pair of breasts will do. But next time, if the lady claims she's carrying your child, at least find out for sure she's speaking truthfully _before_ you request a special license."
"Breasts come attached to the lady." He sighed again and drew back. "Given my near miss, I'm no longer confident I've the patience for marriage."
Last year, Esme had formed a suspicion about Lord Windermere, what set him to sigh so often when someone married or was heard to have fathered a son or daughter. He implied he lacked patience, but that probably wasn't true. There were countless other gentlemen of their acquaintance with both legitimate and illegitimate children attached to their names. Lord Windermere had not lived the life of a saint, but he had no children of his own that Esme had ever learned of.
She could sympathize with his situation, though she'd never let on or embarrass him by speaking of it. At his age, nearing three and forty years, he must have begun to worry for the succession, since his brother appeared even less ready to settle down than he was. After that, the estate and title fell to a cousin who hadn't the bearing of an earl, in her opinion, although he did possess a sweet wife and two sons already.
She didn't know what to say to make him feel better anyway because nothing really could. She'd long since accepted her own barren state as a certainty. "Things might be different with the right woman," she suggested gently. At least that is what well-meaning family had always advised her.
He shook his head then assumed the warm expression so common for him that lit up his eyes so brightly she wanted to draw closer. "So, are you going to tell me what you were looking for?"
She glanced away, glad he'd changed the subject and that the uncomfortable personal conversation between them was over. She wouldn't confide in him about her exasperation with her lover, but Windermere had invited Meriwether knowing they were intimately involved. It should have been clear to him whom she'd be looking for. "I'll let you get back to tending your guests and charming your next dance partner."
He sighed dramatically. "You're a cruel woman but you are correct. I have obligations. Until we meet again."
Esme turned on her heel and left the library and Lord Windermere behind. But for the lingering feeling of shared sadness, she wouldn't have thought of him again tonight.
**Chapter Two**
**R** ichard Hill, third Earl of Windermere, prowled his home, checking that everyone he'd invited to his house party ball was happy and felt welcome. The Gloucestershire estate was his pride and joy, and his annual summer event that had begun as just a few brief days with friends now stretched to a week or more, depending on the weather and the guests' willingness to be entertained.
He didn't mind other people enjoying the comforts of his home and his beds. He hoped to send each guest away in a better mood than when they'd first arrived. The season of balls and routs in London wearied a man, and at his age, he'd come to think of comfort first. Richard hosted his annual gathering as a means to foster deeper friendships in society and also to provide a respite from the pressures of life, so he never hurried anyone to be on their way and overlooked a great many indiscretions.
Bed hopping was a common practice among those who'd not married for love or had yet to find the one who centered their world. The anticipation of adventurous sex with no strings attached or expectations of marriage was an added bonus most of his guests took full advantage of. Richard had always indulged in the past, but this year he'd adjusted his expectations a little higher.
He needed a son more than a casual fling, but fatherhood was proving elusive, as was settling on a suitable bride to wed first. He had almost blundered, if not for Esme. He'd thought Eleanor loved him, but she had lied to him about a babe—and everything else, he'd soon discovered.
He'd never felt more angry or humiliated than he did then.
_Esme_ was his secondary goal for this year's party. He needed to make amends for the public spectacle he'd made of himself where she was concerned. Esme might often be a prickly, managing wench, whose opinion frequently differed from his own, but that was as far as any discord went between them. His outburst in Town over Eleanor's scheme had caused Lady Heathcote to lose support among the _ton_ , and he was annoyed by the whispers of a broken affair between them. He'd not expected such a ridiculous assumption to be believed and spread about. By inviting Esme to the estate for the party, he intended to prove to one and all how ridiculous any falling out had been.
As he passed close to the front hall doorway, he was hailed by a familiar voice.
"Windermere, there you are, and a sight for sore eyes indeed in this mad crush."
Richard hurried forward and embraced his cousin, Mr. Adrian Hill. "I expected you both days ago," he told the man. "What kept you?"
When they drew apart, Hill's wife Carolyn stepped forward to kiss his cheek. "We spent a few days in Berkeley, taking in the sights, which made us a bit later than we'd hoped."
"It's lovely there, so I can understand the attraction to linger." He smiled. He'd only fleetingly wondered about their delay, and since he could see all was well with them, he wouldn't worry about them again. "I trust the servants are seeing your luggage is taken upstairs to your usual rooms."
"Collins assured us it would be done immediately." Carolyn craned her neck to gawk at the guests. After a moment, her hand flew to her hair. "I must look a sight arriving in the middle of a ball wearing a carriage dress."
"You look lovely." Richard squeezed her arm then led them to the base of the stairs so they could retreat to their rooms and change for the ball. Unless... He turned to them. "I say, if you don't feel up to joining us for dancing tonight after the long day of travel, I completely understand. There is another smaller fete later this week too."
Carolyn smiled, her shoulders sagging. "You would not mind?"
Richard liked his cousin's wife very much so he nodded. She hated to let anyone down, but she always seemed fearful of disappointing him particularly. He'd no idea why. "I will catch up with you both tomorrow I am sure. Ask a footman to deliver a supper if you're hungry. There should be spirits aplenty in your room."
"Thank you, but we will both join the guests as soon as we have changed," Hill advised, straightening his shoulders as he glanced around the hall to see who was in attendance.
"Yes, of course, my dear," Carolyn quickly murmured, chin dropping.
Hill shook his hand and swiftly led his wife upstairs, an arm curled protectively about her back. Richard followed their progress with a heavy heart. They had actually met here under his roof and had been inseparable ever since their marriage. However, his cousin liked to do things his way, despite his wife's contrary feelings or her rather obvious exhaustion.
Avery joined him and draped an arm about his shoulders. "No luck, still, in luring her away from Adrian's side," he whispered.
"Avery, do stop talking nonsense. She appears to be tired and I was merely concerned." He threw off Avery's embrace. "I'd never be interested in our cousin's wife and I strongly suggest, again, that you don't let anyone else think so either. She's too devoted to Adrian to allow anything improper."
"She likes you, for some strange reason." His brother shrugged. "But the more the merrier has always been my motto."
"And doesn't every woman who accepts an invitation into your bed come to regret it later." Richard pursed his lips and glanced about to check who was nearby. "Stay away from her and find yourself a wife. A legal wife, rather than the family nonsense. You might just need an heir before too long."
Avery's eyebrows shot up. "It's you who needs the heir, not me."
"That there isn't one already should be a warning to us both." Why lie to his brother? It wasn't as if they hadn't discussed marriage and babies, the pressing need for them, before. He was getting a bit long in the tooth for having never sired a child.
Avery frowned at him. "Why do you resist the family traditions instead of going out to capture a bride?"
"The family traditions are perverse," he insisted, scowling at the idea. He drew close to his brother. "I will _not_ abduct a woman, chain her to a tree stump, and fuck her while she hangs there helpless. That is not my idea of how to begin a _loving_ marriage."
"Don't criticize what you haven't tried." Avery smirked. "I, however, will take full advantage of the excitement our family traditions inspire in the females of my acquaintance without the hindrance of a real marriage. You don't have to complete _all_ that the ritual entails. What's a bit of dangling between friends, eh?"
"I won't do it." Richard stalked away, dissatisfaction gripping him. By tradition, the titleholder and heirs of the Windermere estate were to abduct, seduce and consummate their relationship in the forest on the east of the estate before they legally wed. In the dead of night, of all times. Richard had been sired in that manner, born four months after the wedding, as had his father and every earl before him for six generations. Even his cousin Adrian had taken his Carolyn to the wishing tree in pursuit of his heir. Nine months later, Carolyn was delivered of a son and Adrian had been smug ever since.
It was all nonsense, of course, that such a ritual would ensure the succession. Perpetuating the myth was something he'd never, ever subject a woman to. He wanted the practice to die with him.
He forced his disgust away and concentrated instead on being a good host. Occasionally, he saw Esme flitting about the crowd ahead of him, but for the most part she continued to keep a distance. She had not made it easy on him when he'd apologized. He'd been a fool, and a rude one, and she made no effort to hide her amusement.
_Damn vexing wench._ Always one step ahead of him no matter how hard he tried not to be trailing behind. She was a woman of passionate opinion and he never backed down without reason. Their frequent debates had gained notoriety among the _ton_ , which was likely why society had assumed they were engaged in a heated affair that had turned sour as he'd stormed away from her London residence without his horse.
She'd been so right about his butler though. Oswin wasn't a young man anymore, and the late nights of the party and demanding guests were already taking a toll on his health. The man hinted he'd remain at his post until Richard had his heir, too, which was another worry. How much longer could it take to fall in love and make an heir the right way?
He found Esme in the crowd again. Despite the smile she bestowed on those around her, she seemed unhappy, and it surprised him that he noticed. She was usually bubbling over with energy when she was around the friends they shared.
He moved toward her, drawn in a way he wasn't used to. Meriwether was thankfully elsewhere, boring others with his tedious quest for a private guard in London. For the life of him, Richard could not work out why Esme was with the man. They must have next to nothing in common besides sex. "Ladies, I do hope you are having a pleasant evening."
Lady Heathcote promised she was but became distracted by another guest and turned away. He stared at her graceful back a moment, admiring the lovely curves before him and her pale-blonde hair fashioned into an elaborate style on the top of her head. She was the standard many young ladies should aspire to. She was never without a clever quip; never without her composure intact, no matter the circumstance.
That left him with Lady Ames for company. "Will you honor me with a dance tonight, Harriet?"
"At least you ask," she muttered under her breath before handing over the little cards his sister had passed out to all the ladies for tonight's event. Harriet's card was bare, not even Avery's name marked upon it, so he claimed the next set and a later one, and remained to converse with her until their dance was at last called.
Harriet had been coming to his estate for many years and while they were not particularly close, never once intimate, she had been his brother's longest romantic partner and he genuinely liked having her here. However, as they settled into the dance, he couldn't dismiss his partner's distraction. She sighed a great deal more than was required of the chore of dancing with him. He was concerned enough to ask about her current mood. "What's wrong?"
"Absolutely nothing, my lord," she responded quickly, a touch more bite to her words than called for. Harriet was not much like Esme, who ordinarily expressed every single disappointment she came across openly to him. Harriet usually hid her feelings much better. "I'm having a lovely evening. Jillian has outdone herself on your behalf."
"She has and I'm very grateful," he murmured. "As you might imagine, Avery is no help at all."
Harriet dipped her face low, staring at his cravat as they danced, leaving Richard a clear view over her head. His brother stood on the sidelines chatting to Lady Small, a woman recently widowed and uninvolved. When they passed the pair by, Harriet sighed heavily. "I can believe it."
Was Avery's attention to the widow the cause of her low mood? If so, he wouldn't blame the woman for being put out. Avery should have asked Harriet to dance, since he'd been the one to invite her to the house party in the first place. "Avery's head is always turned by a pretty face, but it never stays there," he said, hoping to soothe Harriet's disappointment.
"But it turns so often it begs the question of whether he will ever stop looking elsewhere first," Harriet replied in a voice edged with resignation.
"I don't know," he answered honestly. It was a pity Avery hadn't settled down with Harriet. The woman was good for him, but Avery wasn't exactly the most committed man to any woman. He and Avery were poles apart in nature and attitude on that score. Avery was wild. Richard had responsibilities he couldn't shirk much longer. He needed an heir.
He changed the subject. "Have you everything you need in your room?"
"Yes, the room is as comfortable as ever. Thank you. I've always enjoyed coming to Windermere each year." She frowned. "But I have not seen Mr. and Mrs. Adrian Hill as yet. Are they visiting too this year?"
"They arrived not half an hour ago. They stopped on the journey in Berkeley, and Carolyn appears quite done in. I suggested they skip the party but Adrian assured me they would join us soon."
"She's a woman with child, or was when she last wrote to me, so the journey would have tired her," Harriet murmured, her gaze drawn across the floor to where Avery now danced enthusiastically with a red-faced Lady Small. She stared then shook her head. "Would you excuse me, and from our next dance too? I urgently need to speak with Carolyn."
"Yes, of course I don't mind. By all means, seek them out." Richard escorted Harriet from the floor before the dance ended and followed her progress as she weaved through the crowd and swiftly disappeared.
While he rejoiced at the news his cousin's wife would have a third child, he was also swept up in a wave of sadness. No wonder Hill had supported his wife on the stairs. He was looking out for his growing family.
_The one that will replace mine and take the title from us if we don't father our offspring first._
Richard swallowed his bitter pill of resignation and went in search of a worthy distraction to dull the ache of wanting a son to follow after him.
Maybe he should try the wishing tree at least once, just to be sure lack of faith, for want of a better term, wasn't the only problem with him.
**Chapter Three**
**E** sme searched the remaining public rooms for Meriwether, but she found no sign of him. Disappointed and more than a bit put out from the fruitless chase, she retraced her steps to the ballroom and mingled with the crowd while admiring the dancers twirling beneath Windermere's triple chandeliers.
Although asked more than once to take a turn upon the floor, she declined the invitations. She wasn't in the mood for that kind of dancing. She'd much rather do something more intimate and invigorating and in private with her lover.
She huffed out a breath. _If he could be persuaded to stay by my side long enough._
Her favorite footman in Lord Windermere's employ, a young brother to her indispensable Penny, appeared and presented her with a glass of blessedly cold champagne.
"Thank you, Pip," she murmured as she gratefully sipped her drink. "Did you manage to spend any time with your sister yet?"
"Not much, my lady," he whispered back.
The preparations for tonight's ball generally involved all the staff and allowed little time for any servant to stand around idly chatting, even to a member of their own family. Pip and Penny Bradshaw were all the family they had, and they'd been apart a year. "Tomorrow will be easier for Penny. I will sleep late so you'll have ample time to catch up in the morning."
"You're very generous, my lady."
She winked. "Anything to prevent your sister sighing so loudly from missing you once we leave again. Anyone would think you'd gone off to war instead of gaining a position in a beautiful country estate like this."
He smothered a laugh, for he knew his sister's habit of wild exaggeration all too well, and then turned away to continue serving the guests champagne.
Esme was very proud of how the young man had turned out. Pip had spent a few months in her home under the tutelage of her senior staff, acquiring the polish to gain himself a better position than she could offer. He'd been successful in winning over Windermere's staff and gained employment as a footman, but she knew his ambition was for a butler's position to make his sister proud.
Esme made her way to Lady Small's side, where she stood alone, clapping along as a dance ended. "Quite the gathering, isn't it, my dear? Did you by chance notice where Harriet went? I have not seen her for a while."
Lady Small's expression was one of sour disapproval. "I expect she is entertaining Lord Avery Hill in his bedchamber by now."
_More than likely._ "You disapprove?"
"Indeed I do." Lady Small shivered. "Do you know what that man dared suggest to me? I'm in utter shock still."
Lord Avery Hill had a penchant for speaking bluntly of his sexual adventures and appetite. Some women liked that sort of thing, others not so much. "That often happens when one speaks to him for the first time," Esme informed her in all seriousness. "Consider it a test of character. I'm sure he will not be so brazen again."
"It's scandalous," Lady Small hissed. "And to suggest a dalliance with someone else in the room too."
Ah, he'd suggested a ménage à trois. "Each to their own." Esme shrugged. "Don't make up your mind that you won't like something until you've at least tried it once."
Lady Small stared at her in shock. "I don't think so."
"You might be pleasantly surprised by the experience," she murmured. Esme had tried, but preferred to be the center of attention for one man only.
The crowd quieted suddenly and she looked about them in surprise. The quartet that had been playing in competition with the crowd's noise trilled a few notes and fell silent too. All the guests faced the far side of the room, and since a broad, masculine back blocked her view, Esme stepped up beside the man, noticing belatedly that Lord Windermere had been close enough to likely eavesdrop on her conversation with Lady Small.
A hearty male voice called for attention and Esme stretched to see who dared interrupt the dancing. Sir Jeffrey Follows, a local knight who'd joined them for several dinners in past years, kept clearing his throat rather importantly. "If I could have your attention please," he said at last. "It gives me great pleasure to announce that Mr. Albert Meriwether has asked for my daughter Jane's hand in marriage, and I have given them my blessing. He shall marry our darling girl before the month is out."
Esme froze as everyone else clapped, unable to believe what she'd heard. Meriwether smiled at Jane with a satisfied expression on his face.
The heartless scoundrel. How could he stand there as his marriage was announced without warning her first?
"I guess he wasn't yours after all," Lady Small whispered in her ear cruelly.
Esme fumed. How could he embarrass her in such a way? He had never hinted he was involved with any other lady and certainly never mentioned a wish to marry. If he had, she'd never have slept with him. By morning she would be the laughingstock of the entire house party.
Windermere held out his arm. "A good match," he mused aloud. "Come, my dear Esme. Shall we toast the happy couple in private?"
Although surprised by Windermere's support at such a moment, she was grateful for the lifeline he offered. She needed to get out of the room before she said or did something she'd regret. She could not wish Meriwether happiness. Not in her current temper. She needed a good excuse to slip away and give vent to her irritation in private.
She placed her arm through Lord Windermere's and allowed him to lead her from the room, satisfied beyond reason when they passed Lady Small. The woman stood with her mouth agape.
When they reached the entrance hall, he squeezed her hand and glanced around. "Almost out of earshot. Just a bit farther. Did I show you the new paintings in my bedchamber?"
His question cut straight through her anger. She raised her eyes to his. "I've never visited your bedchamber before, Windermere, and you know it."
He grinned. "Then you must come. Multiple times, I think."
She drew away from him. "Don't imagine—"
His expression turned serious and he chivvied her up the staircase. "It's the fastest way to cut off spiteful gossip. If we are both noticeably absent for a while, there's something else for my guests to talk about. Deprive them of their fun at your expense with a different tale of conquest."
"You knew about him?"
He took her arm and sped her up the next flight of stairs. "Not about the marriage. That caught me by surprise, and I would have spoken of it if I'd had any inkling. I must say I am not unhappy about it. The damned fool was hardly deserving of your intimate company."
She reached the next landing before her head cleared enough to consider the right reply. "And you believe you deserve me?"
"Oh, no." He laughed and released her. "I fully expect you to toss me over as early as tomorrow. I'll make a long face while you claim my arrogance was off-putting or something equally harmless. A few well-told exaggerations and all will be well."
Esme glanced around to confirm they were alone. "Why would you do this for me?"
He sighed and set his hands behind his back while leaning forward a little to look her in the eye. "Because I remember that you once tried to prevent me from looking foolish, and I appreciate that."
"You are not making sense, sir."
He drew close to her ear, his breath hot against her skin. "Wouldn't you rather have your revenge on Meriwether without risk? Would an affair with me tweak his nose?"
What Windermere suggested had merits, and if there was no risk involved...
He smiled wickedly as he drew back. "It is clear you expected the house party to proceed in the normal fashion. I thought Meriwether understood your rules. You rather famously don't dally with married men, or engaged-to-be-married men, so his assumption shows a distinct misunderstanding of your character. He should have warned you of the impending wedding announcement and didn't. That makes him not particularly admirable in my book. Come with me, let others believe we've patched up our differences in bed. I cannot believe I invited him."
"Why _did_ you invite him?"
"To make _you_ happy, I thought." He tilted his head to the side. "Now, instead, I think we shall have fun at his expense."
"You seem to have a knack for revenge."
He winked. "It is a spur-of-the-moment feeling, encouraged by your long face. We would not have to actually be intimate. Only my guests need to be convinced we are."
Esme forced a smile to her face, but it was a brittle thing. Windermere's plan would help her save face, and if he did not expect intimacies then their relationship would remain the same. For a change, the man made sense. "Where might your chambers be located?"
"I thought you'd never ask." He linked their arms, drawing her closer to his side, a wicked smile playing across his lips. "This way. I claimed this part of the east wing after the redecoration last year. My brother and sister claimed everything else."
That was a lot of manor house to have given up. "Surely not."
He opened a door to her with a short bow. "Well, perhaps I exaggerate just a touch. A man must be allowed some idiosyncrasies."
"You have more than a few." Esme stepped into his apartment and gasped at the Spartan interior. She had heard some people enjoyed uncluttered spaces, but Windermere's sitting room was bare and his bedchamber, when she reached it, held only a bed. There were no looking glasses to be found on the walls. No small furniture of any kind. Just one huge bed and a blazing fire that sent flickering light to all corners of the room.
She turned to regard Lord Windermere curiously. She'd never imagined him a frugal man.
His smile was a touch uncertain. "I prowl about in my sleep and crash into things."
Esme frowned. Now _that_ she hadn't heard about him. "Really?"
"Unfortunately, yes. I'd offer you a chair if I had one, but won't you make yourself at home?" He indicated the bed was where she should sit and with no other options available, Esme perched on the edge. Windermere loosened his cravat. "I feel compelled to apologize. I had no idea about Jane and Meriwether, though he was much in her company yesterday, come to think of it. Jane is sweet enough in her own way, but I'm rather annoyed that they announced the marriage without a word of warning or even asking my permission. I've no interest in turning my house party ball into their engagement celebration. I find that extremely presumptuous."
Esme let her dancing slippers fall from her feet and then comfortably tucked her legs beneath her. She stripped off her gloves for good measure and flexed her fingers. "Yes, I can see they were extremely discourteous to you."
He came closer. "And to you. Are you very much upset?"
The fact that she had to consider her answer before she spoke proved her heart had survived the disappointment. "My pride perhaps."
He sighed and leaned against the bed. "Good. I—"
A discreet panel in the painted wall opened and the butler hurried in, arms full of firewood. He froze when he discovered his master wasn't alone. "Forgive me, my lord. I had no idea you'd retired for the night."
Windermere scowled. "Oswin, do get out. I sent you to bed, not to replace my valet."
The butler cast a curious glance at Esme before he all but ran from his master's presence.
Windermere pursed his lips momentarily and then laughed. "There. Any suggestion you were disappointed by Meriwether will vanish for good. By morning the talk of the house party will be of a certain fetching lady who was seen gracing my bed. Isn't that easy?"
Esme leaped from the bed. "Then I shall see you tomorrow."
He caught her arm. "To be convincing, you'd have to stay a bit longer. I do have a reputation as an eager lover to protect, too."
She stilled. He did have a point, and Lord Windermere should not come out of this arrangement with his reputation besmirched yet again. They would both benefit if she stayed a little longer. "That is true."
His grip loosened and he teased the inside of her arm with a soft caress, setting gooseflesh sweeping over her skin.
Eventually his hand fell away, his expression growing speculative. Esme was almost certain he hadn't wanted to release her. Yet, becoming intimately involved with Lord Windermere was a ridiculous idea. She did appreciate his help tonight, but there was a limit to what she'd do for revenge. And if they were intimate, Windermere would be unbearably smug afterward.
She climbed back onto the bed and raised her hands to her hair to remove a pin that had grown uncomfortable. After further consideration, she removed all of them to let her hair tumble down her back and shook out her blonde locks. "Emerging from your bedchamber in a completely disheveled state, and needing my maid to set me to rights again, would be better than appearing barely ruffled too. If we are to perpetuate a lie, the rumors of our time together as lovers might as well be exceptional. Tell me about the paintings."
"My sister is the artist." Windermere held out a hand and, bemused, she dropped the pins into his palm. He strolled toward the mantel and placed them there, then shrugged out of his evening jacket. "As you can see, she's recently found a way to decorate without giving me movable objects to damage, and simply paints on the walls."
"This must have taken some time." Esme admired the extraordinary work around them. "Your sister is very clever."
"Don't tell her," he warned with a laugh, dropping his jacket to the floor in a careless heap. "She'll want to paint the rest of the estate the same way and I like being the recipient of unique gifts."
Esme fell back on the mattress and wriggled, ensuring her gown would be suitably rumpled at the back. If this were her bedchamber, she'd have Jillian paint lovely clouds on the ceiling with a cherub or two peeking from behind each one. "Of course you do. Always thinking of yourself first and foremost."
"Not always," Windermere murmured in a low, deep voice that sent alarming sensations racing over her skin. The next moment, he flung himself on the bed at her side. "The project kept her busy during her mourning, but I don't want her to spend her life here, hiding from being hurt again."
She turned her face to his. "She'll embrace her life when she's ready."
His blue eyes softened. "Is that what you did when Heathcote passed?"
She grimaced and looked up again. "Heathcote and I were strangers long before then, so I...went through the motions for the sake of appearances."
"I had a feeling you'd say that," Windermere murmured, capturing her hand and squeezing. "He wasn't a warm man, was he?"
"To his mistress he was." She winced again. Sometimes the pain of her husband's betrayal caught her by surprise, as it did now. "I prefer not to think about him if you don't mind."
"Of course," he murmured, then filled the next hour with harmless chatter about the party, their mutual acquaintances, and the entertainments organized for the coming days.
Everything but the fact they were still holding hands.
**Chapter Four**
**R** ichard braced his hands on the stone bannister above the great ballroom and scanned the exuberant crowd twirling below. He had made sure he'd invited an equal number of ladies and gents for the ten-day house party. But to his chagrin, the ladies he'd considered his best chances for improving his acquaintance with had already paired off with other men while he'd been locked away with Esme, pretending to enjoy a jolly romp in his bed.
_Damn Esme and her long face_.
He hadn't thought through his decision to save Esme from embarrassment properly and paid the price for it now. Although he had to admit that the chaste encounter hadn't been entirely a waste of time. He thought, perhaps, he and Esme had reached an accord over the past. He felt entirely better for that and looked forward to more civil conversation with the lady in the future.
"I think the party is a resounding success," Jillian, his younger sister, murmured as she looked over the assembled guests with pride shining on her face.
A great deal of the arrangements for the party had fallen into her capable hands and he was pleased to see her in such good spirits. She'd been entirely too quiet since returning home following the death of her husband of three years. Benjamin's death last winter had knocked the joy from her eyes and he could almost see it glimmering there again. "You've exceeded my expectations, little sister. Very well done indeed."
He saw nothing but disappointment below him though. The woman he'd invited with the secret purpose of considering for marriage, Lady Beatrice Small, was dancing in the arms of someone else, and, given what he'd heard of her unguarded opinions to Esme earlier in the evening, Richard had concluded Beatrice would be a great deal too much trouble as a wife.
He couldn't marry a prudish woman. Oh, no indeed. He needed a sensible, open-minded woman who was not easily offended. His brother Avery was bound to frighten away any timid souls he might consider for his wife, so he had to choose with his head too. "I think this might be our last year. What do you say to a summer by the sea next year?"
Jillian, unaware of the source of his disappointment, laughed outright at his suggestion. "You love showing off the estate, and summer wouldn't be the same without this event and our friends visiting."
As he glanced down, he spotted the prickly, if occasionally lovely, Esme speaking with the friends she'd made among the locals over the past years of visits. His cousins had finally made an appearance and the pair hung on her every word. She charmed everyone she met, made them feel like wanted, desirable companions—all except him, normally.
A pity, that. He _had_ owed Esme the favor of rescue from embarrassment.
Esme had saved him from a grievous mistake. He still felt the fool for being nearly duped into marrying a woman who pretended to be carrying his child.
He was still astonished Esme had thought him worth rescuing in the first place.
_Damn Esme for revealing her hurt feelings._
His sister nudged his arm. "Don't skulk about like this. Go and mingle with your guests."
"Soon," he murmured, distracted by Esme's warm smile to Mr. Miles Hammond, who'd joined her little group. He tensed at the ease between them. Esme and Hammond were very old friends and frequent companions around London of late. He was fairly sure they'd never been lovers, but with Esme one could never be certain of anything. Until tonight's farce with him, she'd always been incredibly discreet in her affairs. "Tell me why I invited Hammond again?"
"Because he is Esme's friend and you thought having him here, too, would lend her support," Jillian said with a laugh. "He's actually very nice once you can get him to talk."
As if sensing his scrutiny, Hammond glanced up to where they stood. He stared a moment and then nodded before returning to hang on Esme's conversation.
Richard's tension remained. _That man._ Richard could never decide whether to like Hammond or not.
When Jillian was drawn away by Lord Hogan to dance, Richard watched her go with a wry chuckle. His sister had a suitor chasing after her. Lord Hogan had kept rather close to her these past months since she'd packed away her black gowns and started to embrace life again. Always at her elbow, always interested in what she was doing. Richard wouldn't mind the match, should the man propose, which was why he'd also invited Hogan for the week, to see what might come of the connection.
He tapped his fingers along the balustrade as he made his way to the dance floor below. He did not want to spend the night watching lovers flirt and sneak away to quiet corners. If he could not have a wife, he wanted to lose himself in the arms of someone who did not expect a commitment from him.
Or, if that were not possible, he'd rather spend the night with someone who challenged his mind. That meant his best chance of amusement tonight was sparring with Lady Heathcote.
_Damn Esme for being so damn intriguing._
He'd always fought the attraction, but tonight he was feeling distinctly adventurous. Since their interlude on his bed, he'd wanted to get under her skin in the worst way, and not just to earn another scowl.
Once he reached the ballroom floor, he scanned the crowd. Esme had moved on from his cousins and was currently out of sight. Irritating woman. Why could she never be where he expected her to be? She was like every other woman he'd known. Always making him chase after them for a bit of attention.
It comforted him that she wasn't with Hammond, who was leading some other lady to the dance floor.
Without the hope of even uncivil conversation, Richard stepped out onto the terrace to enjoy the moonlight alone.
Or so he first thought.
Ahead along the terrace, Esme stood just outside the ballroom windows, looking in through a distant set. Candlelight played over her face and he could tell by the way her head tilted that her attention followed the dancers inside. She did not turn to greet him as Richard approached her. He stopped close behind and shared her view.
Meriwether and his intended bride pranced on his dance floor, looking as smitten as young lovers were supposed to do. His cousins strolled past, arm in arm, with eyes only for each other.
"The party is going well," she murmured without turning or taking her eyes from the dancers.
Jillian and Lord Hogan danced past next. "So it seems."
"Warn your sister away from Hogan if you can." She sighed. "He will not be good for her."
He bristled. Although he should have listened to Esme about his last lover, there was only so much advice a bachelor could stand, particularly when it concerned members of his own family. "They're just dancing."
"That's how it starts." Esme shook her head. "He's all wrong for her, but she likely won't know it until it's too late." Meriwether and his future bride twirled past and she shook her head again. "Or she will, and won't act on her intuition to run."
When Esme still did not turn away, Richard caught her hand and tugged. Staring after the one you lost only led to one's friends considering you a fool. Esme was hardly that, but in case her heart had truly been involved with Meriwether, and wounded, Richard would be the one to do the saving this time.
If she allowed him.
"Come away," he whispered.
Her lashes lowered over her eyes and her grip on his fingers tightened.
He tugged again and thankfully she followed him toward the terrace stairs and out into the moonlit gardens without raising a fuss. _A remarkable feat._
Richard led her away from the house, strolling through the still gardens with little thought to direction. It was blessedly peaceful after the chaos of the ball and he was glad to share a few more rare quiet moments with Esme.
Eventually they came upon the river house, clinging to the edge of the fast-moving stream, where they could talk and be comfortable. He led her to the steps.
She peered at the building then lifted her face to his. "My, my. This is intriguing."
The timber building had once been used by boatmen on a daily basis, but had fallen into disuse long ago. Richard liked to come here to think, and especially so since his recent near brush with matrimony. He'd spent quite a lot of time in the river house and never gave the dark interior too much thought. He'd had several creature comforts installed. Esme would not find the interior too rustic. "Are you afraid to be alone with me again?"
"Hardly."
Richard grinned. Esme was quite the adventurer and she'd never minced words. She also didn't seem to care where they were going. This was so much better than watching couples dance and sneak away for pleasure. Perhaps he and Esme could work up the energy to have a rousing good discussion by the end of the night. He could be satisfied with that, at least.
He unlatched the door and, still holding her arm tightly, led her up the shallow steps. Inside was black as pitch. He took her with him to the shuttered river-front windows and threw them wide. Moonlight and stars gleamed, lightening the shadowed space into something remarkable. He'd thought this place fascinating as a boy, even more so with a pretty woman on his arm.
His breath caught as he stared down at Esme. Without the scowl, she really was a very beautiful woman. "What do you think?"
Esme paced the chamber, returning to the window to stare out at the fast-moving water sliding past. "Breathtaking. It's as if no one else exists." She leaned as far out as she could and sighed.
Fearing _how_ far she might lean, and knowing what dangers awaited her immediately below the window, he caught her by the back of her gown and held on tightly. "Be careful."
She settled back on her heels. "I assume the current is too fast to make swimming pleasant."
"The rain last week has made that inadvisable for the party but during a dry stretch, it is not so bad a little farther downstream." He smoothed her gown over her back where he'd gripped her, then spread his hands over her shoulders. She was quite soft, something that hadn't sprung to mind when they'd argued. He was also startled to feel the beginnings of desire.
_For Esme?_
_Would surprises never cease tonight?_
He teased her delicate skin with his thumb and was rewarded with a shudder. Richard had never spent much time alone in Esme's company and as his cock thickened with arousal, he wondered why. His attraction to her tonight took him by surprise. Tension always built swiftly between them, but this was not the usual prelude to hostilities. The idea of arguing with Esme had been replaced by a much better one.
Anticipation licked along his skin. He inhaled the scent of rosewater that clung to her hair. "Jillian used to swim with us when she was much younger, but prefers to laugh at us when we complain of the cold afterward."
"I quite agree with her thinking," Esme murmured. "I don't much care for the cold myself."
Richard eased closer a half-step. Esme leaned back into his chest. Cautiously, he slid one arm around her waist. The anticipation of holding her even closer yet caught firmly in his mind. From all he'd heard, all he'd seen of her character, Esme would not play games other than those of the erotic variety. For an affair, Esme would be a wise choice, and she was suddenly without a lover now that Meriwether was promised elsewhere.
He brought his free hand to her neck and stroked the backs of his fingers over her throat.
"Windermere," Esme whispered in an annoyed tone. "What do you think you're doing?"
"I think I'd like to make love to you, if you don't mind."
"I don't think that's wise," she warned, but didn't pull away.
"Don't make up your mind that you won't like something until you've at least tried it once." He laughed softly. "Isn't that what you told Lady Small to do earlier in the night?"
"Listening in?" she complained.
"I agree wholeheartedly." He'd never so much as kissed Esme before and he was desperate to taste her suddenly. Richard bowed his head slowly and set his lips to her neck. He nibbled her skin as she shuddered. "You don't have to say anything. Just let it happen."
**Chapter Five**
**_J_** _ust let it happen._ If all of Esme's liaisons began this easily, she'd never have reason to be discontent.
Lord Windermere's breath teased her skin and tempted her to set aside all the reasons why an affair with him wasn't wise. She did not wish to become another conquest for the earl, but she was still peevish and in need of an outlet for her frustration. Sex had always been her preferred method to rid herself of irritation, and he had offered nicely.
His suggestion too had loosened something reckless inside her that she'd never known existed. A desire to break her own rules grew. To experience something new without forethought or planning had never been her way. But Windermere's offer did tempt her. She knew him very well. She was aware of his character, his failings and values. Society at large assumed they'd been intimate, but they each knew there had never been any intention of that.
_Until now._
He curled his fingers about her waist in a soft caress that left her wanting more. She closed her eyes as he did it again with more conviction.
A shock of want consumed her. She wanted his hands on her skin, his body entwined with hers, the thrill of fulfilled desire soon to follow. Esme had never considered Windermere as a potential lover. An annoyance, certainly. A man with whom to spar when she felt peevish, as she did now; not that she'd ever admit that as the sole reason she'd enjoyed finding fault with him in the past. Yet tonight she was hard-pressed to find a reason to deny him because she wasn't satisfied yet. He'd been kind and his timing and understanding of her mood had been excellent. Who would have thought he might interpret her needs so well?
Sharing a bed would change things between them.
She would know his taste, the sound of his pleasure. The most intimate of knowledge only possible when a man and a woman had lain together and driven each other wild.
She turned in his arms. An expectant grin twisted his lips, making him appear boyish and even more attractive. Her pulse raced with anticipation as she saw the gleam of hope in his eyes. "This would mean nothing."
"Of course." He swooped to kiss her, their first ever, and the initial brush of his lips turned every angry feeling to flames of a far more worthy emotion.
Desire. A hot, living flame consumed her. She kissed him back, holding his head to hers so he couldn't get away. He stroked his tongue into her mouth, tasting her, and she did the same to him. Esme threaded her fingers through his dark, wavy hair and clenched the locks tightly.
Windermere pressed her against the window frame at her back. His hands framed her face, his thumbs caressing her cheeks as he drove his tongue into her mouth again and again. He drew back and one thumb slipped to the corner of her lips. Esme turned her head and took it into her mouth, sucking hard on his flesh until he moaned.
It had been a long time, perhaps forever, since Esme had ever wanted a man so desperately or so immediately as she did now.
Windermere cupped her breasts and warmth pooled between her legs when he squeezed them with firm pressure. She pulled him back to kiss and boldly stroked her tongue into his mouth to taste him once more. He squeezed her nipple through the gown and that excited her enough to release what was left of her hesitation and chase after what was being offered so boldly.
She stripped off her gloves so she could really feel him.
A ragged groan left Windermere's throat as she allowed his tongue to invade her again and then she mimicked fucking him with her mouth. He clutched her body to his, holding her head firmly. His aggression and need gave her the thrill of holding power over him. Some men treated her too gently. He did not seem that way. Esme liked a man who was committed to her pleasure in bed, and straightforward about what he wanted in return.
He hoisted her into his arms, leaving her feet barely touching the ground. When he grasped her backside and pressed her against him, she discovered him aroused. Being wanted this desperately was exactly what Esme needed. Lust was always a balm for a wounded pride and she had been feeling sorely disappointed lately. When he lifted his head from the kiss, he was grinning, his eyes alight with mischief.
He propelled them into the shadows and when her legs bumped against something hard and heavy enough not to move, Windermere eased his grip to one of eager exploration. His next kiss gave her no time to reconsider but, given his enthusiasm, she hardly wanted to. He thumbed her nipples, kneaded her breasts as if they were ripe fruit.
She twined her arms about his neck and gave herself up to his desire and control. A little passion couldn't possibly change things between them in any meaningful way. They'd both had many lovers. She could still find fault with him later and by tomorrow evening, either one of them might have moved on to someone else. She expected nothing more than a little passion from him tonight. There was no reason to expect more.
With one hand cupping her head and the other sliding firmly down her back, Windermere set her body aflame. He drew her against the hard swell of his erection, and ground her against his length.
Windermere pinched her nipple hard through her gown and she gasped in shock and delight. She tightened her grip on his thick hair and made love to his mouth with the intention of never stopping, no matter what he did to the rest of her body.
He broke the kiss suddenly, turned her around, and pressed close against her back. "Can't think when you do that."
His hands slid everywhere: over the fabric of her gown to cup her breasts, low to tease her sex with a possessive touch. He bent and caught her skirts, dragging the fabric up her legs with a thick, desperate groan. The warmth of his palms on her inner thighs made her body quake and a ragged gasp escaped her control. He boldly teased her curls, slid his fingers between her lower lips and demanded immediate entry. Esme closed her eyes, astonished by how delightful she found his technique and how willing she was for him. She was filled with impatience, her body restless and hot.
"Esme," he whispered hoarsely against her neck then nipped her skin. "Touch me."
She knew what he wanted without having to ask where. Despite the awkwardness of the task, Esme put her hands behind her back and grasped the fastenings of his trousers to release his cock from confinement. She caught his length in one hand, pumping his flesh languidly, and discovered he needed little encouragement to moan. He was thick, hard and long, quite possibly perfectly proportioned though she'd never dare mention that to him.
Windermere fought to bring them closer, shoving her gown up to her waist so they were skin to skin, her flesh pressed to the burning heat of his. He lifted one of her feet to a chair, opening her body. As he pressed his cock inside her, a flicker of astonishment filled Esme at his haste. She ordinarily did not like to rush penetration with a new lover. It was over too quickly, often lacking the passion she craved and a degree of closeness she found necessary to find release.
His initial thrusts were deep and fast but soon slowed. Sliding out until almost leaving her body then back in as deeply as it was possible to be connected to her. He took his time, drawing out their passion so well that Esme was in heaven. She had little to do but accept and encourage. His fingers remained on her clitoris, teasing on and off so she was never too close to the brink of release. She wrapped her hand around his thigh where it pressed hard against hers and kept him close.
Esme curled one arm over her head and tangled her fingers in his hair again. He might be in danger of marking her skin with his kisses and little nips but that only added to her excitement. The chair teetered forward from the force of his movements, so far that Esme feared they'd topple over entirely before he finished with her.
But Windermere never allowed her to fall. He held her so tightly that they barely parted a moment for each thrust. The discovery of his hunger only made her crave his touch more.
She tugged hard on his hair.
Her reward was a deep, dark masculine growl. "Woman, you're driving me wild."
Their thighs slapped together loudly as his thrusts quickened. Esme braced herself on the chair, little caring that she was crying out or that Windermere was grunting too. This was exactly what she'd needed—a man to make her forget everything else.
"Come for me," he demanded as he stilled inside her. "I will wait for you and then withdraw."
Chills raced over her body at his words. He would withdraw to spare her an unwanted pregnancy, but such an action, one that would curtail his pleasure, wasn't necessary. She had been with many men and, although some withdrew for the same reason Windermere gave, she had never conceived before. "I am barren, Windermere. There's no danger of a child."
Windermere tightened his grip around her hips. He didn't speak, just held her close as if uncertain whether to believe her or not.
Esme had had a long time to accept her situation. She didn't need his doubt or his pity, she needed his passion more.
She slipped her fingers beneath his where they'd stilled on her sex and she stroked her clitoris herself. His fingers joined hers soon after, teasing in tandem. Windermere's hot breath blistered her nape. The sensation of being utterly surrounded by him sped her release. When he rolled his hips, her body clenched around his cock and she thrashed in his arms. As her desire peaked, she shrieked his name, something she rarely did with a lover.
He began thrusting as soon as she quieted. His lips slid away from her neck as he groaned heavily against the top of her spine and spilled his seed deep inside her. He held her tightly against him as he dragged in deep, gasping breaths.
"Dear God," he whispered hoarsely. "Imagine what we might feel with the comforts of a proper bed around us."
Content to be held a while, Esme stroked the arm wrapped around her waist and then began to laugh. She'd had no idea his reputation was so well-deserved when it came to desire. Not even the revelation of her barren state had truly distracted him from his passion.
For herself, truth be told, she was feeling a little unbalanced by their romp and her confession. She usually didn't mention she couldn't have children in the heat of the moment. It tended to throw cold water over most men's amorous moods. "I think once was enough, don't you?"
He softly kissed her cheek before he disengaged and righted his clothing. When he drew close again after she'd straightened herself, he whispered, "I don't think I could say no to you if I ever had the chance again. Think of me tonight and let me know tomorrow?"
She met his gaze. The man stared at her, his blue eyes compelling her to agree with him. Esme's opinion of him wavered a little in his favor. "Perhaps."
**Chapter Six**
**"P** lease, don't be vulgar." Esme adjusted the collar of her Spencer and admired her reflection carefully in the early morning light. For a woman her age, nearing six and thirty, she was relieved to see her late-night cavorting with Lord Windermere had no visible effect on her outward appearance.
"I thought we agreed to share all our secrets," Harriet protested. "I cannot ignore that you dallied with our host, a singularly mind-boggling decision on your part. I thought you didn't particularly care for him, and certainly not in that way."
Esme faced the mirror, picked up a firm-bristled brush and stroked it over each eyebrow carefully, forcing the fine hairs straight. Her blue eyes were bright with the energy welling beneath her skin. "He _is_ arrogant."
"Well, I imagine he'll be far worse now that he's had you." Harriet slumped back in her chair with a huff. "They all are. Whatever possessed you to become intimate with a Hill?"
_He'd asked nicely?_ No, she couldn't admit to that out loud. Harriet would fall all over herself with laughter and make fun of her for months to come. She had criticized her current host so often in the privacy of their respective bedchambers that it could hardly pass unnoticed that she wasn't feeling particularly indifferent toward him today.
She felt excited, as if she stood on a precipice and whatever lay below was a mystery. She scoffed. There was no mystery surrounding Windermere. She was intimately acquainted with every aspect of his personal life. His taste, the feel of his hands on her hips, the rasping desperation of his voice as he commanded her during intimacy, sent a thrill of desire through her body even now.
Her pussy tingled with anticipation yet again. At least the tenth time since awakening alone that morning. A singular romp shouldn't have overset her senses this much. Later she would think about the encounter properly, with a rational mind and cooler logic to place the event where it belonged—a memorable encounter and nothing more serious. When Harriet wasn't around to pick apart her feelings about Windermere, she might make sense of them and him.
She was _not_ friends with Windermere, nor ever likely to be. The unlikeliest of lovers. Even so...
"He caught me at a weak moment." Esme frowned at her friend. "What are you doing up and about before midday? I didn't expect to see you recovered from last night's revels for a few more hours yet."
Harriet's smile slipped away. "I couldn't sleep."
Esme turned back to her mirror and secured a gold pendant around her neck. "Has Avery been that wicked again? There are laws against some of the things he likes, you know."
"That wouldn't stop him," Harriet said quietly. "Esme, I have broken with him completely. I told him I'd never share his bed again."
"What?" She spun about. "When?"
Harriet wrung her hands. "Last night, actually. I couldn't find you. I suppose you must have been in Windermere's arms by then."
Esme immediately shifted to sit at her side and threw an arm around her shoulders. Harriet and Lord Avery Hill had been intermittent lovers for a long time. Such a change was unexpected. "But why? Did he do something wrong? Did he hurt you?"
The other woman shrugged then she looked away. "Not the way you imagine."
Esme caught her chin and turned her face back to hers. "Who did he invite to join you both last night? You know I'll not tell a soul."
"It is not who he invites, it is that he always does. We want very different things from life." She shuddered. "Esme, do you ever worry that the reason we are both still alone is because we're too particular?"
"Is that what he suggested you were?" Esme shook her head in disgust. "I doubt Lord Avery Hill could have found a more open-minded bed partner anywhere. I certainly wouldn't put up with his wandering eye the way you have, or his penchant for indulging in romps with more than one partner at a time."
"I won't ever be enough for him."
Esme caught her hand and squeezed it. "And he has never appreciated what he had in you."
"That is what I realized last night. I had a painful decision to make, and in the end he made it easy for me to give him up for something better." Her smile grew brittle. "But in light of my choice, it might be uncomfortable for me to remain for the duration of the house party. I just wanted to warn you that I might leave on short notice. However, if you are involved with Windermere, I'll understand that you might wish to remain behind."
"Windermere was a fling and nothing more. You are my friend and we came together. If you wish to leave then so will I." Esme peered at Harriet's face closely when she winced. Her friend had parted with lovers before and never once showed regret or discomfort. In this case, though, it seemed she wasn't capable of keeping her feelings so well hidden. She was deeply upset and Esme thought she knew why. "Are you in love with him?"
After a long moment, her friend dipped her chin to confirm it. "I fear so."
She drew Harriet closer as her shoulders shook with silent sobs. A small wail of misery slipped out past her control as she vented her grief over the end of what had been a lengthy and often tempestuous affair. Harriet had never cried when an affair ended with her other lovers. Nor did Esme. They were alike in so many ways. Her friend would be better off without the blighter, and Lord Avery would undoubtedly move along to another conquest without hesitation.
She did her best to soothe her friend. "Then it is a good thing that you've broken with him. I cannot imagine it was easy sharing him with other women before. Even worse if you loved him."
Harriet straightened suddenly, pulling a polite mask over her emotions and broken heart. "Enough of my troubles. Tell me about Windermere. Did he make you happy?"
A small thrill raced through Esme and she worked hard to suppress it. "He is talented at making a lady feel rather special."
"Good. At least his reputation is deserved. I would not have you unhappy too." Her friend stood suddenly. "This might be cowardly, but I'm not quite the thing today. I'm going to make myself scarce. I will see you before dinner."
"You're not a coward. You just need a bit of time." Esme followed her to the door. "I am sorry about Avery, my dear."
"So am I." She let herself out and swiftly traversed the distance to her bedchamber down the hall.
Esme waited until Harriet's door closed behind her then closed hers slowly with a sigh.
There was always a danger in conducting intimate relationships that one party might grow to feel more than the other. So far, she had been lucky that her partners had never stirred her heart. Was she too particular about who she loved?
She liked to think she was, and with good reason. She had never wanted any man to take her affections for granted. Her late husband had done that. Heathcote had turned to another the moment they'd both realized Esme would never bear his child. To this day, she could not forgive him for making her doubt her own worth. She wouldn't ever give a man that much of a hold on her emotions again. She enjoyed men but kept a distance.
After all, what was the point of falling in love with a man who would undoubtedly want children she couldn't have given him?
The life she had was the one she needed. Uncomplicated and undemanding of her emotions. She was happy as a widow. Indeed, she'd never missed being a wife.
With that thought in mind, she headed downstairs to enjoy tea on the terrace with people who had become dear friends since she'd learned to be happy on her own.
**Chapter Seven**
**R** ichard scratched his jaw as undeniable satisfaction and conflicting confusion filled him with restlessness. How had he gone from arguing with a woman constantly to wanting to spirit her to his bed in the space of a few hours? He'd like to drag Esme away from his guests, over his shoulder if she became difficult about it, and make love to her all afternoon.
Admittedly, their encounter at the river house last night had been glorious and unplanned. The spur-of-the-moment decision to seduce her had been well-timed and swiftly executed.
But making love to her had been akin to holding fireworks. Dangerous and exciting, she was entirely capable of making a man's heart stop from the pleasure to be found in her passion. He'd known, of course, that Esme liked sex. She'd had no lack of lovers over the years since becoming a widow. For the life of him, he didn't understand why they hadn't been intimate long before this. When he looked at Esme for any length of time, his tension grew until, without a shadow of a doubt, he knew he would pursue an affair with her for as long as they could stand each other.
Last night had not been enough to quiet his need for her.
He was sorry she wouldn't ever have a child, but he wasn't fool enough not to take advantage of it. Since he didn't need to be careful where he spilled his seed, there was no reason not to indulge with her every chance he got. The house party ran for six more days and that would give him many opportunities to be alone with her. If she accepted his invitation to indulge in a purely intimate affair, that was.
He glanced across the terrace to where she sat among the women of the party, taking in the sun of another perfect country day. Outwardly, she seemed no different, but he remembered all too well how she had reveled in his attention last night. It astonished him how much she'd clearly enjoyed their hasty romp. Normally, he wasn't quite so aggressive with a new lover, but she hadn't seemed to mind his impatience.
Esme brought out the worst in him.
_Or was it the best?_
"Penny for your thoughts, old man," his brother remarked as he took a nearby chair, half-empty glass dangling from his fingers. At first glance, one might think Avery was merely tired, but his blue eyes were bloodshot and he seemed not altogether steady. He was completely cup shot, and very much earlier in the day than was normal for him during their usual house parties.
Richard had no interest in overindulging in spirits along with him. Not with Esme on his mind. "Is there a problem?"
"No. No problem." Avery chuckled. "But I had to come and see you. I've just been the recipient of the most astonishing bit of gossip from my valet."
Ah, the gossip. Richard was coming to regret the decision to allow his servants to spread tales that he and Esme had been intimate just to salvage her pride. "And what would that be?"
"Is it true you dabbled with the Lady Heathcote?" Avery stared pointedly across the terrace to Esme. "You risk frostbite to your appendage there."
_He risked being scorched._ The woman was wild and he certainly intended to explore every inch of her body, discover everything she liked most and do it to her repeatedly. He clenched his jaw, astonished how just thinking of fucking her caused his cock to thicken.
"I guess your distraction answers that question. Despite the temper, she is lovely?" Avery snorted. "When I saw the guest list, I must admit I was entirely taken aback. I thought she didn't like you. Didn't you two exchange strong words? Some claim it was a lover's tiff but I didn't believe it at the time."
They had frequently been at odds. But Richard would give up their arguments entirely just to hear her moan his name in the heat of passion over and over again. "She likes me enough."
That was also a problem. Despite becoming lovers last night, he had no idea how to go on with her. Esme had brushed aside suggestions to make a night of it and retired to her bedchamber alone last night. Richard had accepted her refusal, though he hadn't liked it terribly much, and had made a cursory circuit of his home before he too headed to his bed alone. They hadn't spoken this morning beyond common courtesies. Esme had immersed herself in conversation with the other guests and barely glanced his way after that. Normally, a lack of polite conversation with her wouldn't concern him, but he would give everything he owned to know how she viewed last night. And him.
The group Esme had been sitting with broke up and she excused herself from them to walk into the garden with Jillian. He tracked her movements, his body already awakening to the idea of a daylight tryst. A romp in a sun-filled glade with Esme would fill his mind with clearer images of the body he'd made love to last night. She was much softer than he'd imagined. Not weak but strong and flexible.
She and Jillian stopped to converse beside the fountain and he took a pace forward. Was Esme going to mention her disapproval of Jillian's relationship with Lord Hogan? He watched Jillian closely and although she did not seem outraged, she did grow more subdued during the conversation. Esme was upsetting her, but then they embraced and everything appeared to be congenial once more.
"Hmm, is that competition I see poised to take your place?" Avery mused, pointing toward the stables.
Albert Meriwether had paused in the shade of a tree, watching Esme and Jillian converse rather obviously. Since he appeared dressed for riding and hadn't been in sight all morning, Richard assumed he'd recently returned from visiting his new fiancée on her neighboring estate.
"He hasn't a chance," Richard insisted. "Married men, or about-to-be-married men, are not her type."
"Ah, is that why she settled for you last night?"
Richard bristled. The idea that he'd come second to Meriwether rankled. He chose to offer no comment. Avery would needle him no matter what he said to deny it anyway, and drunk, he'd be ten times as obvious to others.
"Better claim what's yours, brother, before someone else does. As I have learned, women are fickle creatures, every last one." Avery sighed. "She _is_ lovely. Out of curiosity, just how adventurous in bed is she?"
"I have no idea yet but I intend to find out." He spun about and saw speculation in Avery's eyes. They had bedded the same lovers in the past but never at the same time. Usually before or after the other was done with them. But Esme and Avery? He couldn't bear that idea. "Do not even think of approaching her."
Avery winced and he drained his glass. "There's nothing wrong with additional companionship in bed."
Richard scowled and shook his head. "I doubt Lady Ames' and Esme's friendship extends that far. Find someone else for a third, Avery."
"Fine. You can keep the little dragon." His brother scowled and stood. "But I'll do what I want with whoever I want."
Richard ignored Avery's belligerent tone and was grateful when his brother went on his unsteady way in an obvious huff. He tried to relax about Esme. They were nothing to each other really. One night with her warm body to play with should not make him feel so damned possessive. She was very good at hiding her real feelings behind a polite mask and his tension increased. When it came to Esme, looks were absolutely deceiving.
God help him, he'd never imagined he could feel so strongly about where Esme spent her time. She confused him, attracted him, and yet with her, he was wary of putting a foot wrong. He was entirely without sense this morning and he didn't know what to do with himself but watch her and wait for some sign that she might want him again.
**Chapter Eight**
**A** warm summer's day spent among friends had been just what Esme needed. Listening to their lives, their concerns and hopes for their families, brought a sense of inclusion to her life that a hundred balls never could. One could hardly talk candidly at a ball, there were too many ears and not enough friends among them, and she had much to say to one lady in particular.
Lady Jillian frowned. "And there are gentlemen like that?"
"Oh yes," Esme insisted, pulling Windermere's sister farther along the path and deeper into the garden. She was very glad to have a chance to talk to Jillian alone but she had to be delicate about how she'd come about her knowledge. "Many men enjoy taking a firm hand with women who like that sort of thing for pleasure. But believe me, Hogan is an out-and-out bully about it. Not the way a casual observer would notice, but it is there."
Esme kept her eyes on Jillian. The woman was still young; a widow who'd loved her older husband dearly. But by all accounts, she'd been utterly controlled by that man. Not cruelly but certainly kept as a possession. Rumor had it that Jillian's late husband had been a man with an extensive collection of sexual accoutrements designed for both pleasure and pain. How far Jillian had enjoyed that life wasn't clear, but Esme suspected the woman was lost and without someone to talk to about her old life.
She squeezed Jillian's hand. "It will start out small, a gathering left early, a favorite dancing partner you will be pressured to refuse more often than not, a hat he doesn't like changed at the last moment. Over time, you would lose friends and disappoint your family by being so wrapped up in his concerns as to have no time for anyone else. You might not make any decisions without consulting him first just to keep pleasing him."
"But I always consulted my late husband."
"Not in everything, I suspect." Esme sighed, thinking of how she'd failed one friend already in her life. She could not afford to be so timid again. Not where Hogan was involved. "I never noticed what Hogan did to Vera's life until it was far too late to make a difference."
"Who was she?"
"A neighbor in London whom I saw frequently, but not so often as every day. Luncheon invitations were the first to go, and then she was often not at home to me when I called, although I realize in hindsight that Hogan had likely told her not to receive callers. I allowed her to sever the acquaintance when I should have fought harder to stay involved in her life."
Jillian's brow creased. "What happened to her?"
"She died." Esme remembered that tragic day, and the events of the few before that she'd pieced together afterward. "When he broke it off abruptly after a row over her gloves, of all things, Vera was distraught and begged him to forgive her. She chased his carriage down the street and then collapsed in tears when he wouldn't stop. My servants and I helped her return home and that's when I discovered how much she'd changed. He'd made her so dependent on his opinion that in the end, when he refused to see her anymore, she chose to die rather than go on alone. He destroyed her confidence in herself, a little bit at a time, until there was nothing left to go on with."
"Oh," Jillian said, her face pale. "That's a tragedy."
"You must be careful of such men." Esme bit her lip and then sighed. She couldn't speak of this in half-truths forever if she wanted to spare Jillian future pain. She had to be blunt. "Benjamin Moore was very different. He treasured you. He had a way with you that excited your body, but he never forced you to change your mind. Never punished you because you chose kid gloves over silk. He used warning words, a secret language perhaps, to let you know what he wanted you to do for him and what he would do to you."
Jillian licked her lips and her eyes darted in all directions. Her breath came fast, undoubtedly panicked by Esme's knowledge of such matters. She took a pace backward. "How could you know so much about my husband?"
"I have been acquainted with men _like_ him and their unusually dominating passions before," Esme murmured gently. "I have rarely been comfortable in a passive role, but for some women, and men too, a firm hand in bed play and beyond is required for their happiness. I believe you are such a woman, and I assure you there is nothing wrong with that."
All the air left Jillian's lungs and she sagged. "I'm so lost without Ben."
"I can see that, but you won't always be alone," Esme assured her, relieved the woman would confide in her. "You must be careful whom you reveal your true nature to and whom you trust. There are good men who can fill your needs without crushing your will into the bargain."
"Like who?"
"Like..." Did she really want to push Jillian into another man's arms, and control, so soon after losing her husband? Jillian undoubtedly needed time to grieve for Benjamin and decide if she wanted that life still. She could give the woman hope though. "The relationship you had depended on mutual respect and affection. That takes time. I could not in good conscience suggest any one man who could suit you. Only you can know what you need by spending time with them."
"You've given me much to think about." Jillian stared at her and then blushed. "When I married Ben, I did not know I was like this. For a long time I feared I was perverse."
Esme curled her arm through Jillian's again and led her along the path that would take them back to the house. "You are a good woman and deserve respect no matter how you find pleasure," she promised. "Do not rush into the next bed just because a man says you must belong to him."
She had always liked Jillian. Although not a fixture in society, Esme had come to feel a deep affection for Windermere's sister from the few weeks they'd spent in each other's company over the years. Jillian was funny and kind and possessed a keen intelligence Esme couldn't bear to see harmed.
"I will do as you suggest and not rush. Thank you, Esme," she said with a smile. "My brother might think he's patching up your supposed rift but I am very glad you've come, if only to advise me. I might have made a terrible mistake with Lord Hogan."
Esme said nothing to that, but was very glad Jillian's eyes had been opened to the variety of men in the world, and the power she had in choosing one.
Jillian's gaze sharpened, focusing over Esme's shoulder. "I suppose I should be getting back to the housekeeper too. My brother's parties at least keep my mind occupied so I don't miss Ben so much. And I think a gentleman wishes to speak with you."
Esme rolled her eyes and turned, expecting to find Lord Windermere ready to berate her for warning Jillian off Lord Hogan—and instead found Albert Meriwether approaching, his stride determined as he bore down on them.
She squeezed Jillian's hand. "Do pass along my compliments to the housekeeper and staff too. Last night's ball was a wonderful success."
"They will be thrilled to hear you think so. The poor dears worry so." Jillian laughed softly. "I'll see you later for tea with Harriet."
"Of course." Esme affected an ease to hide her irritation when Meriwether bowed. "Mr. Meriwether. What an unexpected surprise. Glorious day, isn't it?"
The man smiled shyly. "Am I disturbing you?"
She had once thought his smile endearing, but now his manner annoyed her. She shrugged, glancing over her former lover and seeing him in an entirely new light. His affectionate nature hadn't truly increased with a longer acquaintance. What had she been thinking to believe a holiday together would bring them closer? It had done the exact opposite. He'd come here with an agenda that Esme played no part in. "Certainly not."
He drew closer. "I couldn't help but notice you seem very somber."
Talking with Jillian and speaking of Vera's tragic death had been a melancholy business, but such moments were short-lived. "I am in excellent spirits, as always," she told him. She gestured to the riding crop in his hands. "Are you just come from your fiancée's home?"
Meriwether shuffled his feet. "Oh yes, a pleasant luncheon with her family."
"Ah," Esme murmured, realizing how little she was affected by his news. Neither angry nor sad nor even disappointed in the situation. She didn't feel anything about the loss of her lover and his status as an engaged man. "You must be happy."
"I am." He suddenly frowned. "Despite my marriage, I hope we might remain friends and that you know you might always rely on me."
Esme blinked. "What could I possibly need to rely on you for? I am not in any distress. And I imagine your marriage will not increase our knowledge of each other to the point where a deeper friendship can develop."
"There's no reason we cannot remain on the best of terms." He drew near. "If we were discreet."
She narrowed her eyes. Did he assume she'd ignore the fact he was to marry? Men who did, placing so little importance on the commitment they'd made to another, were pitiful, in her opinion. The one thing she would never willingly do was usurp a wife's place in a husband's affections. They'd discussed marriage once too, her disinclination to wed again, but perhaps in the exuberance of his successful suit, he'd forgotten her prohibition on entanglements with married and engaged men. "There is every reason. Your future wife's feelings, for one. I will not be a party to breaking her heart."
Meriwether caught her hand in his. "Jane will be a dutiful wife."
"So I have heard." And that was from Lord Windermere. His praise of Jane had been a commonplace compliment at best. She removed her hand from Meriwether's clinging grip. Jane would be like any properly raised young woman in society today. She'd overlook her husband's wandering eye and if he strayed into another woman's bed, she'd hide her hurt from everyone. She would bear the insult in stoic silence, but Esme would never be the one to cause it. "Do give her my regards when you see her next and my best wishes for your happy marriage."
"That won't be for a few days. I had hoped to enjoy the rest of the house party with you." He smiled a little too warmly. "We came with that intention and I apologize for being distracted."
He considered marriage a mere distraction? Good grief, he was cold! "My interest currently lies elsewhere," she assured the man.
He glanced behind him. "With our host?"
Esme was aware that Lord Windermere was watching her every move very closely this morning. Ordinarily, she could ignore him, but today her feelings were mixed. She didn't mind him looking, yet she didn't quite know how to react to him. Last night had been good between them. Surprisingly good, and she'd slept well afterward and awakened refreshed and invigorated. But considering the fact that she had been fielding discreet questions about her fling with Windermere all morning, and even this fool had noticed, she would have to speak to Windermere alone and have him stop being so obvious about his interest. "With whomever I choose. Good day, sir."
"I understand," he cried out urgently as she turned away. "You were angry with me last night. I forgive you."
Esme pivoted slowly, unable to hide her surprise. "I beg your pardon?"
"You must have been hurt very badly by last night's turn of events if you'd make the mistake of letting that man seduce you." Meriwether removed his hat and raked his fingers through his hair. "Things with Jane happened so fast and I know you were expecting me instead. There is no reason to share his bed again. I'm free to be with you until the end of the house party and when we return to London, I want to call on you at home as usual."
Anger surged through her. How dare he think her affair with Lord Windermere existed purely because she'd lost _him_. Windermere was not second-rate to anyone. He was as vigorous a lover as any she'd had, far more commanding than Meriwether in fact.
Esme pressed her fingers to her temple. _Dear God, am I about to defend Lord Windermere's prowess after a lifetime of disdain?_ Apparently so.
She dropped her hand and straightened her shoulders. "Lord Windermere is a generous lover of great skill and expertise. I enjoyed _every_ moment in his arms."
Meriwether's face leeched of color. "You couldn't mean that."
"Actually I do." She smiled thinly. "We are very much alike, Lord Windermere and I. And he understands me much better than it appears you do. I would never lower myself and sleep with a man whose affections are supposed to be engaged elsewhere. I chose him last night and will undoubtedly do so again. Excuse me."
**Chapter Nine**
**R** ichard moved to the steps of the terrace, prepared to risk Esme's wrath by intervening in what appeared to be an argument between her and Meriwether. A burst of unexpected anger surged through him that she'd been pushed so far as to be upset. The irony was not lost on him. Normally he was the one upsetting Esme, and that fact had never bothered him one bit in the past.
If Meriwether continued to aggravate Esme, he'd send him from the estate. The man had his chance with Esme and had thrown it away to marry the blandest debutante in the district.
Richard would not be so stupid.
After a moment, Esme headed toward the terrace steps and him. Although her face remained impassive, her steps were the clipped march of someone trying not to show her hurry. Richard had always been able to judge her anger fairly well. She was utterly furious.
Their eyes met and instead of caution, an unexpected jolt of lust struck him like a blow. Had he always been so affected by her or was it just because they'd been intimate once?
She stopped a pace away, near enough that he could reach out and haul her into his arms. He waited to see what she wanted first.
"Do you have a moment for private conversation, Lord Windermere?"
Gods, he hoped she wanted him for sex. He glanced at Meriwether and allowed a small territorial smile to spread over his face. "I am at your disposal immediately, my dear."
"Good."
She stepped around him and headed for a little-used entrance to his home through his open study doorway. Richard followed and, thinking she'd rather not have anyone overhear whatever came next, he closed and locked the doors. She was usually _very_ expressive when angry, and even more so when she climaxed. "What did Meriwether say to you?"
As he turned, Esme pressed against him. "Enough of that fool." She caught his head with one hand and dragged his face to hers for a deep, hungry kiss. Her passion stole his breath away and it took him a moment to respond. Unfortunately, that hesitation seemed to build a fire in her. He was shoved roughly backward until he all but fell into his desk chair.
Esme hiked up her skirts and settled astride his lap without preamble. "I don't like to be stared at all the time."
Uncertain what to make of this development, he gently placed his hands on her bottom to secure her to his lap. "I did not mean to do it."
She made a little growl, much like a hungry cat on the hunt for an escaping mouse. She dug between them for the buttons of his trousers and practically ripped the garment open. When she slid her hand inside and curled her fingers around his cock, he groaned loudly.
Esme smiled wickedly and kissed him once. "Liar."
With her tongue dancing across his lips and her hot little hand working to bring him erect, Richard had no power to resist and certainly no desire to stop her. He jostled her to shove his trousers lower off his hips and then dug under her gown to touch between her legs.
He moaned at the state he found her in. "You might not _like_ me staring but I think you are incredibly aroused by me doing it."
"I am frustrated."
He was glad to have made any impression today, so he chuckled and nibbled her neck, earning a gasp. "I can accommodate you," he whispered. "I can give you exactly what you need and more."
"You'd better." She rose up and Richard grasped his length so she could impale herself. She moaned as he filled her. "Yes."
He cupped her face, now flushed with heat and desire, and stared into her eyes, almost gray but rimmed by blue, he realized. She was incredibly kissable. "I'm starting to think you like me."
"Oh, be quiet." She closed her eyes and began to move.
Straddling his thighs as she was, it seemed a bit awkward at first to make love this way, so Richard grasped her hips to guide her. "Allow me."
He lifted and lowered her on his length, driving himself toward madness very quickly. Esme was unlike any other woman he'd made love to. She aroused him so easily and now that he had her wrapped around him once more, he never wanted the moment to end.
He met her gaze as they made love to each other. Esme rocked and ground her sex against him as if she too had fallen under a spell, oblivious to anything but the passion curling around them. Richard kissed her hard, thrusting his tongue into her mouth as he had last night and letting his hands fall from her hips so he could embrace her.
He needed her. Gods, he'd needed her for so long.
Esme continued her slow grind on his cock and he widened his legs a touch so she would have more of him. She groaned and buried her face against his neck. Her body quivered. "Oh, that's perfect."
Quickly, Richard touched her clitoris. The bud was large and when he rubbed it, Esme clutched him. She came apart with a sob muffled by his shoulder, quite unlike her throaty yell by the river last night but just as satisfying to his ears.
She lifted her head immediately, stared at him with dazed, heat-filled eyes and red, parted lips. Her pussy fluttered and squeezed around his length. When it happened again and again, he shuddered. If she could keep that up his release would be unavoidable. Her look alone would bring any man to his knees, but combined with her other talents he might not have any control where she was concerned.
Esme rose and he forced her back down as he filled her body with his seed on a hoarse shout.
Richard dragged her head to his shoulder again and she cuddled against him for a long moment while he caught his breath. "I must confess I'd hoped for another romp with you all morning, but that was entirely too quick. Maybe the chaise for a second round? As soon as I catch my breath I will carry you there and begin again properly."
"There's no need."
Esme abruptly extracted herself from his embrace while he sat in stunned silence, trousers around his thighs, cock softening and exposed. When Esme wasn't arguing with him, she was delightful in every respect. "I hadn't minded holding you. That was lovely."
Her expression when she looked at him was apologetic. "I don't normally do that."
He wanted to rejoice in her embarrassment but bit his lip to hold it in a moment longer. A flustered Esme was too amusing and rare. "Use men for sex when you're angry?"
She nodded and he let loose a hearty chuckle. "I didn't mind in the least," he assured her. "In fact, you could do that to me again and again and I'd never complain. Hopefully, next time I can last a little longer. I am grateful it wasn't me who irritated you so much though."
"You _did_ irritate me. But it was something Meriwether said that provoked me to," she waved a hand in his direction, "do that."
"Oh." Richard stood and straightened his clothes. "What did the buffoon have to say for himself today?"
"He forgave me for falling for your charms and in the next breath assured me we could go on as usual when we return to London."
"As I said—a buffoon." Richard shook his head. "Your avoidance of married men is well known."
She considered him a long time and then her eyes sparkled with amusement at last. "I am very glad you understand that. You have to marry soon so this must end before then."
"Agreed." He observed Esme. Intelligent, passionate, a perfectly poised woman in any situation, including arguments. Yes, passion with Esme was all very fine, but was anything more, deeper, out of the question?
He smiled at her as an idea took hold. "What would you say to a second romp out of doors? Are you adventurous enough to risk other people seeing us together in a compromising position?"
An excited gleam filled her eyes but then dimmed just as quickly. "Not today. I promised to meet with your sister and Harriet very soon."
Disappointment filled him and then acceptance. He could wait. "Well, perhaps tomorrow you might come riding with me in the morning. There are a few things changed from your visit last year. I'd like to show you around the rest of the estate since you so clearly enjoy telling me what to do with my affairs." He smirked just to annoy her.
She brushed aside his remark with a wave of her hand. "I'm sure your guests would enjoy the tour."
"Just us, Esme. I would explore this peace between us fully while it lasts."
She frowned then. "We can end things if you prefer. You should not neglect your guests."
"Not yet." He grinned and caught hold of her fingers. "Not when there is so much pleasure to be had with you. I should ask, rather than assume: Would you consider being mine for what remains of the house party?"
The corner of her mouth lifted into a wicked smile he'd grown to crave. She reached to his waist and, to his surprise, secured a button that had come undone on his waistcoat. "Well, I suppose I could tolerate your attentions a little longer."
He smiled broadly and drew her into his arms one last time and nuzzled her neck, earning a gasp from her. Yes, definitely another bout was required today to dull the ache of want already returning. Perhaps they could meet tonight. "What if I come to regret this bargain of ours and want a longer affair?"
She drew back, hands rising to straighten his cravat into neat folds. Had he been that ruffled as to need his valet? Esme appeared utterly impeccable despite what they'd done on his chair. "I like a man who can keep his word."
"I can." But it would be so easy to continue to explore their mutual passion. He brought her fingers to his lips and kissed each one. "My bedchamber or yours tonight?"
Her eyes flickered over his body. "Yours, but only so I might lie surrounded by Jillian's beautiful art. That is the only reason to visit your room again."
"Liar." He kissed her lips, hard and greedily, anticipation for tonight's adventure consuming him all at once. He smiled. "But a lovely one. You want me, admit it."
She gifted him with a smile. "Perhaps I do. You show _some_ talent."
He laughed hard at that and sent her on her way before he proved on his settee just how talented he could be.
**Chapter Ten**
**A**s Adrian and Carolyn Hill rode away, back toward the manor house and guests, Esme scowled at Windermere. "Why do you invite them to your estate if you don't wish to spend time with them?"
"I invite them so she can visit with her family."
She frowned. "They haven't left the estate."
"Yes, I've noticed. My cousin is too busy dragging his wife from pillar to post, furthering his acquaintance with everyone from the guests at the balls to the tenant farmers."
She had noticed that last night too, and wondered why the man was cozying up to so many of Windermere's friends. She chose her next words with care. "Is it my imagination that Mr. Hill acts as if this will all be his one day?"
Windermere's mount pranced and he took a moment to calm the animal. "He can wait until we're dead first," he bit out savagely.
Esme was surprised by the display of anger. "I didn't suggest I agreed that he should."
Richard stared at her and then shook his head. "Forgive me. I know I shouldn't react but I've had it up to my ears with his hints and suggestions for how things should be done. Let's ride."
He kicked his mount ahead and Esme followed, her mare keeping pace with Windermere's larger beast fairly well. She enjoyed riding here. The vistas were beautiful, the air fresh and clean. A far cry from the pace and stench of London.
To be honest, she hadn't found anything about this visit that she would have changed, save for having to look at Meriwether over the dinner table each night. She'd been blind to his faults, his ambition to marry into the _ton_ and elevate his family to new heights. It had become obvious to her now, when she considered the matter with a calmer head, that Meriwether had chosen a young woman who'd grant him what she couldn't: a way into the best circles _and_ the offspring she could never have.
However, once she wasn't looking at her former lover, Esme didn't think of him again until the next meal came around. But she would never let such a situation happen to her again, and would choose her future lovers with more care and consider their likely ambitions before becoming intimately involved.
Windermere eventually reined in, a fair distance from the manor. He turned his mount in a wide arc and then fell in beside her. "Discussion of the succession annoys me."
"I did notice that." She cleared her throat. "You don't need me to suggest what must be done."
"I must marry." His grip tightened noticeably on the reins. "I know it."
At his age, he'd better marry soon. "Do you want my advice or should I keep my thoughts to myself?"
He glanced at her with a smile. "I'd rather hear them today than be surprised later."
"Lady Alice Beauchamp."
His brows shot up. "Who is that?"
"She is a widow, a mother of two small girls. She has a passing acquaintance with your sister and is very kind and responsible. She might be young, but she has proven herself as a wife and mother."
He stared at her for a long time. "By bearing daughters?"
"Yes," Esme murmured, remembering the quiet little girls she'd met during the season. Windermere needed a son though. "The getting of an heir is up to fate unfortunately and cannot be predicted, but while her fortune is of no great consequence, her connections are excellent. She has the makings of a perfect countess for you."
The man at her side grunted. "What happened to her husband?"
"A riding accident." She winced. He did not appear happy with her choice or suggestion or the subject under discussion, but he had asked for her opinion. "Beauchamp's death was sudden and her year of mourning ended some time ago."
"I see. How well do you know her?" he asked.
"Enough to wish her a better future than her present."
"All right, I'll bite." He turned in the saddle, one hand on his hip. "What is that supposed to mean?"
"She lives with her husband's parents. The second son will inherit and she's looked upon I think as if she's let them down. The second son mocks her openly. I do not like that."
Windermere fiddled with the reins. "I'll think about it. After the house party."
After their affair ended was the meaning she heard behind his words. She would be sorry when their time was up, actually. She had enjoyed his attentions a great deal the past few days. They hadn't argued, but the time they spent together was filled with tension of a different sort. A pleasant excitement that never failed to arouse her.
After she left his estate, after the house party, she would have to make sure she never lapsed into foolish nostalgia because she felt certain she would remember every intimate detail of him.
He turned into the forest, and she let the matter of whom to wed drop. It really wasn't her business when or whom he married, but Lady Alice Beauchamp was a nice young woman who'd already proven herself in childbirth. The woman would be a good candidate if he'd but listen to her suggestion and make a wise choice this time around.
Esme held her mare in check when it would have followed him and stared across the valley behind her. To her left, farmland patched the valley floor and to the right, dense woodland stretched up to the highest peaks. The estate was a well-run enterprise with no loss of beauty or hardship for those who lived here. Behind her back was the forest marking the eastern border. Windermere kept his own woods sacrosanct and rarely suggested a guest venture there, so she was doubly sure to enjoy such a boon.
She enjoyed the view alone a moment more, then urged her mare to follow him into the dark wood. The trail appeared to take them far away from prying eyes and into a world of dappled light and quiet sound.
Windermere had waited for her a short distance in. He turned slightly in his saddle to see her. "Ride slowly," he advised. "The branches are low in places and I'd not like you hurt."
She lifted her brows. "Such a gallant gentleman."
He grinned. "Compliments so early in the day? But it's only eleven, Esme."
"Ungrateful rogue," she teased, with only half her heart in the rebuke.
Her breath caught as the real world fell far behind them. After a short ride, Windermere dismounted and led his horse toward a rough-hewn stone enclosure, almost a ruin. The place was likely private enough that no one would stumble upon the horses for some time.
She remained in her sidesaddle, watching him tend his horse with experience and familiarity in every movement. He was in his element, running away from his responsibilities and his guests to play with her. The idea of it pleased her immensely, if only briefly.
He came back to her grinning. "We'll walk from here."
"Where are we going?"
"Somewhere I've never taken anyone else."
She leaned forward onto her knee. "Should I be honored?"
"If you like." He caught her about the waist and lowered her to the forest floor, then stole a long, hungry kiss from her lips. He teased into her mouth with his tongue, feeding the desire that lingered when he was near. She was breathless when he drew back and then he smiled, one brow rising over his striking blue eyes. _Far too handsome for his own good._
He took the reins from her, tended to her horse then secured both mounts in the small space. She frowned at the meager barrier.
"Don't worry," he said, reading her mind. "They're used to being left here and will remain calm until we return."
He held out his hand and she slipped hers into it. "I am very intrigued, Windermere."
"I hope so."
They walked a further ten minutes through pristine woodland, though following a well-trod path that she could never see the end of. "Others must come here."
"The family does, and some tenants if they've the stamina for the climb."
She glanced at the canopy of leaves above her head, but couldn't see the peaks above. "Are we going up the mountains today?"
"No, we're here. Almost."
The rush of water greeted her, a fast-flowing stream, tumbling over mossy rocks not far ahead. Windermere guided her to the stream and then away. They squeezed between two trees grown close together and her heart lightened immediately.
"Oh my." He'd led her to a clearing, a grassy field protected by tall trees where the sun shone brightly through the gap in the canopy around them. "Heaven?"
"Very close to it," he murmured as he stripped off his gloves and shoved them in his pockets.
She was drawn to the center and spun about in a slow circle. A tight feeling began in her chest and she set her hand over her heart. "I knew your home was beautiful but _this_ I never expected to find hidden away."
Windermere strolled toward her, dropped the blanket he'd carried and caught her face between his hands. "As beautiful as you. Are you done communing with nature alone?"
"Perhaps never," she whispered, heart pounding at the wicked gleam in his eyes. Windermere removed her gloves and drew a straight line from the tip of her longest finger to her wrist. She swayed toward him a little.
"I'll tempt you away somehow." He kissed her, slid his fingers into her hair and held her against him tightly. When the kiss broke, he brushed her cheek with his. "I'll spread the blanket."
He chose a spot with a slight incline and removed his coat, waistcoat and footwear. He patted the space at his side. "We won't be disturbed here. Come and lie down with me."
Then he tugged his cravat undone and left it dangling around his neck. He stretched out on the ground, sprawling on his back, his gaze locked on the patch of blue sky above them without a care in the world.
Esme considered her surroundings a moment more. They were very alone and unlikely to be stumbled upon. She removed the dark-red jacket of her riding habit, and on a whim stepped out of the heavy skirt too. She stood in corset and shift beneath a soft undershirt but kept her footwear on, letting the sun bathe her in light and warmth. When she joined Windermere on the blanket, his eyes were fixed on her body. She held herself away from him and stretched her limbs.
The peace of the place enveloped them. After a time, birds darted overhead, moving from tree to tree, and the stream bubbled off in the distance. But nothing else moved or made a sound. It was as if they were the only two humans in existence. She found Windermere's hand and held it. "How long can we stay?"
"An hour. A day. Forever." He smiled lazily. "I'm in no rush to leave."
His words, although certainly not intended that way, caused her pulse to jump. He made her think of intimacy and sex and his grasping hands no matter where they were or what they were supposed to be doing. Making love to him came so very easily and as she lay on the forest floor, an undeniable surge of anticipation filled her. She wanted him. She always seemed to want him the moment they were alone.
Windermere rolled toward her at the same moment she rolled toward him. They kissed again, lazy brushes of lip that set her body trembling. He cupped her breast and then dragged her close against his body. With the warmth of the sun and the scent of nature filling her lungs, her desire doubled.
She'd always enjoyed the outdoors but rarely enjoyed it half dressed.
Esme pushed Windermere onto his back and then straddled his lap, determined again to bend him to her will. Windermere slowly unfastened his trousers, pushed them down his legs and met her gaze. His cock was full, hard already.
"I need you," she whispered.
"I need you too."
He smiled as she moved to mount him. Her body was ready and moist and she ached in lovely places. The slide of his heat into her core sent a moan tumbling from her lips. "That's what I want," she whispered.
"On top again?" Windermere squeezed her bottom with both hands. "Ride me then, Essy."
She overlooked the shortening of her given name. She had overlooked half a dozen things he had done in the last few days that had previously annoyed her. All so she might make love to him again.
She let him slide in and out of her body, teasing herself, fulfilling her deepest desires to have her way. It had been rare for her to make love outdoors too. She'd never been this entirely private with a man outside and relished the sheer indulgence. She made good use of the occasion and expressed her pleasure openly, loudly.
She pleased herself, pleased her own body's astonishing craving for Windermere. She made her pussy ache, shoving him deep then holding him at her entrance. She rotated her hips and ground down onto him until she could take no more. And still it wasn't enough. She was still hungry for him.
She met his gaze and saw his eyes were wide, brow coated with sweat. His fingers dug into her thighs as she slid up and down his length. She was torturing him, exciting him. She couldn't deny he did the same to her.
"Tell me what you want," he whispered.
"I want to hear your voice." She caught the ends of his cravat where it lay on his chest and tugged him up. "Tell me how you feel."
He rolled up, one hand slipping between her legs, the other sliding over her rump. His panting breath tickled her ear. "I would fuck you three times an hour if I could manage it. I would spread you over my lap, everywhere in my home, and possess you. I want my mouth on you, my cock in you at the same time. Dear God, I've never been with someone who made me feel so alive, so bloody eager. It's like I'm sixteen again and with my first woman."
He squeezed her bottom again and she gasped. She tightened her grip on his cravat, using it as a tether, as reins as she rode him.
He teased the bud of her clitoris with firm, fast strokes.
She came undone suddenly. Caught unprepared, she screamed his name even as he shouted hers.
The shocks of her climax continued to roll on and on until Esme was left shaking and weak.
Richard caught her as she fell boneless into his arms, her grip on his cravat loosening. He eased her to the blanket with such gentleness that she closed her eyes. She had not experienced a release like that in her entire life and she hadn't been prepared for the rush that had swept over her. She wanted to hold on to the feeling, the blinding contentment enveloping her, forever and never return to earth.
"What the devil?" Windermere exclaimed when she did not stir. He slapped her cheek very gently. Once, twice.
On the third, she opened her eyes and laughed at him. "I still breathe."
He sucked in a sharp breath. "Did you just black out?"
Had she? "No, but I believe I've been to heaven and back in the space of a few perfect moments."
His brows rose. "Perfect?"
"Oh, don't let it go to your head."
Unfortunately, he puffed out his chest. "I am very pleased madam is so thoroughly satisfied with me."
She stared at him a moment then covered her face with both hands. "Gods, I thought you would be smug but that smile is beyond reason."
"I'm satisfied too, you know. I did not know it would be like this. This good between us."
"Well, now you do." She lowered her hands slowly, just a little bit proud of herself as well. "Don't ever doubt me again."
He brought her hand to his lips and kissed her knuckles. "Never, ever again."
He pressed a kiss to her breast as he rolled away to lie flat on his back and watch the sky above. He caught up her hand again and held it tightly within his. "What will I do after the house party?"
She shaded her eyes with one arm, wondering the same thing. "There's always another set of breasts, Windermere."
And there would be other men for her too.
"True, but the rest of the lady matters, as you've just proved to me." He kissed the back of her hand reverently once more. "I'll return shortly."
Esme lowered her arm from her eyes to watch him walk away, hitching his trousers up as he went and fastening them. Her heart skipped a beat. She would be sorry to lose him as a lover but it was inevitable. He needed a wife who could give him babies and she never dabbled with married men.
For an affair he was possibly perfect, but the rest she could never hope to have.
**Chapter Eleven**
**E**sme gritted her teeth a moment before responding to the insult her lover's brother had just delivered very loudly to a crowded room. "Lord Avery, your compliments take my breath away."
For the life of her, she couldn't fathom why Lord Avery Hill had taken a set against the world tonight, but he'd grown particularly cutting toward her in the last few minutes.
"Wasn't a compliment," Avery slurred, the effects of his indulgence in spirits very obvious in the way he formed words. "Dragons need slaying and my sword is at the ready, ill-tempered wench."
The hovering crowd of guests sucked in a collective breath and the ladies began twittering behind their fans. Esme shook her head at their whispers. Avery was not always a nice man: somewhat belligerent and entirely too sure his way was right, and they occasionally locked horns over his attitude that women were nothing more than disposable pleasures. Esme would weather whatever insults he threw out well enough, but the small crowd would remember every word he said and spread the news all over London.
"I'd say you fit that description better than I." She studied Avery with growing annoyance. He had already offended three other ladies tonight, made a pass at more than one servant, and refused to take himself somewhere more private to continue his drinking.
And after such a satisfying ride with Windermere, and their tryst under the warm summer sun earlier, Avery's attitude threatened to suck all the joy out of her day.
Unfortunately Windermere was nowhere to be found, and since Oswin and Lady Jillian appeared helpless to control Avery, Esme could see no choice but to step in and save the evening from total ruin. "If you could lead the others away," she whispered to Lady Jillian. "You might leave him to me."
Both Lady Jillian and the butler frowned at her request.
She smiled reassuringly to both. "I know exactly what to do with him. Trust me."
Lady Jillian bit her lip a moment, torn but too timid to take charge herself. "You do not mind?"
"Never fear. I grew up around belligerent men."
Jillian kissed her cheek. "I will owe you for this."
"And I promise to collect one day," she returned halfheartedly before she ushered Jillian and the milling guests toward the door. Best not to have Lord Avery utter any further scurrilous _opinions_ on her character to others. If he didn't like her, so be it. But why he chose tonight to tell the whole world was disappointing.
The butler hovered at the doorway, dragging his feet in leaving. "I should stay," he offered.
The faithful retainer appeared so concerned that Esme shook her head firmly. Better to have the older man elsewhere in case Avery's attitude worsened. "Make sure everyone else is comfortable, Oswin," Esme advised, and then slowly closed the door in his face.
She knew how to handle men like Avery Hill. Her mother had died young and her brothers and father had been fond of drink in the years after. It had always been left to Esme to clean up the mess and right the furniture they'd knocked over as well as soothe ruffled feathers when she'd lived at home with them. Men often thought they were so clever, but a little gentle persuasion tended to go a long way.
She turned around and squared her shoulders. Avery followed her movements about the room through hooded eyes. He had made quite a mess, and she set a few things back the way she remembered them being positioned.
"Why are you still here?" He scowled. "Get out too. Stay away from me. I'm not going to poach on my brother's preserve."
Esme rolled her eyes at his contrary statements. " _As if that were possible_. A man of your appetites leaves me quite cold, I assure you."
His face darkened with anger. "I wouldn't touch you with a ten-foot pole."
"Say what you like to me, but I've always thought it preferable not to offend absolutely everyone at a house party when it's only halfway done." She smiled. "I sent the others away so you might enjoy keeping some friends for tomorrow and not completely embarrass your brother and poor sister."
"You can't tell me what to do," Avery hissed. "Just because my sainted brother got under your skirts doesn't mean you're on your way to becoming mistress of this house."
She burst out laughing at the mention of her fling with Windermere leading to a permanent situation between them. "Oh, my word, you must be foxed. I have a certain interest in your brother for the moment but I've no right or desire to keep him about for long, nor he me. In fact, I don't believe I've ever entertained the idea before, but I do thank you for the amusement."
"Harridan."
She reached for the bottle at his side and lifted it to her nose. "Gin. How very common of you."
"It does the job. Damn Oswin hid the wine cellar key from me again," he grumbled, sprawling deeper into the chair, holding a silver tankard at a clumsy angle in his fingers. Avery was so far gone he hadn't noticed he'd run out. A little more spirits in him and hopefully he'd pass out and that would be the end of him for the night. Esme grinned at the idea, steadied his hand, and refilled the tankard halfway.
Avery stared at the mug a moment. "My thanks. At last, a woman who knows what a man needs without having to talk a subject to death."
When he lifted it to his lips to drink, a fair portion escaped to slide down his jaw and the chair beneath him. Esme shuddered as the cushion grew damp and ruined from his spillage. The chair, the pattern particularly, was one of her favorites. She might not be mistress of this house, but she appreciated the beauty around them more than Lord Avery Hill ever could.
"Such a mess again," she chided. "Beyond last year's efforts, indeed. I'll never understand why Harriet put up with you. She should have found someone who wouldn't embarrass her with the way you go on."
"I fuck well." Avery saluted her with his tankard. "Good luck to her finding a replacement that makes her scream like I can in bed."
For added clarity, he lewdly grabbed his groin with his spare hand, but it was easy to tell that what lay beneath his clothing was in no condition to live up to his boast.
"I'm sure there will be dozens of men who vie for her attention when we return to London," she replied in a bored tone but watched him for a reaction. He winced and then wiped the expression clean. Esme sat across from him. "But is that all you do with her?"
Avery swallowed a large mouthful and then wiped his sleeve across his lips. "I've had dozens of women in my bed. They've never complained about my techniques."
She knew of his history and his skill with women, thanks to Harriet's shared confidences. "They've little chance of complaining since after sharing your bed, you barely see any of them again," she muttered under her breath. "But you always kept coming back to Harriet before she could find someone better."
He squinted as if he'd stopped listening. "Eh?"
"Oh, nothing." She would laugh aloud at how slow-witted Avery was under the influence of spirits, but this was the man her best friend had fallen for. If he had any sense, he'd be trying to mend fences with Harriet rather than drowning his sorrows that he'd lost her through his own actions. The safest thing for all concerned was to get him into his own bed, alone, and perhaps tomorrow he'd be a touch more civilized, at least enough to apologize to those other offended ladies. "Have you had enough?"
"Keep out of my affairs." Avery hugged his tankard against his chest, slopping more onto his shirt. He glared into the cup, and then drained every last drop in one swallow. Afterward, he swayed forward to show her the empty tankard. "I can hold my liquor."
"Is that so?" From years of Harriet's confidences and Esme's own observations, she knew he actually couldn't. Her plan was working so well. "Well then, let me refill that for you, my lord, and you can prove to me how fine a gentleman you really are."
She refilled the tankard to the brim and then sat back to watch him drink in peace. All too soon the man could barely find his mouth.
He jerked upright suddenly. "Do you know what that woman wants?"
His sudden question surprised her, but Esme assumed "that woman" referred to her friend Harriet. "She wants what we all want."
"Excitement. A bit of fun." He scowled. "I gave her a damn good time."
Esme shook her head. "You haven't a clue about women, do you?"
He stared, and then smoothed his damp shirtfront. "More of an idea than my sainted brother."
"Now, _that_ is where you are wrong." She sighed. "All we want is a man who will put our needs first."
"There, you see," he said with a renewed burst of energy. "I always wait for her."
Of course his mind ran to sex but that wasn't what she'd meant. "I'm not speaking of intimacy, but I fear you are too blinded by drink to understand the difference at the moment."
He lurched toward her, falling to his knees in his haste. "Tell me."
Esme pushed him away with the tips of her fingers and he collapsed onto the floor in an untidy heap.
He squeezed his eyes shut. "Gave her every pleasure I could think of."
He groaned and rolled away, curling onto his side, almost in a ball, resting his head on his arm like the most innocent babe. But this man wasn't at all innocent and he hardly deserved her sympathy for the way he lived his life.
Esme prodded his shoulder with the toe of her shoe after a long moment of silence from him and received no response. She sat back and regarded him sadly. "Fool. All she wanted was your undivided attention."
He snored suddenly and her heart ached for Harriet. She knew very well what her friend had wanted most of all and had given up. What she wanted would never be possible with Avery unless he was prepared to change his life completely. "She wanted your love, Avery. That is all a woman really needs to be happy."
**Chapter Twelve**
**"H** ow the hell did this happen?" Richard shook his head in disgust as he stared at the body at his feet. His brother was passed out and drooling on the damn rug in the little sitting room just off the ballroom.
An appalling thought struck him. His brother wasn't a good drunk. Too mouthy by far, too rude to all and sundry. Richard had been occupied in the stables for hours so he'd have had plenty of time to cause trouble. "Oh God. Do I need to make amends to anyone in particular tonight?"
The butler winced. "He spoke rudely to several ladies until Lady Heathcote intervened. Lady Jillian has smoothed the other ladies' ruffled feathers, I believe, but an apology from your brother might be in order."
Richard grimaced. Esme took offense quite easily at the things he said, and he didn't want to imagine her mood over his brother's rudeness. An upset would affect his hopes for the rest of the house party. "I'll speak to Lady Heathcote and the others myself tonight."
"Actually, when the countess left your brother, she didn't seem particularly offended by events," Oswin remarked. "She merely asked me to lock him in as he was and to wait for your instructions. She returned to the other guests quite happily, I feel. When I saw her last, she was laughing with Mr. Hammond on the terrace."
Unease filled him at the idea of Esme and Hammond laughing together while he'd been stuck out in the stables attending to urgent estate business. He studied his brother and then glanced at the clock. Almost two in the morning, and the house party was only just winding down for the night. The whole time he'd been in the stables, he'd never imagined Esme dealing with his brother, or smiling and laughing with someone else. He'd foolishly hoped she'd been wondering where he'd gone.
His brother snored on, uncaring for the difficult position he was likely in. Richard wasn't sure what was going on with Avery, but he'd have to get him upstairs and into his bed. He hoped there wasn't a woman waiting because Avery was in no condition to make any sort of good impression. "Do you know who is sharing my brother's bed at the moment?"
The butler nervously glanced around. "I don't believe there is anyone this year."
Richard frowned. "Surely Lady Ames?"
"Not currently." The butler dug a finger beneath his neckcloth, clearly discomforted by the topic of their discussion.
It was accepted there was often more than one woman sneaking from his brother's bed at any time of the day or night. Until now, Richard had never needed to discuss those peculiarities with his butler.
Oswin glanced around quickly to make sure they were still alone before continuing, "The maids say Lady Ames broke with Lord Avery. Quite upsetting for her. She's kept to her room tonight, I am afraid to say."
Oswin glared at the body on the floor.
"And my brother has been drunk since yesterday morning." Richard groaned. "A lover's tiff. That explains everything. When will he learn not to lead women on?"
Oswin cleared his throat. "Might I ask about your interest in Lady Heathcote?"
"My interest?" He glanced at his butler in surprise. "Of what concern is that to you? You are not her father."
"No, but I felt compelled to inquire, given the care she has shown to me during her many visits." Oswin lifted his chin proudly. "She is a good woman and her influence on you has been noted and approved of by your staff and tenants."
"I see." Richard blinked, blindsided by the fact that his staff had made so much of Esme's visits. "That is unexpected."
"The nursery has been empty for a long time," Oswin said, smiling awkwardly. "It would be nice to hear the voices of the very young at all hours of the day and night and to have a woman of her strength of character once more commanding us. We all agree on this."
The entire staff had placed their stamp of approval on Esme? It was too much. He scowled at the man he'd depended on to run his home. "My relationships are none of your business, not anyone's, Oswin, and you'd do well to remember your position here. Anyone can be replaced."
"Yes, my lord." However, Oswin looked anything but abashed. The man had meant his endorsement of Esme.
Richard turned away, shaking his head. He nudged his brother with the toe of his boot and got no response that could be considered intelligent communication. "I'll need some help getting him upstairs."
A throat cleared. Richard glanced over his shoulder and spotted Mr. Hammond hovering at the door. He breathed a sigh of relief to discover Esme was not at his side. She would not appreciate the conversation he'd just had.
Hammond drew close, scowling at the man on the floor. "I wouldn't trust me not to drop him on his head, but I am willing to take his feet."
"Thank you." Richard appreciated the offer of help. "I've always found that a tempting thought too, to drop him. He never could handle the drink."
With Hammond's help, and a great deal of grunting, they got him up the back stairs and threw him awkwardly onto his bed, still in the clothes he was wearing. He stank of gin, but there was no help for it; he would have to stay that way. Avery slept through it all, heavy as stone and just as sensible. It wasn't worth anyone's time to try to undress him.
Hammond lingered a moment. "Stupid fool. You'll never know what you had in her."
Richard frowned at his remark, uncertain if the man was speaking to him or his inebriated brother. "I beg your pardon?"
The other man smiled tightly. "Never mind. Good night, one and all."
He strolled out and disappeared quickly down the hall. Richard took one last look at his brother then closed the door to the room. Tomorrow he'd talk to Avery and find out what was going on. But tonight he needed to ensure Esme wasn't upset.
He hurried to her bedchamber and tapped lightly on her door, hoping Hammond hadn't left Windermere only to join her there.
"It's open," she called out.
He glanced around as soon as he stepped through, relieved to find she had not taken up with Hammond or anyone else in his absence. She was curled up in the window seat, an empty sherry glass in hand, her bare toes peeking from beneath her robe-covered nightgown. She was entirely too kissable like this and he hurried across the room.
She screwed up her nose when he got close. "Oh, it's you."
His hopes died. Had she not wanted him to come? "Were you expecting someone else?"
"No, but I thought to catch sight of you sooner than this."
He slowly approached her. "One of the horses foaled tonight. I had wanted to be there."
Her expression cleared. "A filly or colt?"
"Colt."
She turned to glance out the open window. "Congratulations."
Esme's greeting was not terribly warm and his tension grew. "I apologize for my brother's behavior if he has upset you. He seems very drunk."
She leaned her head back on the wall and studied him. "I know. I confess I helped him along until he passed out."
The news was somewhat of a relief. "He was rude to you."
When she said nothing more, Richard quickly squeezed into a spot by her feet. He placed his hand on her calf and softly stroked her limb through the robe. "Is everything all right?"
Her smile was tight. "Of course."
"Are _we_ all right?"
"We?" She shook her head. "Windermere, we've enjoyed a few exciting trysts that have nothing to do with anything else. Your brother cannot affect my enjoyment of you no matter how drunk or uncivilized he becomes."
He shoved her robe aside to touch bare skin. "It's far more than sex between us and you know it." He skimmed his fingers up the back of her leg and higher still, until he could feel the damp heat of her quim.
Her legs parted on a sigh. "Very good sex indeed."
He stared at where his fingers played. Tonight, he wanted Esme in his bed completely naked for what remained of the night. "I hoped you might be amenable to joining me in my room."
She moved her hips closer to his fingers. "You're here. Why stop?"
He frowned. "I really do walk in my sleep."
"Oh." She glanced around swiftly, taking stock of the room with widened eyes. "Entirely too many breakable items?"
"Yes." He swallowed and teased her curls again. "This would be an easy room to hurt myself in. I had hoped to make a night of it. To exhaust us both."
"I would hate to see your blood on the carpets. These are lovely and your brother has already spoiled one chair with gin. A pity, that. You should make him clean up after himself tomorrow, and perhaps he would stop burdening your staff with extra work at a time like this."
He grinned at how no matter what he did to her body, her managing tendencies remained unaffected. No wonder his staff liked her. With Esme, he always knew what she expected.
He held out his hand. "I might just suggest it. It's about time he took responsibility for his actions and settled down."
After a moment, she slipped her small hand over his and he brought her to her feet. They stood next to each other, barely touching but his pulse sped. He bent to kiss her cheek then guided her toward the door.
In the hall, she faced him. "Did you leave your brother snoring below stairs?"
"Hammond and my butler assisted me getting him into his room. It was a near thing for all of us. We each wanted to drop him on his head, I think. I cannot help but worry about him." He glanced at Esme in time to see a dark expression form on her face. "Esme, do you know what's amiss?"
She waited until they stood outside his bedchamber before answering. "Perhaps."
"Lady Ames is upset." At her obvious surprise, he shrugged. "I don't always wait for you to tell me everything I need to know. My servants advise me of small tidbits of gossip from time to time during the house party. I am sorry. I hope my brother hasn't been an utter beast to her."
"It's more than that."
He blinked. "What else is there?"
Her frown grew. "Another time."
She slipped into his bedchamber and before the door was properly locked, she shed her robe. As was his usual habit, Richard crossed the room and slid the room key under his dressing-room door. "Don't be alarmed. My valet will unlock the chamber at half-five, long before the guests awake. You'll have ample time to return to your room unseen."
He faced Esme and his eyes widened. She was already undressed, slowly stroking her own breast. Her lips curved upward at his shock. She clearly enjoyed surprising him every way she could.
He hardened immediately; the stirring of desire he'd experienced earlier roared into impatience to have her again. Three strides placed him directly before her. He didn't dare touch her but admired the pert breast she caressed. He let his gaze rove over her deliciously pink skin, taking in every dimple, every curve, and growing even more aroused by what he found before him. "You have the figure of a young woman."
The wicked sparkle dimmed from her eyes. "The only advantage of never being blessed with children."
Her mention of children brought out a keen desire to see her body big with child. His. He almost groaned out loud as he imagined a life with Esme as his wife. Could she be happy with him and him alone?
He gently touched her head, slipping his fingertips into her silky hair.
He brushed his lips against hers slowly, keeping their first kiss light and gentle. The delicious scent of Esme's arousal curled around him and before he realized it, Richard had her cuddled into his chest, her fingers digging into his shoulders through his clothes as they held each other close.
He lowered one hand to her shoulder and circled his thumb over the delicate skin of her collarbone, finding her softness compelling. But it was her strength of character he'd grown to adore, and not just the sex that kept him coming back for more.
Against his lower back, Esme's touch was light as a feather, skimming his waist and quickening his heart with the relentless torture. He backed her toward the bed, but stumbled as his trousers slid down his legs and tangled about his knees.
He glanced down in surprise. Esme had undone them and he'd not noticed. The minx.
Her smile was rather smug, so he released her to finish undressing himself. "I take it you want me naked too?"
"I do." Esme circled him slowly, her fingertips skimming his stomach, his hip, gliding over the muscles of his arms and chest, teasing across the swells of his bottom before she stood before him again. "Much better."
Richard caught her up in his arms and held her against him. "Definitely, my sweet."
"Sweet?" She scoffed.
"Would you prefer to be called a feisty wench instead?"
"Oh, yes." She grinned then laughed softly. "I'm no milksop madam."
The smooth slide of her slender body against his increased the urgency to join with her. Instead, he lowered her to the bed, rolled her onto her stomach and kissed both cheeks of her bottom. Next, he whacked the right one.
"Windermere," Esme warned as she twisted around to see him.
To his relief, her expression wasn't outraged but puzzled.
"I have always wanted to do that," he confessed with a laugh.
She raised one brow haughtily. "I assume you mean during our arguments—or was it every time you laid eyes on me?"
He waggled his eyebrows, and traced the faint outline of his fingers on her white skin. "Usually as you walk away from me. I've often wanted to reach out, throw you over my knee, and spank you for making such devastating use of the naughty tongue in your head. You've the devil in you some days, I swear."
Esme flattened herself on the bed, resting her head on her folded arms. "I'm rather restrained compared to some."
"Believe me, you are certainly not." Richard bent and kissed the mark he'd made because his heart truly wasn't interested in punishing her. He just wanted to do everything with her, at least once, to see how she'd react. He was going to store up her every reaction for later use. He intended to have ample material at his disposal when he met with her during the coming months. It was about time he had something to hold over her head.
"You are exciting." He ran his fingers along her sides. "Do you enjoy lovers who spank you?"
"Occasionally."
He pressed a kiss to her waist and whispered, "What other naughty things do you like to do, Esme?"
"Richard, you don't need any suggestions to improve your technique with me."
He grinned at her rare use of his given name. So far she'd only said it, screamed it, during release. He took in her form—arms raised; bare, tempting skin—and rejoiced in his good fortune in being her lover at last. He'd never intended to become intimately involved with Esme but he couldn't imagine denying himself the satisfaction of making love to her.
He slid his fingers around her ankles and widened her legs, posing her so he could glimpse the damp pink lips of her quim.
But if she were standing, she would be the most perfect Hill bride in history.
He swallowed hard at that train of thought. Damn Oswin for suggesting it. He could actually imagine abducting, tethering and fucking Esme in the dark forest according to family tradition. Where he normally grew appalled at the notion, imagining Esme that way filled him with intense desire to take her there tonight. God, he wanted her. Wanted to keep her. Wanted to fuck her all night and every night for the rest of his life.
He hung his head and tried to push the desire away. He couldn't do that to Esme, take her to the woods, not without her permission, or at least without feeling she could accept such a situation.
He turned her over and crawled onto the bed. He set his hands on each side of her shoulders and stared into her eyes. "Have you ever been bound for sex?"
Her smile was amused. "Yes."
"And?"
"It is not something a woman should do without knowing the man very well." She touched his face with just the tips of her fingers. "I like to use my hands."
He quivered as she stroked his throat and then moved on to caress his chest. She teased his nipples, her bottom lip pinched between her teeth. He had to know. "Would you trust me to restrain you, should the opportunity arise during a tryst?"
She slid her hands down his sides. "What do you have in mind, lover?"
He met her gaze hesitantly and saw only amusement reflected in her eyes. "That, I cannot confess. It is a fantasy that I am wary to speak of."
"I would trust you not to hurt me." She smiled then, blindingly bright and honestly. She raised her hands above her head and mimicked being bound for him. "I believe no matter the position, I would always find pleasure with you."
Richard kissed her hard.
All his life he'd been somewhat ashamed of his family's history with women. With the manner in which they took their wives, and the perpetuation of a superstition that promised prosperity and relied on blind faith. It was ridiculous, backward, and utterly wrong in this day of enlightenment. He'd never intended to take a bride that way, but he could share a night like that with Esme for all the right reasons. He knew it bone deep: he wanted to have her, to make her tremble with pleasure until she screamed.
He would not be using her or degrading her. Esme was beautiful and confident in her nudity. She was everything he wanted in a woman.
Her gaze skimmed down his torso and dropped to his erection. He ached for her to touch him. She did, but softly, the tips of her fingers trailing fire up and down his length. She skimmed the head of his cock with her finger, spreading his seed all around.
When she raised her finger to her lips and licked it clean, he growled. "So be it."
He lowered himself to her body and wrapped her arms and legs about him. They kissed, deeply and with all the skill they possessed. Esme dug her fingers into his hips and urged him to enter her.
"You _do_ like me." He filled her, watching her blue eyes widen. "As much as I like you."
"I..." Esme squeezed her eyes shut and drew his head to her shoulder. "Make love to me."
Richard loved her slowly, never allowing the frenzy of their previous couplings to overtake him. The feelings she'd already stirred in him amplified. With her legs wrapped around him, her hands clutching his sides, he felt something shift inside him a little more. He could get used to this feeling. He could get used to her being in his life.
As he drew back to look into her face, Esme shifted to force him up to his knees and rolled to her stomach beneath him. Then she set her arms to the mattress and rose. She presented him with her pert little bottom and Richard reentered her from behind, fighting an urge to grasp her hair and fuck her hard and fast. He'd fought the same inner battle their first night as lovers and almost lost his mind trying to hold back.
Richard liked this position too much. In this pose, Esme was as submissive as he'd likely ever find her. He skimmed a hand along her back, over her shoulder, and then grasped the back of her neck. He squeezed and Esme's body clenched around his cock like a vise. He bit back an oath, fighting to keep control as he rammed into her.
He released her and caught her hips firmly with both hands. Esme shuddered and gasped, rocking back into him relentlessly. He increased his pace until she was moaning to every thrust. He grinned at her incoherent babble. Clearly, he wasn't the only one who enjoyed vigorous sex in this manner. So far he'd yet to discover anything she didn't like. Their passions matched; if only their tempers always did. He would take whatever he could get tonight and be content.
He exhaled sharply.
But before the house party ended, he would take her to the woods. He didn't think he could live without this. Without her passionate response to guide his.
Bowing to the inevitable, he set a steadier pace and glanced down. His cock slid wetly from Esme's tight sheath and he threw more of his body weight against her.
She braced herself with her arms, her head lowered to the mattress and her soft moan filled his ears. "Yes, Richard. Oh yes."
He slammed into her hard and fast, causing those soft moans to grow to desperate sobs. He gave her everything he had, his heart slamming against his ribs, his skin slicking with sweat. Damn, but he could make love to her all night and never tire of it.
Esme squeezed him suddenly, her body clenching tightly around his length as she came. He cried out a moment later as his own release spilled inside her. He fell over her, and they knelt there on his large bed, gasping for every breath.
Slowly, Esme crumpled to the mattress and Richard followed, sprawling half over her. He moved her damp hair away from her eyes. "You are amazing."
She caught his fingers and brought them to her lips. All of a sudden, she started to laugh against them. The soft, earthy chuckle filled his soul with wonder. Esme was happy, and he'd caused her to be.
"What is it?"
"I imagine I will regret saying this as soon as tomorrow, but," she kissed his fingers, "you're not so bad yourself."
High praise indeed, but... "Wait and tell me in a few days' time if you still mean it."
**Chapter Thirteen**
**"I** must say, you've impressed me," Harriet whispered to Esme. They sat alone together in a quiet corner of the drawing room after dinner. "It takes a determined woman to cure that rogue of his wandering eye, and by all accounts, Windermere has turned aside every invitation to dabble elsewhere during the last few days."
A pleasant hum of anticipation coursed through her body at the thought of how she'd spent her time during this particular house party. Esme was well satisfied with her unexpected fling. She'd had Richard as a lover and there seemed no wavering of his attention or a reduction in pleasure. "I've not tamed him."
Despite the thrill of her affair, Windermere was not a man to keep around for long. She reminded herself that their time together as lovers was almost over.
Esme yet again considered their affair. Unfortunately, ending it no longer aligned with her deepest wishes, though end it she must. During those moments after release when Windermere held her in his arms so tightly, their breath slowing in time, their hot bodies cooling, her traitorous thoughts strayed to a future that just might feature more of the man holding her.
A future she couldn't ever take up.
A woman of her years and experience shouldn't delude herself that they had a chance just because he was exciting, funny and rather thoughtful, now she'd spent more time alone with him. He did still annoy her at times, but it had become harder and harder to express her dissatisfaction out loud. She didn't want to spoil their time together. It was his knowing smile, perhaps, or the discreet slide of his hand over her body that silenced her and made her think of carnal pleasures so often.
_Always, with him._
Harriet pursed her lips a moment then leaned close. "Then you won't mind if he meets with another lady?"
The question jarred Esme. She might have a mutually satisfactory arrangement with Windermere for the time being, but there had never been any doubt she'd lose him to another woman sooner rather than later.
A long-term affair with a man like Windermere was as ludicrous as suggesting she would have a child of her own one day. The earl needed a wife and a son more than a barren lover, no matter how thrilling their intimate encounters were.
A wave of sadness ripped through her at the idea of Windermere holding his son and a shadowy wife in his arms. Caught by surprise, she blinked at the distress such a scene caused her. "No, of course not," she managed to choke out. "He must marry to ensure the succession."
"Oh, Esme. You look about to cry." Harriet gently patted her hand. "Are you certain you could bear it?"
She shook her head to sweep away her confusing feelings. "I must not consider any other outcome. You know what I am."
"Barren," Harriet whispered and then caught up her hand. "I wish it weren't so, for I've never seen you as content as you have been this week."
"He has faults." However, there wasn't much wrong with him aside from his high opinion of himself. _That_ hadn't changed with increased intimacy. He was always fishing for compliments.
"Would it surprise you to learn that Hammond has invited me to visit with him next? I believe I will take him up on his generous offer and go there after the party, instead of returning to London."
Esme snapped her attention back to her friend. "I thought you were eager to go home to your son?"
"Alexander doesn't really need me." Harriet sighed. "His uncle likely won't want me there either. He writes often to say my son has blossomed under his care."
Harriet had a difficult relationship with her son and her late husband's family. Her son listened too much to them and disregarded his mother at every opportunity.
"A change of scenery would be nice and Hammond has been going on about his new property so much that I'd like to see the place for myself." Harriet's gaze darted across the room and a frown creased her brow. "Yes, a new view of a different countryside will be just the thing for me."
Esme discreetly turned to see what Harriet peeked at.
Lord Avery Hill had rejoined the party at last. He'd not been seen for several days and Esme had assumed he'd left the estate entirely. Dressed to perfection, the gentleman appeared no worse for wear and was even bowing over the hand of a lady he'd insulted while deep in his cups. Judging by the lady's smile, all had been forgiven between them. "He appears to have recovered his composure."
Harriet lifted her fan and beat it furiously before her face. "He indulged in a fit of childish pique at being refused, nothing deeper."
Esme did not believe that for a moment. "He did seem genuinely upset when we spoke."
Her friend snapped her fan shut. "If he was indeed as distraught as you claim, might the man not have taken steps to keep me in his life? I've seen nothing of him for days and now he's flirting with every woman who comes near him."
As Lord Avery doubled over and laughed heartily at a jest made by another smiling lady, Esme groaned. Perhaps she had been wrong about him after all. "I see your point."
"Now whose point are we discussing? Mine, I hope." Miles Hammond dropped into the space at Harriet's left and caught up her hand. When he kissed the back of it, Harriet blushed and fluttered her fan before her face. The two of them shared a secret smile.
Esme gaped at them. "What are you doing?"
"I rather think that is obvious." Hammond smiled wickedly. "Are you feeling left out of my affections, sweetheart? There's always room for you in my heart."
His sudden switch from friend to the appearance of a hungry swain set her teeth on edge. "Don't be absurd."
Hammond shrugged. "Suit yourself."
He leaned close to Harriet and whispered in her ear. Whatever confidence he shared caused Harriet to laugh softly and lift her hand to his face. "What a delicious idea."
Esme tore her gaze away from the flirting couple, utterly shocked to her core. Harriet and Hammond? Not in a million years could they be right for each other beyond friendship.
Lord Avery Hill hadn't missed the romantic exchange between Harriet and Hammond either. His smile slipped away as he openly stared at the couple. He clenched his fists at his side and Esme closed her eyes. This was worse than attending the theatre.
Hammond chuckled again, pressed a kiss to Harriet's fingers. "Until later, sweetheart."
When he excused himself, Esme pounced on Harriet for an explanation immediately. "What do you think you are doing with him?"
"I am living the life I want to live," Harriet declared hotly.
"Are you sure?" Harriet had wanted a life with Lord Avery Hill. She'd confessed herself in love with the scoundrel. And Harriet never fell hard for anyone. "This new direction into Hammond's arms makes not the smallest amount of sense. You've always been wary of his dark nature. You don't like what he likes."
"Our interest in each other is very real, I assure you." Harriet smiled tightly. "Do not interfere."
"I don't mean to interfere but I do question your judgment. Hammond couldn't be more wrong for you," Esme warned. "I'm afraid you're making a terrible mistake without thinking of the consequences."
"He wants to change. To settle down, and in that our interests align." Harriet held her ground a moment longer then her shoulders sagged. "Must you always have to stick your nose where it's not wanted?"
"I thought I knew what you wanted, so of course I will speak up." She frowned severely. "You don't have the same tender feelings for Hammond as you have for another man who shall remain nameless. You're playing with fire and I don't like to see either of you hurting yourselves when there's no reason to. You're my friends."
Harriet shrugged. "We are not playing, Esme."
"Oh!" she sputtered, and took another quick glance around the room. "Well, a certain idiot is openly staring at you, so I'm not the only one surprised by your changed relationship with Hammond."
"Kettle. Pot. Black." Harriet tossed her head. "Windermere stares at you just as much tonight. Wasn't there a part of you that wanted revenge against Meriwether when he announced his marriage?"
She had indeed. She sank into the chair, embarrassed by her outburst. She had no call to question the way Harriet lived her life when her own love life was just as unconventional.
"I suppose you are right. I'm sorry, my friend. I'm just as guilty of playing with fire as you are." She met Harriet's gaze and her friend nodded, accepting her apology. "He does that a lot. Stares at me, I mean."
But when she glanced across the room to where Richard stood in mixed company, her pulse jumped in the most disconcerting way because he _wasn't_ paying her any attention.
_Look at me, lover_.
Richard turned his head in her direction and heat smoldered in his blue eyes when their gazes met.
The corner of his mouth lifted in a subtle smile of acknowledgement.
"Well, you are exquisite." Harriet laughed softly. "We both deserve to be noticed."
She lowered her eyes quickly, disconcerted by how much she craved Windermere even now. They had met briefly this morning in his study, a fast and furious tongue lashing across his mahogany desk that had sent papers and ornaments crashing to the floor. Heat warmed her cheeks and she could almost feel his breath beating across her skin as he whispered her name in the wake of her release. A release he'd not shared this time. It was the first time he'd ever held back. She owed him and would repay him soon, but she had to get her traitorous libido under control first.
"Ladies." Richard's deep voice interrupted her train of thought, sending tremors of lust tumbling through her limbs. "I trust you have everything you need."
Her pussy quivered as she met his gaze. _Not yet._
"Oh, yes." Harriet glanced between them and then chuckled softly. "But I was telling my dear friend how weary I am this evening. I was about to wish her a good night and retire. Until tomorrow."
Esme watched her friend saunter out, then braced herself. She was not so undone by her desires as to let a man, and everyone else in the room, see she was as close to losing control as it was possible to be.
"Penny for your thoughts," he said softly as he took Harriet's place at her side.
Her thoughts were filled with his gasps and moans and the intensity of his lovemaking. She swallowed down her panic. She did not want to need Richard like this. They had no future together, but as soon as she saw him, her thoughts plotted out how to drag him off to a quiet room and make mad, passionate love. "I was thinking of this morning."
"What a wonderful coincidence. So was I." He studied the clock over the mantelpiece, the tip of his tongue resting on his upper lip momentarily. "Care for another?"
Her pussy clenched in memory of what that tongue could make her feel. "I do owe you for this morning."
"I thought to wait until tonight." The corners of his mouth lifted into a wicked smile as he held her gaze. "I would be so happy if you could indulge me particularly tonight in something a little unusual."
Esme nodded eagerly, though a little frightened by how her feelings had changed for him. A week ago she would have laughed at the idea that she looked forward to being intimate with Windermere. Now, she couldn't wait to strip him of every piece of clothing he possessed. Even a few minutes' wait suddenly seemed an eternity until he touched her. Esme slipped her hand over his thigh and squeezed. "Yes."
"Thank you." He stared at where her hand rested and she jerked it back, astonished by what she'd done in public. They did not ordinarily touch where anyone could see such caresses. Esme preferred discretion in her lovers. In herself too. A hand held to exit a carriage or to dance was far different from groping his thigh in front of his guests. Heated glances were normally the only outward display of emotion she allowed herself in public.
"I think we should slip away sooner than later." He met her gaze and exhaled slowly. "Meet me outside my study on the terrace in five minutes. There are a few things we will need."
"Anything you wish."
"I was hoping you'd say that." When he excused himself after a long interval of silence, Esme was glad to see him go. She was embarrassed, and she hadn't felt this unbalanced for a long time. Had she ever felt this way for a lover?
Esme slowed her breathing deliberately in an attempt to regain control. She had a few minutes to wait and then she could escape into his arms. She frowned. Since when had she ever fallen apart when a man so much as suggested a tryst? She did not need men in her life. She chose to have them for the fun of it. Making love to Richard was exciting but all too addictive. To her chagrin, they were as well-matched out of bed as in it, of late.
There hadn't been anything to disagree about since their first night together, actually.
As the clock reached the appointed moment, Esme all but flew out of her seat, hurried along to his study, and let herself into the poorly lit room. Richard stood in the open doorway, lantern and burlap sack stacked at his feet. He picked up a cloak—her own, she discovered—and dressed her in it. "The woods can be cold. I hope you don't mind riding at night."
"No, of course not," she whispered, picking up on the tension in his voice.
He took her hand in his and before she could draw back, he bound her wrists together firmly. She tested her bonds. "I see the reason behind your questions now."
He led her toward the stables by the dangling end of rope without a word and when they reached the dark structure, the unlikely figure of Oswin holding two saddled horses appeared from the shadows.
The butler bowed formally. "Good evening, my lady."
If the butler thought it odd she was bound and being led around, he said and did nothing to suggest it. "Oswin."
Oswin held the bridle of her mare as Windermere lifted her into the sidesaddle and made sure she was secure in her stirrups. He gave her the reins to hold, and then mounted his own horse, settling his sack on his lap before he took her horse's bridle from Oswin.
The butler backed away, taking the lantern with him.
"I'll lead you," Richard insisted.
Esme sighed. "I can ride on my own, even with my hands bound like this."
"Would you rather be put over my lap on the saddle?"
She gaped. "Windermere, what's got into you?"
"You...and there's only one thing left to do about you." He shook his head. "No more talking until we're away from the house."
**Chapter Fourteen**
**R** ichard rode directly into the woodland enclosure and dismounted his horse, running through the litany for the night ahead. He could still feel the touch of Esme's hand on his thigh from the drawing room. He was ready for her, so hard that riding had been a painful experience he never wanted to repeat.
He turned his attention to the woman he'd abducted. So far, she'd barely complained about his silence and treatment. He expected her to have a lot to say soon when he threw her over his shoulder for the climb up the mount.
She didn't appear to like being helpless.
Neither did he like to make her so, but if he was going to do this, he had to do it all more or less properly.
He approached her, grasped her about the waist and deposited her outside the gate of the enclosure with perfunctory care. The histories expected him to bend his bride to his will. He wanted to kiss Esme witless instead.
He tended their horses with brisk efficiency and then faced her. Esme, however, had wandered away, staring up at the dark canopy overhead, her demeanor calm and unruffled by his behavior. Richard gritted his teeth for the next part and pursued her. He hoisted her over his shoulder before she realized his intent. Her shriek of shock echoed in the night. "Fight me all you want but it must be this way."
He grappled with the sack while she struggled to regain her freedom.
"This is ridiculous," she complained.
He was not supposed to react to pleas for mercy. He was supposed to be a bloody tyrant about this abduction and force her to go with him by any means.
He gritted his teeth against softening. He'd decided to give the family tradition this one chance to be proven false. He might never get another chance for months, so it was tonight or never.
He made the trek uphill as best he could, her complaints ringing in his ears and turning them pink. By the time she'd begun repeating herself, he'd reached the high lookout, a break in the woodland that afforded the best views for miles around. The place was lit by moonlight almost as clearly as it would be on a summer's morning.
Three large stones had been placed around the base of a tree that had once thrived, although it now resembled a weathered stump. Roughly six feet from the base, an iron spike had been hammered into the wood.
And that was where he took Esme and secured her so she couldn't get away.
She blew out a breath, moving her fallen hair from her eyes. "I could have walked," she told him, her tone full of sarcasm.
He couldn't have her too angry, so he gently smoothed her hair back from her face until it was neat. "Then where would be the fun in seeing me sweat?"
Her gaze raked him. "Why are you doing this?"
"Because I want to and because I must." He kissed her, cupping her face, and devoured the mouth that had just flayed his manhood, his honor, and his character on the long walk up the mountainside.
He stood back eventually, leaving her panting and still bound. Her breasts rose and fell rapidly and he bared them to the evening air, grateful for the front fastenings on her gown. Her nipples hardened as the cooler air hit them and he played with one. "You are so beautiful. Everything I've ever wanted in a woman."
A soft gasp left her lips as he tugged harder.
"Even when you're angry with me, I can still affect you." He bent his head and licked her nipple before taking the tip into his mouth and sucking for a while.
Esme whimpered when he eased back and he blew over the tip lightly, torturing her. He could do that as long as she enjoyed it. She thrashed against her restraints, no doubt seeking to escape her bonds to hold his head to her breast, the way she'd shown him she preferred in the past days.
He turned away to rummage through the sack instead. A bottle of brandy, a fine glass wrapped in silk, a phallus made in the image of his manhood, and a soft wool blanket to wrap her in afterward.
Next, he stripped. He removed everything he wore and laid it aside in a neat pile, everything except the ring bearing the family crest that always graced his left hand.
He poured the brandy and took the glass to her. "Drink."
To his surprise, she obeyed, her expression full of questions. He refilled the glass and turned it so he'd place his lips exactly where hers had been. He downed the lot quickly, hating the taste. However, the ritual demanded this particular elixir as the accompaniment.
Not for the first time did he wonder if his ancestors had drugged all their brides and themselves to get through this night.
He wouldn't do that to Esme. One glass for each of them would have to be sufficient.
He put the glass aside and lifted his cravat from his pile of clothes. He ran the fabric through his hands and then tied the stark white linen around her tiny waist. He stood back and paced the circle, first clockwise, then reversed direction, as if turning back time.
He felt utterly ridiculous.
When he was directly behind her back, he danced a few stumbling steps of a country dance.
"Richard," Esme called, her tone full of exasperation. "Come back and kiss me."
He rushed to her, eager to answer her summons. Richard pressed his hips against hers. "My darling Esme."
He kissed her, dragged her gown up her slim legs so they were skin to skin from the waist down. Despite being bound and essentially his prisoner, Esme made her desire abundantly clear in the way she pressed her body to his.
He caught one of her thighs and she jumped to wrap her legs about his waist, arms still secured above her head. She flexed her body so her quim rubbed against his hard cock. He caught her legs and shifted each so her feet rested on rocks placed to either side of the stump. Positioned in this manner she was entirely open to him, entirely helpless.
She braced herself against the tree trunk and took in what he'd done. Her gown rested on the top of her thighs and he nudged it higher still, baring her quim.
"Oh, my word," she whispered. "Am I your slave tonight?"
He closed his eyes, feeling horrible and helpless, but unable to stop now he'd begun. He'd never expected that taking Esme like this would arouse him as much as it did. He was desperate for her. "You belong to me."
Without the barrier of clothing or position in his path, he slid into her body effortlessly. A few thrusts and he was properly seated. Esme moaned darkly and sought his mouth for a kiss.
In this, Esme's satisfaction was supposed to come second, but he kissed and touched her, aroused her, and did all he could to make her respond as he claimed her.
Beneath his grunts, Esme sighed and moaned, unwittingly encouraging him to continue. He held her face, fingers framing her jaw and holding her steady. Met her gaze as he ground into her. She came apart, shrieking his name like the wildest of woodland creatures.
She sagged in her bonds, helpless but sated. "This was your fantasy all along. To have your way with me like this. That's why you asked what I'd allow."
He slowed his thrusts. "It hadn't been at first, but..."
"I do trust you. I want you to come inside me like this," she whispered. "I want you to have your darkest desires come true with me."
His body flooded with heat at her words. His desire to keep Esme in his life became a desperate, palpable need. He wanted to marry her. He pledged his heart and his soul to her keeping, and with his body, begged her own in return. He fucked her harder than ever and when he exploded so powerfully, he almost couldn't breathe for the thrill of it.
He told her the truth then. A truth he hadn't realized until that moment he was so urgent to share.
"I love you, Esme. I couldn't ever want anyone the way I want you. Not like this. You're mine and I'm yours. Completely and forever."
**Chapter Fifteen**
**E** sme quickly shook off the happiness Richard's declaration of love evoked. No man meant such a declaration so soon after climax, and she knew that all too well. However, her heart wasn't listening to her head at the moment and rejoiced instead that Richard felt so much for her as to want to say such sweet and tender things.
She flexed her fingers above her head where they were still tied. Her arms were beginning to ache from the position. The dead tree she was tied to was not at all comfortable either for her back. She hoped Richard's fantasy, at least the part involving tying her up, was done with for tonight and he'd soon cut her down. She wanted to touch him so much.
When she'd taken a moment to really look at and listen to Richard when they'd first arrived at the clearing, she'd understood he'd been very nervous about sharing so intimate a fantasy.
It did seem a little out of character for him, but she hadn't truly minded being bound. Restrained as if she were a barbarian's conquest—stolen in the night as if she had no choice. She'd had a choice, and had chosen Richard. She did trust him and his fevered lovemaking had given her great pleasure in return.
As he withdrew, she exhaled and placed her feet back on the ground. "Ah, that's better."
He unhooked her hands and before she could take even one step, he lifted her into his arms and held her against his heaving chest as if he was still her master. "Did I hurt you?"
"Of course not. I would have certainly mentioned any real discomfort." He laid her out on a large rock that was still warm from the sun and lifted her skirts again. He untied the cravat around her waist and set it aside. She pushed at him, eager to be free to run her hands over his body too. "Let me up."
"I'm not done yet," he warned.
"Truly?" She frowned at him but reclined again. His cock was soft and he did not appear even a little amorous. "What else do you want to share with me?"
He held up a dildo and kissed the tip. "This."
He spread her legs and gently invaded her with it. After so much pleasure from his cock, she shrank away from the chilled intrusion. "That's cold."
"Forgive me." Next he bound her legs together at her knees with his cravat and pulled her skirt over her legs. He flung the blanket over her and stood back. "That's it."
"Richard, I cannot walk like this."
"You're not supposed to walk anywhere just yet." He bit his lip. "We have to stay here a while longer."
She frowned at that. "But we _are_ going back to the manor tonight," she told him.
"Yes, soon."
He left her there, hands bound, knees strapped together, an uncomfortable feeling growing inside her. This was not the man she'd come to know. Richard Hill, Earl of Windermere, was a gentleman to the tips of his perfectly polished hessians. A lion in the bedchamber, but not so perverse as to leave a woman in discomfort.
Or so she had thought.
He returned half dressed and watched her silently as he buttoned himself up in his clothing. There was a peace about him she'd never seen before. He appeared confident. Content. Pleased, when she was so helpless.
She licked her lips to wet them. "How long have you wanted to bring a woman here?"
"Not long," he murmured, toying with a lock of her hair that spread out toward him. He sifted the strands between his fingers then carefully returned it to the rock she lay upon.
"Why tonight?"
He glanced up at the sky. "The moon is full and the house party is a success. I was confident everyone would be too busy to wonder where we had gone. It was the right time."
"Oswin knows."
"And knows enough to keep his mouth shut about what we're doing here," he insisted.
She glanced around them as fear crept into her thoughts. She twisted a little to take in her surroundings again. It was very strange to have a dildo inside her body without using it for pleasure, and they were very alone. "Will I be murdered next?"
"That is what I thought you'd ask." He chuckled softly then stroked her cheek with the backs of his fingers. "I'd never hurt you. I wouldn't dare, and I don't want to except for the occasional spanking you might require."
That wasn't reassuring. She lifted her head and struggled. "I want to leave."
"As you wish." He sighed and flung the blanket away, unbound her knees and then carefully removed the dildo. He flung it away, into the woods lying dark below them. "Is that better?"
"Yes." She eased off the rock and put a little distance between them, quickly buttoning the front of her gown so she was decent again. Her hair she couldn't do much about but it was undoubtedly tangled.
"Oh, you should see your face." He laughed outright then, like a young man in love with his life and everything in it. "Come, come, Esme. Let us walk back to the horses arm in arm as the very good friends we are. I have what I want."
"What is it you wanted?"
His laughter died. "Everything. To make love to you in the most significant way a man in my family could. Without doubt or holding back anything I was feeling."
When he stretched out his hand, she accepted it. Assured he wasn't about to murder her, she drew close to him. He tucked her arm through his and led her toward the path he'd traversed carrying her over his shoulder. She'd been too wrapped up in scolding him to notice the terrain wasn't smooth or even. Far fewer people went to the mount than must go to the woodland glade. She shivered, understanding the effort the trek must have required and impressed he'd carried her up the mount in the first place to save her the hardship.
He tugged her along until they reached the horses then helped her mount. He was gentle and sweet and not at all like the man who'd thrown her over his shoulder and carried her off like a barbarian intent on ravishing her.
She'd enjoyed his ravishment, actually.
They rode in silence again but she couldn't stop looking at him. He _was_ different somehow.
His smile widened suddenly and she grew alarmed. "Why are you smiling like that?"
"It seems Oswin couldn't keep his mouth shut after all." He pointed ahead. Lanterns had been placed at regular intervals from the stables leading toward the manor house. "You're being directed home, my lady."
"Home?"
He nodded. "In my family, every woman a Hill takes into the woods, to the high clearing, becomes a bride that morning. It's tradition to take the woman you want to marry to the wishing tree to ask for the blessing of offspring and prosperity."
Her stomach dropped. "I'm not a bride. You are certainly not my husband."
"Not in a legal sense, no." He smiled without concern for her protest. "Not yet."
"Not ever." She reined in her mount. "I will not marry you."
"You already did. I gave you my heart, my soul, in those woods. I am in love with you, and that only happens once for a Hill."
"You did not mean it," she protested. "No man or woman ever means what is said during intimacy. You cannot love me."
"I do. I will always love you."
She peered ahead and noticed shapes moving in the shadows. "Dear God, there's a welcoming party?"
"You were a popular choice. My butler even pulled me aside and demanded to know what my intentions were. I didn't have any that day, but I do now."
Esme covered her face. "Stop this nonsense, Richard. I cannot and will not marry you."
"Why wouldn't you?"
She kicked her feet clear of the stirrup and dismounted recklessly without help. She staggered away from the horses, horrified that he'd not understood. Darkness was better than facing up to an impossible situation.
Richard must have dismounted too because he grabbed her and tried to embrace her. "You love me. I know you must."
"Love doesn't matter."
"It matters to me."
Esme pushed hard against him, seeking solitude. "Love won't matter one bit when I deny you the son you need. I told you I was barren. I cannot have children."
"We will in time," he protested. "Esme, I cannot marry someone else and feel this way about you."
"Of course you can, and you must. Your happiness matters a great deal to me, but it will matter even more to the people who look up to you, depend on you. You need a son and I cannot ever give you one. I will not go through a second marriage like that."
"Surely—" he started but when she held up her hand for silence, he held his tongue.
"You and your people deserve better than me. I am, as my husband so succinctly put it once, as barren as a brick. I will never give you a child, much less an heir."
He was silent a moment.
"I know my limitations, Richard. I have never conceived. Not once with the dozen or so lovers I've had since becoming a widow." She held out her arms. "Why do you think my affairs are short-lived? They must be. Too much is riding on your succession for me to be so selfish as to allow this. I will not permit the title to pass to Adrian Hill's offspring if I can help it. You must marry someone else and have a child with them."
"I had hoped you were simply trying to reassure me you had no expectations beyond being my lover." He dragged her into his arms. "I'd thought you put on a brave face so you couldn't be hurt if we fought again."
"The bravest face I possessed to hide my disappointment in _myself_. I told you our first night together that there was no need to worry because it's the painful truth."
"But I need you," he whispered as he kissed her brow. "I couldn't have gone through with the ritual with anyone but you. I've never wanted to do that to any woman."
She stilled. "Ah, so it was not your fantasy but a duty."
"Yes," he hissed. "I like your hands on me too."
"Then you must remember your duty to your family. Find yourself another woman to wed. I am sure she will easily love you." Her voice caught on the last word and she shook her head as she discovered her claim was all too real. She had fallen in love with him, and now she had even more incentive to give him up than ever. "We had a wonderful dalliance and it must end."
"No. We can be together until I find a bride."
"And risk hurting yourself even more." She evaded his embrace. "Be reasonable."
He took a step in her direction and teased her jaw with the tips of his fingers. "Tell me I didn't imagine how good it was between us?"
She wanted to turn into his embrace for comfort, but it would be a mistake and might make him think there was hope.
"It was good." She smiled that he could still need her reassurance. "You are the best lover I've ever had."
He bit his lip. "So the sex is all that counts with you."
"Telling you I fell in love with you too would change nothing." She stroked her fingers down his face one last time. "Be at peace, Richard, and take my dreams with you. I'll hope they come true for you soon."
"Don't go," he whispered when she eased back.
"I have no choice. I can't stay." She strode toward the house but without intending to encounter any of the Hill family servants who sought to welcome the new _bride_ home. Her heart ached as she quickened her steps, almost running away from Richard and all he'd blindly offered. As she went, she wiped away the tears streaming down her cheeks. She'd marry Richard in a heartbeat if only she wouldn't ruin his life in the process.
Desperate for comfort from one who would understand her pain, she made her way to Harriet's bedchamber and knocked on the heavy door. "It's Esme."
The door opened quickly and instead of Harriet, she faced Miles Hammond in a state of undress. She forced her emotions away, along with her shock at seeing him there after midnight.
He blinked. "Sweetheart?"
"I need Harriet," she whispered.
He ushered her inside and closed the door behind them.
Harriet was sitting up in bed in a demure nightgown. She spared Esme a fleeting glance and then grimaced. "Windermere took _you_ to that damn wishing tree too, didn't he? Perverse bastard."
Harriet threw herself out of bed and rushed to embrace her.
"He's not like Avery," Esme promised. She winced at the broken quality of her voice and straightened her spine. "He imagines us married."
Harriet spat out a bark of bitter laughter. "Marriage? That's new. They never talk about it openly but it's actually a fertility ritual dating back hundreds of years, and particular to this locality." Harriet urged her toward a chair and wiped her tears away with the cuff of her sleeve. "Carolyn Hill explained the significance to me recently. Avery uses it for his own twisted purpose, but I had thought Windermere was above such nonsense."
"Wasted on me," Esme sobbed. "There's nothing that can make me pregnant. I've long given up on that dream."
"Oh my darling," Harriet whispered as she rocked Esme like a babe in arms. "I am so sorry he's hurt you with this. What can I do?"
Esme sniffed. He'd hurt them both, and there was nothing to be done but make a graceful exit as soon as possible. "Can I go with you both when you leave? I need to get away from this place."
"Of course you can," Hammond agreed, pouring drinks at the sideboard. "We can even leave today; the sun will be up in a few hours. I was thinking another month spent in the country would be just the shot before winter sets in. We can make merry together and forget these blasted Hills ever existed."
"I'd like that," Esme whispered, but feared forgetting Richard might just be impossible. She'd like to try though. "I don't think I can face him or London for a long time."
Hammond handed her a glass. "I'll make you both smile again, I swear it."
**Chapter Sixteen**
**T** he drawing room chatter was strangely subdued as Richard rejoined his guests for a late breakfast or early luncheon. Unfortunately, he'd overslept by a wide margin, so this was his first chance to begin his campaign to get back into Esme's good graces. Having her smile warmly in his direction again was a priority.
Getting her to talk about their marriage was next, and her insistence she couldn't give him children. She'd never delivered a child, but for the first time last night, he'd understood how that lack hurt her. What could it hurt to try for a child together? He was certainly interested in bedding her as often as she'd allow to make a good try of it.
He glanced around but couldn't see her at first, so he helped himself to a plate of cakes and chose to mingle with the ladies who'd gathered around Mrs. Hill, offering well-meaning advice on her impending motherhood. As one, they brushed aside his attempts to converse. Even Carolyn wouldn't look at him, and he puzzled over that new development.
Esme would tell him what he'd missed and undoubtedly tell him what to do. He pursed his lips to hide a smile. God, she never could stop, and he didn't ever want her to.
What he'd found in Esme was the one person who made him happier than he'd ever been—both in bed and out of it. They had always been honest, especially about the things that mattered. If she believed herself barren he'd accept it, but that did not mean they couldn't be together as man and wife.
He took a tour of the room again, noticing by the end that Esme was not present. It was hard to miss that most guests appeared openly hostile toward him and didn't want to talk even while they sipped his best champagne. In desperation for a friendly face, he found Jillian and pulled her aside. "What's going on?"
"Well, nothing beyond the usual. The ladies have had a fine day. We set up our easels in the conservatory and painted each other. Some of the results were amusing."
"Some?" He couldn't wait to see Esme's efforts. She wasn't particularly fond of creating art as far as he could tell, but she never let Jillian down and always encouraged the other women to participate. "Can I look forward to a display?"
"Later today, I think, before the guests start departing tomorrow. Oswin will set everything up in the library. I had considered the long gallery as a venue, but perhaps it wouldn't be very kind to force comparisons between us and the greater painters hung there."
"A wise decision." He bit his lip as Lord Hogan glanced their way. Esme's warning prodded his memory. In the excitement of his budding relationship with her, he'd neglected to pass along her message to his sister, but he could remedy that now, hoping he wasn't about to blunder. "There's something I've been meaning to talk to you about. Might we speak in private?"
Jillian took his arm and he led her farther away to a quiet corner.
He didn't waste any time. "I've been meaning to speak to you about your future."
"Oh?"
She appeared startled and he rushed on so she didn't get the wrong idea. "I wanted to be sure you knew I am happy to have you home and reassure you there is no reason to rush into another relationship if you'd rather not."
His sister's shoulders relaxed. "For a moment, I wasn't sure you were going to give me your blessing or marching orders."
He glanced toward Hogan again. "The gentleman's not spoken to me. Is it serious between you two?"
She smiled softly. "No. Well, maybe once I entertained a notion, but my eyes are opened now."
Richard folded his arms across his chest. "Esme spoke to you already, didn't she? I told her I would do it. That woman will be the death of me."
"Not soon enough, brother dear." Jillian smiled a touch sadly. "But yes, Esme spoke to me several days ago. Said I should make up my own mind and I have taken my time forming my own opinions during the house party."
Curiosity burned when she didn't elaborate immediately. "And?"
"She was right about Lord Hogan." Jillian fidgeted. "There is nothing he likes more than an idea he put forward himself, and he subtly ridicules others so I might think less of them. He hasn't a kind word to say about Esme and has already suggested we shouldn't even be friends, if you can imagine. She warned me he'd try to subvert my friendships first, and she was right."
Richard's hackles rose. "You keep your friendship with Esme, with anyone you choose."
"I will." Jillian shrugged. "I just hope recent events will not prevent her from being friends with me. I do like her more than any other female acquaintances you've had."
He shuffled his feet. He shouldn't have delayed seeking out Esme a moment longer than necessary. "I cannot imagine Esme would ever snub you."
"She might." Jillian stared at him, eyes narrowing. "Did you have to be such a scoundrel with no thought to her feelings?"
Richard glanced around quickly, realizing everyone must know they'd argued last night. How they could know he wasn't sure but he would fix everything soon. "You know how hot her temper can be. It's just a misunderstanding. By tomorrow all will be settled between us."
"That would be difficult."
"Esme can be entirely sensible when she wants to be. I can be very persuasive."
Jillian gripped his arm and stared into his face until he grew uncomfortable. "How did I get so unlucky in my brothers? You are both so entirely witless it breaks my heart. Someone should have told you by now."
"Don't ever lump me in with Avery's follies." Richard scowled. "If he'd just settle on one woman, he wouldn't be so bloody miserable. I'll talk Esme round, never fear."
Jillian shook her head. "Since I've just come from having the same conversation with Avery, I guess I'll have to be as blunt with you too."
He glanced around the room quickly. "Do spit it out. I need to speak with her."
Jillian scowled. "She left this morning in Mr. Hammond's carriage at daybreak, without a word of farewell to anyone but Oswin. The poor man cried, I think."
He searched for one particular face among the far crowd. He swallowed when he didn't find Hammond seated among his guests. "Alone?"
His sister huffed. "Lady Ames and Mr. Hammond went with her. Avery didn't take the news well. I would suggest you don't venture into the morning room until repairs have been made."
He swallowed hard. "She wouldn't have left without saying goodbye," he insisted. But a hollow feeling filled the pit of his stomach. She had insisted she had to leave last night. He'd not believed her then. He'd certainly not imagined she'd leave without speaking to him again.
Richard noticed again all the disappointed looks aimed his way. "That explains my reception this afternoon."
He took a drink from a servant and sipped, trying to accept he'd already lost Esme.
"Everyone—and I mean everyone but Avery and Lord Hogan—liked Esme for you. I have heard more whispers about the two of you making a match than I have of anyone this Season. Why did you have to follow family tradition and ruin everything by taking her out into the woods?"
"She wasn't upset about the abduction," he told her. "But what it signified for our relationship alarmed her."
"It is customary in our society to allow a woman beyond her first season the luxury of some choice in whom she weds, rather than forcing it on her."
"It wasn't my presumption exactly that upset her." It was worse. She couldn't give him a son and denied them both any happiness. Without a child, she believed there was no reason to wed him.
"Doubly a fool." With that, Jillian rushed off, leaving Richard uncertain of what to do next. He dropped the glass to a nearby table.
A footman with a tray of drinks stared, eyeing him warily from a distance. Richard waved him over. Now that was what he really needed while he formulated a plan to get her back. No doubt she was miles away by now so he had better come up with a compelling reason why the succession wasn't important before he followed.
There was no point rushing Esme to change her mind anyway. He'd never win her over that way. She believed they had no future and only time would prove his devotion sincere. "Another whiskey, Pip. Better make it a dozen. I have some scheming to do."
The footman came close, but then simply shoved the tray toward him. "Have the lot."
Too stunned by the servant's surly attitude to speak, he caught the tray before the footman stalked off.
Tucked between the glasses was a folded sheet of paper. He flicked it open one-handed and read.
The paper contained six names, all women. The note was signed with an E and contained a postscript: _don't argue with me_.
He grimaced at her obvious intent—marry one of them, but not her.
"Damn that stubborn woman!" He crumpled the note and rubbed his eye with the heel of his hand. Even when she wasn't here she drove him crazy. How could he choose anyone else after Esme had made him love her?
He'd have to prove her wrong about those other women first and then he'd claim his proper bride at last.
**Chapter Seventeen**
_Three months later..._
**E** sme's knees would have given out had she been standing instead of lying flat on her back while a doctor examined her nether regions. Her hands began to shake and she clutched the sheet beneath her tightly to hide her reaction. "I cannot be," she whispered.
Her doctor, a stranger Mr. Hammond had brought to see her against her better judgment, regarded her over his spectacles as he sank into the chair beside her bed, where she'd rested for the last few weeks since her sickness had begun. "Assuredly, you are."
Her head spun, her stomach churned anew. Where was her own doctor when she needed him most, to tell her the real truth? Swooning was definitely a possibility and if she'd been standing, she might forgive herself for indulging in such theatrics given the news that had just been delivered to her. "How can I have a child?"
The man gave her an odd look. "The usual way, I imagine."
Her old doctor in London had explained long ago that she'd likely never conceive a child, no matter how many times Heathcote joined her in bed. She had accepted she would never be a mother and gotten over the disappointment as best she could. Her husband had gotten himself a mistress who had then popped out illegitimate babies at an embarrassing rate, proving Esme had been the one at fault all along.
Even after her husband had died, Esme had never so much as worried her lovers could get a child on her. None ever had. _Until now._
She pressed her fingers to her temple as her head throbbed with renewed vigor. "Surely there is room for doubt."
"Of course, but I still expect you to deliver around Easter." He peered at her closely. "Ah, I guess you hadn't hoped for this blessing, after all?"
Hope had been a thing of the very distant past. She'd never dared allow it. "I thought I had simply eaten bad food. I've had the entire kitchen scrubbed out twice just to be sure."
"Well, I am sure your servants will be happy they were blameless in your condition." He smiled warmly. "Try not to panic, my dear. There is no reason you cannot deliver a healthy first child at your age."
"My age?" She rubbed her temple again, fearing her skull would crack from the pounding there. Esme routinely lied about her age. She was a few years older than the doctor knew but she could see no reason to correct him. "That had not even occurred to me as a complication."
"Many women past the first flush of youth go on to deliver safely. Four and thirty is a bit old for a first child, so I would suggest plenty of rest, good food, and a degree of company to distract you from any melancholy that might arise. Surely your husband will understand your fears. You must talk to him, or if you feel yourself unequal to the task, I could speak to him personally perhaps before I go."
"Mr. Hammond is not my husband." She raised her gaze to the doctor's, heart sinking at her worsening predicament. "I am a widow."
His eyes narrowed on her cherry-red dressing robe, and his frown grew even more pronounced. He glanced toward the door. She had been staying here as Mr. Hammond's guest at his new estate for the past three months, and no doubt many assumed them related or involved in some fashion. Had Hammond suggested they were close? She wasn't sure, but the doctor was clearly suspicious of their relationship.
He cleared his throat. "I am very sorry to hear it. Surely you have someone in your life to give you the support you need during your confinement."
She sat up and forced her legs to hold firm so she could stand. She felt as weak as a newborn kitten that had been spun in circles by a terrifying child. "I do thank you for your time."
The doctor caught up his little bag and swept from the room. Esme sank onto the bed again and closed her eyes, still unable to believe she'd be a mother at long last.
Hammond and the doctor conversed in low tones then all was silent save for Hammond's heavy footfalls returning to her bedside. She snapped her eyes open and struggled for composure.
He stopped at the nearby chair and gripped the back with both hands. "What did the good doctor say?"
"Rest and bland food can help." She glanced at his pale face and saw true concern. For all his wickedness, Hammond was a good man. He deserved the truth, but Esme couldn't force the words out.
"Nothing more? No hint of why you're retching?"
_Pregnant._ Esme swallowed the panic again. She had to think and she couldn't do that with Hammond hovering over her. Her friend would never understand how utterly shocked she was feeling at that moment. If only Harriet were here, and not gone off with her son. Esme hadn't the faintest idea what to do. She was not prepared for something like this. "Some, but I don't believe him."
Hammond sat beside her as Esme took a few steadying breaths. But there was no getting over the shock of the doctor's suggestion. Dear God, how could she be pregnant at her age? She'd been so afraid she'd been about to die, casting up her accounts morning and night, never able to keep much at all in her stomach. She'd given up dreams of a child so long ago that she was afraid for even a moment to consider the possibility was real.

"Esme, what did he say to you?"
"A young doctor _could_ be wrong."
"Perhaps." Hammond slipped his arm behind her back. "Tell me what he said that he could be wrong about?"
"He said..." Dear God, how hard it was to confess to something she'd given up hope for. She tried again. "He suggested I might, well, it seems as if I am to have a..." Her throat closed and tears filled her eyes. How did women normally describe it? "This is difficult."
Hammond waited patiently without speaking.
"It seems I am with child." Inexplicable joy filled her as soon as the words passed her lips. In a few months, she would hold her son or daughter in her arms. The dream of her youth would become a reality.
"I see," Hammond said slowly. His expression clouded over with remorse. "When are you going to do something about it?"
Richard's face flashed before her eyes. Not the smug, arrogant mask he liked to slip over his features as he moved about society, but the one she'd come to know from hours spent in his arms. The wickedly confident man she'd come to love despite her misgivings, who still featured in her thoughts night and day. Particularly their nights—and the last one they'd spent together in the woods.
She dreaded telling him because she'd been so utterly confident she couldn't give him what he needed most. A son. "I don't know."
"Can you find what you need here?"
Esme jerked around to stare at Hammond, finally understanding the question he was truly asking of her. He was not speaking of her telling the father of her child, but something much more dangerous. "I will not take such a drastic action as to end this, no matter how sick I have become."
He exhaled loudly but then leaned close to kiss her temple. "Thank God for that. For a moment there I assumed you would take the path many widows take given your circumstances."
She knew of women who drank boiled pennyroyal to abort a pregnancy because their situations did not allow them to keep a lover's child. But that was not without considerable risk to their lives. She wouldn't do it, not to herself and not to Richard. However, a woman of her position would face social ruin to be pregnant out of wedlock. She would be shunned. Held up for ridicule. If Richard didn't marry her, if he couldn't because he'd already committed himself to another, their child would be denied its birthright, excluded from the society it should have belonged to.
Nausea assailed her again and she pressed a scented handkerchief to her mouth. "I would never do that," she whispered against the soft muslin cloth.
"Shall I have the house closed up?"
"No." She glanced at Hammond. "Why would you suggest it?"
He smiled smugly. "Well, I imagine you do need to confront a certain earl about your condition sooner rather than later."
She shook her head.
"Come now, I know the child is his," Hammond continued. "Very shoddy of him not to practice restraint, but he will do the proper thing in the end, I've no doubt."
He would but she'd sent him into the arms of other women. She'd demanded he marry someone else. She'd been wrong to deny him. It could already be too late. "Richard will not be pleased with me."
Hammond rocked her and the motion stirred her stomach again. "He should have considered that before he was in your bed."
"He did." A light sweat broke out over her skin as she fought the nausea yet again. "I promised him I couldn't bear a child. I never have before and he believed me. _I_ believed me."
"Nonsense. I saw how you were together when you thought no one was looking. It was only a matter of time. I've said it before and I'll say it again, the pair of you are besotted."
"Besotted?" Obsessed was a better description for her state of mind then and now.
"He's been rather wild since his house party I hear. Attending every party, dancing and drinking to excess. Flirting with any woman under thirty."
Her eyes filled with tears. "He did what I told him to do. I told him to find someone else to love."
"He's not himself but he would be a fool to turn you aside and you know it. You are exactly what he needs in a wife and he does need an heir."
"What if," she swallowed as a new terror filled her, "what if there is a child and I lose it? The doctor suggested women of my age sometimes have difficulties." Gingerly, she placed her hand over her belly, still unable to believe the situation she was in.
Hammond squeezed her gently. "Don't borrow trouble before you need to."
"You don't understand." She wanted to explain her terror but Hammond didn't place enough faith in her fears. "You are correct that he would marry me. He needs an heir. Everyone knows it. But I have never been with child before and I am quite terrified to move. I cannot confess I'm carrying a child only to lose it in the process. I simply couldn't bear it."
"Well, you cannot hide the truth from him forever. Think of your reputation. Think of the child's future. You must be married before the birth."
"I know. I think it best I remain here until I am absolutely sure there is a babe and I am well enough for travel. Look at me." She flung her arms wide and glanced down. Her breasts might be a touch tender, but her stomach beneath her gown was still as flat as ever. There was no outward sign she'd conceived, if her roiling stomach and sharper cheekbones were overlooked. "Do I appear the least bit pregnant? He'll believe I'm trying to trick him like his last lover did."
"Surely not."
"Richard vowed never again to take a woman at her word. We discussed it one night after..." She waved a hand about. "It's preposterous to turn around and suggest he do so with me without evidence. I won't have him doubt me or, worse, laugh at the very suggestion." She could not confess to Richard until she was absolutely certain. She couldn't bear to build up his hopes only to dash them when this, whatever it was, might turn out to be but a mistake.
"You are wrong." Hammond stubbornly shook his head. "No gentleman would ever laugh at any lady should she declare him the father of her child."
"Nevertheless, I will wait until I am certain your doctor knows what he's about." Esme eased back on the bed as yet again a wave of nausea swamped her senses. She was so tired and weary. Casting up her accounts at all hours of the day and night and dreaming of Richard endlessly was utterly draining. She pushed at Hammond's shoulder. "I need to be alone again, but promise me you won't say a word of this to anyone."
"I promise, but what are you going to do? Harriet expects you to return to Town next week."
"I'll write only to her about this." She gagged, and scrambled for the pail beside the bed. When she'd lost her lunch, she lay down with Hammond's assistance. She was still too ill to travel. She wasn't going anywhere in anything that would rock her about. If anyone were to be disappointed, it would be her alone, so she would stay here for the present.
She pressed her head into the pillows as Hammond gently covered her with the comforter and rang for her maid. "I won't return to Town. I will stay here until it's absolutely necessary to leave."
When she was sure, ready to believe it herself, she would seek out Richard. She didn't look forward to the conversation, especially when it might require her to beg him to marry her just to give the child the protection of his name. That wasn't a good way to start any marriage, but given the alternative future ahead, she just might have no choice in the matter.
**Chapter Eighteen**
_Early December..._
**P** repared for battle, Richard stepped from the carriage and strode up the steps of the home Esme had moved to. It had been five months since she'd slipped away from his estate without saying goodbye. The problem for him was, he'd felt like an arse ever since and he'd missed her terribly. He'd been a fool to concede to her demand that he consider anyone else.
He was married in his mind to Esme. Even his cock seemed to think so, for it hadn't risen even in the slightest except in remembrance of her.
And he was sick of pretending he didn't care about her odd behavior. What the devil was she doing rusticating in the countryside? She belonged in the heart of the _ton_ not hiding from it.
News of Esme's avoidance of society hadn't reached him for months after their parting. Society had been abuzz with speculation about her whereabouts, but no one had thought to mention she'd truly disappeared and cut off everyone in her life.
In frustration and concern, he'd come up to London again hoping to find her, only to meet with no success. No one knew where she was; no one had spoken to her since his house party. In desperation, he'd barged his way into her London townhouse and forced a fortune on her cagey butler for news of her location, but only after he'd begged and pleaded and sworn he only had Esme's best interests at heart.
Discovering she'd taken over one of Mr. Hammond's houses in the country had relieved him. At least if she was with Hammond she'd be taken care of.
But there was something in the way the butler had delivered the news that concerned him. He'd asked if Esme would be returning to London soon and the answer had been a long time coming. It was no.
He rapped the knocker soundly and shivered in the chill air while rehearsing, clearly and succinctly, what he most wanted to say. He was here to ask for Esme's hand in bloody marriage again, and she had better accept his offer or there would be hell to pay.
The man at the door was instantly recognizable. The footman who had been in his employ until recently, Pip, took great pains to read the card Richard handed over as if they were strangers, rather than former master and servant. His expression gave little away but given the man's sister was Esme's maid, it meant he was in exactly the right place to find the woman he wanted.
"So this is where you found new employment," he murmured, hoping to soften the man and get inside sooner rather than later.
"Indeed," Pip replied in a tone worthy of Oswin's tutelage.
Pip said nothing more, only bade him wait in the hall while he informed his mistress of Richard's arrival.
While he cooled his heels during the long wait, Richard snooped into the nearby rooms. Knickknacks cluttered every space, and he groaned—Esme's current home was a sleepwalker's nightmare, and he'd been doing a lot of that lately.
"Lady Heathcote will see you now."
He followed the servant through a doorway that had been previously closed and found Esme reclining on a chaise. Dressed in dark-blue velvet, she was bundled up beneath a thick quilt and had so many pillows strewn around her body that only her head, shoulders and arms were visible. She looked deliciously warm and good enough to pounce on. He bowed instead. "Esme."
"Lord Windermere."
She did not stand and when he drew closer with the intention of kissing her cheek, he noted the pallor of her skin and the dark circles beneath her eyes. Hammond was not taking care of her after all, and he was suddenly furious. Good thing he'd come prepared to fight long and hard to get her back.
He sat down across from her. "You look beautiful posed like that."
"Thank you." Her eyes softened. They talked of the weather and his journey until she seemed at a loss for words. "Why are you here?"
"Isn't it obvious?" He frowned. "I reached the end of your list and have come to extend you an invitation to return to Windermere. I feel bad about how things ended between us."
Her brow rose. "The purpose of my leaving was to get out of the way."
"You were not in my way. I wanted you to stay, remember, and you turned me down. You sent me off to seduce other women, which, I have to say, wasn't as pleasant when ordered to it as you must have imagined a man would find it." He'd felt lost, uncertain, while they'd been apart. As much as he'd tried it her way, there had been no one on her list who could ever replace Esme in his affections. The time wasted on others had only revealed how utterly wrong she'd been about who would suit him for a wife. "I was surprised to find you'd given up London for this place."
"I was leaving tomorrow."
Richard's pulse sped up at the idea he might have missed her if he'd delayed another day. "Where were you going next? Back to London, or to join Lady Ames, wherever that might be?"
"I'm not sure now," she murmured, in a voice very unlike her usual forthright tones.
She met his gaze warily. To Richard's eye she seemed uncommonly nervous about seeing him again. He smiled warmly. Ill kept or not, it was very good to be with her once more. He had so much to tell her of the past months she'd been away from society. "Jillian parted ways with Lord Hogan after the party and tells me she has no intention of keeping up the acquaintance."
"I am not sorry to hear it." She winced and glanced down at her hands. "I hope she suffered no harm from the association."
"She seems more herself, I think. The way she was before her marriage." Richard tilted his head to catch her eye. "You told me the truth about Hogan and I didn't act fast enough for your taste. I learned what he was about, the truth of his nature, after the party. If not for your words of warning, my sister might be miserable today."
"It is fortunate I knew of Lord Hogan's past. I would not have Jillian hurt for anything." She began to rise then sank down again. Her expression became strained.
"I should remember you are right about most things in the future and save myself the bother of questioning you."
She swallowed. "Do give your sister my best and thank you for coming so far to see me."
Was she dismissing him? No, she bloody well wasn't. He moved across the room and squeezed onto the chaise beside her legs. "There are still things I need to say to you."
Her gaze grew wary. "Such as?"
Despite the tone of her words, the practiced phrases came rushing back, but he took a deep breath before repeating them. "Esme, what happened between us at the house party haunts me. I have not stopped thinking of you. I enjoyed our time together very much. I came to specifically ask you to give me another chance."
She winced suddenly and her hand flew to her middle, fingers splaying over her belly.
It took a moment for his brain to register that it was not a pillow she clutched to her stomach.
His eyes widened. There was a bump beneath that quilt. One he couldn't dismiss as a product of his imagination. What the devil was she hiding under there?
He reached for the quilt covering her body and slowly drew it back, despite her efforts to keep herself covered. He struggled to breathe. There was nothing beneath the quilt but Esme—and her belly was large, rounded with the evidence of a significant development. One she'd sworn was not possible.
Esme was with child.
She slipped from the chaise and put distance between them. "A gentleman usually asks a lady for permission before exposing them to the discomfort of a cold room."
"Usually I would."
She licked her lips. "I was about to tell you."
He stood but could only stare at her stomach. Lying down as she had been, smothered by a quilt and hidden beneath many pillows, she had concealed her condition upon his arrival. But he _had_ noted the changes in her face. Standing now with sunlight bathing her in a soft glow, however, she revealed quite the bump where her babe rested. She was gloriously pregnant. Quite far enough along that...
He swallowed to ease the sudden dryness of his mouth.
She was quite far enough along that _he_ could be the child's father very easily. He sank down on the chaise as shock took the strength from his legs.
Why hadn't Esme written him? She must have known he'd be overjoyed.
He glanced at her face, discovering her calm and composed now despite his shock. He would do the right thing. Surely she knew his character, and his heart, well enough to understand he would marry her immediately if given half the chance. She was what he'd come for after all.
His mind spun with horror that he might never have known if he'd followed through with her choices. If he'd married anyone on her list, their child would have been born a bastard. That couldn't be allowed to happen.
She exhaled loudly then moved to a sideboard, poured a large amount of spirit into a glass and then thrust it at him. "It does take some getting used to."
Richard drained the whiskey in one gulp. "How?"
Esme laughed, a panicked sound that drove away the shock filling him.
"Sorry," he muttered quickly. "Of course I know how. I meant to say how long before the birth?"
"Some months yet. The doctor believes around Easter."
He glanced at her stomach again and made a quick calculation. He grinned. The child _must_ be his. He stood immediately. Once, he'd been completely taken in by a woman who claimed to be carrying his child. Esme had exposed the lie, but he'd been left feeling a fool.
This time, he intended to know without a shadow of a doubt. Besides he was desperate to touch her and end the awkwardness between them.
Richard stretched forth his fingers and touched the bump. Esme's stomach was hard beneath his fingertips, rather than the softness of a pillow that might have been bound about her waist if she wished to pretend to a pregnancy. He breathed a sigh of relief at the resistance and warmth he encountered.
Esme wasn't the type to pretend anything.
She would be a mother at last and he was unbelievably happy to discover it. He spread his fingers over her thick velvet gown, curving them around her belly, fully intending to explore this change in her body thoroughly, lost in wonder of the new life growing inside her.
Esme backed up a few steps. "It's true."
"Come back here. I'm not done with you yet. A very wise woman once told me I had to look beneath the surface." He followed her retreat, aware he was all but chasing her around the room.
"Are you not needed back in London?"
Stubborn, foolish woman. They made quite the pair. "To find this elusive perfect woman you spoke of? Esme, I came here today to again ask you to marry me."
Her eyes widened and her breath caught.
He peered into her face, seeing at last how uncertain she was. "I _was_ prepared to wait out the banns, but since there is a child involved, I insist we return to London immediately so I can acquire a special license. We will be married before the week is out and then I'll take you home to Windermere, where you belong, so we can spend Christmas there together. Where you are needed and wanted."
She lifted her hand to her temple. Richard pressed his advantage and draped an arm about her shoulders, drawing her near. "Come now, surely you can see the sense in a match between us. The child will have a father, and my name. You can take over the running of the house from Jillian without any difficulty. You've always stepped in when needed in the past and Jillian is very fond of you. She already loves you as a sister."
She broke away. "Is that enough for you?"
Richard stared at her stiff back in irritation. "How about the reason to wed being I am still completely, irrevocably, in love with you?"
Her head turned the slightest amount. "How could you love me still after I sent you away? I ran from you."
"So you did and I'm sure then you believed it the right thing to do. I did what you demanded of me too. I met with Lady Beauchamp and the whole damned list. Wined, dined and discovered I didn't care one whit more for them at the end than at the beginning of our acquaintances. Why didn't you come back to me?"
Her shoulders tensed and then she rubbed her finger over her lip. "I was afraid to believe I was pregnant for a long time, and then I was more afraid I wouldn't keep it. I've been quite unwell, you see. At all hours. Women lose children every day and I didn't dare travel. I was so afraid that if I left here, I'd ruin this one chance I had."
His heart flipped at the tremor of fear behind her words. "Nothing will go wrong, and if it does, I'll bear the disappointment with you."
Esme sniffed.
He drew her into his arms, quite alarmed by her behavior. "I don't give a damn about anything but you. I don't care what you said before or why you left. You are the most vexing, frustrating, exciting woman I've ever known. I love you with all my heart and soul. Marry me. Please, Esme. I don't want to lose you again."
If he had imagined Esme would accept him immediately, he was bound to be disappointed. She didn't utter a word.
He gently set his hands over her shoulders. "I will camp outside your door, follow you everywhere like a pathetic, lovelorn fool. I cannot pretend I don't care for you as much as I do. Berate me, scold me all you like, but I'm here and I'm not leaving without my wife."
Irritated beyond belief by her continued silence, he raised her face to his. Tears streaked down her cheeks as she silently wept.
Damn it, that wasn't the response he'd been hoping for when he'd confessed how she'd changed him. He hadn't meant to upset her. He'd come to win her over and make her his own once and for all.
Richard crushed her against him. "Don't cry."
She burst out in loud, fresh sobs. "I cry at everything."
"Oh," he whispered, uncertain if that boded well or ill for their future. He'd never known her to shed a single tear, real or pretend, until their last night together. It was one of the many things he'd liked about her. She did not resort to sentimental theatrics to get her way. This new Esme, his soon-to-be wife and future mother of his child, would undoubtedly take some getting used to.
As he held her, peace settled over him. He could be reasonable. Wasn't Esme worth any sacrifice? "Well, continue if you like. I'm sure I can grow used to it."
Her sobbing renewed in strength and she buried her face against his neckcloth. She clutched his waistcoat and because it was Esme, and he was in love with the woman, he held her against him tightly and left her to it.
When she quieted, he moved a hand to touch her belly again. The knowledge Esme would bear his child gave him so much satisfaction he couldn't begin to describe it. He'd hoped but had never dared picture this day. A soft laugh left him. "You really were the one, the perfect woman for me all along."
She sniffed. "Well of course I'm the one. Who else could put up with you?"
He kissed the top of her head. He could see a happy life ahead, but not always a peaceful one. "Who else indeed? I promise you will have ample opportunity to test that theory. We will mold our children into strong-minded individuals who will never suffer fools. Our progeny will have every advantage in life and we will love them with our whole hearts, as we love each other, because we will tell them every day just how much of a miracle they are."
She sniffed again and glanced up. Despite looking at him with glassy bright eyes, Esme had never been more beautiful to him. "Why are you so certain?"
"That I love you?" He touched his chest. "I feel it here, every time I think of our days together, every time I recall being told you'd already left my home without a word of farewell and left me with a list of potential wives to consider as replacements. I have ached to hold you against me, talk to you, every moment since you left me. I am so sorry. I should have followed you. If I had, you would never have had to be worried about the babe on your own. I would have reassured you that no matter what happens, I would always be in love with you."
Richard smiled gently. "I might not always make the right decisions for myself, but you'll have to grow accustomed to me knowing you better than you think. The list was a grave mistake."
She laughed a little. "I think so too."
He kissed her then, passionately and with all the longing he'd tried to contain for the past months without her in his life. He held nothing back in showing her how much he'd missed her. He was still as wild for her as their last night together in the woods.
He led her back to the chaise longue and, with her permission this time, lifted her skirts to see her belly. The slender body and flat stomach he'd admired months ago were even more beautiful now rounded. As he stroked her skin, his breath caught when the child beneath his fingers moved as if drawn to his touch. "We will have smart children."
"Please do not get ahead of yourself," she chided. "One child is all I have ever hoped for. Don't expect more than this. Please. I couldn't bear to disappoint you."
"Oh, there will be more, I assure you." The wishing tree had wrought miracles. He leaned close to kiss her belly then let his hands slide lower over her thighs. "The moment I saw you today, I wanted to make love to you. Desperately. We were meant for each other. I've felt this way every time you've left my arms."
Her fingers curled under his chin and she lifted his face to hers, delaying his wish to give her every imaginable pleasure with his tongue and mouth. "And I feel the same for you. It is the most disconcerting feeling. We have been enemies for so long."
He stood and rubbed his nose against hers. "Friendly enemies, clever girl. I have never hated you, even when you were giving me a proper set down."
She laughed then. "I will have to learn to curb my tongue as you must learn to heed my suggestions. Some of them. Most of them."
"I will try if you will. You'll of course share my bed most nights of our marriage." He winced at how pompous that sounded. "I am sure you believe that's another example of my arrogance, but it is going to be an absolute necessity you must accept here and now."
"Why is that?" She frowned.
He pressed his head to hers. "I'm liable to come looking for you in my sleep. You have no idea the difficulties my staff have faced in keeping me contained these past months. Being woken by a bucket of very cold water thrown in my face, in the middle of the night, halfway to the stables because my sleeping self has discovered how to pick a common lock, is not something I care to repeat too often."
"That must have been embarrassing." Her eyes widened. "Do you realize you never so much as twitched in sleep when we shared a bed?"
"Well, I had what I wanted most within reach then. Why would I wander when I had you in my arms?" He brushed his fingers over her cheek, loving her even more than he thought possible. "Now, onward to important matters. Can we make love today? I have no notion of how things must be done with a woman in your condition."
"Much the same, I'm told." She laughed suddenly. "So, there is to be no further discussion of our marriage? You just expect me to comply with your demands?"
Richard left Esme and locked the door for privacy. As he returned, he stripped himself of his jacket, waistcoat, cravat, and pulled his shirt over his head. Hearing Esme's moan of appreciation for the view of his skin made him happier still. He tossed everything aside carelessly and covered Esme as she slithered into a more comfortable position. He kept his weight suspended lest he crush her. "No demands but that you marry me, and as soon as possible please. I have learned my lesson. The only thing I know for certain anymore is that you're the one I want. The rest I leave to you to manage."
Esme burst out laughing and twined her arms about his neck, holding him firmly against a body he ached to feel bare against his once more. "I never intended to be the one you married," she confessed. "But I've not even felt desire for another man since our last night together."
"It's that blasted tree," he complained. "Turns a man into a lapdog and women into willing puppets."
"It wasn't the tree," she whispered. "It was the way you wanted me. I loved that. I love _you_."
He grinned at her words.
"Now for the last piece of the puzzle," she asked. "Did you hate tying me up?"
"Absolutely. I won't ever do it again." He grinned, remembering that night in the woods. "I like your hands on me too. And I'll have you know I have kept to your rules. I've considered myself a married man since your ravishment at the base of the wishing tree. I have been entirely faithful, both in thought and in deed, since our first kiss."
"Just as well." Her lips turned up at the corners. "I won't share you with anyone."
"You will never have to." He bent his head and licked the plump swell of her lower lip before indulging in a leisurely kiss that made them both very restless on the chaise. "You are the only woman I want in my life and my bed. You'd better get used to being adored."
Esme's smile widened, her eyes filling with new tears. But they were tears of joy this time. And of happiness, he hoped. "Oh, I think I can accept that as my due. But you'll have to be very thorough, indeed, to reassure me in my delicate condition."
He framed her face with both hands. His joy was barely contained. Emotion clogged his throat and when he spoke next, his voice was a rough whisper. "Trust me, I have nothing else on my mind, awake or sleeping, other than you. My one and only love, forevermore."
**Epilogue**
**"R**ichard," Esme Hill, Countess of Windermere, bellowed at the top of her lungs. "Where have you taken our son?"
Richard launched to his feet and stared at his wife in shock as she appeared in the library doorway. "Good God, woman, you've just given birth. What are you doing out of bed?"
"I feel fine and the birth was yesterday." She glanced around the room. "So where is he?"
After their first child's rocky beginning, motherhood had come easily to Esme. It was a relief and a little alarming _how_ well, actually. He'd hoped for an heir—just one son to carry on the Hill legacy and prevent the title from passing to his nosey cousin. He'd received far more than he'd bargained for with Esme as his wife.
"Hiding." He sank back down, knowing there were certain things he could argue with Esme about and other things that were not worth drawing breath for. Her health after laboring to deliver their children being one topic she refused to discuss with him. Ever. "I'm not to look for him until the clock strikes the hour."
She set her hands to her hips and glared. "I expect him to be returned to the nursery in the next half hour. He's to take a bath and that's final."
"Yes, my dear." He tilted his head to the side and smiled at his wife. God, she was lovely when she was annoyed with him. Six years married and he was still in love with the most fascinating woman he'd ever met. And desirable. He had weeks of abstinence ahead and then he could make love to her again. He hated waiting. "I love you, clever girl."
She scowled. "I love you too; just don't think seductive smiles like that are enough to appease me today. Your son has a woodsy odor about him from this morning's play outside."
He grinned. "He has to learn everything if he's to run this place one day. Mucking out the stables didn't hurt him. That's what my father did to me too. I turned out all right, didn't I?"
She huffed. "You did, eventually."
When she went on her way, he peeked under the table next to him and into two pairs of soft-blue eyes. "That was close, poppets. Mama didn't know I'd abducted you too, or I'd be in double trouble."
His two-year-old daughter Katie and four-year-old daughter Marie crawled from cover on hands and knees and then climbed onto his lap.
"Baby," the youngest cooed happily.
He smoothed her hair and winced when he found a piece of straw in the blonde locks. No doubt from the game she and Marie had played among the fresh hay. Katie might be in need of a bath too before her mother saw her again, and was that a smudge of earth on Marie's knee? He was going to pay for that later. "Yes, you'll have a little brother to play with soon. Baby Alistair just needs to grow a bit, so we must be quiet together while we wait until he's bigger."
Marie leaned into his chest, but then stretched to pull her sister's thumb from the girl's mouth. "I want to hold him again."
Not a request but a demand. Marie was so like Esme in some respects that he grinned. "I'll see what I can do."
"And me too." Six-year-old Robert poked his head out of the cupboard he'd been hiding in, joining in with the idea. "Since Mama isn't resting anymore, can we go and see our new brother?"
"Can you be quiet as little mice?"
They all nodded.
"Can you not tell Mama about it later?"
They shook their heads and appeared sad about the fact that no matter what they promised, one stern look from their mother had them confessing to anything and everything.
He laughed. "Probably for the best."
A movement at the door caught his eye and when he looked in that direction, he discovered Esme standing there, watching him and their children plot against her. She shook her head, soft-blue eyes brimming with laughter.
"Always one step ahead of me," he complained.
"Someone has to be," she answered back immediately, looking extremely amused.
He stood, taking Katie into his arms and holding out his hand to Marie, and addressed the children. "Never does pay to keep anything from Mama because she'll always find out what we're up to. Come, it's time for your baths and then we'll make Mama take us to see Alistair so we don't all get in trouble, should we wake the baby."
He led them toward Esme, arms and heart full of love. He kissed his wife because he never could tire of showing her how much he adored her. He had a family he cherished to be sure, but only because one woman had made his life complete and had found a compelling enough reason to wed him in the end—love.
_The Distinguished Rogues series will continue with... The Trouble with Love soon._
_While you wait for the next Distinguished Rogue please take a peek at The Wedding Affair, Rebel Hearts, book 1_
**The Wedding Affair**
Chapter One
_April, 1815_
_Newberry Park, Essex_
**_I_** _will be betrothed today._
The familiar refrain brought Lady Sally Ford intense satisfaction as she hurried toward Newberry Park's white drawing room where her future waited to be taken up.
The two footmen flanking the drawing room doors opened them smartly, allowing Sally to make a grand entrance to meet with the earl to whom she intended to give her hand in marriage. She noted the occupants arrayed in the afternoon's final sunrays—her mother and her future mother-in-law.
But no potential groom.
Regardless of the lack of future husband, Sally dropped into a perfect curtsy because she could not afford to make a bad impression. She had spent many additional minutes before her looking glass, making sure her dark hair was perfectly arranged and her lips slightly pinked thanks to a brush of tinted beeswax. She wanted to have kissable lips when she agreed to become a bride.
Sally's mother, the Countess of Templeton, rested with her feet upon a padded stool and a scrap of fine cloth over her brow.
"Good afternoon, Mama. Lady Ellicott."
Mama started upright at the sound of Sally's voice. "Sally, what are you doing here? I thought you would be gone for hours yet."
Sally smiled but did not want to be drawn into a conversation about the estate immediately. "Where else would I want to be but with our important guests?"
Lady Ellicott, her beau's formidable mother, had also been drowsing in a comfortable high-backed chair but smiled somewhat warmly in return. A round woman with a pale face, Lady Ellicott met her gaze and held it a touch longer than Sally found comfortable, no doubt assessing her yet again.
Sally had grown used to the feeling and the scrutiny over the past weeks. She straightened her spine a touch more, determined not to fail to meet the lady's high standards. She felt she had almost won her over to approving the match.
Almost, but Lord Ellicott had not yet asked for her hand.
"My dear girl, how lovely you look today," Lady Ellicott murmured as her son stepped into the room from the terrace. _There he was._ _Adam Belmont, Lord Ellicott_. The man she would give her hand and fortune to if he would but ask. "Is she not the most arresting woman of all, Ellicott?"
"She is a beauty." Ellicott strode across the room, lean and handsome, smiling widely as he approached. He raised her outstretched hands to kiss them, his warm brown eyes dancing with feeling. "Good afternoon, Sally."
She had given him leave to use her given name a week ago, but the familiarity was still something of a shock.
The sound of her first name tumbling from his lips should have excited her happier emotions. It was an intimacy she did not give lightly to anyone outside her family. Sally waited for anything resembling a heated awareness of him to affect her senses. After all the time they had spent together, shouldn't she feel something, at least anticipation for the pleasure of his kiss? When her heart and body failed yet again to stir, she smiled demurely. The marriage was more important than fleeting pleasures anyway. He was her future. "Thank you for the compliment. Have you had a pleasant morning shooting with Uncle George?"
"Yes, quite pleasant."
Her uncle George had lost a foot years ago, but that did not stop him from hunting on the estate several times a week. He was a high stickler though, and she felt confident he would only paint her character in the best light. "I am sure he enjoyed having the company of another man with him."
Ellicott laughed, and his eyes lit up with mirth. "I imagine so. The chatter of a dozen women can be so overwhelming."
Sally smiled, but the jibe hit a little too close to the bone. Newberry Park consisted of wives or spinsters for the most part. "There are only ten Ford women on the estate to amuse three often exacting men. It is an exhausting job indeed keeping that trio and guests in line, I might tell you."
"I do not wonder why you are hardly ever in London. They do not dare let you out of their sight for long because they are afraid of letting you go." Ellicott kissed her hand again and looked deep into her eyes. He doted on her as much as almost-courting couples were allowed within the bounds of propriety in public, and in the brief private moments they had been granted she had found much to admire in him. He was very free with his compliments, and she felt them all sincere.
He needed a bride with a fortune to bolster his estate's finances but was not so overwhelmed with debt as to be considered an out-and-out scoundrel about it. Her dowry and connections were important to him. He was smart enough to keep up his end of a lively conversation, and he was active enough not to allow his figure to run to fat anytime soon.
Overall, a worthy catch for any husband-hunting woman from a good family.
Unfortunately for Sally, her head might say yes to marrying him, but her body and heart remained watching from the shadows. That lack of feeling was probably for the best. If there was any chance of love between them, Sally was convinced it would surface once they were husband and wife and without anyone, such as her nine female relations, watching everything they did and said together.
Finding quiet moments with Lord Ellicott had been a challenge during the week of the Ellicotts' stay. Not one to let a little obstacle such as propriety overset her plans at this late stage of their courtship, she allowed him to take her hand in his and place it upon his arm. "You are too kind."
"Not at all, for it is the honest truth." He led her across the room and stopped, poised equally between their mothers. He cast his eye over them all and smiled. "Beauty runs in the family. In both families."
"True, but a woman's good looks must be cared for as if they were her greatest achievement," Lady Ellicott remarked, casting a stern look in Sally's direction. "I trust you rested in a dark room this morning. It works wonders for the complexion."
"I did hope to, but unfortunately there was a matter that required my urgent attention, so I had no choice but to go out for a little while." A fat fib if ever she had told one. She had wanted to escape the house and find a useful outlet for her nervous energy. Pretending to be demure was difficult.
Lady Ellicott exchanged a glance with her son that hinted at disapproval. In her brother's absence, Sally had involved herself in the running of the estate more than most unmarried women her age usually did. She enjoyed the challenge but made certain to hide her activities from important guests who would disapprove.
She glanced at her mother, seeking a diversion from conversation of Newberry, and saw annoyance in her expression, which she usually did not reveal so openly. Countess Templeton possessed a stubborn disposition and yet disliked disagreements. In her early fifties and mother of six children, five of whom still lived, there were ample signs of what had been considerable beauty in her mother's dear face. Determined that her mother and Lady Ellicott not fall out today of all days, Sally crossed the room and perched at her mother's side.
"Good afternoon, Mama," Sally said warmly, then pitched her voice low as she continued. "There were poachers in the northern field last night, but no sign of them now or the last dozen sheep of that flock. I have had them moved closer to the rest."
"If you think that best," her mother said approvingly.
What Sally did for her family likely amounted to work by an outsider's standards, but she did it with love and pride. Lady Ellicott could not seem to abide the concept of a woman employed and had already expressed her disapproval in many subtle ways. Today Sally was not so lucky.
"Surely Lord George should be making these decisions, or your father or grandfather."
Sally met the woman's gaze steadily, resigned to yet another lengthy discussion on what women should do or not do. It was the worst possible time for it. "Uncle George has his own duties, father is barely ever away from the admiralty, and my grandfather cannot ride the estate anymore. What should we do, allow the flock to be fleeced and the estate fall to wrack and ruin while we spin lace in the drawing room?"
Ellicott coughed into his fist. "Never that."
His mother sniffed haughtily. "A lady should never be without a knowledge of lacemaking. It is certainly more proper than traipsing around the estate at all hours. I always advise my acquaintances to spend as little time out of doors in the elements as possible to preserve the complexion."
Sally happened to love the outdoors, and she liked the way her skin looked—vivid and glowing with warmth during the summertime months. She also loved to be useful, to be as involved as possible with what happened on the Newberry Park estate. Hiding her involvement in the running of the estate was not easy. Many of the servants came to her with their problems and looked to her to make decisions. When she married Ellicott, she would never be an idle wife, but she had to tread carefully for now.
"And she makes the most delicate pieces," her mother insisted loyally. To Sally she said, "You always make the right choices for the estate, and I could not be prouder of the woman you have become. Never doubt that, regardless of what anyone says to the contrary."
Sally struggled to hide her surprise. Normally her mother would say nothing so openly approving about her or about the additional duties they undertook for the estate, certainly not around the Ellicotts. She had agreed that for Sally to make a good impression with the Ellicotts, especially at her age, she had to change, to hide the more unorthodox aspects of their family life.
Hoping to turn the conversation into smoother waters, she smiled warmly at her mother. "Shall we take tea on the terrace today?"
Her mother's face lit up. "Yes, tea out of doors is just the thing to amuse us all."
"I think we should rather not. It is so warm outside and blustery," Lady Ellicott said in a firm voice. Outside, the wind was only gently stirring the bushes of the garden.
Lady Ellicott had a preference for eating all meals indoors. Not even picnics on the cliff tops pleased her. Sally had little choice but to agree with her assessment of the weather since she was trying so hard to win a place in the woman's good graces. "Well then, shall I twist Ellicott's arm and have him read to us from the new _London Gazette_ while we enjoy the tea together here?"
Ellicott shifted toward her. "I was actually hoping you and I might stroll the grounds. Stretch our legs a bit." He turned to Sally's mother. "All within proper sight of the mansion of course."
Lady Ellicott stared hard at her son a moment but nodded. "I am sure Lady Templeton will happily agree to your suggestion."
Her mother's expression grew flinty. "A leisurely stroll about the gardens is acceptable within sight of this room, but propriety must always be observed. You will take a maid."
Sally wanted the opportunity to be alone with Ellicott so he could propose, and her mother was not making it easy. A maid would gossip afterward. She would rather not receive a proposal with an audience. "Mama, please. What harm could come from a short walk through the gardens without a chaperone? We are at home. The wisteria walk is very beautiful at this time of year."
"Indeed that is true." Mama glanced around and then smiled. "I will have a shawl fetched and join you outside."
Sally groaned. Why was Mama not helping her advance her claim on Lord Ellicott's affections or time? She knew the right setting for a proposal was essential for many men to unburden themselves and reveal their desire for matrimony.
Ellicott merely smiled at the news that her mother intended to join them. "The more, the merrier. The breeze appears to have died down too. Shall we wait outside in the fresh air?"
"Yes, that would be lovely." Sally rushed to link arms with him.
Arm in arm, they left her mother behind in the drawing room while her shawl was called for. Undoubtedly Mama would catch up before too many yards had passed beneath their feet. Hopefully Lady Ellicott would sit alone very happily for a little while.
When Ellicott led her at a brisk pace directly toward the wisteria-covered walk, she almost laughed aloud at his haste. She hoped he would propose to her there because it was a lovely, secluded spot.
When they stopped, he caught her hands in his and drew them to his chest. "My dearest Sally, Mother is right. You are the most beautiful woman. I must again convey my thanks for inviting us both for the summer. It has been a pleasant interlude indeed."
"I feel so too," Sally agreed, staring up into his handsome face. Waiting.
"I was thinking of you this morning. Of how well you and I get along. And I do not know any other woman Mother has taken under her wing without hesitation." He laughed softly. "That is quite the coup, I must tell you."
"I do like her." Sally's pulse raced and she bit her lip as she waited for what she hoped might come next. "Her good opinion means the world to me."
Not that she was sure she had it yet, but...
"I must say I never once thought we would both reach this age and not be married." Ellicott sighed. "So it seems plain we must marry each other. What do you think of that idea?"
_Read more..._
_More Regency Romance From Heather Boyd..._
**The Wild Randalls Series**
(begins after Charity, Distinguished Rogues Book 3)
Book 1: Engaging the Enemy (Leopold and Mercy)
Book 2: Forsaking the Prize (Tobias and Blythe)
Book 3: Guarding the Spoils (Oliver and Elizabeth)
Book 4: Hunting the Hero (Constantine and Rosemary)
**Rebel Hearts Series**
(begins after Reason to Wed, Distinguished Rogues Book 7)
Book 1: The Wedding Affair (Felix and Sally)
Book 2: An Affair of Honor (William and Matilda)
Book 3: The Christmas Affair (Harper and Amy) ~ published in A Very Wicked Christmas Boxed Set
_See the full book list..._
**About Heather Boyd**
Determined to escape the Aussie sun on a scorching camping holiday, Heather picked up a pen and notebook from a corner store and started writing her very first novel—Chills. Eight years later, she is the author of over thirty romances and publisher of several anthologies too. Addicted to all things tech (never again will Heather write a novel longhand) and fascinated by English society of the early 1800s, Heather spends her days getting her characters in and out of trouble and into bed together (if they make it that far). She lives on the edge of beautiful Lake Macquarie, Australia with her trio of mischievous rogues (husband and two sons) along with one rescued cat whose only interest in her career is that it provides him with food on demand.
You can find details of her work at
www.heather-boyd.com
Stay in touch and join Heather's Readers List. Follow this link to receive new release alerts and more direct to your inbox.
Follow Heather on Bookbub!
COLLABORATION / ABOUT
CUHK-Monash University Alliance
The CUHK-Monash Alliance was established in 2017 as a transnational integrated collaboration to co-develop and co-invest in thematic areas of shared excellence in medical education and research. The synergistic collaborations are expected to lead to the development of high impact research programs and innovative medical education platforms.
Formed between the two Medical Faculties, the Alliance aims to generate collaborative research enterprises that will have long-term, significant impact on biomedical, health research and education for both institutions. Initially, the research initiatives will be focused on Translational Medicine:
• Stem Cell Biology and Regenerative Medicine
• Innovative Medical Devices
Specifically, the CUHK Institute for Tissue Engineering and Regenerative Medicine (iTERM) will partner with Monash's Australian Regenerative Medicine Institute (ARMI) in the area of Tissue Engineering and Regenerative Medicine, and the CUHK Chow Yuk Ho Technology Centre for Innovative Medicine will partner with Monash's Institute of Medical Engineering (MIME) to focus on Medical Engineering.
On 17 May 2017, a Memorandum of Understanding (MOU) was signed regarding the exchange of PhD students between our two Faculties of Medicine, with the aim of fostering academic research.
Maintained by Office of Global Engagement (OGE), CUHK Medicine
\section*{Introduction}
The problem of bound states in non-relativistic quantum mechanics in ${\bf R^d}$ is governed by the time-independent Schr\"odinger equation
\begin{equation}
\label{SE}
{\cal H}\ =\ -\frac{\hbar^2}{2 m} \sum_{i=1}^d \partial_{x_i}^2 + V(x)\quad ,\quad {\cal H}\Psi=E\Psi\quad ,\quad \int |\Psi|^2 d^d x < \infty\ ,
\end{equation}
where $V$ is the potential. In this paper we deal with two important classes of potentials:
\noindent
(I) the general one-dimensional anharmonic oscillator (AHO) potential,
\[
V_a(x)\ =\ \frac{1}{g^2}\,\hat{V}(gx)\ =\
\]
\begin{equation}
\label{AHO}
a_0^2 x^2 +
a_1 g x^3 + a_2 g^2 x^4 + \ldots +\ a_{k-2} g^{k-2} x^k + \ldots + a_{2p-2} g^{2p-2} x^{2p} + \ldots\ ,\quad x\in (-\infty, +\infty)\ ,
\end{equation}
where $V_a(x) \geq 0$; the quartic and sextic AHO, the pure quartic $x^4$, sextic $x^6$, etc. oscillators, the double-well, triple-well and quartic tilted (asymmetric) double-well potentials, and even the sine-Gordon potential belong to this class, among many other well-known potentials, as does the $O(d)$-symmetric radial anharmonic oscillator $$V_a(r)\ =\ \frac{1}{g^2}\,\hat{V}(gr)\ ,$$ and
\noindent
(II) the three-dimensional $O(3)$ radially-Perturbed Coulomb Problem (PCP)
\begin{equation}
\label{Coulomb}
V_c(r)\ =\ g\,\tilde{V}(gr) = -\frac{b_0}{r} + b_1 g^2 r + b_2 g^3 r^2 + \ldots \ ,\quad r \in [0,\infty)\ ,
\end{equation}
where the linear $(b_i=0,\ i>1)$ and quadratic $(b_i=0,\ i>2)$ funnel-type potentials, as well as the celebrated Yukawa potential, are among the potentials of this class. Without loss of generality, on many occasions we can put $a_0=b_0=1$; this also implies that constant terms are absent in (\ref{AHO}), (\ref{Coulomb}) as a result of a specific choice of the reference point for the energy. In general, $\{a\}$ and $\{b\}$ are sets of real parameters. Needless to say, an enormous body of papers has been published on numerous particular cases of (\ref{AHO}) and (\ref{Coulomb}). In this paper we focus on the one-dimensional anharmonic oscillators (\ref{AHO}) with polynomial anharmonicity of finite integer order $p$ and the corresponding ground state.
\section{Riccati-Bloch equation and Perturbation theory}
Consider the one-dimensional AHO as the first problem. Take the exponential representation of the wave function
\begin{equation}
\label{phase}
\Psi\ =\ e^{-\frac{1}{\hbar}\phi} \ ,
\end{equation}
and substitute it into the Schr\"odinger equation (\ref{SE}) putting for simplicity $m=1/2$. We arrive at the well-known Riccati equation
\begin{equation}
\label{Riccati}
\hbar\,y' \ -\ y^2\ =\ E\ -\ \frac{1}{g^2}\, {\hat V}(g x) \ , \quad y\,=\,\phi'\ ,
\end{equation}
see e.g. \cite{LL:1977}, which contains the Planck constant $\hbar$ in front of the leading derivative. Note that by ignoring this term (in the zeroth approximation), one arrives at the textbook WKB expression for $y$, see e.g. \cite{LL:1977}.
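Explicitly, dropping the $\hbar\,y'$ term in (\ref{Riccati}) gives the algebraic relation $y^2 = \frac{1}{g^2}{\hat V}(gx) - E$, i.e.
\[
y_{\rm WKB}\ =\ \pm\,\sqrt{\frac{1}{g^2}\,{\hat V}(gx)\ -\ E}\ ,
\]
the classical momentum (in the units $m=1/2$) continued into the classically forbidden region; normalizability of (\ref{phase}) selects the branch with $y>0$ at $x \rightarrow +\infty$.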
There are two ways to get rid of the explicit $\hbar$ dependence in this equation, which lead to two different expansions of its solution. One corresponds to the
{\it Riccati-Bloch equation}, created by making the following changes to the Riccati equation (\ref{Riccati}) - introducing a new variable $v$ and a new function $\mathcal{Y}$. We write it as
\begin{equation}
\label{RB-variables}
x\ =\ {\hbar^{1/2}}\, v \ ,\ y\ =\ \hbar^{1/2}\, \mathcal{Y}\left(v\right)\ ,\ E\ =\ {\hbar}\,\varepsilon\ ,
\end{equation}
and the effective coupling
\begin{equation}
\label{coupling}
\lambda \ =\ \hbar^{1/2}\,g\ .
\end{equation}
In this case we arrive at the so-called {\it Riccati-Bloch (RB)} equation
\begin{equation}
\label{RB}
\partial_v {\cal Y}\ -\ {\cal Y}^2\ =\ \varepsilon (\lambda)\ -\ \frac{1}{\lambda^2}\,{\hat V}(\lambda v)\ , \quad \partial_v \equiv \frac{d}{dv}\ ,\ v\in (-\infty,\infty)\ ,
\end{equation}
where the potential remains of the same form as in (\ref{AHO}) with replacement $g \rightarrow \lambda$,
\[
\frac{1}{\lambda^2}\,\hat{V}(\lambda v)\ =\ a_0^2\, v^2\ +\
a_1\, \lambda \,v^3\ +\ a_2\, \lambda^2\, v^4\ +\ \ldots\ +\ a_{2p-2}\, \lambda^{2p-2}\, v^{2p}\ +\ \ldots \ .
\]
Formally, this equation has no $\hbar$-dependence: {\it we study dynamics in the ``quantum'', $\hbar$-dependent coordinate $v$ (\ref{RB-variables}) instead of $x$, governed by the $\hbar$-dependent, effective coupling constant $\lambda$ (\ref{coupling}).} If we develop the perturbation theory (PT) in powers of $\lambda$ in (\ref{RB}) for the ground state, putting $a_0=1$,
\begin{equation}
\label{PT-veps}
\varepsilon\ =\ \sum_0^{\infty} \lambda^{n} \varepsilon_n\ ,\quad \ \varepsilon_0\ =\ 1\ ,\ \varepsilon_1\ =\ 0\ ,\ \ldots
\end{equation}
\begin{equation}
\label{PT-Y}
{\cal Y}\ =\ \sum_0^{\infty} \lambda^{n} {\cal Y}_n(v)\ ,\quad \ {\cal Y}_0\ =\ v\ ,\ {\cal Y}_1\ =\
\frac{a_1}{2}(v^2+1)\ ,\ \ldots
\end{equation}
it becomes clear that the expansion for the energy (\ref{PT-veps}) is simultaneously the perturbation series in powers of $g$ and the semiclassical expansion in powers of $\hbar^{1/2}$, since the coefficients $\varepsilon_n$ are pure numbers depending only on the parameters $a_{0,1,2,\ldots}$ in (\ref{AHO}), see (\ref{coupling}). In general, this expansion is asymptotic, with zero radius of convergence: there remains the problem of its summation. This is also true for the AHO (\ref{AHO}) with two (or several) degenerate global minima; in this case, however, (\ref{PT-veps}) contains exponentially small terms in $\lambda$ in addition to the Taylor expansion in powers of $\lambda$. Contrary to the energy (\ref{PT-veps}), the expansion (\ref{PT-Y}) of ${\cal Y}$ is the PT expansion in powers of $g$ only, since the corrections ${\cal Y}_n(v)$ are $\hbar$-dependent through the variable $v$. It is {\it not} a semiclassical expansion in powers of $\hbar$.
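As a simple worked illustration, take the quartic AHO, $\frac{1}{\lambda^2}{\hat V}(\lambda v) = v^2 + \lambda^2 v^4$ (i.e. $a_0=a_2=1$ with all other couplings vanishing). Substituting (\ref{PT-veps}), (\ref{PT-Y}) into (\ref{RB}) and collecting the terms of order $\lambda^2$ yields
\[
\partial_v {\cal Y}_2\ -\ 2\,v\,{\cal Y}_2\ =\ \varepsilon_2\ -\ v^4\ ,
\]
whose polynomial solution is
\[
{\cal Y}_2\ =\ \frac{1}{2}\,v^3\ +\ \frac{3}{4}\,v\ ,\qquad \varepsilon_2\ =\ \frac{3}{4}\ ,
\]
so that $\varepsilon = 1 + \frac{3}{4}\,\lambda^2 + O(\lambda^4)$, reproducing the well-known first coefficient of the Bender--Wu perturbation series for the ground state.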
It is worth noting that in Eq.~(\ref{RB}) one can also develop the perturbation theory in powers of $v$,
\[
{\cal Y}\ =
\]
\begin{equation}
\label{Y-expansion}
\alpha\ +\ (\varepsilon\,+\,\alpha^2)\, v\ +\ \alpha(\varepsilon+\alpha^2)\,v^2\ +\ \frac{\varepsilon^2+\alpha^2 (4\varepsilon+3\alpha^2)-a_0^2}{3}\, v^3 \ +\ \ldots \ ,
\end{equation}
where the parameters $\alpha\equiv{\cal Y}(0)$ and $\varepsilon$ can only be found approximately. For even potentials the ground state function is even and its logarithmic derivative ${\cal Y}$ is odd, ${\cal Y}(-v)=-{\cal Y}(v)$; hence $\alpha=0$.
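As a minimal check: in the harmonic limit $\lambda=0$ with $a_0=1$, the ground state corresponds to $\alpha=0$, $\varepsilon=1$; then the coefficient of $v$ in (\ref{Y-expansion}) equals $\varepsilon+\alpha^2=1$, the coefficient of $v^3$ equals $(\varepsilon^2-a_0^2)/3=0$, and all higher coefficients vanish as well, so that
\[
{\cal Y}\ =\ v\ ,
\]
in agreement with ${\cal Y}_0=v$ in (\ref{PT-Y}).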
Expansion (\ref{Y-expansion}) mimics the perturbation theory in powers of $\lambda$ for ${\cal Y}$ (\ref{PT-Y}): the coefficients in (\ref{Y-expansion}) can be found in the form of the expansion in powers of $\lambda$. We have to emphasize that the expansion (\ref{Y-expansion}) has the form of a Taylor expansion in $v$ even for one-term potentials
\[
V=\lambda^{2p-2}\, v^{2p}\ ,\ p=2,3,\ldots \ ,
\]
where the perturbation theory in powers of $\lambda$ can {\it not} be developed.
\section{ Generalized Bloch equation and semiclassical series}
Another approach, central to this paper, is best formulated in a new variable $u$, new function $\mathcal{Z}$ and new energy,
\begin{equation}
\label{GB-variables}
u\ =\ {g\,x}\ =\ \lambda v\ ,\quad y\ =\ \frac{1}{g}\mathcal{Z}(u)\ ,\quad E\ =\ {\hbar}\,\varepsilon\ ,
\end{equation}
keeping the same effective coupling constant (\ref{coupling})
\[
\lambda \ =\ \hbar^{1/2}\,g\ .
\]
After substitution of (\ref{GB-variables}) into the Riccati equation (\ref{Riccati}) and assuming $g \neq 0$ we arrive at
\begin{equation}
\label{GB}
\lambda^2\,\partial_u\mathcal{Z}(u)\ -\ \mathcal{Z}^2(u)\ =\ \lambda^2\,\varepsilon(\lambda)\ -\ {\hat V}(u)\quad ,\quad \partial_u\equiv\frac{d}{du}\ ,\ u \in (-\infty,\infty)\ ,
\end{equation}
cf.~(\ref{RB-variables}), where
\[
\hat{V}(u)\ =\ a_0^2\, u^2\ +\
a_1 \,u^3\ +\ a_2\, u^4\ +\ \ldots\ +\ a_{2p-2}\, u^{2p}\ +\ \ldots \quad .
\]
This is the so-called {\it Generalized Bloch (GB)} equation; see \cite{ST:2018} for the case of the double-well potential. Evidently, it requires a regularization at $\lambda \rightarrow 0$, when ${\hat V}(u)\rightarrow u^2$, which eventually leads back to the RB equation. This equation describes dynamics in the {\it classical, $\hbar$-independent} coordinate $u=g\,x$.
Now we develop the PT in powers of $\lambda$ in the equation (\ref{GB}) assuming that the AHO potential (2) has no degenerate global minima. It is evident that the expansion of the energy $\varepsilon$ in powers of $\lambda$ (\ref{PT-veps}) remains the same as in the Riccati-Bloch equation (\ref{RB}), unlike the expansion for ${\mathcal{Z}}$, which becomes different,
\begin{equation}
\label{PT-Z}
{\cal Z}\ =\ \sum_n \lambda^n {\cal Z}_n(u)\ ,\ {\cal Z}_0\ =\ {\sqrt{ {\hat V} (u)}}\ ,\ {\cal Z}_1\ =\ 0\ ,
\ {\cal Z}_2\ =\ \frac{1}{2} \left(\log {\sqrt{ {\hat V} (u)}}\right)^{\prime}_u - \frac{\varepsilon_0}{2{\sqrt{ {\hat V} (u)}}}\ ,\ \ldots \ ,
\end{equation}
where ${\varepsilon_0}=1$. Note that in the standard WKB approach ${\varepsilon_0}$ is replaced by $\varepsilon$, which depends on $g$ and $\hbar$, and, in general, it can {\it not} be found exactly. It is worth presenting three particular examples:
(i) the quartic anharmonic oscillator ${\hat V}=u^2+u^4$, where
\[
{\cal Z}_0\ =\ {u \sqrt{1+u^2}}\ ,\ {\cal Z}_2\ =\ \frac{1}{4} \left(\log u^2 (1+u^2) \right)^{\prime}_u - \frac{1}{2{u \sqrt{1+u^2}}}\ ,
\]
(ii) the sine-Gordon potential ${\hat V}=\sin^2 u$, where
\[
{\cal Z}_0\ =\ \sin u \ ,\ {\cal Z}_2\ =\ \frac{1}{2} \cot u - \frac{1}{2\,{\sin u}}\ ,
\]
and,
(iii) the quartic oscillator ${\hat V}=u^4$, where
\[
{\cal Z}_0\ =\ {u|u|}\ ,\ {\cal Z}_2\ =\ \frac{1}{u} - \frac{\varepsilon_0}{2{u|u|}} \ ,
\]
where $\varepsilon_0 \approx 1.0604$ has the meaning of the ground state energy of the quartic oscillator.
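The coefficients ${\cal Z}_0$, ${\cal Z}_1$, ${\cal Z}_2$ above can be checked order by order by inserting the expansion (\ref{PT-Z}) into the generalized Bloch equation (\ref{GB}) for an arbitrary potential. A minimal symbolic sketch of this check (our illustration, not part of the original derivation; replacing $\varepsilon(\lambda)$ by $\varepsilon_0$ only affects orders $\lambda^3$ and higher):

```python
import sympy as sp

u, lam = sp.symbols('u lambda')
eps0 = sp.Symbol('varepsilon_0')
V = sp.Function('V')(u)          # arbitrary potential V-hat(u)

# perturbative coefficients of Eq. (PT-Z); Z_1 = 0
Z0 = sp.sqrt(V)
Z2 = sp.diff(sp.log(sp.sqrt(V)), u)/2 - eps0/(2*sp.sqrt(V))
Z = Z0 + lam**2*Z2

# generalized Bloch equation (GB), with epsilon = eps0 + O(lambda)
residual = lam**2*sp.diff(Z, u) - Z**2 - (lam**2*eps0 - V)
series = sp.expand(residual)

# the equation is satisfied through order lambda^2
for k in range(3):
    assert sp.simplify(series.coeff(lam, k)) == 0
```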
It can be immediately recognized that ${\cal Z}_0$ is, in fact, the classical momentum {\it at zero energy} when $\lambda=1$. In turn, $\int {\cal Z}_0\ du$ is the classical action at zero energy. Furthermore, replacing in ${\cal Z}_0$ the argument $u=\lambda v$ (\ref{GB-variables}) one can see that ${\cal Z}_0(\lambda v)$ is the generating function for the leading terms of the highest degrees in $v$ of the ${\cal Y}_n(v)$ corrections of the expansion (\ref{PT-Y}), while ${\cal Z}_2(\lambda v)$ is the generating function for the next, subleading terms of the ${\cal Y}_n(v)$ corrections etc. Note that
\begin{equation}
\label{eqn_det}
\int {\cal Z}_2\ du\ =\ \frac{1}{4}\,{\log { {\hat V} (u)}}\ -\
\frac{{\varepsilon_0}}{2} \int \frac{du}{\sqrt{{\hat V}(u)}}\ ,
\end{equation}
is related to the logarithm of the determinant, see Section IV. In the standard WKB expansion
the first term in the rhs of (\ref{eqn_det}) contains the energy while the second term is absent.
In general, expansion (\ref{PT-Z}) is the true semiclassical expansion in powers of $\hbar^{1/2}$,
\[
{\cal Z}\ =\ \sum_n \hbar^{\frac{n}{2}} \big( g^n\,{\cal Z}_n(g x) \big)\ ,
\]
while
\[
\varepsilon(\hbar^{1/2}\,g)\ =\ \sum^{\infty}_{n=0} (\hbar^{1/2}\,g)^{n} \varepsilon_n\ .
\]
It is a well-known fact that if the potential (\ref{AHO}) has two or more degenerate global minima, e.g. $V_{dw}=x^2(1-gx)^2$, then in addition to the Taylor expansion in powers of $\lambda$: $E_{PT}(\lambda)=\sum E_n \lambda^n$, exponentially small terms occur, which can be summed into the non-perturbative energy $E_{NPT}$. In particular, for the case of the double-well potential $V_{dw}$, the energy of the state can be written as $E=E_{PT}+E_{NPT}$ \cite{ST:2018}. Hence, it manifests the occurrence of trans-series in $\lambda$, see e.g. \cite{Shifman:2015}, and also \cite{ST:2018} (and references therein), instead of the Taylor expansion.
The RB equation (\ref{RB}) continues to hold and one can see immediately that the energy depends on the single combination of parameters: $\lambda \ =\ \hbar^{1/2}\,g$,
\[
\varepsilon (\hbar^{1/2}\,g)\ =\ \varepsilon_{PT} (\hbar^{1/2}\,g) + \varepsilon_{NPT} (\hbar^{1/2}\,g)\ .
\]
Hence, the semiclassical expansion in powers of $\hbar^{1/2}$ (\ref{PT-veps}) becomes the semiclassical expansion in the form of a trans-series in $\hbar^{1/2}$.
It is worth noting that for a polynomial potential, where the expansion (\ref{AHO}) is terminated at degree $(2p)$, its expansion in the $u$-variable takes the form
\begin{equation}
\label{AHO-2p-u}
\hat{V}(u)\ =\ a_0^2\, u^2\ +\
a_1 \,u^3\ +\ a_2\, u^4\ +\ \ldots\ +\ a_{k-2} u^k\ +\ \ldots\ +\ a^2_{2p-2}\, u^{2p} \quad .
\end{equation}
In Eq.(\ref{GB}) one can develop the asymptotic expansion in inverse powers of $u$,
\begin{equation}
\label{Z-expansion}
{\cal Z}\ =\ \pm\, a_{2p-2}\, u^p\ +\ \frac{a_{2p-3}}{\pm \,2\, a_{2p-2}}\, u^{p-1}\ +\ \frac{1}{2} \bigg(\frac{a_{2p-4}}{a_{2p-3}}+\frac{a_{2p-3}}{4a_{2p-2}^2}\bigg)\,u^{p-2}\ +\ \ldots\ +\ \frac{\lambda^2\, p\ +\ \frac{a_{p-3}}{\pm a_{2p-3}}}{2u}\ +\ \ldots \quad ,
\end{equation}
where for even $p$ the sign {\it plus} is chosen for positive $u > 0$ and the sign {\it minus} for negative $u < 0$ to assure square integrability of the ground state function.
If $p$ is odd, the sign is always plus.
It is evident that the first $(p)$ coefficients in expansion (\ref{Z-expansion}) do not depend on $\lambda$, while the first $(2p)$ coefficients do not depend on the energy. Hence, the first $(2p)$ coefficients in expansion (\ref{Z-expansion}) can be found exactly. This expansion mimics the perturbation theory expansion in powers of $\lambda$ for ${\cal Z}$ (\ref{PT-Z}), when the corrections ${\cal Z}_n$ are expanded in $1/u$.
An interesting situation occurs when the AHO degenerates to a power-like potential, i.e. $a_0=a_1=\ldots=a_{2p-3}=0$ while $a_{2p-2}\neq 0$. In this case the potential (\ref{AHO-2p-u}) has the form
\[
V \ =\ a^2\,u^{2p} \ ,\ a \equiv a_{2p-2}\ .
\]
In expansion (\ref{Z-expansion}) all terms of non-negative degree vanish except the leading one of degree $p$; among the terms of negative degree, those of degrees $-2,-3,\ldots,-(p-1)$ and $-(p+1)$ vanish as well,
\begin{equation}
\label{Z-expansion-1term}
{\cal Z}\ =\ \pm\, a\, u^p\ +\ \frac{\lambda^2 p}{2}\ \frac{1}{u}\ -\ \frac{\lambda^2 \varepsilon}{\pm 2 a} \frac{1}{u^p}\ - \ \frac{\lambda^2 p(p+2)}{\pm 8 a}\ \frac{1}{u^{p+2}} \ + \
\ldots \quad ,
\end{equation}
where for the case of even $p$ the sign plus (upper sign) is chosen for positive $u > 0$ and the sign minus (lower sign) for negative $u < 0$ to assure square integrability of the ground state function. For the case of odd $p$ the sign in (\ref{Z-expansion-1term}) should be always chosen plus.
The property that the first $(2p)$ coefficients in expansion (\ref{Z-expansion}) (and (\ref{Z-expansion-1term})) do not depend on the energy and can be found exactly in terms of the parameters of the potential plays a crucial role in the construction of the approximation for the ground state wavefunction, see Section \ref{matching}.
\section{Semiclassical approximation of path integrals, the ``flucton" paths}
Following Feynman \cite{F-H}, the amplitude of a quantum system to go from point $x_i$ to point $x_f$ in time $t$ can be expressed as a functional integral
over all paths, starting and ending at these points.
He also famously pointed out that by continuing this expression to Euclidean (imaginary) time $\tau=it$, defined on a circle with ``Matsubara time'' circumference related to the temperature \begin{equation}
\beta\ =\ \frac{\hbar}{T}\ ,
\end{equation}
one can use path integrals in statistical mechanics. Specifically, the partition function is given by integrals over the periodic paths with time period $\beta$.
In this paper we will use this formalism in the zero temperature limit only, in which $\beta \rightarrow \infty$ and the path integral naturally describes the ground state. Its density matrix -- the probability $P(x_0)$ to find the quantum particle at a given position --
is given by the integral over periodic paths, where the initial and final points coincide, $x_i=x_f=x_0$. In the $\beta \rightarrow \infty$ limit it becomes the square of the ground state wave function $P(x_0)=|\psi_0(x_0)|^2$. Thus,
\begin{equation}
\label{psi0-2}
\psi_0^2(x_0)\ =\ \int {\cal D}x\ e^{-\frac{S_E}{\hbar}}\ ,
\end{equation}
where $S_E$ is the Euclidean action and $\psi_0(x)$ is a real, positive function. Needless to say, the square root of the rhs is the exact solution of the Schr\"odinger equation.
The development of the semiclassical theory of the ground state, based on special classical paths called ``fluctons", started with the paper of one of the present authors \cite{Shuryak:1988}.
Significant development of this theory has since been made in two recent papers \cite{Escobar-Ruiz:2016,Escobar-Ruiz:2017}, focused on the ground state of a number of quantum-mechanical problems: harmonic and anharmonic oscillators, as well as the double-well
and sine-Gordon potentials \footnote{
For the first application of this semiclassical approximation at finite temperatures for
some of these examples, see \cite{Shuryak:2019}.}.
A {\it Flucton} is a path possessing the least action among all paths passing via the {\em observation point} $x_0$. Therefore it satisfies the classical (Newtonian) equation of motion
\begin{equation}
\label{NE}
m \ddot x(\tau)={\partial V \over \partial x}\ ,
\end{equation}
where the dots indicate derivatives over {\it Euclidean} time $\tau$. Note that therefore the usual minus sign in the r.h.s. is absent: one can view this as motion in the {\it inverted} potential $\left(-V(x)\right)$.
As usual in $1D$ mechanical problems, in order to find the trajectory of (\ref{NE}) the easiest way is to employ energy conservation, which in this case takes the form
\begin{equation}
\frac{m}{2}\,\dot x(\tau)^2 -V(x)\ =\ E\ .
\end{equation}
The maximum of the inverted potential $(-V(x))$ is conveniently put to zero, so a particle with $E=0$ may stay at this maximum for infinite time. This simple idea defines the shape of the {\it flucton}.
Before giving its explicit form, let us do some redefinitions. For finite $\beta$, the
time variable $\tau$ is defined on a circle. The only condition on paths is
that they must pass through the observation point $x_0$, but it does not matter
at what moment in time this happens. Therefore, one can define it to be zero
\begin{equation}\label{eqn_condition}
x(\tau=0)=x_0\ ,
\end{equation}
with two symmetric arms, for positive and negative $\tau$,
describing {\it path relaxation}. In the zero temperature, or $\beta\rightarrow \infty$,
limit we discuss, $\tau \in (-\infty,\infty)$ and the asymptotic coordinate values
correspond to the position of the potential minimum, defined as
\[
x(\tau\rightarrow \pm \infty) =0\ .
\]
The explicit functional shape follows readily from energy conservation (with $m=1, E=0$)
\begin{equation}
\label{eqn_tau}
\tau=\int_{x_0}^{x(\tau)} \frac{dx}{\sqrt{2V(x)} }\ .
\end{equation}
At this point let us pause and, using the new variables introduced in the preceding section, describe the Euclidean action. With the classical coordinate $u=g\,x$ and the potential $V(x)=\hat V(u)/g^2$, it takes the form
\begin{equation}
\label{action}
\frac{S}{\hbar}\ =\ \frac{1}{\hbar g^2}\,\int d\tau
\bigg(\frac{1}{2}\dot u(\tau)^2 +\hat V(u)\bigg) \ ,
\end{equation}
see (\ref{psi0-2}),
in which the coupling is united with the Planck constant (leading to the effective coupling $\lambda^2=\hbar g^2$), but both of them are absent in the Equation of Motion (EoM).
The path integral now takes the form
\begin{equation}
\label{path-int}
P(u_0) =\ \int {\cal D} u\ e^{-\frac{1}{\hbar g^2}\,\int d\tau
\bigg(\frac{1}{2}\dot u(\tau)^2 +\hat V(u)\bigg)} \ ,
\end{equation}
where, we remind, the dependence on the observation point $u_0=g x_0$
comes from the condition (\ref{eqn_condition}) which paths must obey.
The change of variable in (\ref{action}) to the ``classical variable" $u$ has important consequences.
In particular, the integral in the r.h.s. of (\ref{eqn_tau}) becomes
\begin{equation}
\label{eqn_tau-u}
\tau\ =\ \int_{u_0}^{u(\tau)} \ \frac{du}{\sqrt{2 \hat V(u)} }\ ,
\end{equation}
independent of $g$ and $\hbar$, where $u_0=g x_0$ is the starting point in $u$-space.
Therefore, in these notations the flucton shape is universal, it does not depend on the coupling constant.
\begin{figure}
\centering
\includegraphics[width=10cm]{fl2.eps}
\caption{Examples of different flucton paths $u_{fl}(\tau)$ (\ref{ufl-24}),
when $u(0)=1,2,3$ versus Euclidean time $\tau$ (blue, yellow, green curves, respectively). Only half of the path, for positive $\tau$, is shown: the path at negative values is its mirror image. }
\label{fig:fluctons}
\end{figure}
For the example at hand, $\hat V=u^2/2+u^4$, the integral (\ref{eqn_tau-u}) can be evaluated analytically; its inverse can also be found in closed analytic form, giving
\begin{equation}
\label{ufl-24}
u_{fl}(\tau)\ =\ \frac{1}{\sqrt{2}\,\sinh\big|\, \mathrm{arccsch}\big(\sqrt{2}\,u_0 \big) +\tau\,
\big| }\ .
\end{equation}
Three examples of flucton paths for different $u(0)$ are plotted in Fig.\ref{fig:fluctons}.
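One can verify directly that (\ref{ufl-24}) satisfies the zero-energy condition $\dot u^2 = 2\hat V(u)$ following from energy conservation with $m=1$, $E=0$. A short symbolic check (our illustration, using sympy's inverse hyperbolic cosecant \texttt{acsch}; the positive arm $\tau>0$ is taken):

```python
import sympy as sp

tau, u0 = sp.symbols('tau u_0', positive=True)

# flucton path (ufl-24) for V-hat = u^2/2 + u^4, positive arm
u_fl = 1/(sp.sqrt(2)*sp.sinh(sp.acsch(sp.sqrt(2)*u0) + tau))

# zero-energy condition (m = 1, E = 0): (du/dtau)^2 = 2 V-hat(u)
V_hat = u_fl**2/2 + u_fl**4
assert sp.simplify(sp.diff(u_fl, tau)**2 - 2*V_hat) == 0
```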
Inserting (\ref{ufl-24}) into the (Euclidean) action (\ref{action}), one gets
\begin{equation}
\label{Sfl-ufl-24}
S_{fl}\ = \ \frac{1}{6\,{\hbar}\,g^2}\, \bigg(-1 + {(1 + 2 u_0^2)}^{3/2} \bigg) \ ,
\end{equation}
and therefore the explicit form of the density matrix $P(u_0)\sim \exp\big(-S_{fl}\big)$.
This result, of course, reproduces the standard WKB expression for the ground state wave function at zero energy $E=0$, cf.(\ref{PT-Z}).
As another example, consider the quartic oscillator, ${\hat V}=u^4$; the integral in the r.h.s. of (\ref{eqn_tau-u}) can also be found in closed analytic form, leading to
\begin{equation}
\label{ufl-4}
u_{fl}=\frac{u_0}{1+{\sqrt 2} u_0 \tau} \ ,
\end{equation}
which is not much different from the flucton trajectory for the anharmonic oscillator,
cf.(\ref{ufl-24}). Three examples of flucton paths for the quartic oscillator for different $u(0)$ are plotted in Fig.\ref{fig:fluctons-4}. Putting (\ref{ufl-4}) into the (Euclidean) action (\ref{action}), one gets
\begin{equation}
\label{Sfl-ufl-4}
S_{fl}\ = \ \frac{\sqrt{2}}{3\,{\hbar}\,g^2}\,u_0^3 \ ,
\end{equation}
cf.(\ref{Sfl-ufl-24}), and therefore the explicit form of the density matrix $P(u_0)\sim \exp\big(-S_{fl}\big)$.
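Both explicit actions, (\ref{Sfl-ufl-24}) and (\ref{Sfl-ufl-4}), coincide with the zero-energy classical action along one arm of the flucton, $\hbar g^2 S_{fl}=\int_0^{u_0}\sqrt{2\hat V(u)}\,du$. A small symbolic verification (our illustration, with the integrands simplified by hand for $u>0$):

```python
import sympy as sp

u, u0 = sp.symbols('u u_0', positive=True)

# quartic AHO, V-hat = u^2/2 + u^4, so sqrt(2 V-hat) = u*sqrt(1 + 2 u^2), cf. (Sfl-ufl-24)
S24 = sp.integrate(u*sp.sqrt(1 + 2*u**2), (u, 0, u0))
assert sp.simplify(S24 - ((1 + 2*u0**2)**sp.Rational(3, 2) - 1)/6) == 0

# quartic oscillator, V-hat = u^4, so sqrt(2 V-hat) = sqrt(2)*u^2, cf. (Sfl-ufl-4)
S4 = sp.integrate(sp.sqrt(2)*u**2, (u, 0, u0))
assert sp.simplify(S4 - sp.sqrt(2)*u0**3/3) == 0
```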
\begin{figure}
\centering
\includegraphics[width=10cm]{fl3.eps}
\caption{Examples of flucton paths $u_{fl}(\tau)$ (\ref{ufl-4}),
when $u(0)=1,2,3$ versus Euclidean time $\tau$ (blue, yellow, green curves, respectively). Only half of the path, for positive $\tau$, is shown: the path at
negative values is its mirror image. }
\label{fig:fluctons-4}
\end{figure}
However, unlike standard WKB, the flucton version of the semiclassical theory allows one to derive systematically the series of the semiclassical corrections in powers of $\hbar^{1/2}$, see \cite{Escobar-Ruiz:2016,Escobar-Ruiz:2017}. This works as follows. An arbitrary path can be viewed as a classical flucton path plus a quantum fluctuation
\begin{equation}
u(\tau)=u_{fl}(\tau)+ \hbar^{1/2} q(\tau) \ .
\end{equation}
By substituting this into the action, one can perform a systematic expansion in powers of $q$
\begin{equation}
\label{action-SS}
\frac{S-S_{fl}}{\hbar}\ =\ \frac{1}{{ \hbar\,g^2}} \int d\tau \bigg(\frac{{\dot q}^2}{2}\ +\
\frac{1}{2!}\,\frac{\partial^2 \hat V(u=u_{fl})}{\partial u^2}\, q^2(\tau)\ +\
\frac{\hbar^{1/2}}{3!}\,\frac{\partial^3 \hat V(u=u_{fl})}{\partial u^3}\, q^3(\tau)\ +\ \dots \bigg)\ .
\end{equation}
Note that for the first two terms (quadratic in $q$) both $g$ and $\hbar$ are absent, while the subsequent nonlinear terms contain growing powers of $\hbar^{1/2}$.
If only the quadratic terms in $q$ are kept, the resulting EoM is linear, defining the so called ``fluctuation operator'' $\hat O$. In this approximation the functional integral over the fluctuations is Gaussian, producing the {\it determinant} of the operator $\hat O$. In the notations we use it becomes obvious that the operator, its eigenvalue spectrum, and their product -- the determinant -- are all universal, independent of $g$ and $\hbar$.
It is important that, unlike instantons and many other classical trajectories (e.g. solitons), the flucton background does not have {\it any} zero modes of $\hat O$, and therefore its inversion is straightforward. Indeed, there is {\it no} symmetry corresponding to the time shift, because by definition the paths should satisfy condition (\ref{eqn_condition}), which
implies that $q(0)=0$.
The eigenvalue equation
\[
\hat O\, u_\lambda(\tau)\ =\ \lambda\, u_\lambda(\tau) \ ,
\]
is of second order in the derivative and thus similar to the Schr\"odinger equation with an effective potential given by the second derivative at the background. For the example at hand, $\hat V=u^2/2+u^4$ (the quartic anharmonic oscillator),
\[
\hat V''(u=u_{fl})\ =\ 1+12 u_{fl}^2\ ,
\]
where $u_{fl}$ is presented in (\ref{ufl-24}),
hence, the effective potential is equal to 1 at $\tau \rightarrow \pm \infty$ and larger than 1 at the origin, $\tau=0$. Clearly, there are no bound states and all states are scattering states. With standard quantization in a box, all those can be found. This direct diagonalization approach has been used in our previous works \cite{Escobar-Ruiz:2016,Escobar-Ruiz:2017}.
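The absence of bound states of $\hat O$ can also be seen numerically. A rough finite-difference sketch (ours, not the code of the cited works; box length, grid size and $u_0=1$ are arbitrary choices) discretizes $\hat O=-d^2/d\tau^2+1+12\,u_{fl}^2$ on the positive arm, with a Dirichlet wall at $\tau=0$ enforcing $q(0)=0$:

```python
import numpy as np

# fluctuation potential on the flucton background of V-hat = u^2/2 + u^4:
# W(tau) = 1 + 12 u_fl(tau)^2, with u_fl from Eq. (ufl-24) and u_0 = 1
u0, L, n = 1.0, 20.0, 800
tau = np.linspace(L/n, L, n)          # Dirichlet walls at tau = 0 and tau = L
u_fl = 1.0/(np.sqrt(2.0)*np.sinh(np.arcsinh(1.0/(np.sqrt(2.0)*u0)) + tau))
W = 1.0 + 12.0*u_fl**2

# finite-difference fluctuation operator O = -d^2/dtau^2 + W(tau)
h = tau[1] - tau[0]
off = -np.ones(n - 1)/h**2
H = np.diag(2.0/h**2 + W) + np.diag(off, 1) + np.diag(off, -1)
evals = np.linalg.eigvalsh(H)

# since W >= 1 everywhere, all eigenvalues lie above the continuum
# threshold: no bound states and no zero modes
assert evals[0] > 1.0
```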
However, this is no longer necessary, as in Section II we have derived the analytic expression for the determinant for an arbitrary potential(!), see (\ref{eqn_det}).
The higher order terms $O(q^3,q^4...)$ in the expansion (\ref{action-SS}) can be viewed as vertices of Feynman diagrams: the growing powers of $\hbar^{1/2}$ in front of them show that it is a truly semiclassical expansion. The propagators which occur in such diagrams are nothing but the Green function, which is the inverse of the fluctuation operator $\hat O$. In fact, several of them were calculated in Section II, see (\ref{PT-Z}). These terms correspond to 1-,2-,3-... loop results. It is worth noting that the one-loop contribution is given (formally) by a single diagram, see \cite{Escobar-Ruiz:2016}, Fig.3 and \cite{Escobar-Ruiz:2017}, Fig.2; the two-loop contribution is the sum of three diagrams, see \cite{Escobar-Ruiz:2016}, Fig.4 and \cite{Escobar-Ruiz:2017}, Fig.3, etc. A non-trivial fact observed in calculations for concrete systems \cite{Escobar-Ruiz:2016,Escobar-Ruiz:2017} is that each individual diagram can be quite complicated: it can contain highly transcendental expressions and cannot always be calculated analytically, while the sum of the individual diagrams of a given order exhibits some mysterious cancellations which result in rather simple final expressions. A certain advantage of the formalism developed in Section II is that the expansion (\ref{PT-Z}) deals with the sum of Feynman diagrams directly,
and we do {\it not} see these complications.
\section{Matching perturbation theory and semiclassical expansion}
\label{matching}
Let us consider the polynomial potential of degree $(2p)$, see (\ref{AHO-2p-u}).
By taking the perturbation theory for the logarithmic derivative ${\cal Y}$ (\ref{PT-Y}) at small distances (or, said differently, the asymptotic expansion (\ref{Y-expansion})) and matching it with a new version of the semiclassical expansion of ${\cal Z}$ (\ref{PT-Z}) at large distances (or, said differently, the asymptotic expansion (\ref{Z-expansion})), we arrive practically unambiguously(!) at an approximate eigenfunction for the $k$th excited state of the form
\begin{equation}
\label{approximant}
\Psi^{(k)}(x)\ =\ P_k(x)\ \mbox{prefactor} (x,g;\{ {\tilde b}\})
\exp{ \bigg(\ -\ \frac{{A}\ +\
{\hat V}(x,g; \{ {\tilde a}\}) }{\sqrt{\frac{1}{x^{2}}\,{\hat V}(x,g;\ \{ {\tilde b}\} )}}
\ +\ \frac{A}{{\tilde b}_0^2}\bigg)} \ .
\end{equation}
Here ${\hat V(x,g;\ \{ {\tilde a}\} )}$, ${\hat V(x,g;\ \{ {\tilde b}\})}$ are the potentials as defined in (\ref{AHO}) but with new, different sets of coefficients $\{ {\tilde a}\}$ and $\{ {\tilde b}\}$, respectively. In general, $\{ {\tilde a}\}, \{ {\tilde b}\}$ and $A$ are free (variational) parameters subject to $(p)$ constraints: for the phase $\phi$ (\ref{phase}) all $(p)$ growing terms at large distances do not depend on the energy and should be reproduced exactly. Here $P_k(x)$ is a polynomial of degree $k$; all its coefficients are found by imposing the orthogonality condition on the functions $\Psi^{(\ell)}(x)$, see (\ref{approximant}), with $\ell=0,1,2,\ldots, (k-1)$. The prefactor in (\ref{approximant}) depends on the potential; it is usually defined by ${\cal Z}_2$ in (\ref{PT-Z}) (by the determinant), see also (\ref{eqn_det}).
Formula (\ref{approximant}) is the central formula of this article. Let us present two examples. It is easy to check that for the Harmonic Oscillator (HO),
\[
V_{HO}\ =\ a_0^2 x^2\ ,
\]
cf.(\ref{AHO}), expression (\ref{approximant}) becomes exact, with no free parameters,
\[
\Psi_{HO}\ =\ P_k\,e^{-\frac{a_0}{2} x^2}\ ,
\]
where the prefactor is absent and $P_k$ is the $k$th Hermite polynomial. Another example is
the quartic symmetric AnHarmonic Oscillator (AHO),
\[
V^{(4)}_{AHO}\ =\ a_0^2 x^2 + a_2^2 g^2 x^4\ ,
\]
where we can set $a_0=a_2=1$.
As a result of matching the two asymptotic expansions at small and large distances (equivalently, the perturbation theory in $x$ and the new semiclassical expansion)
we arrive at the following function for the $(k=2n+p)$-th excited state with quantum numbers $(n,p)$, $n=0,1,2,\ldots\ ,\ p=0,1$\ \footnote{In this case $k$ is the {\it principal} quantum number}, as a reduction of (\ref{approximant}) with two imposed constraints emerging from (\ref{Z-expansion}) \cite{Turbiner:2021},
\[
\Psi^{(n,p)}_{(approximation)}\ =\
\frac{x^p P_{n,p}(x^2; g^2)}{\left(B^2\ +\ g^2\,x^2 \right)^{\frac{1}{4}}
\left({B}\ +\ \sqrt{B^2\ +\ g^2\,x^2} \right)^{2n+p+\frac{1}{2}}}
\]
\begin{equation}
\label{final}
\exp \left(-\ \dfrac{A\ +\ (B^2 + 3)\,x^2/6\ +\ g^2\,x^4/3}
{\sqrt{B^2\ +\ g^2\,x^2}} \ +\ \frac{A}{B}\right)\ ,
\end{equation}
where $P_{n,p}$ is some polynomial of degree $n$ in $x^2$ with positive roots. Here $A=A_{n,p}(g^2),\ B=B_{n,p}(g^2)$ are two variational parameters. It was shown in \cite{Turbiner:2021} that for the six lowest states $n=0,1,2$ and $p=0,1$ the variational energies for coupling constants $g \in [0,\infty)$ are obtained with an accuracy of 10-11 significant digits. The variational (optimal) parameters $A=A_{n,p}(g^2),\ B=B_{n,p}(g^2)$ are easily fitted \cite{Turbiner:2021}, leading to an accuracy in the energy (the expectation value of the Hamiltonian) of 9-10 significant digits for any coupling constant $g \geq 0$.
\section*{Conclusions}
It is shown that for the family of perturbed harmonic oscillators $V_a$ (\ref{AHO}), which includes, in particular, the sine-Gordon potential, the classical coordinate $u=g x$ can naturally be introduced. This leads to the fact that the effective coupling constant of the theory becomes a combination of the Planck constant $\hbar$ and the coupling constant $g$: $\hbar^{1/2} g$. As a result the action is independent of external parameters: it contains the effective coupling constant only as an overall factor. It is evident that a similar phenomenon occurs for the radially perturbed radial harmonic oscillator, see \cite{delValle}, and the radially perturbed Coulomb problem $V_c$ (\ref{Coulomb}); this will be presented elsewhere. It is worth noting that an analogue of the classical coordinate -- the {\it classical field} -- exists for the massless scalar field theory $\lambda \phi^{2n}$, in QED and in the Yang-Mills theory.
In quantum mechanics the Schr\"odinger equation for the potential $V_a$ (\ref{AHO}) can be transformed into the so-called generalized Bloch equation for the logarithm of the wave function. In this equation the perturbation theory in powers of $g$ coincides with the semiclassical expansion in powers of $\hbar^{1/2}$.
This allows us to easily construct the loop expansion of the density matrix in the path integral formalism dealing with sums of Feynman integrals with a given number of loops, in particular, to calculate the determinant for the general potential $V_a$ (which is the one loop contribution) in closed analytic form. The existence of the field theoretical analogue of the generalized Bloch equation remains an open question. The only available option (at the present moment) for developing the semiclassical expansion is to construct a loop expansion over the flucton background in field theory via calculation of the individual Feynman diagrams. For the case of the massless $\lambda \phi^{4}$ theory, using Lipaton \cite{Lipatov:1977} as the flucton trajectory, this will be done elsewhere.
\begin{acknowledgments}
A.V.T. gratefully acknowledges support from the Simons Center for Geometry and Physics,
Stony Brook University at which the research for this paper was initiated, it was supported
in part by DGAPA grant {\bf IN113022 }~(Mexico). The work of E.S. is supported in part by the U.S. Department of Energy, Office of Science under Contract No. {\bf DE-FG-88ER40388}.
A.V.T. thanks M A Shifman for useful discussions.
\end{acknowledgments}
\section{Introduction}
In the last few years the chaotic regime in the dynamics of a closed FRW universe
filled with a scalar field has become a subject of investigation.
Initially, the model with a massive scalar field (with the scalar field
potential $V(\varphi)=(m^2 \varphi^2)/2$, where $m$ is the mass of the scalar
field) was studied \cite{Page,Corn-Shel}. Before summarizing the main
results obtained, we present the equations of motion
(in what follows we do not specify the
potential $V(\varphi)$). The system has two dynamical variables - the
scale factor $a$ and the scalar field $\varphi$:
\begin{equation}
\frac{m_{P}^{2}}{16 \pi}\left(\ddot{a} + \frac{\dot{a}^{2}}{2 a}
+ \frac{1}{2 a} \right)
+\frac{a \dot{\varphi}^{2}}{8}
-\frac{a V(\varphi)}{4} = 0,
\end{equation}
\begin{equation}
\ddot{\varphi} + \frac{3 \dot{\varphi} \dot{a}}{a}
+ V'(\varphi) = 0.
\end{equation}
with the first integral
\begin{equation}
-\frac{3}{8 \pi} m_{P}^{2} (\dot{a}^{2} + 1)
+\frac{a^{2}}{2}\left(\dot{\varphi}^{2} + 2 V(\varphi)\right) =
0.
\end{equation}
Here $m_P$ is the Planck mass.
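As a quick numerical sanity check (ours, not part of the cited studies), one can integrate Eqs. (1)-(2) for the massive field $V=m^2\varphi^2/2$ with $m=1$, in units where $m_{P}^{2}/16\pi=1$ (the units also used in Sec. 2 below), and monitor conservation of the first integral (3) along the trajectory; initial data are chosen on the constraint surface with $\dot a=0$:

```python
import numpy as np

# units with m_P^2/(16 pi) = 1; massive field V = phi^2/2 (m = 1)
def rhs(y):
    a, da, phi, dphi = y
    V, dV = phi**2/2, phi
    dda = -da**2/(2*a) - 1/(2*a) - a*dphi**2/8 + a*V/4    # Eq. (1)
    ddphi = -3*dphi*da/a - dV                             # Eq. (2)
    return np.array([da, dda, dphi, ddphi])

def first_integral(y):            # Eq. (3); vanishes along solutions
    a, da, phi, dphi = y
    return -6*(da**2 + 1) + a**2/2*(dphi**2 + phi**2)

# a point with adot = 0 on the constraint surface: a^2 phi^2 = 12
y = np.array([np.sqrt(3.0), 0.0, 2.0, 0.0])
assert abs(first_integral(y)) < 1e-12

h = 1e-4
for _ in range(10000):            # fourth-order Runge-Kutta, t from 0 to 1
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)

assert abs(first_integral(y)) < 1e-6
```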
The points of maximal expansion
and those of minimal contraction, i.e. the points where $\dot{a} =
0$, can exist only in the region where
\begin{equation}
a^{2} \leq \frac{3}{8 \pi} \frac{m_{P}^{2}}{V(\varphi)}\ .
\end{equation}
Sometimes, the region defined by inequality (4) is called
the Euclidean one.
One can easily see also that the possible points of
maximal expansion (where $\dot a=0$, $\ddot a<0$) are localized
inside the region
\begin{equation}
a^{2} \leq \frac{1}{4 \pi} \frac{m_{P}^{2}}{V(\varphi)}
\end{equation}
while the possible points of minimal contraction (where
$\dot a=0$, $\ddot a>0$) lie outside this
region (5), being at the same time inside the Euclidean region
(4).
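For completeness, we sketch how both bounds follow from Eqs. (1) and (3) (a short derivation we add; it is implicit in the original discussion). Setting $\dot{a}=0$ in the first integral (3) gives

```latex
\[
\dot{\varphi}^{2}\ =\ \frac{3}{4 \pi}\, \frac{m_{P}^{2}}{a^{2}}\ -\ 2 V(\varphi)\ \geq\ 0\ ,
\]
```

which is exactly (4). Substituting this $\dot{\varphi}^{2}$ into Eq. (1) with $\dot{a}=0$ shows that $\ddot{a}<0$ requires $a^{2}\,V(\varphi) < m_{P}^{2}/(4\pi)$, i.e. condition (5), while $\ddot{a}>0$ requires the opposite inequality.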
The main idea of the further analysis \cite{our}
is that in the closed isotropic model with a
minimally coupled scalar field satisfying the energy dominance condition
all the trajectories have a
point of maximal expansion. The trajectories can then
be classified according to the localization of their
points of maximal expansion. The area of such points is specified by
(5). A numerical investigation shows that this area has a quasi-periodic
structure: wide zones corresponding to falling to a
singularity are intermingled with narrow ones in which the points
of maximal expansion of trajectories having a so called ``bounce'' or
point of minimal contraction are placed. Studying the substructure
of these zones from the point of view of the possibility to have two
bounces, one can see that this substructure reproduces on the qualitative
level the structure of the whole region of possible points of
maximal expansion. Continuing this procedure {\it ad infinitum}
yields the fractal set of infinitely bouncing trajectories.
It should be noticed that even the 1-st order bounce intervals (containing
maximum expansion points for trajectories having at least one bounce)
are very narrow. An analytical approximation for large initial $a$ indicates
that the width of the intervals is roughly inversely proportional to $a$
\cite{Star}.
The opposite case of small initial $a$ was investigated numerically,
and the ratio of the first such interval width to the distance between
intervals appears to be of the order of $10^{-2}$, if we do not take
into account zigzag-like trajectories. So the chaotic regime, though
being interesting from the mathematical point of view, may be treated
as not important enough.
For steeper potentials the chaos is even less significant. The chaotic
behavior may disappear completely for exponentially steep potentials
\cite{our2}.
The goal of the present paper is to describe the opposite case - the potentials
which are less steep than the quadratic one. We will see that in this case a
transition to a qualitatively stronger chaos may occur.
The structure of the paper is as follows. In Sec.2 we consider an
asymptotically flat potential and explain the new features of the chaos
which arise in this case. In Sec.3 a wider class of potentials
less steep than quadratic is studied. In Sec.4 we discuss the
transition to regular dynamics in the presence of ordinary matter in
addition to the scalar field for the potentials under consideration.
\section {Asymptotically flat potentials and the merging of bounce
intervals}
We will use the units in which $m_{P}/\sqrt{16\pi}=1$ for presenting
our numerical results, because in these units most of the interesting events
occur for the range of parameters of the order of unity.
We start with the potential
\begin{equation}
V(\varphi)=M_{0}^{4}\left(1-\exp\left(-\frac{\varphi^2}{\varphi_{0}^{2}}\right)\right),
\end{equation}
where $M_0$ and $\varphi_0$ are parameters. $M_0$ determines the asymptotical
value of the potential for $\varphi \to \pm \infty$.
It can be easily checked from the equations of motion that multiplying
the potential by a constant (i.e. changing $M_0$) leads only to
a rescaling of $a$. So, this procedure does not change the
chaotic properties of our dynamical system. On the contrary,
the system appears
to be very sensitive to the value of $\varphi_0$. We plotted in Fig.1
the $\varphi=0$ cross-section of the bounce intervals depending on $\varphi_0$.
This plot represents a situation qualitatively different from that studied
previously for potentials like $V \sim \varphi^2$ and steeper. Namely,
the bounce intervals can merge.
\begin{figure}
\epsfxsize=\hsize
\centerline{{\epsfbox{figa1.eps}}}
\caption{
The $\varphi=0$ cross-section of the bounce intervals for the potential
(6) depending on $\varphi_0$. Consecutive merging of $5$ first intervals
can be seen in this range of $\varphi_0$.
}
\end{figure}
Let us see more precisely what this means. For $\varphi_0>0.82$ the picture
is qualitatively the same as for a massive scalar field - trajectories from the 1-st
interval have a bounce with no $\varphi$-turns before it,
trajectories which have an initial point of maximal expansion between
the 1-st and 2-nd intervals fall into a singularity after one $\varphi$-turn,
those from the
2-nd interval have a bounce after 1 $\varphi$-turn, and so on. For $\varphi_0$
a bit smaller than the first merging value, the 2-nd interval contains
trajectories with 2 $\varphi$-turns before the bounce, and the space between
the 1-st interval (which is now the product of two merged intervals)
and the 2-nd one contains trajectories falling into a singularity
after two $\varphi$-turns. There are no trajectories going to a singularity
with exactly one $\varphi$-turn.
Trajectories from the 1-st
interval can now experience
a complicated chaotic behavior which can not
be described in a way similar to the above.
With $\varphi_0$ decreasing further, the process of interval merging
continues, leading to a growing chaotization of trajectories.
When $n$ intervals have merged together, only trajectories with at
least $n$ oscillations of the scalar field before falling into
a singularity are possible. Those having exactly $n$ $\varphi$-turns
have their initial point of maximal expansion between the 1-st bounce interval
and the 2-nd one (which now contains trajectories having a bounce after
$n$ $\varphi$-turns). For initial values of the scale factor larger than
those
from the 2-nd interval, the regular
quasiperiodic structure described above is restored.
Numerical analysis also shows
that the fraction of very chaotic trajectories as a function of
$\varphi_0$ grows rapidly as $\varphi_0$ decreases below the first
merging value. To illustrate this point we plot in Fig.2 the number of
trajectories which do not fall into a singularity during the first $50$
oscillations of the scalar field $\varphi$. We do not include
trajectories with the next point of maximal expansion located outside
the 2nd (or the 1st, if merging occurred) interval, so all
counted trajectories avoid a singularity during this sufficiently long
time interval due to their extreme chaoticity, and not due to reaching
the slow-roll regime. The initial value of $a$ varies in the range of
the first two intervals before and after merging
with the step
$0.002$. Before merging, the measure of such chaotic trajectories is
extremely low and they are indistinguishable on our grid. When $\varphi_0$
drops slightly below the value of the first merging, this number
begins to grow rather rapidly, and for $\varphi_0 \sim 0.6$ nearly $10 \%$ of
trajectories from the 1st interval experience at least 50 oscillations
before falling into a singularity.
\begin{figure}
\epsfxsize=\hsize
\centerline{{\epsfbox{basa.eps}}}
\caption{
Number $N$ of trajectories that do not fall into a singularity during
$50$ oscillation times for the potential (6), depending on the parameter
$\varphi_0$. The scale factor of the initial maximal expansion point varies
in the range of the 1st and 2nd intervals, which merge at $\varphi_0=0.82$.
The total number of trajectories is equal to $1000$.
}
\end{figure}
We recall that for a simple massive scalar field potential only
a fraction $\sim 10^{-2}$ of trajectories in the same range of initial scale factors
have at least one bounce. The fraction
of trajectories not falling into a singularity after only one bounce
is about one hundred times smaller, and so on. Ordinary numerical
accuracy is insufficient to distinguish even a single
trajectory with 50 oscillations and $a$ in the range of the first two
intervals.
In contrast to this, the chaos
for the potential (6) is really significant. Details of interval merging,
including a description away from the $\varphi=0$ cross-section, require further
analysis.
For large initial $a$ the configuration of bounce intervals
for the potential (6) looks like the configuration for
a massive scalar field potential with the effective
mass easily derived from (6): $m_{eff}=(\sqrt{2} M_{0}^{2})/\varphi_0$.
The periods of the corresponding structures coincide with good accuracy,
though the widths of the intervals for the potential (6) are
bigger than for $V=(m_{eff}^2 \varphi^2)/2$.
\section{Damour-Mukhanov potentials}
The very chaotic regime described above is also possible for potentials
which are not asymptotically flat, if the potential growth is slow enough.
We will illustrate this point by describing
a particular (but rather wide) family of
potentials having power-law behavior -- the Damour-Mukhanov potentials
\cite{Damour}.
They were originally introduced to show the possibility of
inflationary behaviour without a slow-roll regime. Later, various issues of
inflationary dynamics \cite{Liddle}
and the growth of perturbations \cite{Taruya,Cardenas}
were studied for this kind of scalar
field potential.
The explicit form of Damour-Mukhanov potential is
\begin{equation}
V(\varphi)=\frac{M_{0}^{4}}{q} \left[ \left(1+\frac{\varphi^2}
{\varphi_{0}^{2}} \right)^{q/2}-1 \right],
\end{equation}
with three parameters: $M_0$, $q$ and $\varphi_0$.
For $\varphi \ll \varphi_0$ the potential looks like the massive one with the
effective mass $m_{eff}=M_{0}^{2}/\varphi_0$. In the opposite case of large
$\varphi$ it grows like $\varphi^q$.
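These two limiting regimes are easy to check numerically. Below is a small sketch (our own illustration, not from the paper; the parameter values are arbitrary) evaluating the potential and its small-field effective mass:

```python
import math

def damour_mukhanov(phi, M0, q, phi0):
    """Damour-Mukhanov potential of Eq. (7)."""
    return (M0**4 / q) * ((1.0 + (phi / phi0)**2)**(q / 2.0) - 1.0)

def m_eff(M0, phi0):
    """Small-field effective mass, m_eff = M0^2 / phi0."""
    return M0**2 / phi0

# For phi << phi0, expanding (1 + phi^2/phi0^2)^(q/2) to first order gives
# V ~ M0^4 phi^2 / (2 phi0^2) = m_eff^2 phi^2 / 2, independent of q.
# For phi >> phi0, V ~ phi^q, so V(2 phi)/V(phi) -> 2^q.
```

The first-order expansion cancels the explicit $1/q$ prefactor, which is why the effective mass quoted in the text does not depend on $q$.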
As in the previous section, the chaotic behavior does not depend on
$M_0$. So we have a two-parameter family of potentials with different
chaotic properties. Numerical studies of the possibility of
bounce-interval merging show the following picture (see Fig.3): for
a rather wide range of $q$ there exists a corresponding critical value
of $\varphi_0$ such that for $\varphi_0$ less than critical, the very
chaotic regime exists. Increasing $q$ corresponds to a decreasing
critical $\varphi_0$.
\begin{figure}
\epsfxsize=\hsize
\centerline{{\epsfbox{last.eps}}}
\caption{
The value $\varphi_0$ of the potential (7) corresponding to the
first merging of the bounce intervals depending on $q$.
}
\end{figure}
Surely, since this regime is absent for quadratic and
steeper potentials, $q$ must at least be less than $2$. We can see the
very chaotic regime clearly for $q< 1.24$.
The case $q=1.24$ leads to strong chaos for $\varphi_0<1.4 \times 10^{-5}$, and
the critical $\varphi_0$ decreases with increasing $q$ very sharply at this
point. We did not investigate these extremely small values of
$\varphi_0$ further, because the physical meaning of such a potential is
very doubtful.
\section{The influence of a hydrodynamical matter}
In this section we add the perfect fluid with the equation of state
$P=\gamma \epsilon$.
The equations of motion are now
\begin{equation}
\frac{m_{P}^{2}}{16 \pi}\left(\ddot{a} + \frac{\dot{a}^{2}}{2 a}
+ \frac{1}{2 a} \right) +\frac{a \dot{\varphi}^{2}}{8}
-\frac{a V(\varphi)}{4}
- \frac{Q}{12 a^{p+1}}(1-p) = 0
\end{equation}
\begin{equation}
\ddot{\varphi}
+ \frac{3 \dot{\varphi} \dot{a}}{a}+ V'(\varphi) = 0.
\end{equation}
with the constraint
\begin{equation}
-\frac{3}{8 \pi} m_{P}^2 (\dot{a}^{2} + 1)
+\frac{a^{2}}{2}\left(\dot{\varphi}^{2} + 2 V(\varphi)\right) +
\frac{Q}{a^{p}} = 0.
\end{equation}
Here $p=1+3 \gamma$, $Q$ is a constant from the equation of motion for matter
which can be integrated in the form
\begin{equation}
E a^{p+2} = Q = const.
\end{equation}
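As a cross-check of the system (8)-(10), the two dynamical equations can be integrated numerically and the constraint (10) monitored along the trajectory; it is conserved by the exact flow, provided the potential term enters the acceleration equation as $a V(\varphi)/4$ (which is what both dimensional analysis and conservation of (10) require). The sketch below is our own illustration in units $m_P=1$, with an arbitrary quadratic potential and initial data chosen on the constraint surface at a moment with $\dot a = 0$:

```python
import math

PI = math.pi

def rhs(state, V, dV, mP=1.0, Q=0.0, p=1.0):
    """First-order form of Eqs. (8)-(9), solved for the second derivatives.
    state = (a, a_dot, phi, phi_dot)."""
    a, adot, phi, phidot = state
    addot = (-adot**2 / (2.0 * a) - 1.0 / (2.0 * a)
             - (16.0 * PI / mP**2) * (a * phidot**2 / 8.0 - a * V(phi) / 4.0
                                      - Q * (1.0 - p) / (12.0 * a**(p + 1.0))))
    phiddot = -3.0 * phidot * adot / a - dV(phi)
    return (adot, addot, phidot, phiddot)

def constraint(state, V, mP=1.0, Q=0.0, p=1.0):
    """Hamiltonian constraint, Eq. (10); vanishes on physical trajectories."""
    a, adot, phi, phidot = state
    return (-(3.0 / (8.0 * PI)) * mP**2 * (adot**2 + 1.0)
            + (a**2 / 2.0) * (phidot**2 + 2.0 * V(phi)) + Q / a**p)

def rk4_step(state, h, V, dV, **kw):
    """One classical fourth-order Runge-Kutta step of size h."""
    f = lambda s: rhs(s, V, dV, **kw)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (c1 + 2.0 * c2 + 2.0 * c3 + c4)
                 for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4))
```

With $\dot a=0$, $\dot\varphi=0$ and $Q=0$, the constraint fixes the initial scale factor as $a=[3 m_P^2/(8\pi V)]^{1/2}$, which is how the starting point is chosen when exercising this sketch.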
In Ref.\cite{ournew} it was shown that the addition of hydrodynamical matter
to a scalar field with the potential $V(\varphi)=m^2 \varphi^2/2$ can kill
the chaos. Here we extend this analysis to less steep potentials.
Some of our results are illustrated in Fig.4. It is interesting that
increasing $Q$ acts like increasing $\varphi_0$. In Fig.4 three intervals
are merged at $Q=0$. When $Q$ increases, the 3rd and 2nd intervals
consecutively separate and we return to the chaos typical for
$V(\varphi)=m^2 \varphi^2/2$. With a further increase of $Q$ the chaos
disappears in the way discussed in \cite{ournew}.
\begin{figure}
\epsfxsize=\hsize
\centerline{{\epsfbox{serg.eps}}}
\caption{
The $\varphi=0$ cross-section of the bounce intervals for the
Damour-Mukhanov potential with $q=1.0$, $\varphi_0=0.1$, depending on $Q$. For
$Q=0$ three intervals merge. This plot shows the consecutive separation
and further disappearance of the bounce intervals as $Q$ increases.
}
\end{figure}
The value of $Q$ corresponding to the disappearance of chaos is in general
bigger for less steep potentials with the same effective mass.
In Fig.5(a) these values are plotted
for Damour-Mukhanov potentials and a perfect fluid with $\gamma=0$
($p=1$).
This particular $\gamma$ is chosen for a mathematical reason. Namely,
it can be seen from (8)-(10) that in this case only the constraint
equation is changed in comparison with the initial system (1)-(3). In
other words, the dynamical equations describing a FRW universe with a scalar
field and dust matter are formally equivalent to those for a scalar field
only, but with a nonzero value of the conserved energy. So our figure
describes not only the physical system under consideration, but also
general mathematical properties of (1)-(2).
We recall that for the case $V(\varphi)=m^2 \varphi^2/2$ (which is
equivalent to the Damour-Mukhanov potential with $q=2$; the
corresponding mass is equal to the effective one,
$m_{eff}=M_{0}^{2}/\varphi_0$) the chaos disappears for $Q m > 0.023
m_{P}$ \cite{ournew}. To compare with this value, we plot in
Fig.5(a) the values of $Q m_{eff}$ leading to the cessation of chaos with
respect to $\varphi_0$ for several $q$. In the units we use, the case
$q=2$ corresponds to the horizontal line $Q m_{eff} = 1.15$. All other
curves have this value as an asymptote for large $\varphi_0$. With
decreasing $\varphi_0$ the value $Q m_{eff}$ increases, at a rate
which is bigger for less steep potentials.
\begin{figure}
\epsfxsize=\hsize
\centerline{{\epsfbox{test.eps}}}
\caption{In Fig.5(a)
the values of $Q m_{eff}$ killing the chaos for potentials
(7) depending on $\varphi_0$ are plotted
for several $q$: $q=2$ (bold line), $q=1.5$
(solid curve), $q=1.0$ (long-dashed curve), $q=0.5$ (short-dashed curve).
In Fig.5(b)
the values of $Q m_{eff}$ killing the chaos for potentials
(6) depending on $\varphi_0$ are plotted.
}
\end{figure}
In Fig.5(b) the analogous curve is plotted for the asymptotically flat
potential (6). The value of $Q m_{eff}$ for large $\varphi_0$ is the same.
For small $\varphi_0$ we can estimate the $Q$ killing the chaos as the
value corresponding to the disappearance of the Euclidean region for the flat
potential $V(\varphi)=M_{0}^{4}$. It can easily be obtained from (4) that
in this case the Euclidean region disappears at $$ Q=\frac{1}{8 \sqrt{2
\pi^3}} \frac{m_{P}^{3}}{M_{0}^{2}} $$ and for bigger $Q$ any bounce
becomes impossible. As the potential (6) differs significantly from the
flat one only for $\varphi$ less than $\varphi_0$, this approximation appears to be
good enough for small $\varphi_0$. In our units it corresponds to the
curve $Q m_{eff}=8/\varphi_0$. Intermediate values of $\varphi_0$ represent a
smooth transition between these two asymptotic behaviors.
\section*{Acknowledgments}
This work was supported by
Russian Basic Research Foundation via grant
No 99-02-16224.
The theorem of Pappus is a theorem in geometry. The theorem is named after Pappus of Alexandria.
The theorem states:
If A1, B1 and C1 lie on one line d1, and A2, B2 and C2 lie on a line d2, then the following three points also lie on one line:
A: the intersection of B1C2 and B2C1,
B: the intersection of A1C2 and A2C1, and
C: the intersection of A1B2 and A2B1.
The resulting figure is called the Pappus configuration.
The theorem of Pappus is a special case of the theorem of Pascal.
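The statement is easy to verify numerically with homogeneous coordinates, where the line through two points and the intersection point of two lines are both given by cross products. The specific coordinates below are arbitrary illustrative choices:

```python
def cross(u, v):
    """Cross product: gives the line through two homogeneous points,
    or the intersection point of two homogeneous lines."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def meet(p, q, r, s):
    """Intersection of line pq with line rs."""
    return cross(cross(p, q), cross(r, s))

# A1, B1, C1 on the line y = 0; A2, B2, C2 on the line y = 1
A1, B1, C1 = (0, 0, 1), (1, 0, 1), (3, 0, 1)
A2, B2, C2 = (0, 1, 1), (2, 1, 1), (5, 1, 1)

A = meet(B1, C2, B2, C1)
B = meet(A1, C2, A2, C1)
C = meet(A1, B2, A2, B1)

# A, B, C are collinear iff the triple product det[A B C] vanishes
det = sum(A[i] * cross(B, C)[i] for i in range(3))
```

With integer inputs the arithmetic is exact, so the determinant comes out as exactly zero, as the theorem predicts.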
Geometry
Pappos
Q: Pass multiple arguments in form of tuple I'm passing a lot of data around; specifically, I'm trying to pass the output of a function into a class, and the output is a tuple with three variables. I can't directly pass the output from my function (the tuple) into the class as the input parameters.
How can I format the tuple so it is accepted by the class without writing input_tuple[0], input_tuple[1], input_tuple[2]?
Here is a simple example:
#!/usr/bin/python
class InputStuff(object):
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
input_tuple = (1, 2, 3)
instance_1 = InputStuff(input_tuple)
# Traceback (most recent call last):
# File "Untitled 3.py", line 7, in <module>
# instance_1 = InputStuff(input_tuple)
# TypeError: __init__() takes exactly 4 arguments (2 given)
InputStuff(1, 2, 3)
# This works
A: You can use the * operator to unpack the argument list:
input_tuple = (1,2,3)
instance_1 = InputStuff(*input_tuple)
A: You are looking for:
Unpacking Argument Lists
>>> range(3, 6) # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> range(*args) # call with arguments unpacked from a list
[3, 4, 5]
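A portability note beyond the two answers above (this part is not from the original thread): the doc excerpt quoted there is from Python 2, where `range` returns a list. In Python 3 `range` returns a lazy range object, but `*` unpacking works the same way, and `**` is the analogous operator for keyword arguments stored in a dict:

```python
class InputStuff(object):
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

# Positional unpacking with *
instance_1 = InputStuff(*(1, 2, 3))

# Keyword unpacking with **
instance_2 = InputStuff(**{"a": 1, "b": 2, "c": 3})

# In Python 3, range(*args) returns a range object; wrap it in list()
args = [3, 6]
values = list(range(*args))  # [3, 4, 5]
```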
# Problem of the Day: 2/4/13

Darren is thinking of two numbers. Their sum is 21, and their product is 108. What is their positive difference?

Solution to yesterday's problem:

First we need to find the circumference of the garden. The formula for circumference is $C=\pi d$. Substituting 20 for d, we get a circumference of 62.8319 feet. Each shrub is 6 inches in diameter, which means 2 shrubs will fit on one foot of the border. Double 62.8319 is 125.6638, so we can reasonably fit 125 shrubs around the border of the garden.
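The new problem is left open in the post. A quick way to check an answer to this kind of sum/product puzzle is the identity (x - y)^2 = (x + y)^2 - 4xy, sketched here:

```python
import math

def positive_difference(total, product):
    """Two numbers with the given sum and product are the roots of
    t^2 - total*t + product = 0; their positive difference is
    sqrt(total^2 - 4*product)."""
    return math.sqrt(total**2 - 4 * product)

print(positive_difference(21, 108))  # -> 3.0 (the numbers are 9 and 12)
```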
Is this the world's next best UHD Blu-ray player? Pioneer has finally released details on its UDP-LX500, a premium Dolby Vision-enabled player moving into the hardware void created by the departing Oppo.
As we've seen on previous Pioneer flagship disc spinners, build quality appears high. The deck features a low resonance design, honed for high-speed UHD Blu-ray playback. An Acoustic Damper tray sits within a rigid double-layered chassis with a 3mm steel top plate. The deck tips the scales at 10.3kg.
Pioneer says "the UDP-LX500 has also been engineered to be a superb audio-only player." Audio boards have also been designed for a superior signal-to-noise ratio, it adds.
There's a Direct function for analogue audio playback, which turns off the digital audio and video circuits, and Pioneer's PQLS (Precision Quartz Lock System) for HDMI connection to compatible Pioneer receivers (SC-LX701, SC-LX801, SC-LX901).
The UDP-LX500 is compatible with both SACD and DVD-Audio discs, and is High-Res able. It's also 3D Blu-ray compliant.
The UDP-LX500 has three HDMI output modes: The Separate mode separates sound and vision via the main/sub HDMI outputs; a Single mode routes both video and audio output via the main HDMI output, while a Pure Audio mode delivers an audio-only output from the sub HDMI terminal.
There is also an SDR/HDR Preset to optimise the video output according to display, adjustable between LCD TV, OLED and Projector (which should prove interesting), and an info-rich OSD which will display bit-rate and sound-mix format details, plus mastering information including MaxFALL (Maximum Frame Average Light Level) and MaxCLL (Maximum Content Light Level) metadata values. You want more? How about a self-illuminating remote control?
The UDP-LX500 will sell for £1,000 when it's launched in September.
Isle of Eriska Hotel and Spa
Pure luxury in this remote Scottish escape
By Rosamund Dean
This five star luxury resort is located on its own 300-acre private island, accessed by a wonderfully rickety wooden bridge. The weekend we visited, there was a storm of epic proportions, but we couldn't have picked a cosier place to sit it out. On arrival we found an elderly man carefully tuning a grand piano by a roaring fire in the lounge, which tells you all you need to know about the pace of life at the hotel.
Our room was called Lismore, after a Hebridean island, and Eriska's Scottish heritage is found in every nook and cranny. There are wellies available for blustery walks and, if you go in summer, there is croquet, archery, mountain biking, pitch and putt and, for the more serious golfer, a nine-hole course. If the famous Scottish weather isn't on your side then - like us - you can enjoy afternoon tea by that roaring fire: a feast of scones, shortbread and praline cake.
But modern luxuries are absolutely not sacrificed for the authentic Scottish countryside experience. There is excellent wifi, 24-hour room service and a state-of-the-art spa. The Stables Spa is all about working with nature and harvesting nutrients from the sea, using ESPA products to make you beautifully buffed and shiny as new, with lungs full of squeaky clean Scottish air.
But the thing that really stood out for me was the food. In the award-winning Michelin star restaurant, we tucked into langoustines, shoulder of hogget (helpfully described by our friendly waiter as "older than lamb but younger than mutton, so... sheep") and, for dessert, the incredible farmhouse cheese trolley, with over 40 delicious cheeses. The food at the more casual spa-side Veranda restaurant is just as scrumptious and, as for the breakfast, well, we haven't stopped talking about it ever since.
The quality of the restaurant is no surprise when you learn that Eriska is a member of Relais & Châteaux, the French group of hotels that has strict criteria, 'the five Cs': Caractère, Courtoisie, Calme, Charme et Cuisine. All five Cs are present in spades at Eriska.
The incredible food - particularly that cheese trolley - and dreamy spa facilities. Plus the remote location (a two-and-a-bit hour drive from Glasgow airport) feels like a real escape.
Some rooms are more modern than others and our bathroom could have done with a bit of updating.
Getting hitched? The Isle of Eriska is the perfect venue for your wedding on a private island!
Number of rooms: 25 bedrooms in the main house, two one-bedroom self-catering lodges, and one three-bedroom self-catering house.
Check out: 11.30am
Private parking: Yes
Child-friendly: Yes, they have high chairs and cots, and there is also the option to hire a babysitter.
Room rate: From £185 per person per night.
To book: eriska-hotel.co.uk
From £185 per room, per night
Isle of Eriska Hotel, Benderloch, Argyll, PA37 1SD, Scotland
\section{Introduction}
Gamma-ray bursts (GRBs) and their broadband afterglows are the most luminous phenomena in the Universe. The most popular model is the internal + external shock fireball model, which suggests that the prompt emission is produced by internal shocks at a distance interior to the deceleration radius of the GRB fireball, while the broadband afterglows come from the external shocks when the fireball is decelerated by the ambient medium\cite{Rees1994,Meszaros1997,Sari1998}. Prompt and afterglow emission thus involve two distinct processes at different sites in this model. The observations with the {\em Swift} mission significantly improved our understanding of the internal+external shock picture for GRBs\cite{Zhang2007a}. Rapid localization with {\em Swift} also revolutionized ground-based follow-up observations, leading to the establishment of a large sample of GRBs with well-sampled optical afterglow lightcurves and redshift measurements. Following our comprehensive analysis of the {\em Swift} data\cite{Liang2006a,Zhang2007b,Liang2007,Liang2008,Liang2009}, we make a systematic analysis of the optical data and explore their relations to the prompt gamma-ray emission and the X-ray afterglow.
\section{Data\label{sec:data}}
We include all GRBs that have an optical afterglow detection in our sample. A sample of 225 optical lightcurves is compiled from the literature in the period from Feb. 28, 1997 to November 2011. We make an extensive search for the optical data from published papers, or from GCN Circulars when no published paper is available. Well-sampled lightcurves are available for 146 GRBs. We collect the optical spectral index ($\beta_O$)\footnote{An optical spectral index $\beta_O=0.75$ is used for those GRBs without $\beta_O$ available.} and the extinction $A_{\rm V}$ of the host galaxy of each burst from the same literature. Galactic extinction correction is made by using the reddening map presented by Schlegel et al. (1998)\cite{Schlegel1998}. Since the $A_{\rm V}$ values are available only for some GRBs, and since $A_{\rm V}$ is derived from spectral fits using different extinction curves, we do not correct for the extinction of the GRB host galaxy. The $k$-correction in magnitude is calculated as $k=-2.5(\beta_O-1)\log(1+z)$. Note that most of the well-sampled optical lightcurves are in the R band. For a few GRBs, the data are well-sampled in other bands. We correct these lightcurves to the R band with the optical spectral indices. For data at late epochs ($\sim 10^6$ seconds after the GRB trigger), possible flux contribution from the host galaxy is also subtracted. The isotropic gamma-ray energy ($E_{\rm \gamma, iso}$) is derived in the $1-10^4$ keV energy band in the burst rest frame with the spectral indices. Data and references for our full sample will be reported in a series of papers (in preparation).
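The magnitude $k$-correction used above can be written compactly. The sketch below is our own illustration; it assumes base-10 logarithms (standard for magnitudes) and a power-law optical spectrum $F_\nu \propto \nu^{-\beta_O}$:

```python
import math

def k_correction(beta_o, z):
    """k-correction in magnitudes, k = -2.5 (beta_O - 1) log10(1 + z)."""
    return -2.5 * (beta_o - 1.0) * math.log10(1.0 + z)

# The correction vanishes for beta_O = 1 at any redshift; for the default
# beta_O = 0.75 adopted in the text it grows slowly with (1 + z).
```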
\section{Lightcurve Fitting and a Synthesized Optical Emission Lightcurve \label{sec:data}}
The optical lightcurves are usually composed of one or several power-law segments, as well as flares/re-brightening features. The mix of different components produces the diversity of the optical afterglow lightcurves. Different from previous statistical analyses of the optical data by other teams\cite{Liang2006b,Panaitescu2008,Panaitescu2011,Kann2010,Kann2011},
we fit the lightcurves with a model of several components in order to extract each emission component from the lightcurves. The basic components in our model are a power-law function and a smoothly broken power-law, i.e.,
\begin{eqnarray}
&F=F_0 t^{-\alpha},\ \ \ \
&F=F_0 [(t/t_{\rm b})^{\alpha_1\omega}+(t/t_{\rm b})^{\alpha_2\omega}]^{-1/\omega},
\end{eqnarray}
where $\alpha$ is the temporal decay slope, $t_{\rm b}$ is the break (or the peak) time, and $\omega$ measures the sharpness of a break (or a peak) of a lightcurve.
The width of a flare/bump is measured with the full-width at half-maximum (FWHM). We develop an IDL code to make the best fit with a subroutine called MPFIT\footnote{http://www.physics.wisc.edu/~craigm/idl/fitting.html}. Note that the parameter $\omega$ is usually fixed at 3 or 1 in our fitting. Our lightcurve-fitting approach is as follows. Initially, we add components to our model by inspecting the global features of a lightcurve. If the reduced $\chi^2_{\rm r}$ is much larger than 1, we continue to add components and redo the fit. We try to obtain a fit with $\chi^2_{\rm r}$ close to 1. The $\chi^2_{\rm r}$ values for some lightcurves are much lower than 1, indicating that some model parameters are poorly constrained. Therefore, we fix some parameters to make the fits for these GRBs. The erratic fluctuation of some data points with small error bars in some GRBs, such as GRB 030329, makes $\chi^2_{\rm r}$ much larger than 1. We do not add additional components for these data points, and the $\chi^2_{\rm r}$ of the fits for these GRBs are much larger than 1. The most difficult problem in our fitting is the extraction of seriously overlapping flares/bumps from the lightcurves. The slopes of these flares/bumps are usually quite uncertain. In our fitting, we first leave all parameters free to obtain the best fit for the global lightcurve, then adjust the rising slopes to ensure that the fitting curve crosses the data points around the peak times of the two components. Finally, we fix the rising slopes and make the best fit again.
The flux of a flare event usually rises and drops rapidly. We identify a flare event with the criterion that the slopes of the rising and decaying parts are steeper than 2. Optical flares during the burst duration are defined as prompt optical flares. Reverse shock flares following the prompt flares are observed in only a few GRBs; they are extensively discussed in the literature. The standard fireball model suggests that the decay slope of the afterglow should be steeper than 0.75 if there is no late energy injection after the GRB phase. We therefore define the shallow decay phase with the criterion that the initial decay slope of this segment is shallower than 0.75, transiting to a steeper decay after the break time. An optical afterglow onset is defined as an initial smooth hump with a peak less than 1 hour after the GRB trigger, decaying as a power-law with a slope consistent with the external shock models ($0.75<\alpha<2$). A re-brightening hump is analogous to the afterglow onset hump but occurs later than the onset hump. Supernova bumps are identified as late re-brightenings that peak around $1\sim 2$ weeks after the GRB trigger. We decompose the emission components and make statistical analyses for each one. Morphologically, a synthesized optical lightcurve of these components, along with some typical examples, is shown in a cartoon picture (Figure \ref{Cartoon}) based on our statistical results. It describes 7 components related to the canonical X-ray lightcurve\cite{Zhang2006}. The early optical afterglow lightcurves ($t<10^3$ seconds post the GRB trigger) of about one-third of the GRBs show a smooth bump, and another one-third of the lightcurves start with a shallow decay segment. Twenty-four optical flares are observed in 19 GRBs. Late re-brightening is observed in 30 GRBs.
A jet-like break, in which the decay slope transits from $0.75\sim 1.5$ to $1.5\sim 2.5$, is detected in 10 GRBs\footnote{We do not include those breaks that have a slope shallower than 0.75 breaking to a slope steeper than 1.5.}. A clear supernova bump is detected in 18 GRBs. The detection probability of each component is also marked in the cartoon lightcurve. We report our results for the flares, shallow decays, afterglow onset bumps, and late re-brightenings in this paper. We mark the parameters of these components with superscripts ``F" for the flares, ``S" for the shallow decays, ``A" for the afterglow, and ``R" for the re-brightenings.
\begin{figure}
$
\begin{array}{lr}
\subfigure [A synthesized cartoon lightcurve of multiple optical emission components based on our statistics. (b) Examples of the lightcurves with various emission components. The solid lines represent the best fit to the data. Simultaneous X-ray data
observed with {\em Swift}/XRT (crosses with error bars) are also presented.]{\includegraphics[angle=0,scale=0.55,height=1.6 in]{optical.eps}}&
\subfigure []{
$
\begin{array}{cc}
\includegraphics[angle=0,scale=0.25,height=1.3 in]{080319B.ps}&
\includegraphics[angle=0,scale=0.25,height=1.3 in]{050401.ps}\\
\includegraphics[angle=0,scale=0.25,height=1.3 in]{050922C.ps}&
\includegraphics[angle=0,scale=0.25,height=1.3 in]{100901A.ps}\\
\end{array}
$
}
\end{array}
$
\caption{} \label{Cartoon}
\end{figure}
\section{Flares}
We get 24 late flares in 19 GRBs. Relations of the width ($w^{\rm F}$) and the peak luminosity ($L^{\rm F}_{\rm p,iso}$) of the flares as functions of the peak time ($t^{\rm F}_{\rm p}$) are shown in Figure \ref{Flare_Corr}. The $t^{\rm F}_{\rm p}$ ranges from $\sim$ tens of seconds to $\sim 10^6$ seconds. The $w^{\rm F}$ values span the same range as $t^{\rm F}_{\rm p}$. The $L^{\rm F}_{\rm R, iso}$ ranges over $10^{43}-10^{49}$ erg s$^{-1}$, with a typical value of $10^{46}$ erg s$^{-1}$. A tight correlation between $w^{\rm F}$ and $t^{\rm F}_{\rm p}$ is found. The best fit gives $\log w^{\rm F}=-0.32+1.01\log t^{\rm F}_{\rm p}$, i.e., $w^{\rm F}\sim t^{\rm F}_{\rm p}/2$. The $L^{\rm F}_{R,p}$ is anti-correlated with $t^{\rm F}_{\rm p}$ in the burst frame, i.e., $\log L^{\rm F}_{\rm R, iso,48}=(1.89\pm 0.52)-(1.15\pm0.15) \log [t^{\rm F}_{\rm p}/(1+z)]$, with a Spearman correlation coefficient of 0.85 and a chance probability $p<10^{-4}$. Therefore, a flare peaking at a later time tends to be dimmer and wider.
$E^{\rm F}_{\rm R, iso}$ is usually smaller than 1/100 of $E_{\gamma, \rm iso}$. The $L^{\rm F}_{\rm R, iso}$ is correlated with $L_{\rm \gamma, iso}$, i.e., $\log L^{\rm F}_{\rm R, iso}/10^{48}=(-3.97\pm 0.60)+(1.14\pm 0.27)\log L_{\rm \gamma, iso}/10^{50}$, with a Spearman correlation coefficient of $r=0.75$ and a chance probability $p\sim 10^{-3}$. The flares in GRBs 050401, 060926, and 090726 are outside the $3\sigma$ region of the fit. Excluding the flares in these three GRBs, we find that $t^{'\rm F}_{\rm p}$ is also tightly anti-correlated with $E_{\gamma, \rm iso}$, i.e., $\log t^{'\rm F}_{\rm p}=(5.38\pm 0.30)-(0.78\pm 0.09)\log E_{\rm \gamma, iso}/10^{50}$ (with $r=0.92$). Similarly, a tight anti-correlation between $L_{\rm R, p}$ and $t_{\rm p}$ in the burst frame is found when the flares in the three GRBs are excluded, e.g., $\log [t^{\rm F}_{\rm p}/(1+z)]=(7.57\pm 0.60)-(1.35\pm 0.17)\log E_{\gamma,\rm iso, 50}$, with a Spearman correlation coefficient of 0.91.
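The empirical flare relations above can be packaged as rough predictors; the sketch below is our own convenience wrapper (central values only, with the quoted uncertainties dropped), not code from the paper:

```python
import math

def flare_width(tp):
    """Best-fit width-peak-time relation: log w = -0.32 + 1.01 log tp,
    i.e. roughly w ~ tp / 2 (tp and w in seconds)."""
    return 10.0**(-0.32 + 1.01 * math.log10(tp))

def flare_peak_luminosity(tp, z):
    """log(L / 10^48 erg/s) = 1.89 - 1.15 log[tp/(1+z)]; returns L in erg/s."""
    log_l48 = 1.89 - 1.15 * math.log10(tp / (1.0 + z))
    return 10.0**(log_l48 + 48.0)
```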
It is interesting to study whether the optical flares are associated with X-ray flares. Early optical flares are observed only in the lightcurves of GRBs 060210, 060926, 090618, and 090726 in our sample, indicating that the fraction of GRBs with detected early optical flares is much lower than that with X-ray flares. Among the 19 GRBs with detected optical flares, 16 were detected with Swift. Simultaneous observations with XRT during the optical flares are available for GRBs 050401, 060206, 060210, 060607A, 060926, 070311, 071010A, 071031, 080506, 090618, and 100728B. An X-ray flare that may be associated with the optical flare is observed only in GRBs 060926, 070311, and 071010A. The optical flares of the three GRBs lag behind the corresponding X-ray flares. Measuring the lags with the peak times of the flares, we get 196 seconds, $7.7\times 10^4$ seconds, and $2.45\times 10^4$ seconds for the flares in GRBs 060926, 070311, and 071010A, respectively. Based on these three flares, the lag is potentially proportional to the peak time of the flares.
\subsection{Prompt Optical and Reverse Shock Flares}
Well-sampled prompt optical flares are observed for GRBs 061121, 060526, 080129, 080319B, and 110215A. They usually trace the pulses of the prompt gamma-ray emission with a significant temporal lag. Reverse shock flares are detected for GRBs 990123 and 060111B. Generally speaking, a lightcurve dominated by reverse shock emission rises and decays rapidly, quite similar to the prompt optical flares.
\subsection{Late Flare}
\begin{figure*}
\includegraphics[angle=0,scale=0.4,width=0.32\textwidth,height=0.25\textheight]{Flare_Tp_W.eps}
\includegraphics[angle=0,scale=0.4,width=0.32\textwidth,height=0.25\textheight]{Flare_Lp_Tp.eps}
\includegraphics[angle=0,scale=0.4,width=0.32\textwidth,height=0.25\textheight]{Flare_LR_Liso.eps}
\caption{Correlation between $w^{\rm F}$ and $t^{\rm F}_{\rm p}$ as well as relations of $L^{\rm F}_{\rm R, iso}$ to $t^{\rm F}_{\rm p}$ and $L_{\gamma, \rm iso}$. The black solid dots, grey open circles, and grey triangles are for the optical flares, X-ray flares, and prompt gamma-ray pulses. Best fit lines with $3\sigma$ significance level are also shown. } \label{Flare_Corr}
\end{figure*}
\section{Early Shallow Decay Segment}
We obtain a sample of 41 GRBs with a shallow decay segment from the 146 GRBs. Thirty-one of the 41 shallow decay segments transit to a decay slope of $1\sim 2.5$, and 5 of them are followed by a sharp drop with a decay slope steeper than 2.5. About half of the shallow decay segments look like a plateau, with $|\alpha^{\rm S}_{b,1}|\leq 0.3$. Figure \ref{Shallow_corr} shows the distributions of the break times ($t^{\rm S}_{\rm b}$) and the luminosity at the break ($L^{\rm S}_{b, \rm iso}$), as well as their correlation. The break time ranges from tens of seconds to several days post the GRB trigger, with a typical $t^{\rm S}_{\rm p}$ of $\sim 10^4$ seconds. The $L^{\rm S}_{\rm R,b}$ varies from $10^{43}$ to $10^{47}$ erg s$^{-1}$, and even reaches $\sim 10^{49}$ erg s$^{-1}$ for the early break in some GRBs. The $L^{\rm S}_{\rm R,b}$ is anti-correlated with $t^{\rm S}_{\rm b}$: $\log L^{\rm S}_{\rm R,48}=(1.75\pm 0.22)-(0.78\pm 0.08)\log [t^{\rm S}_{\rm b}/(1+z)]$, with a Spearman correlation coefficient of $r=0.86$ and $\rho<10^{-4}$.
A shallow decay segment is commonly seen in well-sampled XRT lightcurves, except for a few GRBs whose XRT lightcurves decay as a single power law \cite{Liang2009}. It was also reported that the X-ray luminosity at the break time is correlated with the break time \cite{Dainotti2010}. We over-plot the $L_{\rm b, iso}$ as a function of $t_{\rm b}$ in the burst frame in Figure \ref{Shallow_corr}. One can observe that the optical data share the same relation as the X-ray data. Note that the X-ray luminosity is in the 0.3-10 keV energy band and the optical data are in the R band, so the X-ray luminosities are significantly higher than the optical ones. The observed photon indices of the X-ray spectra are $\sim 2$. Therefore, the energy spectra of the X-rays are flat, and the derived $L_{\rm b, iso}-t_{\rm b}$ relation in the 1 keV band is roughly consistent with that observed in the X-ray band.
We examine the chromaticity of the shallow decay segments in the X-ray and optical bands. The X-ray observations are available for 17 out of the 34 GRBs. We extract the underlying afterglow components from the X-ray data and compare the $\alpha^{\rm S}_{\rm 1}$, $\alpha^{\rm S}_{\rm 2}$, and $t^{\rm S}_{\rm b}$ in the X-ray and optical bands in Figure \ref{Shallow_Opt_Xray}. It is found that the data points are scattered around the equality line, and a tentative correlation between the break times of the optical and X-ray lightcurves is observed, with a chance probability of $\sim 0.15$. There is no correlation between the decay slopes of the X-ray and optical lightcurves. The decay segments prior to the break times in the optical bands tend to be steeper than those in the X-ray band, but the slopes post the breaks are roughly consistent, except for those with $\alpha^{\rm S}_{2}>2.5$ in the optical bands.
\begin{figure*}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_Tb_N.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_Lb_N.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_Lb_Tb.eps}
\caption{Correlation and distributions of $L^{S}_{\rm R, iso}$ and $t^{\rm S}_{b}/(1+z)$ of the shallow decay segments for the GRBs in our sample. The grey circles are for the X-ray data from Dainotti et al. (2010). Lines are the best fits.} \label{Shallow_corr}
\end{figure*}
\begin{figure*}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_OX_tp.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_OX_alpha1.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Shallow_OX_alpha2.eps}
\caption{Comparisons of the decay slopes and the break times in the optical and X-ray bands. The dashed lines are the equality lines. } \label{Shallow_Opt_Xray}
\end{figure*}
\section{Early Afterglow Onset Bump}
An early smooth bump is observed in the optical afterglow lightcurves of 42 GRBs in our sample. The peak luminosity and width as a function of $t^{\rm A}_{\rm p}$ are shown in Figure \ref{Onset_RB_Corr}. It is found that $L^{\rm A}_{\rm {R,p}}$ is anti-correlated with $t^{\rm A}_{\rm p}$ measured in the burst frame and $w$ is tightly correlated with $t^{\rm A}_{\rm p}$, indicating that a bump is wider and dimmer if it peaks later. The isotropic prompt gamma-ray energy ($E_{\gamma, {\rm iso}}$) is tightly correlated with $L^{\rm A}_{\rm R,p}$ (Figure \ref{Onset_RB_Corr}). The best fit yields $L^{A}_{R, iso}\propto E_{\gamma, \rm iso}^{1.00\pm 0.14}$. Assuming that the bumps signal the deceleration of the GRB fireballs in a constant density medium, we calculate the initial Lorentz factor ($\Gamma_0$) of the GRBs with redshift measurements. The derived $\Gamma_0$ are typically a few hundred. The $\Gamma_0-E_{\rm \gamma, iso}$ relation discovered by Liang et al. (2010)\cite{Liang2010} is confirmed with the current sample.
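For reference, a commonly adopted thin-shell estimate identifies the bump peak with the fireball deceleration time in a constant density medium, with ambient density $n$, proton mass $m_p$, and radiative efficiency $\eta$; the normalization below is accurate only up to a factor of order unity,
\begin{equation*}
\Gamma_0 \simeq \left[\frac{3 E_{\gamma, \rm iso} (1+z)^3}{32\pi n m_p c^5 \eta\, (t^{\rm A}_{\rm p})^3}\right]^{1/8}.
\end{equation*}
For typical values ($E_{\gamma, \rm iso}\sim 10^{53}$ erg, $n\sim 1$ cm$^{-3}$, $\eta\sim 0.2$, $t^{\rm A}_{\rm p}\sim 100$ s, $z\sim 1$), this gives $\Gamma_0\sim 200$, i.e., a few hundred.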
\begin{figure*}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Onset_RB_Lp_Tp.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Onset_RB_Tp_W.eps}
\includegraphics[angle=0,scale=0.350,width=0.32\textwidth,height=0.25\textheight]{Onset_RB_Eiso_Lp.eps}
\caption{Correlation between $w^{\rm A}$ and $t^{\rm A}_{\rm p}$ as well as relations of $L^{\rm A}_{\rm R, iso}$ to $t^{\rm A}_{\rm p}$ and $E_{\gamma, \rm iso}$. The black solid dots are for the afterglow onset bumps. The data for the late re-brightenings are also shown with open triangles for comparison.}
\label{Onset_RB_Corr}
\end{figure*}
\section{Late Re-Brightening}
A re-brightening hump is analogous to the afterglow onset hump, but it occurs later than the onset humps. It is observed in 30 GRBs in our sample. Both $\alpha_1$ and $\alpha_2$ of the onset and re-brightening bumps are well consistent. Most $\alpha_1$ values are in the range of $-3\sim 0$, with a typical value of $\sim -1$; in some cases $\alpha_1$ is smaller than $-3$. The $\alpha_2$ values are in the range of $0.3\sim 2$. It is found that $t^{\rm R}_{\rm p}$ randomly ranges from several hundred seconds to several days. The $L^{\rm R}_{\rm p}$ is systematically lower than $L^{\rm A}_{\rm p}$. The typical $E^{R}_{\rm R, iso}$ is $10^{45}\sim 10^{47}$ erg. Correlations of the characteristics of the re-brightening bumps are also shown in Figure \ref{Onset_RB_Corr} in comparison with the afterglow onset bumps. The re-brightening shares the same relation between the width and the peak time as the onset bump, but shows no clear correlation between $L_{R, p}$ and $E_{\gamma, iso}$, unlike the onset bumps. Although its peak luminosity also decays with time, the slope is much shallower than that of the onset peak. We get $L\propto t^{-1}_{\rm p}$, consistent with off-axis observations of an expanding fireball in a wind-like circumburst medium\cite{Panaitescu2008}. Therefore, the late re-brightening may signal another jet component.
\section{Summary}
We have analyzed the well-sampled lightcurves of 146 GRBs. A generic optical lightcurve and statistical results of various emission
components are presented. We summarize our results below.
(1) Twenty-four late optical flares are obtained from 19 GRBs. The fraction of the detected optical flares is much smaller than that of X-ray flares. Associated X-ray flares are observed for 4 optical flares, and the optical flares usually lag behind the corresponding X-ray flares. We find $L^{\rm F}_{\rm R, iso}\propto L_{{\gamma}, \rm iso}^{1.11\pm 0.27}$, $w^{\rm F}\sim t^{\rm F}_{\rm p}/2$ and $L^{\rm F}_{\rm R, iso}\propto [t^{\rm F}_{\rm p}/(1+z)]^{-1.15\pm0.15}$, indicating that the optical flares are correlated with the prompt gamma-ray phase and that a flare peaking later is wider and dimmer. These results suggest that the physical origin of the late optical flares could be the same as that of the prompt gamma-ray phase, and the temporal evolution from the GRB phase to late optical flares may signal the global evolution of the GRB central engine.
(2) A shallow decay segment is observed in 39 GRBs. The detection fraction of the optical shallow decay component is comparable to that in the X-ray band. The X-ray and optical breaks are usually chromatic, but a tentative correlation is found. Their break times ($t^{\rm S}_{\rm b}$) range from tens of seconds to several days post the GRB trigger, with a typical value of $\sim 10^4$ seconds. The break luminosity is anti-correlated with $t^{S}_{\rm b}$, $L^{\rm S}_{\rm R, b}\propto {[t^{S}_{\rm b}/(1+z)]}^{-0.78}$, similar to that derived from X-ray flares. The shallow decays / internal plateaus may be evidence of a long-lasting wind powered by the central engine. The injection behavior may be used to diagnose the nature of the central objects in the GRB central engine. Assuming that the behavior of the luminosity injected into the forward shocks evolves as $L=L_0t^{-q}$, we find that the long-lasting wind may be powered by a Poynting flux from a black hole via the Blandford-Znajek mechanism fed by fall-back mass or by the spin-down energy release of a magnetar after the main burst episode\cite{Rees1998,Dai1998,Sari2000,Zhang2001}. One critical issue to explain the shallow decay segment with the energy injection scenario is the chromatic breaks in the optical and X-ray bands. Mixing of different emission components may be the reason for the observed chromatic breaks of the shallow decay segment in different energy bands.
(3) An early smooth bump is observed in the optical afterglow lightcurves of 42 GRBs in our sample. It is found that $L_{\rm {R,p}}$ is anti-correlated with $t^{\rm A}_{\rm p}$ measured in the burst frame and $w^{\rm A}$ is tightly correlated with $t^{\rm A}_{\rm p}$, indicating that a dimmer bump tends to peak later and be wider. The $E_{\gamma, {\rm iso}}$ is also tightly correlated with $L^{\rm A}_{\rm R,p}$. Assuming that the bumps signal the deceleration of the GRB fireballs in a constant density medium, we calculate the initial Lorentz factor ($\Gamma_0$) of the GRBs with redshift measurements. The derived $\Gamma_0$ are typically a few hundred. The $\Gamma_0-E_{\gamma, \rm iso}$ correlation discovered by Liang et al. (2010)\cite{Liang2010} is confirmed with the current sample. The tight relation of the onset bumps to the prompt gamma-rays may open a window to investigate the radiation physics of GRB fireballs.
(4) A re-brightening hump is analogous to the afterglow onset hump, but it follows an onset hump or a power-law decay segment. It is observed in 30 GRBs in our sample. It shares the same relation between the width and the peak time as the onset bumps, but no clear correlation between $L_{R, p}$ and $E_{\gamma, iso}$ is found. Although its peak luminosity also decays with time, the slope is much shallower than that of the onset peak. We get $L\propto t^{-1}_{\rm p}$, consistent with off-axis observations of an expanding external fireball in a wind-like circumburst medium. Therefore, the late re-brightening may signal another jet component.
The next-to-last time I saw Zig Ziglar, I was one of 17,000 in attendance at the Honda Center in Anaheim, California, where he was speaking as part of a program of superstars, including Colin Powell, Condoleezza Rice, and Joe Montana. He was onstage accompanied by his daughter, Julie Ziglar Norman, because Zig had suffered a fall a couple of years before that and nobody wanted him to fall again, especially onstage, and especially in front of 17,000 people.
On April 15, 2011, I saw Zig again, this time for lunch, with his daughter Julie and his son Tom. From 17,000 down to four. If you love Zig Ziglar as I do, you can readily understand it was one of the greatest thrills of my life.
At lunch, Zig leaned over to me and said, quite seriously, "Never say anything negative about yourself." It sounds so obvious, but we all do it all the time. If we don't see ourselves as wondrously made, as Zig likes to quote from the Bible, who will?
I asked Zig what caused him to make the transition from sales training to motivational speaking. His son Tom explained that Zig studied the success of his students, and he realized that only 20 percent of it was due to technique. The other 80 percent was due to reputation and character. So that's when Zig began to focus on those issues and not just talk about selling.
As massive as Zig's audience was, the publishing industry didn't think him worth a shot when he wrote the book I found many years later in that furniture store, See You At The Top. By then, Zig had been providing sales training to the Mary Kay Company. Mary Kay Ash was such a devotee of his, Tom told me at lunch, that she told Zig that if he were to self-publish the book, she would buy the first 10,000 copies. Those initial 10,000 sales mushroomed into millions upon millions of books, since Zig has now authored 26 books in all.
I had the extraordinary privilege of editing Zig's last book, Born To Win. I've edited or coached hundreds of writers, and it was an uncanny, almost out-of-body experience: instead of quoting Zig to people, I was talking directly to Zig, and making suggestions (how dare I?) to improve his manuscript.
It means the world to me that I was able to meet him face to face at lunch with just him, his two grown children who work with him, and me, and tell him that he made me a better salesperson, a better husband, a better father, a better believer, and a better man.
As I headed out to drive to the airport, Zig took me by the hand and cautioned me to drive carefully.
"After all, most people are caused by accidents," he warned, with mock solemnity.
Published on Muzli
Rebranding Facebook: What can we learn about the new logo
By Ghaith Ayadi, November 9, 2019
Facebook just unveiled its new company logo on November 4th. No teasing, no big unveiling of the rebranding, just a relatively discreet update showing off an underwhelming design of the brand's iconic wordmark and… well, nothing else.
A low radar update
Contrary to what many brands have been doing recently, and to what Facebook itself did at the last F8 conference when it updated the look of its app, the low-radar update is a bit of a surprise, given that the logo hasn't changed much since 2005 and hasn't changed at all since the 2015 facelift.
https://www.pnclogos.com/the-facebook-logo-and-the-history-behind-the-company/
I guess we are used to seeing rebranding projects get much more publicity because they are, indeed, important. We've seen more excitement when Microsoft merely released the new icons of their Office suite. Heck, smaller brands like Slack and Mozilla generated a lot more news in their debut. Even Zara got more attention, and they only changed the kerning on theirs.
So isn't this rebranding important and newsworthy at all? Spoiler alert, it is.
Facebook went uppercase
If you're not so much into silicon valley tech companies then this might not be a big deal for you. But here's a bunch of tech company logos. Guess what they all have in common: Lowercase letters.
Silicon Valley logos
Yes, there are Silicon Valley companies with full uppercase letters: Netflix, Oracle, Tesla, Cisco (maybe?). Uber also used to have an all-caps logo before they ditched it, and so did Evernote. But at least we can say that Facebook is the first of the GAFAM (Google, Apple, Facebook, Amazon, Microsoft) to do it, and it's a curious decision, to say the least.
Lowercase letters are a pillar of Silicon Valley. There has been much debate around this fact, and the consensus seems to be going behind the idea that lowercase letters are friendlier and less shouty. But I guess nobody put it better than Erlich Bachman from the popular HBO sitcom Silicon Valley, who said in a comment about this exact subject:
" Are you f*ing serious? Lowercase letters? Twitter, lowercase "t". Google, lowercase "g". Facebook, lowercase "f". Every f*ing company in the Valley has lowercase letters. Why? Because it's safe."
I never thought I would quote Erlich, but he's right (he's wrong about Google's G, though). Is Facebook taking the road less traveled here for a reason? Of course they are. The question is why, and the answer has got to be more than just for the sake of novelty.
The goal behind the rebranding
What we know for now is that one of the goals behind this rebrand is to draw a distinction between the company and the app, as their chief marketing officer Antonio Lucio says:
We needed the wordmark to establish distinction from the Facebook app and allow for a clearer connection to the full family of technologies. The new brand system uses custom typography, rounded corners, open tracking, and capitalization to create a visual distinction between the company and the app.
https://facebook.design/companybrand
In that regard, I personally think that it's a job well done, and it's actually a good idea. As opposed to Google, Facebook doesn't have a parent company like Alphabet, so it's nice to see them drawing the distinction. I feel, though, as if this move is going to bring Facebook the company, not the app, more to the average consumer's attention.
We don't know anything other than that for now, but it would seem to me that this has to have a connection with Facebook's legal troubles. Maybe the company is trying to remind everyone that the apps that we all love and use are all coming from the same company. Maybe they're trying to buy some slack there (no pun intended)? Nobody knows. It's just speculation at this point.
A very unoriginal rebranding
I guess the trend of going with wordmarks and ditching the symbols cannot be hidden anymore. This is reminiscent of Uber's argument when they ditched their uppercase wordmark + symbol combo in favor of a mere lowercase wordmark. The idea behind that was:
" Invest in a wordmark, not a symbol"_ Uber Design
The only reasoning they gave was "no need for a symbol". I guess that's also a trend that seems to be in action. We've also seen it from GoDaddy, and you could even say Instagram. You can be pretty sure that we'll be seeing a lot of companies going with this approach in the near future.
From a typography standpoint, I must say I can't see anything original, which is a good thing for a company of Facebook's size.
Facebook's new wordmark typography comparison with Circular and Gotham
Facebook's in-house team developed a "custom" typeface for this wordmark. It does truly have some distinctive features, such as the curved diagonal strokes of the A and K, the extended bowls of the B, and the slight symmetry of the arm and leg of the K. I'm sure typography experts will have more to say about it, but I kind of like it. It's appropriately discreet.
Can you see the book?
I might be overthinking this, but I could swear I see a book in the logo. The arms of the K are very reminiscent of a book's pages, and especially the symmetry between FACE and BOOK, with the slightly tighter kerning of the E and B, is making me imagine things.
So will Facebook's new logo end all their struggles? Don't be silly, of course not. I guess we would all rather see them take serious steps towards becoming the "privacy centered company" that Zuckerberg promised us at his congressional hearing session.
Ghaith Ayadi
I'm Ghaith Ayadi [ɣaajθ ʕajadiː]. I am an entrepreneur who writes and designs. I specialize in Product Design and Branding and write about design, entrepreneurship, growing up and other things that matter. I currently do freelance work while I'm working on my next big thing. Reach me on: yo@ayadighaith.com
{"url":"https:\/\/socratic.org\/questions\/are-all-isosceles-triangles-acute-1#620890","text":"Are all isosceles triangles acute?\n\nMay 27, 2018\n\nNo. There can be an obtuse angle or a right-angle in an isosceles triangle.\n\nExplanation:\n\nNo. You can three types of isosceles triangles:\n\nacute-angled isosceles triangle\nAll the angles are acute and the base angles are equal.\n\neg: 75\u00b0, 75\u00b0, 30\u00b0\" or \"50\u00b0, 50\u00b0, 80\u00b0\n\nright-angled isosceles triangle\nOne angle is 90\u00b0 and the other two are both 45\u00b0\n\n45\u00b0, 45\u00b0, 90\u00b0\" \"larr\" \" (like one of the usual set squares)\n\nobtuse-angles isosceles triangle\nOne angle is obtuse and the base angles are equal.\n\neg: 30\u00b0, 30\u00b0, 120\u00b0\" or \"25\u00b0, 25\u00b0, 130\u00b0\n\nOf course there can only be one 90\u00b0 or one obtuse angle in a triangle. The other two angles MUST be acute, else the sum would be more than 180\u00b0","date":"2022-01-20 22:16:26","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 7, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7690525650978088, \"perplexity\": 2574.841815840993}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320302706.62\/warc\/CC-MAIN-20220120220649-20220121010649-00659.warc.gz\"}"} | null | null |
\titlespacing\section{0pt}{6pt plus 0pt minus 0pt}{6pt plus 0pt minus 0pt}
\titlespacing\subsection{0pt}{6pt plus 2pt minus 2pt}{0pt plus 2pt minus 2pt}
\newif\iflogvar
\logvartrue
\newif\ifblankver
\blankverfalse
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\argmin}{argmin}
\hypersetup{colorlinks=true,
citecolor=[rgb]{0,0.2,0.8},
linkcolor=[rgb]{0,0.2,0.8},
urlcolor =[rgb]{0,0,0}}
\usepackage{color}
\definecolor{blue}{rgb}{0,0.2,0.8}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[RO,LE]{\footnotesize\thepage}
\fancyhead[CO]{\scriptsize CAN VOLATILITY SOLVE THE NAIVE PORTFOLIO PUZZLE?}
\ifblankver
\fancyhead[CE]{\scriptsize CAN VOLATILITY SOLVE THE NAIVE PORTFOLIO PUZZLE?}
\else
\fancyhead[CE]{\scriptsize CURRAN, O'SULLIVAN AND ZALLA}
\fi
\fancyfoot[L,R,C]{}
\renewcommand{\headrulewidth}{0pt}
\ifblankver
\title{Can Volatility Solve the Naive Portfolio Puzzle?}
\vspace{0.5cm}
\author{}
\else
\title{Can Volatility Solve the Naive Portfolio Puzzle?\footnote{\scriptsize
We thank Caitlin Dannhauser, Jes\'{u}s Fern\'{a}ndez-Villaverde, Alejandro Lopez-Lira, Rabih Moussawi, Michael Pagano, Nikolai Roussanov, Paul Scanlon, Frank Schorfheide, John Sedunov, Raman Uppal, and Raisa Velthuis for helpful comments. Christopher Antonello provided diligent research assistance.
There are no conflicts of interest to disclose.
A technical appendix can be found at \url{http://michael-curran.com/research/volatility_appendix.pdf}.
}
}\vspace{0.5cm}
\author{\href{http://www.michael-curran.com}{Michael Curran}\footnote{\scriptsize
Corresponding author.
Email: michael.curran@villanova.edu; Phone: (+1) 610-519-8867
\newline Address: Economics Dept., Villanova School of Business, Villanova University, 800 E Lancaster Ave, PA 19085, USA.} \\%\\
Villanova University
\and
Patrick O'Sullivan\footnote{\scriptsize
Email: patrick.osullivan@schroders.com
\newline Address: Schroders, 1 London Wall Place, London, UK.} \\%\\
Schroders Investment Management
\and \href{https://sites.google.com/site/ryanzalla/}{Ryan Zalla}\footnote{\scriptsize Email: rzalla@sas.upenn.edu; Phone: (+1) 412-759-5032 \newline Address: Economics Dept., University of Pennsylvania, 133 South 36th Street, Philadelphia, PA 19104, USA.} \\%\\
University of Pennsylvania \vspace{.4cm}}
\fi
\date{\small \today \\ \vspace{-0.5cm}}
\begin{document}
\begin{titlepage}
\thispagestyle{empty}
\begin{onehalfspace}
\maketitle
\end{onehalfspace}
\vspace{-0.5cm}
\begin{doublespace}
\begin{abstract}
We investigate whether sophisticated volatility estimation improves the out-of-sample performance of mean-variance portfolio strategies relative to the naive 1/N strategy. The portfolio strategies rely solely upon second moments. Using a diverse group of portfolios and econometric models across multiple datasets, most models achieve higher Sharpe ratios and lower portfolio volatility that are statistically and economically significant relative to the naive rule, even after controlling for turnover costs. Our results suggest benefits to employing more sophisticated econometric models than the sample covariance matrix, and that mean-variance strategies often outperform the naive portfolio across multiple datasets and assessment criteria.
\end{abstract}
\end{doublespace}
\indent\small \textbf{Keywords:} mean-variance, naive portfolio, volatility \\
\indent\small \textbf{JEL:} G11, G17
\end{titlepage}
\doublespacing
\newpage \setcounter{page}{1}
\setlength{\baselineskip}{1.1\baselineskip}
\setlength{\parskip}{0pt plus0pt}
\section{Introduction}
\noindent{}Since~\citet{markowitz1952} introduced mean-variance strategies, their out-of-sample performance has been criticized. One reason for weak performance is estimation error in the mean return (\citealp{merton1980estimating}; \citealp{chopra2013effect}). Variance strategies, using only second moments, avoid the pitfall of expected returns estimation. \citet{demiguel2009optimal} found the difference in performance of mean-variance strategies relative to an equally diversified portfolio to be statistically insignificant. We call this the \textit{naive} diversification puzzle. While previous work such as~\citet{demiguel2009optimal} employs robust estimation procedures to reduce parameter errors, they use sample covariance estimates.
Few papers in the naive portfolio literature have pursued improved estimation of the volatility of returns.\footnote{Instead of the portfolio strategy, our innovation explores a wide variety of \textit{econometric} models. \citet{demiguel2009optimal} find that the minimum-variance portfolio, though performing well relative to other portfolio strategies, significantly beats the 1/N strategy for only 1 in 7 of their datasets. \citet{jagannathan2003} and~\citet{kirby2012s} innovate on the portfolio strategy, illustrating that short-sale constrained minimum-variance strategies and volatility timing strategies enhance performance.}\textsuperscript{,}\footnote{We consider a wide range of mostly parametric econometric models. Non-parametric models using higher-frequency data~\citep{demiguel2013improving} and shrinkage approaches~\citep{ledoit2017nonlinear} also improve the accuracy of estimation. Daily frequency option-implied volatility reduces portfolio volatility, but never statistically significantly improves the Sharpe ratio relative to the 1/N strategy~\citep{demiguel2013improving}. Although \citet{johannes2014sequential} account for both estimation risk and time-varying volatility through eight variations of a similar class of constant and stochastic volatility models, we expand to more varied classes of volatility types with 14 econometric models. Initial investigations reveal our results to be at least as strong as~\citet{ledoit2017nonlinear}.
} Our contribution is that a variety of econometric models of volatility improve performance relative to the naive portfolio strategy. That is, relative to sample covariance estimation, our study suggests considerable performance gains from employing modern econometric models for estimating volatility in portfolio construction.
We investigate whether sophisticated volatility estimation improves the out-of-sample performance of mean-variance portfolio strategies depending only on conditional variance-covariance matrices (variance strategies) relative to the naive 1/N strategy. Using a diverse set of fourteen econometric models, we
apply three portfolio strategies -- minimum-variance, constrained minimum-variance, and volatility-timing -- to six empirical datasets of weekly and monthly returns; we also include the tangency portfolio.
We assess performance using three criteria: (i) the Sharpe ratio, (ii) the Sharpe ratio adjusted for turnover (trading) costs, and (iii) the standard deviation of returns (portfolio volatility). Overall, we show that variance strategies perform consistently well out-of-sample.
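For concreteness, the variance strategies depend on the conditional covariance matrix $\Sigma_t$ alone. In their standard forms (stated here as a sketch, with $\iota$ a conformable vector of ones and $\eta$ the timing-aggressiveness parameter of the volatility-timing rule of \citet{kirby2012s}; our implementation may differ in detail), the minimum-variance and volatility-timing weights are
\begin{equation*}
w^{\rm min}_{t} = \frac{\Sigma_t^{-1}\iota}{\iota'\Sigma_t^{-1}\iota}, \qquad w^{\rm vt}_{i,t} = \frac{(1/\hat{\sigma}^{2}_{i,t})^{\eta}}{\sum_{j=1}^{N} (1/\hat{\sigma}^{2}_{j,t})^{\eta}},
\end{equation*}
and the constrained strategy additionally imposes $w^{\rm min}_{i,t} \geq 0$.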
Our first contribution is that all four portfolio strategies, regardless of the econometric model used for estimating covariance, achieve superior out-of-sample performances relative to the naive benchmark. This assertion challenges the literature that has rarely reported superior minimum-variance performances relative to the equally-weighted portfolio \citep{demiguel2009optimal}.\footnote{Our econometric estimation strategies yield improvements beyond the period and frequency differences.} Specifically, we find that all four portfolio strategies estimated using all fourteen econometric models perform \textit{at least as well} as the naive benchmark, and only rarely underperform it. These underperformances are mainly concentrated in the Fama-French 3-factor dataset. If we discard this dataset,
then all four portfolio strategies estimated using twelve of the fourteen econometric models would \textit{weakly dominate} the naive benchmark.\footnote{A portfolio strategy, whose covariance is estimated using a given econometric model, \textit{weakly dominates} the naive benchmark if, for each performance criterion, the portfolio strategy performs at least as well as the naive benchmark across all datasets and performs significantly better in at least one dataset.}
In general, the minimum-variance strategy, with or without short-sale constraints, achieves higher Sharpe ratios, lower turnover costs, and lower portfolio volatility across the majority of datasets. Wherever these strategies do not outperform the naive rule, we often fail to reject the null hypothesis that their performances are identical to the naive rule.
Likewise, we find evidence that volatility-timing strategies achieve higher Sharpe ratios and lower turnover costs but exhibit comparable portfolio volatility to the naive rule. The tangency portfolio does reasonably well in certain datasets. Relative to the other strategies, however, its performance is lackluster, which we attribute to estimation error in expected returns.
Our second contribution is to identify pairings of volatility models and portfolio strategies that perform consistently and significantly well across datasets relative to the naive benchmark. Multivariate GARCH models, particularly the constant conditional correlation (CCC), \textit{weakly dominate} the naive rule when applied to minimum-variance and constrained minimum-variance strategies. Similarly, the realized covariance (RCOV) model weakly dominates the naive rule when applied to the volatility-timing strategy. In the tangency portfolio, although the RCOV model achieves higher Sharpe ratios and lower portfolio volatility relative to the naive strategy than other econometric models, it exhibits higher portfolio volatility relative to the naive rule in data on international equities. Nonetheless, many other econometric models, such as the multivariate GARCH models, weakly dominate the naive strategy. Even econometric models such as the regime-switching vector autoregression (RSVAR) and exponentially-weighted moving-average (EWMA), which perform worst in each of the four portfolio strategies, still perform at least as well as the naive benchmark across every dataset except the Fama-French 3-factor.
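For reference, the EWMA covariance estimator follows the standard RiskMetrics recursion,
\begin{equation*}
\Sigma_t = (1-\lambda)\, r_{t-1} r_{t-1}' + \lambda\, \Sigma_{t-1},
\end{equation*}
where $r_{t-1}$ is the vector of demeaned returns and $\lambda\in(0,1)$ is the decay parameter; the RiskMetrics convention sets $\lambda=0.94$ for daily and $\lambda=0.97$ for monthly data, which we note as the conventional values rather than a statement of our exact calibration.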
Our third contribution is to compare the performance of econometric models relative to the naive benchmark using each assessment criterion in \textit{isolation}. That is, across portfolio strategies and datasets, which econometric models consistently achieve the highest Sharpe ratios? Which econometric models achieve the lowest turnover costs? Which econometric models achieve the lowest portfolio volatilities? First, the combined parameter (CP) and realized covariance (RCOV) models achieve significantly higher Sharpe ratios than the naive benchmark. Second, the RCOV and a variety of GARCH models produce significantly higher Sharpe ratios than the naive rule after adjustment for turnover costs. Finally, the exponentially-weighted moving-average (EWMA), multivariate stochastic volatility (MSV), and RCOV models exhibit significantly lower portfolio volatility than the naive rule. These performances are economically and statistically significant: relative to the naive strategy, we achieve 30\% higher Sharpe ratios and 9\% lower portfolio volatility.\footnote{For each portfolio strategy, we average the Sharpe ratios and portfolio volatility resulting from all fourteen econometric models across all six datasets. Then we average Sharpe ratio and portfolio volatility across all four portfolio strategies.}
Our paper exploits recent computational developments to estimate the covariance matrix
using multivariate, nonlinear, non-Gaussian econometric models.\footnote{Our study benefits from incorporating recent advances in the computation of several models as in~\citet{vogiatzoglou2017dynamic}, \citet{chan2018bayesian}, and~\citet{kastner2019sparse}. To reduce run-time, we employ
fast, low-level languages, e.g., C++, that we program in parallel with hyperthreading and execute on
clusters.}
We extend the set of models in~\citet{wang2015hedging} beyond
GARCH and discrete regime-switching models to smooth multivariate stochastic volatility
and non-parametric realized volatility models.
We draw two conclusions from our results. First, relative to sample covariance estimation, our study suggests considerable performance gains from employing modern econometric models for estimating volatility in portfolio construction.
Second, minimum-variance strategies consistently perform well relative to the naive strategy.
Our study builds on the literature of naive diversification.
Mean-variance models struggle to compete with naive diversification in out-of-sample return performance. Considering multiple mean-variance strategies, including ones that account for parameter estimation error, \citet{demiguel2009optimal} find that no portfolio strategy consistently outperforms naive diversification.
One cause of the weak performance is that, while mean-variance strategies based on the minimum-variance portfolio outperform naive diversification, portfolio turnover costs negate these benefits~\citep{kirby2012s}. To address the turnover issue, the authors
propose two new approaches that reduce portfolio turnover, finding that the mean-variance strategies outperform the naive strategy out-of-sample. We employ their volatility-timing strategy.
Combining naive diversification with other conventional portfolio
strategies can also enhance portfolio performance~\citep{tu2011markowitz}. We investigate combining parameters from the variance-covariance matrix estimates with portfolio strategy weights.
Studies have increased the sophistication of parameter estimation since \citet{demiguel2009optimal}, which was based on rolling-window sample estimates. The performance of mostly GARCH-based estimators has been lackluster~\citep{trucios2019covariance}.\footnote{Using a shorter time-sample across one dataset with a larger portfolio, they do not consider vector autoregression, vector error correction for non-stationarity, or either regime-switching or stochastic volatility models, which are computationally challenging and account for observed nuances of time-varying volatility.} Non-linear shrinkage estimators~\citep{ledoit2017nonlinear} and estimation strategies for large portfolios~\citep{ao2019approaching} can improve performance.\footnote{Preliminary evidence suggests that our results are at least as strong relative to the naive portfolio as what~\citet{ledoit2017nonlinear} find. Direct comparisons are more complicated in~\citet{ao2019approaching}. Relative to the naive portfolio, initial experiments suggest that their MAXSER estimator performs better than our econometric models do in some comparisons, but that our models do better in most empirical comparisons.} Implied volatility and skewness~\citep{demiguel2013improving},
vector autoregression~\citep{demiguel2014stock},
and portfolio constraints (\citealp{demiguel2009generalized}; \citealp{kourtis2012parameter}; \citealp{behr2013portfolio})
also enhance portfolio performance.\footnote{To isolate one study by~\citet{demiguel2009generalized}, our paper employs improved econometric methods rather than more sophisticated portfolio constraints. Although not directly comparable, preliminary investigations reveal that our models improve performance relative to the naive portfolio by a greater ratio than~\citet{demiguel2009generalized} in terms of Sharpe ratios, portfolio volatility, and turnover costs.}
Recent papers find comparably poor performance using sophisticated hedging strategies to beat naive one-to-one hedging~\citep{wang2015hedging}, provide behavioral evidence of naive choice strategies (\citealp{benartzi2001}; \citealp{huberman2006offering};
\citealp{giorgi2018naive}; \citealp{gathergood2019naive}), and emphasize the importance of volatility~\citep{moreira2017volatility,moreira2019volatility}.
Consistent with
the latter,
our paper finds that volatility-timing strategies generate higher Sharpe ratios yet exhibit equal volatility to the naive portfolio. We explore improved volatility estimation as a source of out-of-sample performance.
Our paper differs from many papers by focusing on an investor allocating wealth across portfolios rather than across individual assets.
We apply our methods to six different empirical datasets with $N\leq11$ portfolio choices. Many retail investors and some institutional investors trade a small number of index portfolios rather than a large number of individual stocks. Evidence using over half a million individuals in over six hundred 401(k) plans indicates that participants tend to use three or four funds and allocate their contributions equally across the funds~\citep{huberman2006offering}.
Fortunately, our multiple datasets are broad and varied. For instance, we cover datasets allowing investors to diversify across international equities in the form of national stock market aggregate indices, across entire sector and industry indices, across expansive Fama-French portfolios, and across other encompassing indices based on size/book-to-market and momentum portfolios. The best application for our methods from a practical standpoint, therefore, is to an investor holding multiple mutual funds in equities.
Our benchmark is the \textit{naive} diversification rule, allocating a fraction $1/N$ of wealth to each of $N$ choices available for investment at each rebalancing date.
Three reasons justify the naive rule as a benchmark. First, implementation is easy as it relies on neither optimization nor estimation of the moments of returns. Second, investors use such simple rules for allocating their wealth across investments (\citealp{benartzi2001}; \citealp{huberman2006offering};
\citealp{baltussen2011}; \citealp{giorgi2018naive}; \citealp{gathergood2019naive}). Third, the naive rule consistently outperforms mean-variance strategies
(\citealp{demiguel2009optimal}; \citealp{duchin2009}; \citealp{pflug2012}). Because the $1/N$ weights require no estimation, the variance of parameter estimation is zero, and thus the mean square error of the naive portfolio weights is simply the squared bias.\footnote{Letting $\hat{\mathbf{w}}$ denote our estimate of the optimal vector of portfolio weights $\mathbf{w}$, the MSE bias-variance decomposition from econometrics is $MSE(\hat{\mathbf{w}})=Var(\hat{\mathbf{w}})+Bias^2(\hat{\mathbf{w}},\mathbf{w})$, where $Bias(\hat{\mathbf{w}},\mathbf{w})=E[\hat{\mathbf{w}}]-\mathbf{w}$.} The naive strategy therefore proxies as a challenging rival for mean-variance strategies to outperform.
\section{Econometric Models}\label{sec:metrics}
We benchmark our model choices to~\citet{wang2015hedging}, who consider out-of-sample performance of hedging strategies.\footnote{While we attempt to cover the broad classes of econometric models, our set of econometric models is not exhaustive. For instance, we omit the shrinkage estimators of~\citet{hafner2012estimation} and~\citet{ledoit2003improved,ledoit2017nonlinear}. Although these and other econometric models are interesting, our study is the most expansive in its coverage of econometric models.} We extend their analysis from bivariate to multivariate random variables where the choice set is $N>2$. For each of their econometric models, they experiment with multiple estimators. Initial investigations reveal that heterogeneity in econometric models matters more than heterogeneity in estimators for yielding differing means, covariance estimates, and portfolio weights. We therefore expand the set of econometric models, controlling for different real-world features in the data.\footnote{Implementation details are relegated to the online appendix.}
Letting $M$ be the estimation window length with $T$ out-of-sample investment periods, we use a rolling window approach with $T+M$ returns for $N$ choices.
In line with~\citet{demiguel2009optimal}, we pick ten-year rolling windows, so $M$ is set by the frequency of the data, e.g., $M=520$ for weekly data and $M=120$ for monthly data. We choose $T=1$, which corresponds to one week or month ahead, reduces compute time, and allows us to compare our results with the literature, e.g.,~\citet{demiguel2009optimal}. With $P$ total periods in our data, let $\{\hat{\Sigma}_t^j\}_{t=M}^{P-T}$ denote the conditional estimate of the variance-covariance matrix of returns for investment period $t$ based on econometric model $j$. Similarly, we define the conditional estimate of the expected return over period $t$ given model $j$ by $\{\hat{\boldsymbol{\mu}}_t^j\}_{t=M}^{P-T}$.
Mean-variance strategies are defined by the first two conditional moments of the return for period $t$ so that $\{(\hat{\boldsymbol{\mu}}_t^j,\hat{\Sigma}_t^j)\}_{t=M}^{P-T}$ defines the sequence of mean-variance strategies over the $T$ out-of-sample investment periods with respect to model $j$. Section~\ref{sec:pfm} details the mean-variance strategies.
\subsection{Sample Covariance}\label{subsec:cov}
\noindent{}Sample-based in-sample estimation of the variance-covariance matrix is standard in the literature on the naive diversification puzzle (\citealp{demiguel2009optimal}; \citealp{fletcher2011optimal}; \citealp{tu2011markowitz}; \citealp{kirby2012s}). Some studies examine improved estimation but typically use a small set of models~\citep{demiguel2013improving}. The sample covariance matrix (Cov) equally weights past observations.
\subsection{Exponentially Weighted Moving Average}\label{subsec:ewma}
\noindent{}The recent past might be more informative for estimating the variance-covariance matrix, motivating our first refinement, the exponentially weighted moving average (EWMA) model. The EWMA model suggested by RiskMetrics places decaying weight on the past.
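As a concrete illustration, the EWMA recursion can be sketched in a few lines. This is a minimal pure-Python sketch, not the paper's implementation: the decay parameter $\lambda=0.94$ follows the common RiskMetrics convention, and the function name and the initialization from the first observation's outer product are our own assumptions.

```python
# EWMA covariance update: S_t = lam * S_{t-1} + (1 - lam) * r_{t-1} r_{t-1}'
# Pure-Python sketch for N assets; `returns` is a list of return vectors, oldest first.

def ewma_cov(returns, lam=0.94):
    """Exponentially weighted covariance matrix of the return vectors."""
    n = len(returns[0])
    # Initialize with the outer product of the first observation (an assumption).
    S = [[returns[0][i] * returns[0][j] for j in range(n)] for i in range(n)]
    for r in returns[1:]:
        S = [[lam * S[i][j] + (1 - lam) * r[i] * r[j] for j in range(n)]
             for i in range(n)]
    return S

cov = ewma_cov([[0.01, -0.02], [0.005, 0.01], [-0.01, 0.0]])
```

Because the weights $\lambda^k(1-\lambda)$ decay geometrically in the lag $k$, recent observations dominate the estimate, in contrast to the equally weighted sample covariance.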
\subsection{Vector Autoregression}\label{subsec:var}
\noindent{}To exploit dependence along the cross-section and time-dimension (serial), we estimate a vector autoregression (VAR).
We typically find two lags to be optimal at weekly and monthly frequencies, i.e., we estimate a VAR(2).
\subsection{Vector Error Correction}\label{subsec:vec}
\noindent{}To account for potential cointegration between the variables, we estimate a parsimonious vector error correction (VEC) model.
We compute the number of cointegrating relations in the system following the Johansen trace test~\citep{johansen1988statistical,johansen1991estimation} and employ the variance-covariance matrix estimated from the VEC model.
\subsection{BEKK-GARCH and Asymmetric BEKK-GARCH}\label{subsec:bekk}
\noindent{}The volatility of financial return data varies over time. Generalized Autoregressive Conditional Heteroscedasticity (GARCH)
models the evolution of volatility as a deterministic function of past volatility and innovations.
Our first multivariate GARCH specification is BEKK-GARCH, modeling causalities of variances by allowing the conditional variances of one variable to depend on lagged values of another. Empirically, BEKK is general but easy to estimate. Relaxing symmetry, we also allow positive and negative shocks of equal magnitude to have different effects on conditional volatility by employing Asymmetric BEKK (ABEKK).
We allow
one symmetric innovation when estimating BEKK, and one symmetric innovation and one asymmetric innovation when estimating ABEKK.
\subsection{Conditional Correlation: Constant, Dynamic, \& Asymmetric}\label{subsec:ccc}
\noindent{}BEKK and ABEKK suffer from the curse of dimensionality, which renders them computationally infeasible for investors allocating capital across a large set of investment choices. We therefore also estimate a constant conditional correlation (CCC) model. The CCC model is a multivariate GARCH model, where all conditional correlations are constant and conditional variances are modeled by univariate GARCH processes.
The CCC model benefits from almost unrestricted applicability for large systems of time series, but fails to account for increases in correlation during financial crises.
Dynamic conditional correlation (DCC) permits time-varying correlations.
DCC, however, cannot distinguish between the effects of past positive and negative shocks on future conditional volatility and correlation levels; the asymmetric DCC (ADCC) model accounts for these asymmetric dynamics.
We allow
one symmetric innovation when estimating CCC and DCC, and one symmetric innovation and one asymmetric innovation when estimating ADCC.
\subsection{Copula-GARCH}\label{subsec:copula}
\noindent{}The assumption of multivariate normality is often questioned in practical applications. For instance, shocks that affect Apple may also affect Microsoft. Each company may experience similar nonlinear extreme events, hence exhibiting tail dependence. A portfolio manager who assumes multivariate normality will underestimate the frequency and magnitude of rare events. Such underestimation may hurt the portfolio's performance.
Modeling multivariate dependence among stock returns without assuming multivariate normality has become popular in the 21st century. Copulas are functions that bind univariate marginal distributions together to produce a multivariate distribution, and their parameters can vary over time as an autoregression in a copula-GARCH model. Copulas have become standard tools for this purpose, with many general applications in finance.
\subsection{Regime-Switching Vector Autoregression}\label{subsec:rsvar}
\noindent{}To account for bull and bear phases of the market, we estimate a discrete time-varying parameter model in the form of a regime-switching VAR (RSVAR) as in~\citet{chan2018bayesian}.
We choose two regimes and set the lag length at 2 for parsimony.
\subsection{Multivariate Stochastic Volatility}\label{subsec:msv}
\noindent{}Another nonlinear state-space model that allows for heteroscedasticity is the computationally challenging multivariate stochastic volatility model (MSV). Unlike GARCH models, volatility is stochastic. We adapt our method from~\citet{kastner2019factorstochvol,kastner2019sparse}.
\subsection{Realized Volatility}\label{subsec:rcov}
\noindent{}Our final econometric model is a non-parametric one: realized volatility (RCOV). We include this model, although a backward-looking one, to complement our otherwise purely parametric set, given its popularity in practice and in the financial volatility literature.
\section{Portfolio Strategies}\label{sec:pfm}
\noindent{}To avoid the significant issues with estimating mean returns \citep{merton1980estimating} and the large impact of errors in the mean vector on out-of-sample performance \citep{chopra2013effect}, we concentrate on portfolio strategies that depend only on the covariance matrix.
\subsection{Naive Diversification}
\noindent{}With $N$ investment choices, the portfolio held over investment period $t$, $\mathbf{w}_t^{NV}$, is given by
\begin{equation}
\mathbf{w}_t^{NV} = (1/N,\ldots,1/N) \quad \forall t.
\label{eq:nv}
\end{equation}
\subsection{Minimum-Variance Portfolio}
\noindent{}The minimum-variance portfolio (MVP) for investment period $t$, $\mathbf{w}_t^{MVP}$, minimizes conditional portfolio variance. For each econometric model $j$ discussed in Section~\ref{sec:metrics}, we calculate the minimum-variance portfolio by
\begin{equation}
\mathbf{w}_{t,j}^{MVP} = \argmin_{\mathbf{w}\in\mathbb{R}^N|\mathbf{w}'\mathbf{1}=1}{\mathbf{w}'\hat{\Sigma}_t^j\mathbf{w}}.
\label{eq:mvp}
\end{equation}
For each econometric model, the conditional estimate of the covariance matrix over period $t$ is used as an input to find the minimum-variance portfolio.
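For the unconstrained problem, the standard closed form is $\mathbf{w}^{MVP}=\hat{\Sigma}^{-1}\mathbf{1}/(\mathbf{1}'\hat{\Sigma}^{-1}\mathbf{1})$. The sketch below is illustrative rather than the authors' code; it solves the small linear system directly with Gaussian elimination (the helper `solve` is our own, assumed for a modest $N$):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small N)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def min_var_weights(Sigma):
    """Closed-form MVP: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    z = solve(Sigma, [1.0] * len(Sigma))
    s = sum(z)
    return [zi / s for zi in z]

w = min_var_weights([[0.04, 0.01], [0.01, 0.09]])  # two-asset example
```

With the example covariance, the lower-variance asset receives the larger weight, as expected from the closed form.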
\subsection{Constrained Minimum-Variance Portfolio}
\noindent{}The constrained minimum-variance portfolio (con-MVP) for investment period $t$, $\mathbf{w}_t^{Con-MVP}$, minimizes conditional portfolio variance subject to no short selling; the constraint improves performance (\citealp{jagannathan2003}; \citealp{demiguel2009optimal}). For each econometric model $j$ discussed in Section~\ref{sec:metrics}, we calculate the constrained minimum-variance portfolio by
\begin{equation}
\mathbf{w}_{t,j}^{Con-MVP} = \argmin_{\mathbf{w}\in\mathbb{R}^N|\mathbf{w}'\mathbf{1}=1,\,\mathbf{w}\geq\mathbf{0}}{\mathbf{w}'\hat{\Sigma}_t^j\mathbf{w}}.
\end{equation}
For each econometric model, we use the conditional estimate of the covariance matrix over period $t$ as an input to find the constrained minimum-variance portfolio.
\subsection{Volatility-Timing Strategies}
\noindent{}The volatility-timing strategy
ignores off-diagonal elements of the covariance matrix, i.e., assumes all pair-wise correlations are zero~\citep{kirby2012s}. The minimum-variance portfolio given covariance matrix $\Sigma$ is $w_i^{VT}=\frac{1/\Sigma_{ii}}{\sum_{i=1}^N{1/\Sigma_{ii}}}$. We similarly define the volatility-timing (VT) strategy given conditional estimate of the covariance matrix $\hat{\Sigma}_t^j$ by
\begin{equation}
\left(w_{t,j}^{VT}\right)_i = \frac{1/\left(\hat{\Sigma}_t^j\right)_{i,i}}{\sum_{k=1}^N{1/\left(\hat{\Sigma}_t^j\right)_{k,k}}} \qquad i=1,\ldots,N.
\end{equation}
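Since the volatility-timing weights use only the diagonal of the covariance estimate, they require no matrix inversion. A minimal sketch (function name is ours):

```python
def vol_timing_weights(Sigma):
    """Volatility-timing weights: proportional to inverse conditional variances,
    ignoring all off-diagonal (covariance) terms."""
    inv = [1.0 / Sigma[i][i] for i in range(len(Sigma))]
    tot = sum(inv)
    return [v / tot for v in inv]

w = vol_timing_weights([[0.04, 0.01], [0.01, 0.16]])  # off-diagonals ignored
```

The asset with one quarter of the variance receives four times the weight, and the off-diagonal entries never enter the calculation.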
\subsection{Tangency Portfolio}
\noindent{}While our main focus is on minimum-variance portfolio strategies, we also include the tangency portfolio (TP) for illustrative purposes. The TP with respect to econometric model $j$, $\mathbf{w}_{t,j}^{TP}$, is given by
\begin{equation}
\mathbf{w}_{t,j}^{TP} = \argmax_{\mathbf{w}\in\mathbb{R}^N|\mathbf{w}'\mathbf{1}=1}{\frac{\mathbf{w}'\hat{\boldsymbol{\mu}}_t^j}{\sqrt{\mathbf{w}'\hat{\Sigma}_t^j\mathbf{w}}}}.
\label{eq:tp}
\end{equation}
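For intuition, when $\hat{\Sigma}_t^j$ is diagonal the tangency weights reduce to $w_i\propto\mu_i/\sigma_i^2$, normalized to sum to one. The sketch below assumes that simplifying (and restrictive) diagonal case purely for illustration; the general case requires $\hat{\Sigma}^{-1}\hat{\boldsymbol{\mu}}$:

```python
def tangency_weights_diag(mu, var):
    """Tangency weights under a diagonal covariance (illustrative special case):
    w_i proportional to mu_i / sigma_i^2, normalized to sum to one."""
    raw = [m / v for m, v in zip(mu, var)]
    s = sum(raw)
    return [r / s for r in raw]

# Equal variances, so weights are proportional to expected returns.
w = tangency_weights_diag([0.08, 0.04], [0.04, 0.04])
```

The example makes visible why the tangency portfolio is sensitive to estimation error in expected returns: the weights scale directly with $\hat{\boldsymbol{\mu}}$.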
\subsection{Combined Parameter Strategy}\label{subsec:cp}
\noindent{}We combine portfolios by inputting the arithmetic average over the econometric estimates of the covariance matrix into the portfolio optimization strategies.
We form a combined parameter estimate of the covariance matrix $\hat{\Sigma}$ by equally weighting estimates of the covariance matrix from each of the thirteen econometric models. We use the combined parameter estimate for $\hat{\Sigma}$ in each of~\eqref{eq:mvp}--\eqref{eq:tp} to get four combined parameter (CP) portfolios. Taking the example of the minimum-variance portfolio, using $\hat{\Sigma}^{comv}$ as the arithmetic average of the covariance matrices across the thirteen econometric models, our CP strategy is
\begin{equation}
\mathbf{w}_{t}^{MVP,comv} = \argmin_{\mathbf{w}\in\mathbb{R}^N|\mathbf{w}'\mathbf{1}=1}{\mathbf{w}'\hat{\Sigma}_t^{comv}\mathbf{w}}.
\label{eq:comv}
\end{equation}
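The averaging step itself is an element-wise arithmetic mean over the model-specific covariance estimates, which can be sketched directly (function name is ours):

```python
def combined_cov(estimates):
    """Element-wise arithmetic average of a list of covariance-matrix estimates,
    one per econometric model."""
    n = len(estimates[0])
    k = len(estimates)
    return [[sum(S[i][j] for S in estimates) / k for j in range(n)]
            for i in range(n)]

# Two toy model estimates for two assets; the paper averages thirteen.
Sigma_cp = combined_cov([[[0.04, 0.01], [0.01, 0.09]],
                         [[0.02, 0.00], [0.00, 0.07]]])
```

The resulting matrix $\hat{\Sigma}_t^{comv}$ then feeds into each portfolio optimization exactly as any single-model estimate would.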
%
Our CP strategy is motivated by the finding that combining different hedging forecasts leads to more consistent hedging performance across datasets~\citep{wang2015hedging}. The result echoes the forecasting literature finding that combined models tend to perform more consistently over time than individual models~\citep{stock2003forecasting,stock2004combination}.
In preliminary investigations, we explored two other approaches:
(i) naively weighting across the thirteen vectors of weights suggested by the portfolio strategy using each econometric model's variance-covariance matrix estimate as an input;
and (ii) naively weighting across the four vectors of weights suggested by the four portfolio strategies for a given econometric model.\footnote{First, as a variation of our benchmark CP strategy~\eqref{eq:comv},
for each of the strategies~\eqref{eq:mvp}--\eqref{eq:tp} we examine the corresponding portfolio given by naive investment across the thirteen portfolios with respect to each of the econometric models. More precisely, consider the minimum-variance portfolio. We form a fourteenth portfolio strategy, $\mathbf{w}_t^{MVP,com}$, which is equally invested across the thirteen estimates of the true minimum-variance portfolio, i.e.,
\begin{equation*}
\mathbf{w}_t^{MVP,com} = \frac{1}{13}\sum_{j=1}^{13}{\mathbf{w}_{t,j}^{MVP}}.
\end{equation*}
Second, with respect to each of the econometric models, we examine the corresponding portfolio given by naive investment across the four strategies~\eqref{eq:mvp}--\eqref{eq:tp}. More precisely, consider the VAR econometric model. We form a fifth portfolio strategy, $\mathbf{w}_t^{VAR,comp}$, which is equally invested across the four vectors of portfolio weights suggested by inputting the volatility estimates from the VAR model into strategies~\eqref{eq:mvp}--\eqref{eq:tp}, i.e.,
\begin{equation*}
\mathbf{w}_t^{VAR,comp} = \frac{1}{4}\sum_{k=1}^{4}{\mathbf{w}_{t,VAR}^{k}}.
\end{equation*}
\label{fn:cp}}
These alternative combined parameter strategies are less relevant for our study. With the first variation, averaging over thirteen weights suggested by the portfolio strategy is less direct than averaging over the variance-covariance matrix estimates. With the second variation, averaging over four portfolio strategies for a given econometric model is similar to that of~\citet{wang2015hedging}. Consider instead the realistic situation that we are unsure of the data generating process underlying the return series. Rather than choosing one econometric model, we benefit from using all the information by hedging equally across the various nuances captured by each of the thirteen econometric models. Results are broadly similar across the three versions of combined parameter strategies. Thus, we report the results from our combined parameter strategy~\eqref{eq:comv}, which we denote CP.
\section{Data}\label{sec:data}
\noindent{}We employ six datasets at weekly and monthly frequencies. Lower frequencies
smooth out too much volatility and are inappropriate for our study. Higher-frequency data generate more accurate estimates of the covariance matrix, but daily and higher-frequency data also suffer from problems such as day-of-the-week effects and asynchronous trading.
We use value-weighted returns and assess robustness
to equally-weighted returns.
We use end-of-period data where possible. When weekly or monthly frequency is unavailable, we scale data geometrically. For instance, to scale returns from daily to weekly frequency, we use $\prod_{j=1}^{ND}{(1+r_j)^{1/ND}}-1$, where $r_j$ is the daily return and $ND$ denotes the number of trading days in the week. We adopt similar procedures to scale to monthly frequency.
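The geometric scaling formula above can be implemented directly. This sketch follows the formula exactly as stated in the text (function name is our own assumption):

```python
def scale_geometric(daily_returns):
    """Geometric scaling of daily returns as stated in the text:
    prod_j (1 + r_j)^(1/ND) - 1, where ND is the number of trading days."""
    nd = len(daily_returns)
    prod = 1.0
    for r in daily_returns:
        prod *= (1.0 + r) ** (1.0 / nd)
    return prod - 1.0

# A five-trading-day week of illustrative daily returns.
r_week = scale_geometric([0.01, -0.005, 0.002, 0.0, 0.003])
```

A week of zero daily returns maps to a zero scaled return, and the result is insensitive to the ordering of the daily returns.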
For the realized covariance (RCOV) model, we use daily data
to calculate RCOV over each week (month) for weekly (monthly) frequency analysis. We omit data prior to July 1963.\footnote{Standard and Poor's established Compustat in 1962 to serve the needs of financial analysts and back-filled information only for the firms that were deemed to be of the greatest interest to the analysts. The result is significantly sparser coverage prior to 1963 for a selected sample of well-performing firms.}
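A standard realized-covariance construction sums the outer products of the higher-frequency (here daily) return vectors within each period. The paper does not spell out its exact estimator, so this is a hedged sketch assuming demeaned daily returns and no microstructure corrections:

```python
def realized_cov(daily_returns):
    """Realized covariance for one week/month: sum of daily outer products
    r_t r_t' over the days in the period (assumes demeaned returns)."""
    n = len(daily_returns[0])
    S = [[0.0] * n for _ in range(n)]
    for r in daily_returns:
        for i in range(n):
            for j in range(n):
                S[i][j] += r[i] * r[j]
    return S

# Two days of returns on two assets.
rc = realized_cov([[0.01, -0.02], [0.0, 0.01]])
```

Because the estimator is a sum of outer products, it is symmetric and positive semi-definite by construction.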
Our choice of datasets is motivated by comparison with previous literature. Our first four datasets closely correspond to datasets 4, 2, 1, and 3 of~\citet{demiguel2009optimal}. Popularly employed in the empirical finance literature, our final two datasets come from the same source as those of dataset 4 by the same authors.
We choose datasets with a modest number of portfolio choices ($N\approx10$). Choosing to allocate wealth across a small number of index portfolios is in line with evidence on the behavior of many retail investors and some institutional investors~\citep{huberman2006offering}.
Rather than examining simulated datasets, such as randomized selections of stocks, we restrict our attention to empirical datasets because our focus is on the econometric model as the source of improvement in performance.
\subsection{Dataset 1: Fama-French Portfolios}
\noindent{}Our first dataset consists of returns obtained from Wharton Research Data Services.\footnote{Kenneth French provides a full description at \url{https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/}.} We focus on the three-factor Fama-French portfolio: Small Minus Big, High Minus Low, and Market portfolios. Small Minus Big (SMB) is the average return on three small portfolios minus the average return on three big portfolios. High Minus Low (HML) is the average return on two value portfolios minus the average return on two growth portfolios.
The Market (MKT) return is the weighted return on all NYSE, AMEX, and NASDAQ stocks from CRSP and is obtained by adding the risk-free return to the excess market return.\footnote{The risk-free (RF) asset is the one-month Treasury bill rate from Ibbotson Associates and proxies the return from investing in the money market. We exclude the risk-free rate from the investor's choice set; therefore, we exclude returns in excess of the risk-free rate.} The benchmark analysis uses value-weighted returns to generate MKT.
We focus on weekly and monthly frequencies, where we use end-of-period returns for daily and monthly frequencies, and we scale daily data to weekly data as described above. We limit the data to span all US trading days from July 1st, 1963 to December 31st, 2018.
\subsection{Dataset 2: Industry Portfolios}\label{subsec:ind}
\noindent{}We take returns from Kenneth French's website covering ten industries: Consumer-\linebreak{}Discretionary, Consumer-Staples, Manufacturing, Energy, High-Tech, Telecommunications, Wholesale and Retail, Health, Utilities, and Others. The benchmark analysis uses value-weighted returns.\footnote{\label{equal1}We also employ equal-weighting in robustness checks.} We focus on
weekly and monthly frequencies, where we use end-of-period returns for daily and monthly frequencies, and we scale daily data to weekly data as described above. We limit the data to span all US trading days from July 1st, 1963 to December 31st, 2018.
\subsection{Dataset 3: Sector Portfolios}
\noindent{}This dataset includes returns for eleven value-weighted sector portfolios formed using the Global Industry Classification Standard (GICS) developed by Standard \& Poor's (S\&P) and Morgan Stanley Capital International (MSCI). We obtained the returns from Bloomberg. The eleven sectors are Energy, Materials, Industrials, Consumer-Discretionary, Consumer-Staples, Healthcare, Financials, Information-Technology, Telecommunications, Real Estate, and Utilities. The expected returns are based on equity investments. Data are end-of-period returns for weekly and monthly frequencies and span all US trading days from January 2nd, 1995 to December 31st, 2018.
\subsection{Dataset 4: International equity indices}
\noindent{}This dataset includes returns on eight MSCI country indices (Canada, France, Germany, Italy, Japan, Switzerland, the UK, and the USA) along with a developed-countries index (MXWO). The returns are total gross returns with dividends reinvested. For robustness, we also use the world index (MXWD) and look at the regular return index. We source data from Bloomberg and MSCI. Data are end-of-period returns for weekly and monthly frequencies and span all US trading days from January 4th, 1999 to December 31st, 2018.
\subsection{Dataset 5: Size/Book-to-Market}
\noindent{}We employ returns on the 6 ($2\times3$) portfolios sorted by size and book-to-market. Remaining details (e.g., source, weighting, frequency, date range) are the same as in Section~\ref{subsec:ind}.
\subsection{Dataset 6: Momentum Portfolios}
\noindent{}This dataset consists of returns on the 10 portfolios sorted by momentum. Remaining details (e.g., source, weighting, frequency, date range) are the same as in Section~\ref{subsec:ind}.
\section{Assessment Criteria}\label{sec:assess}
\noindent{}We assess performance through three industry-standard metrics:
Sharpe ratio, portfolio volatility, and Sharpe ratio net of turnover costs.\footnote{Covariance-based methods such as the minimum-variance portfolio may lower variance relative to the 1/N portfolio and thus raise Sharpe ratios. We therefore also consider returns. While the naive strategy performs well on dataset 1, and dominance over the naive strategy is weaker on dataset 6, other strategies dominate the naive strategy on datasets 2 through 5. Results are available upon request.} For each metric, we estimate the statistical significance of the difference in the estimated metric from that of the 1/N strategy.
\subsection{Sharpe Ratio}
\noindent{}The Sharpe ratio measures reward to risk from a portfolio strategy, i.e., expected return per standard deviation. To test for differences between the Sharpe ratio from investing according to the naive strategy and the Sharpe ratio from the strategy in question, we employ the robust inference methods of~\citet{ledoit2008robust}.
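The sample Sharpe ratio itself is straightforward; the robust Ledoit-Wolf inference is more involved and is omitted here. A minimal sketch of the point estimate (per-period, not annualized; function name is ours):

```python
import math

def sharpe_ratio(excess_returns):
    """Sample Sharpe ratio: mean excess return over sample standard deviation
    (per period, unannualized; uses the n-1 variance denominator)."""
    n = len(excess_returns)
    mu = sum(excess_returns) / n
    var = sum((r - mu) ** 2 for r in excess_returns) / (n - 1)
    return mu / math.sqrt(var)

sr = sharpe_ratio([0.02, -0.01, 0.015, 0.005])
```

Testing the \emph{difference} between two strategies' Sharpe ratios requires accounting for the correlation between their return series, which is what the \citet{ledoit2008robust} procedure handles.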
\subsection{Sharpe Ratio Adjusted for Turnover Cost}
\noindent{}We assume a proportional turnover cost of 0.5\%
and calculate the expected returns net of the cost of rebalancing, similar to~\citet{demiguel2014stock}:
\begin{equation*}
r_{t+1}^{net} = \left(1-\kappa\sum_{i=1}^N{|w_{i,t}-w_{i,(t-1)*}|}\right)\mathbf{w}_t'\mathbf{r}_{t+1},
\end{equation*}
where $w_{i,(t-1)*}$ is the weight in investment $i$ at time $t$ prior to rebalancing, $w_{i,t}$ is the weight suggested by the strategy at time $t$, i.e., after rebalancing, $\kappa$ is the proportional transaction cost, $\mathbf{w}_t$ is the vector of weights, and $\mathbf{r}_{t+1}$ is the return vector.\footnote{Several papers in the literature consider transaction costs of 10 or 50 basis points~(\citealp{kirby2012s}; \citealp{demiguel2014stock}) and others consider transaction costs that vary across stock size and through time~\citep{brandt2009parametric}. With high turnover, assuming 50 basis points transaction costs conservatively biases our models away from beating the 1/N strategy.} Rebalancing may occur each period. We compare the difference in Sharpe ratios between the expected net returns following the specific strategy and those following the naive strategy.
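The displayed net-return formula translates directly into code. A minimal sketch with $\kappa=0.005$ (the 0.5\% proportional cost from the text; function and argument names are ours):

```python
def net_return(w_new, w_drift, asset_returns, kappa=0.005):
    """Portfolio return net of proportional turnover cost, following the
    displayed formula: (1 - kappa * turnover) times the gross portfolio return.
    w_new: weights after rebalancing; w_drift: weights before rebalancing."""
    turnover = sum(abs(a - b) for a, b in zip(w_new, w_drift))
    gross = sum(w * r for w, r in zip(w_new, asset_returns))
    return (1.0 - kappa * turnover) * gross

# Rebalance from drifted (0.6, 0.4) back to (0.5, 0.5); one asset returns 2%.
r = net_return([0.5, 0.5], [0.6, 0.4], [0.02, 0.0])
```

Strategies with stable weights, such as the naive rule, incur little turnover under this adjustment, which is why the cost correction is a demanding hurdle for actively rebalanced strategies.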
\subsection{Portfolio Volatility}
\noindent{}Assuming that the investor's goal is to minimize portfolio volatility, by analogy with~\citet{wang2015hedging}, we examine ranking portfolio strategies by out-of-sample volatility of returns. To be precise, we conduct the Brown-Forsythe $F^*$ test of unequal group variances. We also apply the Diebold-Mariano test in comparing forecast errors from a naive strategy with the strategy under consideration~\citep{diebold1995comparing}.\footnote{The forecast error is defined as the difference between expected returns using estimated portfolio weights and mean returns. The loss differential underlying the test looks at the difference of the squared forecast errors, and we calculate the loss differential correcting for autocorrelation.}
The procedures allow testing whether the strategy is significantly more or less volatile relative to the naive strategy.
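The Brown-Forsythe statistic is an ANOVA F computed on absolute deviations from each group's median (equivalently, `scipy.stats.levene` with `center='median'`). A minimal two-sample sketch, illustrative rather than the paper's own code:

```python
import numpy as np

def brown_forsythe(x, y):
    """Brown-Forsythe statistic for equality of variances of two samples."""
    zx = np.abs(np.asarray(x, dtype=float) - np.median(x))
    zy = np.abs(np.asarray(y, dtype=float) - np.median(y))
    n1, n2 = len(zx), len(zy)
    zbar1, zbar2 = zx.mean(), zy.mean()
    zbar = np.concatenate([zx, zy]).mean()
    # Between-group variation of the |deviation-from-median| scores (k - 1 = 1).
    num = n1 * (zbar1 - zbar) ** 2 + n2 * (zbar2 - zbar) ** 2
    # Within-group variation, on N - k degrees of freedom.
    den = (((zx - zbar1) ** 2).sum() + ((zy - zbar2) ** 2).sum()) / (n1 + n2 - 2)
    return num / den
```

Large values of the statistic indicate that the two return series have unequal dispersion.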
\section{Empirical Results}\label{sec:results}
\noindent{}Our evidence suggests that the out-of-sample performance of portfolios whose only inputs are volatility estimates often weakly dominates that of the naive diversification portfolio. The minimum-variance, constrained minimum-variance, volatility-timing, and tangency portfolios have equivalent or superior Sharpe ratios, portfolio volatility, and Sharpe ratios adjusted for turnover costs relative to the naive portfolio in datasets 2, 3, 5, and 6, regardless of the econometric model used to estimate volatility. If we average the volatility estimates of all thirteen econometric models, we continue to obtain similar results. Our portfolio strategies perform well relative to the naive when applied to country stock-market indices. Our results are robust to value- and equal-weighting and to weekly and monthly frequency estimation.\footnote{We report only results for value-weighted data at weekly frequency.} We thus show that controlling for volatility in portfolio strategies delivers better performance than the naive portfolio.
In the next two subsections, we evaluate the performance of the econometric models. Specifically, we select an individual performance metric (e.g., the Sharpe ratio) and attempt to rank econometric models by performance consistency across datasets within a given portfolio strategy (e.g., minimum-variance). Ranking allows us to observe how models perform in an absolute sense across datasets. Often, however, one model has the highest Sharpe ratio yet the highest portfolio volatility. In the third subsection, we therefore undertake a holistic analysis that incorporates all three performance metrics to rank econometric models and portfolio strategies that consistently outperform the naive strategy.
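For reference, the weight rules being evaluated can be computed from any covariance estimate $\hat{\Sigma}$. The sketch below gives the textbook forms of the minimum-variance, volatility-timing, and naive rules (function names are ours, not the paper's notation):

```python
import numpy as np

def min_variance(cov):
    """Global minimum-variance weights: w = inv(S)1 / (1'inv(S)1)."""
    ones = np.ones(len(cov))
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def volatility_timing(cov):
    """Weights proportional to inverse variances; correlations are ignored."""
    inv_var = 1.0 / np.diag(cov)
    return inv_var / inv_var.sum()

def naive(n):
    """The 1/N benchmark."""
    return np.full(n, 1.0 / n)
```

With an identity covariance matrix all three rules coincide at equal weights; the strategies differ only insofar as the volatility estimates differ across assets.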
\subsection{Sharpe Ratio}\label{subsec:sharpe}
\noindent{}We first evaluate the Sharpe ratio performance metric. In Tables~\ref{tab:mvsr}--\ref{tab:tansr}, we provide the Sharpe ratios associated with each of our thirteen econometric models when used as the input in each of the four portfolio strategies. Sharpe ratios are assessed across all six datasets with value-weighting at weekly frequency. We empirically test the difference in Sharpe ratios between each of the econometric models relative to the naive strategy and report significance levels.
The row ordering reflects our attempt to rank the econometric models according to consistency of performance relative to the naive benchmark.
In the minimum-variance and constrained minimum-variance portfolios, the combined parameter (CP) model yields Sharpe ratios that are consistently and significantly higher than those of the naive benchmark. In fact, nearly all econometric models achieve significantly higher Sharpe ratios relative to the naive rule in datasets 2, 4, 5, and 6, while displaying broadly equivalent Sharpe ratios in datasets 1 and 3.\footnote{To explain the poorer performance of datasets 1 and 3: first, the literature consistently finds weak performance with the Fama-French dataset~\citep{demiguel2009optimal}; second, a simple correlation matrix of the six datasets shows that dataset 3 is the only dataset to be negatively correlated with the other datasets.} Consequently, most econometric models, when combined with either the constrained or unconstrained minimum-variance portfolio, weakly dominate the naive benchmark. The few exceptions, which occur only in dataset 1, are the vector autoregression (VAR) and vector error-correction (VEC) models in the minimum-variance portfolio, and the regime-switching vector autoregression (RSVAR) model in both the constrained and unconstrained minimum-variance portfolios. Even our lowest-ranked econometric model, the exponentially-weighted moving-average (EWMA) model, still weakly dominates the naive portfolio.
In the volatility-timing and tangency portfolios, the realized covariance (RCOV) model delivers Sharpe ratios that are consistently and significantly higher than those of the naive benchmark. Let us first examine the volatility-timing portfolio. Most econometric models achieve significantly higher Sharpe ratios relative to the naive rule in datasets 2, 4, 5, and 6, and similar Sharpe ratios in dataset 3. The only weakness again lies in dataset 1, where all models except RCOV underperform the naive rule. Turning our attention to the tangency portfolio, we observe significantly higher Sharpe ratios relative to the naive benchmark across models in datasets 5 and 6, and similar Sharpe ratios in the rest. Thus, we conclude the tangency portfolio weakly dominates the naive portfolio in terms of Sharpe ratio. As with the minimum-variance portfolios, the exponentially-weighted moving-average (EWMA) achieves the worst Sharpe ratios relative to the other econometric models yet still performs well in comparison to the naive strategy.
Tables
S1--S2 in
the
\ifblankver
online appendix
\else
\href{http://www.michael-curran.com/research/volatility_appendix.pdf}{\textcolor{blue}{online appendix}}
\fi
show that the results are robust to adjusting the Sharpe ratios for turnover costs.\footnote{Allocations can shift, requiring rebalancing turnover even for the naive portfolio. With turnover, expected returns are no larger, but standard deviations may be smaller or larger.} The only difference is that the BEKK- and ABEKK-GARCH models achieve the highest and most consistent turnover-cost-adjusted Sharpe ratio with the minimum-variance portfolio relative to the naive benchmark. Moreover, the results are robust to both equal-weighting and monthly frequency.
The Sharpe ratios of our portfolio strategies relative to the naive strategy are not just statistically significant but \textit{economically} significant. Figure~\ref{fig:sharpesubplot} illustrates this point. On average, our portfolio strategies achieve Sharpe ratios that are 30\% higher than the naive, across all six datasets, led by the minimum-variance portfolio at 47\%. We obtain a similar message in Figure
S1.1 from the
\ifblankver
online appendix
\else
\href{http://www.michael-curran.com/research/volatility_appendix.pdf}{\textcolor{blue}{online appendix}}
\fi
when we adjust
for turnover costs.
\begin{figure}
\centering
\captionsetup{font=small,skip=0pt}
\caption{Sharpe Ratio Percentage Difference Relative to Naive}
\label{fig:sharpesubplot}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=15.5cm,width=\columnwidth]{img/sharpe_subplot}
\end{subfigure}
\floatfoot{\small Notes: See Section~\ref{sec:metrics} for explanations of econometric model abbreviations; CP denotes combined parameter model~\eqref{eq:comv}. Dataset 1: Fama-French portfolios; Dataset 2: industry portfolios; Dataset 3: sector portfolios; Dataset 4: international equity indices; Dataset 5: portfolios sorted by size/book-to-market; Dataset 6: momentum portfolios.
Results are for value-weighted data at weekly frequency. Note that standard errors of the Sharpe ratio for the tangency portfolio are mostly large, rendering most Sharpe ratios for the tangency portfolio statistically indistinguishable from those of the naive portfolio.}
\end{figure}
\subsection{Portfolio Volatility}\label{subsec:vol}
\noindent{}
Tables~\ref{tab:mvpv}--\ref{tab:tanpv}
provide the standard deviations of the returns associated with each of our thirteen econometric models when used as the input in each of the four portfolio strategies.
The row ordering ranks the econometric models according to consistency of performance relative to the naive benchmark.
In the minimum-variance and constrained minimum-variance portfolios, the exponentially-weighted moving-average (EWMA), realized covariance (RCOV), and multivariate stochastic volatility (MSV) models exhibit significantly lower portfolio volatility relative to the naive portfolio across all datasets. More importantly, most econometric models \textit{strictly} dominate the naive portfolio in terms of volatility performance. The two exceptions, regime-switching vector autoregression (RSVAR) and combined parameter (CP), which happen to be the worst ranking models, still \textit{weakly} dominate the naive benchmark.
For the volatility-timing portfolio, the EWMA model delivers the best results across datasets. Moreover, every econometric model weakly dominates the naive benchmark. For the tangency portfolio, the COPULA model achieves the lowest portfolio volatility across datasets. All econometric models, except RCOV and VEC in dataset 4, weakly dominate the naive benchmark. In addition, the MSV and RCOV models are consistent runners-up in both the volatility-timing and tangency portfolio strategies. Although the RSVAR and CP models yield the highest volatility, both still weakly dominate the naive portfolio.
The volatility of our portfolio strategies relative to the naive strategy is \textit{economically} significant. Figure~\ref{fig:volsubplot} illustrates this point. On average, our portfolio strategies are 9\% less volatile, across all six datasets, with the minimum-variance portfolio at 10\% lower volatility.
\begin{figure}
\centering
\captionsetup{font=small,skip=0pt}
\caption{Portfolio Volatility Percentage Difference Relative to Naive}
\label{fig:volsubplot}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=15.5cm,width=\columnwidth]{img/vol_subplot}
\end{subfigure}
\floatfoot{\small Notes: See Section~\ref{sec:metrics} for explanations of econometric model abbreviations; CP denotes combined parameter model~\eqref{eq:comv}. Dataset 1: Fama-French portfolios; Dataset 2: industry portfolios; Dataset 3: sector portfolios; Dataset 4: international equity indices; Dataset 5: portfolios sorted by size/book-to-market; Dataset 6: momentum portfolios.
Results are for value-weighted data at weekly frequency.}
\end{figure}
\clearpage
\subsection{Portfolio Strategies}\label{subsec:pm}
\noindent{}We undertake a holistic evaluation of the econometric models.
In Tables~\ref{tab:cov}--\ref{tab:msv}, for each econometric model, we compare
the Sharpe ratio, Sharpe ratio adjusted for turnover cost, and portfolio volatility of each portfolio strategy across all six datasets.
All of our econometric models are broadly successful at outperforming the naive portfolio; that is, they achieve higher Sharpe ratios, lower turnover costs, lower portfolio volatility, or some combination. The few exceptions are concentrated where the volatility-timing portfolio is applied to dataset 1.
To identify the best performers, we develop a simple heuristic to score individual econometric models: we sum the instances where the model outperforms the naive benchmark and subtract the instances where the model underperforms.\footnote{To clarify, ``$\checkmark$" $= 1$, ``$\checkmark$*" $= 2/3$, `` " (blanks) $= 0$, and ``$\times$" $= -1.$ We discount results that are significant at the 10\% level by assigning a value of only 2/3 instead of 1. }
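The footnote's scoring rule is easy to state in code; the mark labels below are our own encoding of the table symbols (check, check at the 10\% level, blank, and cross):

```python
# Our encoding of the table symbols: outperformance = 1, outperformance
# significant only at the 10% level = 2/3, no difference = 0, underperformance = -1.
SCORES = {"check": 1.0, "check*": 2 / 3, "blank": 0.0, "x": -1.0}

def model_score(marks):
    """Sum a model's marks across datasets and metrics."""
    return sum(SCORES[m] for m in marks)
```

A model that beats the naive rule in one case, beats it at only the 10\% level in another, ties once, and loses once therefore scores 2/3 on net.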
The multivariate GARCH models achieve the highest scores relative to other econometric models when applied to the minimum-variance and constrained minimum-variance strategies. With GARCH estimates of the covariance matrix, these portfolio strategies weakly dominate the naive benchmark. The constant conditional correlation (CCC) model performs especially well without short sales. For the volatility-timing and tangency portfolios, the realized covariance (RCOV) model exhibits the most impressive results relative to the naive rule. Specifically, RCOV weakly dominates the naive rule across every dataset when paired with the volatility-timing strategy, and across five out of six datasets when paired with the tangency portfolio. In general, our results suggest the multivariate GARCH and RCOV models are better alternatives to the often-used sample covariance (COV) matrix for portfolio construction. The COV model itself performs at the median relative to the other econometric models.
While the convenience of COV makes it attractive to researchers, our analysis shows there are returns to using more sophisticated methods to forecast volatility. The worst-ranking econometric models are the regime-switching vector autoregression (RSVAR) and exponentially-weighted moving-average (EWMA) models. Nonetheless, both of these models perform at least as well as the naive benchmark in every dataset except the Fama-French 3-factor.
As a final assessment, we naively average the estimated conditional volatilities of all thirteen econometric models to form a combined parameter (CP) model;
see Table~\ref{tab:msv}.
The main takeaway from this exercise is that controlling for volatility in a portfolio delivers performance metrics that are generally at least as strong as the naive strategy.
\section{Conclusion}\label{sec:conc}
\noindent{}We evaluate the out-of-sample performance of mean-variance strategies relying solely upon the second moment relative to the naive benchmark.
Using fourteen econometric models across six datasets at weekly frequency, we show that the minimum-variance, constrained minimum-variance, and volatility-timing strategies generally achieve higher Sharpe ratios, lower turnover costs, and lower portfolio volatility, with differences that are \textit{economically} significant relative to naive diversification. Whenever mean-variance strategies do not significantly outperform the naive rule, they usually match and only rarely lose to it.
We identify the econometric models that most consistently and significantly outperform the 1/N benchmark. First, we show that the multivariate GARCH models weakly dominate the naive rule when applied to the minimum-variance and constrained minimum-variance strategies. Next, we demonstrate that the realized covariance model achieves impressive results when paired with the volatility-timing and tangency portfolios. Even our ``worst-performing" econometric models still perform at least as well as the naive rule in all but one dataset. Third, we illustrate that if one prioritizes the Sharpe ratio, then the combined parameter and realized covariance models are excellent choices, even after controlling for turnover costs. Finally, we show the exponentially-weighted moving-average and multivariate stochastic volatility models consistently deliver low portfolio volatility.
Given the difficulty of consistently outperforming it, the 1/N naive diversification strategy should serve as a benchmark for practitioners and academics. We empirically demonstrate that an important source of the naive portfolio puzzle is the quality of the econometric volatility inputs to the mean-variance portfolio strategies. With improved estimates, mean-variance models can beat the naive portfolio strategy.
Our findings imply that improving the estimation of return moments should be prioritized.
\begin{singlespacing}
\bibliographystyle{ecca}
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0, © Apress 2013
Matthew Campbell
Objective-C Quick Syntax Reference
Matthew Campbell
ISBN 978-1-4302-6487-3; e-ISBN 978-1-4302-6488-0
© Apress 2013
Objective-C Quick Syntax Reference
President and Publisher: Paul Manning
Lead Editor: Steve Anglin
Technical Reviewer: Charles Cruz
Editorial Board: Steve Anglin, Mark Beckner, Ewan Buckingham, Gary Cornell, Louise Corrigan, Jonathan Gennick, James DeWolf, Jonathan Hassell, Robert Hutchinson, Michelle Lowman, James Markham, Matthew Moodie, Jeff Olson, Jeffrey Pepper, Douglas Pundick, Ben Renow-Clarke, Dominic Shakeshaft, Gwenan Spearing, Steve Weiss, Tom Welsh
Coordinating Editor: Anamika Panchoo
Copy Editor: Mary Behr
Compositor: SPI Global
Indexer: SPI Global
Artist: SPI Global
Cover Designer: Anna Ishchenko
Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com . Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail rights@apress.com, or visit www.apress.com .
Apress and friends of ED books may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Special Bulk Sales–eBook Licensing web page at www.apress.com/bulk-sales .
Any source code or other supplementary materials referenced by the author in this text is available to readers at www.apress.com . For detailed information about how to locate your book's source code, go to www.apress.com/source-code/ .
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. 
The publisher makes no warranty, express or implied, with respect to the material contained herein.
For my daughter, Keira
About the Author
Matthew Campbell is a professional software developer, entrepreneur, author, and trainer. He works for Mobile App Mastery, a web-based software development training company he founded in 2008. Before building Mobile App Mastery, Matt studied psychology, worked as a mental health counselor, and supported psychometric research as a data analyst at the Educational Testing Service in Princeton. The books and trainings that he creates are designed to remove the obstacles that stop developers from mastering their craft.
About the Technical Reviewer
Charles Cruz is a mobile application developer for the iOS, Android, and Windows Phone platforms. He graduated from Stanford University with B.S. and M.S. degrees in engineering. He lives in Southern California and runs a photography business with his wife ( www.facebook.com/BellaLenteStudios ). When not doing technical things, he plays lead guitar in an original metal band ( www.taintedsociety.com ). Charles can be reached at codingandpicking@gmail.com and @CodingNPicking on Twitter.
Introduction
Objective-C is a tool that you can use to create stunning applications for the Mac, iPhone, and iPad. This unique programming language traces its linage back to the C programming language. Objective-C is C with object-oriented programming.
Today, learning programming is about learning how to shape our world. Objective-C programmers are in a unique position to create mobile applications that people all over the world can use in their daily lives.
Objective-C is a delight to use. While other programming languages can feel clumsy at times, Objective-C will show you its power and reach with grace. Problems that seem intractable in other programming languages melt away in Objective-C.
At its core, this book is about laying out, without any fuss, what Objective-C can do. When you know what you want to do, but you just need to know the Objective-C way to do it, use this book to get help.
Contents
Chapter 1: Hello World
Xcode
Creating a New Project
Hello World
Code Comments
Build and Run
Where to Get More Information
Chapter 2: Build and Run
Compiling
Building
Build and Run
Chapter 3: Variables
Variables Defined
Data Types
Declaring Variables
Assigning Values
Integer Types
Boolean Types
Float Types
Scope
Chapter 4: Operators
Operators Defined
Arithmetic Operators
Assignment Operators
Increment and Decrement Operators
Relational Operators
Logical Operators
Chapter 5: Objects
Objects Defined
NSObject Class
Object Declaration
Object Constructors
Object Format Specifier
Messages
Chapter 6: Strings
NSString
NSMutableString
Inserting Strings
Deleting Strings
Find and Replace
Chapter 7: Numbers
NSNumber
Converting to Primitive Data Types
Formatting Numbers
Converting Strings into Numbers
Chapter 8: Arrays
NSArray
Referencing Objects
Enumeration
NSMutableArray
Chapter 9: Dictionaries
NSDictionary
Referencing Objects
Enumeration
NSMutableDictionary
Chapter 10: For Loops
For Loops Defined
For Loops and Arrays
Chapter 11: While Loops
While Loops Defined
While Loops and Arrays
Chapter 12: Do While Loops
Do While Loops Defined
Do While Loops and Arrays
Chapter 13: For-Each Loops
For-Each Loops Defined
For Loops with NSDictionary
Chapter 14: If Statements
If Statements Defined
Else Keyword
If Statements and Variables
Chapter 15: Switch Statements
Switch Statements Defined
Switch Keyword
Case Keyword
break Keyword
Complete Switch Statement
Default Case
Chapter 16: Defining Classes
Classes
Class Interfaces
Property Forward Declarations
Method Forward Declarations
Implementing Classes
Implementing Methods
Private Properties and Methods
Chapter 17: Class Methods
Class Methods Defined
Coding Class Methods
Chapter 18: Inheritance
Creating Subclasses
Extending Classes
Overriding Methods
Instance Variable Visibility
Chapter 19: Categories
Categories Defined
Category Example
Chapter 20: Blocks
Blocks Defined
Defining Blocks
Assigning Blocks
Using Blocks
Copying Scoped Variables
Blocks as Properties
Chapter 21: Key-Value Coding
Key-Value Coding Defined
Setting Property Values
Retrieving Property Values
Chapter 22: Key-Value Observation
Key-Value Observation Defined
Project and Task Object Graph
Implementing Key-Value Observation
Add the Observer
Observing Value Changes
De-Registering Observers
Testing the Observer
Chapter 23: Protocols
Protocols Overview
Defining Protocols
Adopting Protocols
Implementing Protocol Methods
Chapter 24: Delegation
Delegation Defined
Defining Delegate Protocols
Delegate References
Sending Messages to the Delegate
Assigning the Delegate
Chapter 25: Singleton
Singleton Defined
Singleton Interface
Singleton Implementation
Referencing Singletons
Chapter 26: Error Handling
Error Handling Defined
NSError
Try/Catch Statements
Chapter 27: Background Processing
Background Processing Defined
Chapter 28: Object Archiving
Object Archiving Defined
NSCoding
Using the Archiver
Chapter 29: Web Services
Web Services Defined
Bitly Example
Formulate Request String
Create the Session and URL
Send and Receive the Response
Index
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_1
© Matthew Campbell 2013
# 1. Hello World
Matthew Campbell, CA, US
## Xcode
Objective-C is a programming language that extends the C programming language to include object-oriented programming capabilities. This means that most classic C programming procedures are used in Objective-C programs. For the purposes of this book, you will need to have an idea of how C programming works.
Before you write any Objective-C code, you will need to have the proper tool for the job. For Objective-C, this tool is Xcode. Xcode will be your primary code editor and integrated development environment (IDE).
Note
Xcode requires a Mac. You cannot install Xcode on a Windows-or Linux-based computer.
To install Xcode, go to the Mac App Store by selecting your Mac's menu bar and then choosing the Apple menu ➤ App Store. Use the App Store search feature to locate Xcode by typing the word Xcode into the textbox next to the magnifying glass. Press return to search for Xcode. You will be presented with a list of apps, and Xcode should be the first app in the list. Install Xcode by clicking the button with the word Free next to the Xcode icon. See Figure 1-1 for the screen that you should see once you have searched for Xcode in the App Store.
Figure 1-1.
Downloading Xcode from the App Store
## Creating a New Project
Open Xcode by going to your Applications folder and clicking the Xcode app. You will be presented with a welcome screen that includes text that reads Create a new Xcode project (see Figure 1-2). Click the text Create a new Xcode project to get started.
Figure 1-2.
Xcode welcome screen
The next screen that appears will list options for creating apps both for iOS and Mac. In this book, you will be using a Mac Command Line Tool app, so set this up by choosing OS X ➤ Application ➤ Command Line Tool.
When the next screen appears, just give your new project a name, choose the type Foundation, leave the other settings as they are, and then click Next.
Now choose a folder to save the Xcode project on your Mac. Once you do this, an Xcode screen will appear. The Xcode screen will include a list of files on the left and a code editor in the center (see Figure 1-3).
Figure 1-3.
Code editor and project navigator
## Hello World
Writing Hello World in code is what we do when we want to make sure that we have set up a code project correctly. Xcode makes this really easy to do because new Command Line Tool projects come with Hello World already coded.
All you need to do is use the Project Navigator, the widget on the left-hand area of your Xcode screen, to locate the file named main.m. Click main.m to open the file in the code editor (Figure 1-4).
Figure 1-4.
Editing main.m
When you do this you will see code that looks a bit like this:
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]){
@autoreleasepool {
// insert code here...
NSLog(@"Hello, World!");
}
return 0;
}
Much of the code above sets up the application, starting with the #import statement. This statement imports the code that you need, called Foundation, for your Objective-C program to work.
The next part of the code above is the function named main, which contains all the program code and returns the integer 0 when the program is complete.
Inside the main function you will see an Objective-C auto release pool. Auto release pools are required to support the memory management system used with Objective-C. The auto release pool is declared with the @autoreleasepool keyword.
In the middle of all this code, you can see the Hello World code, which looks like this:
NSLog(@"Hello, World!");
The first piece of this is the function NSLog. NSLog is used to write messages to the console log. Xcode's console log is located at the bottom of the Xcode screen (Figure 1-5) and presents error messages along with messages that you send using NSLog.
Figure 1-5.
Hello World output in console screen
Note
By default the console log is hidden along with the debugger at the bottom of the screen. To see these two components you must unhide the bottom screen by clicking the Hide or Show Debug Area toggle located in the top right-hand part of the Xcode screen. This button is located in the middle of a set of three buttons.
The string Hello World is enclosed in quotes ("") and prefixed with the Objective-C @ character. The @ character is used in Objective-C to let the compiler know that certain keywords or code have special Objective-C properties. When @ appears before a string in double quotes, as in @"Hello, World!", it means that the string is an Objective-C NSString object.
## Code Comments
There is one more line of code that Xcode helpfully inserted into this project for you. This line of code is a good example of a code comment and begins with these two special characters: //. Here is what the code comment looks like:
// insert code here...
Code comments are used to help document your code by giving you a way to insert text into the program that will not be compiled into a working program.
## Build and Run
To test the code, click the Run button in the top upper left area of the Xcode screen. See Figure 1-6 to see which button to push.
Figure 1-6.
Building and running the Hello World code
When you click the Run button, Xcode will compile the code in the Xcode project and then run the program. The program you have been working on will print out the words Hello World. You can see the output circled in Figure 1-6.
## Where to Get More Information
This book is a quick reference for Objective-C, and I have focused on the code and patterns that I judge will be most useful for most people. However, this means that I can't include everything in this book.
The best place to get complete information on Objective-C and the Mac and iOS applications that you can create with Objective-C is the Apple Developer web site. You can get to the Apple Developer web site by using a web browser to navigate to http://developer.apple.com/resources .
This web site contains guides, source code, and code documentation. The part of the web site that will be most relevant to the topics in this book is the code documentation for the Foundation framework. You can use the web site's search features to look for a specific class like NSObject, or you can search for the word Foundation or Objective-C.
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_2
© Matthew Campbell 2013
# 2. Build and Run
Matthew Campbell, CA, US
## Compiling
Objective-C code needs to be turned into machine code that runs on an iOS device or a Mac. This process is called compiling, and Xcode uses the LLVM compiler to create machine code. Xcode templates used to create new projects, like you did in Chapter 1, will have the settings that the compiler needs to set this up for you.
## Building
Compiling code is usually only part of the process involved with creating an app. Apps destined to be distributed to Mac and iPhone users require other resources in addition to the compiled code. This includes content like pictures, movies, music, and databases.
These resources, along with an app directory structure, are all packed into a special file called a Bundle. You will use Xcode to compile your source code and then package everything into the bundle that you need for your app. This process is called Building in Xcode.
If you look under the Product menu item in your Xcode menu bar (Figure 2-1), you will see options for building your program. Usually you will just use the Build and Run feature of Xcode to compile and test your code.
Figure 2-1.
Product build options
## Build and Run
Use the Build and Run button (see Figure 2-2) located in the upper left-hand area of your Xcode screen (this is an arrow that looks like a play button) to build your app.
Figure 2-2.
Build and Run button
Xcode will not only build your app, but execute the code as well. If you click the Build and Run button for the current program, you should see the following text appear in your console log (also shown in Figure 2-3):
2014-01-12 06:22:48.382
Ch01_source_code[13018:303] Hello, World!
Program ended with exit code: 0
Figure 2-3.
Console log's Hello World output
Your output won't match mine exactly, but you should see the words Hello, World! and the name of your project on the screen.
Note
While most apps will get a bundle along with the compiled machine code, the apps I am using to demonstrate the code in this book don't need one. If you locate your compiled code file, you will find only a single Unix executable file that you can run with the Mac Terminal app.
# 3. Variables
## Variables Defined
Objective-C stores information in variables. These are divided into two types: primitive types and composite types. Primitive variables store one piece of information, such as a number or a character. Composite variables store a set of information, such as three related numbers and a character.
### Data Types
Table 3-1 shows the most common primitive data types that you will see in Objective-C.
Table 3-1.
Objective-C Data Types
Data Type | Format Specifier | Description
---|---|---
NSInteger | %li | Signed integer
NSUInteger | %lu | Unsigned integer
BOOL | %i | Boolean (YES/NO)
CGFloat | %f | Floating point
Note
Objective-C programs can use C data types like int, long, float, double, and char in addition to the Objective-C data types listed in Table 3-1. This is because Objective-C is based on the C programming language and so inherits all of C's functionality in addition to the Objective-C syntax that we are discussing here.
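As a quick illustration (these lines are not from the chapter's sample project), the C types work side by side with the Objective-C types and use their own printf-style format specifiers:

```objectivec
int count = 5;        // C signed integer, printed with %i
double ratio = 2.5;   // C double-precision float, printed with %f
char initial = 'M';   // single C character, printed with %c
NSLog(@"count = %i, ratio = %f, initial = %c", count, ratio, initial);
```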
### Declaring Variables
Variables are declared in Objective-C with their data type first, followed by a variable name. You must declare a variable before using it. Variable names should be meaningful, but you can name a variable anything that you want.
Here is how you would declare an integer in Objective-C:
NSUInteger numberOfPeople;
### Assigning Values
You can use the assignment operator (=) to assign a value to a variable, like so:
numberOfPeople = 100;
Once you have assigned a value, you can retrieve and use that value by referencing the variable name.
NSLog(@"The number of people is %lu", numberOfPeople);
Note
You may have noticed that the NSLog statement required the %lu symbol. This symbol is called a format specifier and NSLog will use it as a placeholder to insert values in the comma-separated list that appears right after the string. See Table 3-1 for a list of the format specifiers that you must use with Objective-C data types.
You can also declare variables and assign values on the same line if you like.
NSUInteger numberOfGroups = 20;
### Integer Types
Integers are whole numbers, so any number that doesn't need a decimal point is an integer. In Objective-C, integers are expressed with the data types NSInteger and NSUInteger.
NSUIntegers are unsigned integers, which means that they can only be positive numbers. The maximum value that an NSUInteger can take depends on the system for which the Objective-C code is compiled. If you compile for a 64-bit Mac, the maximum value will be 18,446,744,073,709,551,615.
For 32-bit platforms like the iPhone 5 and below, the maximum value is 4,294,967,295. You can check these numbers yourself using the NSUIntegerMax constant.
NSLog(@"NSUIntegerMax is %lu", NSUIntegerMax);
NSIntegers are signed integers, which means that they can be either positive or negative. The maximum value of an NSInteger is half of the NSUInteger value because NSInteger must support both positive and negative numbers.
So, if you need huge numbers, you may need to stick to NSUInteger, but if you need to handle both positive and negative numbers, you will need NSInteger. You can check the minimum and maximum value of NSInteger on your system with the NSIntegerMin and NSIntegerMax constants.
NSLog(@"NSIntegerMin is %li", NSIntegerMin);
NSLog(@"NSIntegerMax is %li", NSIntegerMax);
### Boolean Types
Boolean data types are used when values can either be true or false. In Objective-C, this data type is declared as a BOOL type. BOOL types have values that are either YES or NO.
BOOL success = YES;
Since Objective-C stores BOOL values as 1 for YES and 0 for NO, you must use the %i format specifier to print out a BOOL value. %i is another format specifier for integers.
NSLog(@"success is %i", success);
The NSLog statement above will print out 1 for YES and 0 for NO, but some people prefer to see the YES or NO strings printed out to the log. You can do so using this alternate statement:
NSLog(@"success: %@", success ? @"YES" : @"NO");
Here the variable success was replaced with an expression that has to be evaluated. The expression will return either the string YES or the string NO depending on the value of the variable success: if success is zero, whatever is in the last position is returned, and if success is any other value, whatever is in the first position is returned. The ternary operator (?:) tells the compiler to evaluate the expression.
### Float Types
Float types are represented in Objective-C with the CGFloat data type. CGFloat is what you use when you want decimal places in your number. For example, if you want to represent a percent, you may do something like this:
CGFloat percent = 33.34;
You can find the maximum value of CGFloat on 32-bit systems using FLT_MAX. For 64-bit systems, where CGFloat is defined as a double, you must use DBL_MAX.
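As a sketch (not in the original text), you can log these limits yourself; FLT_MAX and DBL_MAX come from the C header float.h, which you may need to import:

```objectivec
#import <float.h> // defines FLT_MAX and DBL_MAX

NSLog(@"FLT_MAX is %f", FLT_MAX);
NSLog(@"DBL_MAX is %f", DBL_MAX);
```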
### Scope
Like most programming languages that trace their history back to C, Objective-C variables have their scope determined by the placement of curly brackets, { }. When you enclose lines of code in { }, you are defining a block of code. Variables declared inside a block of code can only be used from inside that block of code. This is called scope.
For example, let's revisit the earlier pattern: declare an integer called numberOfPeople, assign a value to the variable, and then print the value out to the log.
NSInteger numberOfPeople;
numberOfPeople = 100;
NSLog(@"The number of people is %li", numberOfPeople);
This code works perfectly fine because the variable numberOfPeople remains in scope the entire time you need it to. But if you use curly brackets to enclose the first two lines of code in their own region, the variable will work when you assign the value but not when you attempt to write out the value to the log. You will not be able to compile your program if you try to write out numberOfPeople to the log outside of the scope defined by the curly brackets.
{
NSInteger numberOfPeople;
numberOfPeople = 100;
}
NSLog(@"The number of people is %li", numberOfPeople);
Scope is used to define blocks of code for functions, loops, methods, if-statements and switch statements. All of these things are discussed later in this book.
# 4. Operators
## Operators Defined
Operators are used to perform operations on values. You can do arithmetic, assignment, logical, and relational operations with operators.
### Arithmetic Operators
Arithmetic operators are used to perform math on values. You can use arithmetic operators to perform addition, subtraction, multiplication, division, and modulus (the remainder from a division operation). Table 4-1 lists Objective-C's arithmetic operators.
Table 4-1.
Arithmetic Operators
Operator | Meaning
---|---
+ | Addition
- | Subtraction
* | Multiplication
/ | Division
% | Modulus
An operation will look like a math problem.
1.0 + 2.0 - 3.0 * 4.0 / 5.0;
The result from the line of code above won't do much because the result isn't being stored or used in a function. You can use the results of an operation immediately in a function like:
NSLog(@"1.0 + 2.0 - 3.0 * 4.0 / 5.0 = %f", 1.0 + 2.0 - 3.0 * 4.0 / 5.0);
You can also use an assignment operator to store the result in a variable to be used later on.
CGFloat t2 = 1.0 + 2.0 - 3.0 * 4.0 / 5.0;
You may notice that floating point numbers are used in the operations above. Each number in the expression has a decimal point and zero, and the t2 variable data type is CGFloat. This was deliberate because I suspected that the operation would result in a fractional number, requiring a floating point variable to be represented correctly.
Note
Using the correct data types is essential when doing arithmetic operations, and the compiler will assume that any number without a decimal place is an integer. Operations involving only integers will return integers, which means that the result will be rounded. This could easily lead to unexpected results in your calculations.
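For example (an illustrative snippet, not from the original project), the same division gives different results depending on the operand types:

```objectivec
NSLog(@"7 / 2 = %i", 7 / 2);          // prints 3 -- integer division truncates
NSLog(@"7.0 / 2.0 = %f", 7.0 / 2.0);  // prints 3.500000
```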
#### Operator Precedence
Operators are evaluated from left to right. Multiplication, division, and modulus operators are evaluated before addition and subtraction operators. If you want to change the order that operators are evaluated, you can enclose parts of the expression in parentheses. Doing this will change the results of your expressions, as shown:
NSLog(@"%f", 1.0 + 2.0 - 3.0 * 4.0 / 5.0); // 0.600000
NSLog(@"%f", 1.0 + (2.0 - 3.0 * 4.0) / 5.0); // -1.000000
NSLog(@"%f", (1.0 + 2.0 - 3.0 * 4.0) / 5.0); // -1.800000
### Assignment Operators
The assignment operator (=) is used to assign a value to a variable. You can assign a value or the results of an operation to a variable using the assignment operator.
NSUInteger t2 = 100;
NSUInteger t3 = 10 * 10;
### Increment and Decrement Operators
You can combine the addition and subtraction operators with the assignment operator as a shortcut. Add a ++ to the variable name and the value will be incremented by 1 and automatically assigned to the variable.
t2++;
The line of code above will increment t2 by 1, making the value of t2 equal to 101. The following is the longer way of doing the same thing:
t2 = t2 + 1;
You can also reduce the value of t2 by adding the decrement operator (--) to the variable name.
t2--;
### Relational Operators
Relational operators are used to evaluate the relationship between two values. When you use relational operators, the result will be a BOOL data type. You can evaluate whether two values are the same or different. See Table 4-2 for a list of the available relational operators.
Table 4-2.
Relational Operators
Operator | Meaning
---|---
== | Equal to
!= | Not equal to
> | Greater than
< | Less than
>= | Greater than or equal to
<= | Less than or equal to
Here is an example of how to use a relational operator:
BOOL t4 = 5 < 4;
NSLog(@"t4 = %@", t4 ? @"YES" : @"NO"); // NO
This case seems trivial, but when you have variables whose values you don't know beforehand, evaluating relational operators is important. Relational operators are also used in if statements, which are a key programming tool. If statements are covered later.
### Logical Operators
Logical operators are used when you are evaluating more than one relationship between entities. These operators are used with the relational operators and they also return a BOOL result.
See Table 4-3 for a list of available logical operators.
Table 4-3.
Logical Operators
Operator | Meaning
---|---
&& | AND
|| | OR
! | NOT (Reverse result)
Here's an example of how to use the logical operators:
BOOL t5 = YES && NO; // NO
BOOL t6 = YES && YES; // YES
BOOL t7 = YES || NO; // YES
BOOL t8 = NO || NO; // NO
BOOL t9 = !YES; // NO
# 5. Objects
## Objects Defined
Objective-C objects are entities that contain both behavior and attributes in one place. Behaviors are coded in methods while attributes are coded in properties. Objects can also include private instance variables. Private instance variables are used when data storage is required, but not needed to be shared.
### NSObject Class
NSObject is the root class in Objective-C. A class is a definition that has all the code needed to make an object's methods and properties work. NSObject is called the root class because it has all the code needed to make objects work in Objective-C and every other class inherits from the NSObject class.
### Object Declaration
A class is used like a data type. Data types are used to declare a variable and you have many variables for each data type. A class is used to declare an object and you can have one class with many objects.
Here is how you would declare an NSObject object (objects in Objective-C are always referenced through pointers, so the declaration includes an asterisk):
NSObject *object;
### Object Constructors
While data type variables can just be assigned to a value, objects require functions called constructors. Constructors assign memory resources to the object and do any setup that the object needs to function. Usually, you will see constructors split up into two functions called alloc and init.
object = [[NSObject alloc] init];
The init function will sometimes have a different name, but it will usually start with the letters init. For example, here is a constructor for an NSURL object that will point to my web site:
NSURL *url = [[NSURL alloc] initWithString:@"https://mobileappmastery.com"];
Notice that instead of init you have initWithString:. There aren't any rules, other than convention, when it comes to names of constructors.
While the pattern of alloc and init is the most common, you will also see object creation with other function names and with the new keyword.
NSDate *today = [NSDate date];
NSObject *object2 = [NSObject new];
While the new constructor is uncommon, the new keyword can be used in place of alloc and init. Constructors other than new, alloc, and init are used for temporary objects. The date object above is an example of an object that is used on a temporary basis because you usually just want to get a timestamp and move on. There is no reason to maintain an object like this for a long time.
Note
Temporary objects like the date object in the example are used more often in projects where ARC is not being used for memory management. ARC, or Automatic Reference Counting, is a system that manages each object's memory requirements. Projects built with ARC use temporary objects like the date object above when functionality is needed, but the object doesn't need to be maintained for any length of time.
### Object Format Specifier
When you want to use NSLog to print out data type values you must use a format specifier like %lu, %li, %f, or %i. The value gets substituted into the NSLog string, giving you a way to observe variable values. You can do this with objects as well.
NSObject objects and every object that derives from NSObject use the %@ format specifier. The output you get from NSLog depends on the type of object. If you print out the object from the example above like this
NSLog(@"object = %@", object);
you will get output that gives you details about the object including the class name and memory address.
object = <NSObject: 0x10010a0c0>
Other objects will report back more specific information; what gets reported back depends on the type of object. If you tried the same trick with the url NSURL object like this
NSLog(@"website = %@", url);
the console would present a listing of the web site URL.
website = https://mobileappmastery.com
### Messages
When you want an object to do something, you send a message to the object. Sending a message directs the object to execute the method defined in the class that corresponds to the message.
For instance, you could remove a file from your shared directory by sending a message to an NSFileManager object.
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager removeItemAtPath:@"/Users/Shared/studyreport.txt"
error:nil];
The first line of code above declares an NSFileManager object named fileManager. In the second line of code, you can see the message being sent. The message is removeItemAtPath:error: and you send this message by writing it out and including the parameters (here, these are the path of the item to remove and an optional error object). All of this is enclosed in square brackets, [ ], and ends with a semicolon.
If you were to look at the class definition in the header file for NSFileManager, you would find the declaration for this method:
- (BOOL)removeItemAtPath:(NSString *)path error:(NSError **)error;
This method returns a BOOL value that you are not using here.
# 6. Strings
## NSString
NSString is the class used in Objective-C to work with strings. NSString manages the list of characters that forms a string. NSString objects are immutable, which means that once you create an NSString object you can't change it.
NSString objects can be created with many different constructors, but the most common way you'll see NSString objects created is with the @ symbol followed by quotes. In fact, you've seen this already in the Hello World example from Chapter 1.
NSLog (@"Hello, World!" );
That parameter is an NSString object, although it's hard to see since you don't need the explicit NSString declaration here. More often you will see NSString objects created like this:
NSString *firstName = @"Matthew";
NSString *lastName = @"Campbell";
Here is another NSString constructor, stringWithFormat:, that is used often when other variables and objects are used to compose a new string:
NSString *n = [NSString stringWithFormat:@"%@ %@", firstName, lastName];
This constructor, stringWithContentsOfFile:encoding:error:, is used to create a new NSString object based on the contents of a file.
NSString *fileName = @"/Users/Shared/report.txt";
NSString *fileContents = [NSString stringWithContentsOfFile:fileName
encoding:NSUTF8StringEncoding
error:nil];
## NSMutableString
Sometimes you want to be able to add or remove characters in a string as your program executes. For instance, you may want to maintain a log of changes users make in your program, and you don't want to create new strings each time a change is made. These situations call for NSMutableString.
You can use the same constructors to create NSMutableString objects, except for the @"" literal shortcut, which always creates an immutable string. To create a simple NSMutableString, use the stringWithString: constructor.
NSMutableString *alpha = [NSMutableString stringWithString:@"A"];
### Inserting Strings
You can insert strings into a mutable string at any point in the list of characters that make up the mutable string. You just have to be sure that the insertion point that you specify is in range of the list of characters. Don't attempt to insert a string in position 20 if your string is only 10 characters long. You can find out the length of a string by sending the length message to the string.
To insert a string, you will need to specify both the string that you want to insert and the starting position. Here is how you would insert a B into the alpha mutable string:
[alpha insertString:@"B"
atIndex:[alpha length]];
Here you are sending the insertString:atIndex: message. The first parameter is @"B", which is the string you want to add to the mutable string A. The atIndex: parameter is the length of the alpha string since you want to append the B to the end of the A string to produce @"AB".
If you really just want to append a string, there is an even easier method available to do that. You can send the appendString: message, which only requires the string parameter. The insertion point is not required because it is assumed that the string will be appended to the end of the mutable string.
[alpha appendString:@"C"];
### Deleting Strings
Just as you can add strings to a mutable string, you can remove parts of a mutable string. When you are deleting strings, you will need to specify both a starting point and a length. There is a composite type called NSRange that can help with that. NSRange has two variables associated with it, location and length. You need to create one of these composite types first before sending the deleteCharactersInRange: message to the mutable string.
NSRange range;
range.location = 1;
range.length = 1;
[alpha deleteCharactersInRange:range];
This code will delete the B from the ABC string you created in the previous steps.
Note
When you use NSRange, you should keep in mind that strings are stored as a list of letters that start with the index of 0.
### Find and Replace
Anyone who has used a word processor knows how convenient the find and replace function can be. You just supply the program with the text that you want to replace and the text that you want in its place. NSMutableString also has this ability.
To do find and replace with a mutable string you will need to define a range and supply the string that you are looking for and the string that you put in the first string's place. There are also search options that you can specify.
range.location = 0;
range.length = 2;
[alpha replaceOccurrencesOfString:@"AC"
withString:@"ABCDEFGHI"
options:NSLiteralSearch
range:range];
The first thing you are doing here is reusing the NSRange range variable to specify what part of the string you want to look at. You are going to start at the beginning and search the entire length of the string.
Next, you define the string that you want to replace, @"AC", and the string that you want to use as a replacement, @"ABCDEFGHI".
In the options you set the NSLiteralSearch option. This means that the method will require an exact match for your strings. You could also specify NSCaseInsensitiveSearch to ignore case and NSRegularExpressionSearch, which lets you use a regular expression.
Note
Regular expressions are a tool used to search strings for patterns. They are used in many programming languages. A full explanation of regular expressions is out of the scope of this book, but worth looking into if you spend a lot of time working with strings.
The last parameter is the range variable that you set up before the message.
# 7. Numbers
## NSNumber
NSNumber is the class used in Objective-C to work with numbers. NSNumber gives you a way to turn floating point and integer values into object-oriented number objects. While you can't use NSNumber objects in expressions, NSNumber objects become useful when complicated formatting is required.
NSNumber objects can be created with many different constructors, but the most common way you'll see NSNumber objects created is with the @ symbol followed by a number.
NSNumber *num1 = @1;
NSNumber *num2 = @2.25;
Sometimes you may want to use special constructors that are matched to numbers stored in a particular way.
NSNumber *num3 = [NSNumber numberWithInteger:3];
NSNumber *num4 = [NSNumber numberWithFloat:4.44];
### Converting to Primitive Data Types
NSNumber objects can't be used in expressions, but NSNumber has some built-in functions that will return the object in a primitive data type form. You will have to use these functions to convert numbers before using them in expressions.
CGFloat result = [num1 floatValue] + [num2 floatValue];
The function used above is floatValue, but there are more, like intValue and doubleValue, that match primitive data types from C programming like int and double. stringValue is another function that will return the number formatted as a string, which can be useful in reports.
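For instance, each conversion function returns the same number in a different form; this sketch (not from the original text) reuses the num2 object created above:

```objectivec
NSInteger whole = [num2 integerValue];   // the integer part of 2.25
double precise = [num2 doubleValue];     // 2.25 as a C double
NSString *text = [num2 stringValue];     // the number formatted as a string
NSLog(@"whole = %li, precise = %f, text = %@", (long)whole, precise, text);
```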
### Formatting Numbers
NSNumber becomes very useful when you want to format numbers for displays in reports and presentations. When used with the NSNumberFormatter class you can output numbers as localized currency, scientific notation, and they can even be spelled out.
To do this, you must create a new number formatter and then set the formatting style that you want to use.
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterCurrencyStyle;
Then you can send the stringFromNumber: message to get the formatted number. Here is an example of doing this in the context of using NSLog to write a message to the console:
NSLog(@"Formatted num2 = %@", [formatter stringFromNumber:num2]);
This will output $2.25 from my computer since I'm set up in the United States. Your output will differ depending on the locale that you have set on your Mac or iOS device.
### Converting Strings into Numbers
You can also convert a string into a number. If you have a number represented as a string, you can use a number formatter to convert the string into an NSNumber object.
Just change the number formatter style to the style in which the number was stored. Then create a new NSNumber object with the NSNumberFormatter numberFromString: message. Here is how to convert the string "two point two five" into the number 2.25:
formatter.numberStyle = NSNumberFormatterSpellOutStyle;
NSNumber *num5 = [formatter numberFromString:@"two point two five"];
# 8. Arrays
## NSArray
NSArray is a class used to organize objects in lists. NSArray can maintain an index of objects, search for objects, and enumerate through the list. Enumeration is the process of moving through a list one item at a time and performing an action on each item in the list.
To create an array, you include a comma-separated list of objects enclosed in square brackets and started with the @ symbol.
NSArray *numbers = @[@-2, @-1, @0, @1, @2];
NSArray *letters = @[@"A", @"B", @"C", @"D", @"E", @"F"];
The NSArray object numbers has a list of NSNumber objects, while letters has a list of strings. Any object can be put into an NSArray object, but not primitives like NSInteger.
### Referencing Objects
You put objects in arrays so that you have an easy way of getting references to these objects later. The general way of getting these references is to send an objectAtIndex: message to the array. Here is how to get the number object reference from the second position in the numbers array:
NSNumber *num = [numbers objectAtIndex:1];
If you know that you want the last object in the list, you can use lastObject to return it.
NSNumber *lastNum = [numbers lastObject];
Note
Array indexes in Objective-C start with 0.
Sometimes you might already have a reference to the object in question, but you want to find out the index number that corresponds to the object's position in the array. You can use indexOfObject: to get this information.
NSUInteger index = [numbers indexOfObject:num];
### Enumeration
Enumeration is the process of moving through a list one item at a time. Usually, you will be performing some type of action on each item, like writing out the object's contents to the log or modifying a property on the object.
Blocks, or anonymous functions, are used to perform enumeration with arrays. Blocks are functions that are not attached to an object. You can define a block by enclosing lines of code in curly brackets. Blocks can be treated like objects, which means that you can pass a block to an enumeration method just like you could pass a variable or an object.
Note
Blocks deserve their own treatment, apart from their use in arrays, and so more details about using blocks will be covered in Chapter 20.
Let's say you want to go through the array of numbers and print out each number's value when squared. You could enumerate through the list using the NSArray enumerateObjectsUsingBlock: method. This method will give you a reference to the current object, which you can use to perform this simple operation.
[numbers enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
NSLog(@"obj ^ 2= %f", [obj floatValue] * [obj floatValue]);
}];
All of the code after the colon is the block code. This block starts with the ^ symbol. Then you can see a comma-separated list of parameters followed by curly brackets. The code inside the curly brackets is the block, and the parameters declared in the parentheses are the variables that the block can reference.
## NSMutableArray
More often than not, you need to be able to add and remove items from an array as your program executes code. You could be maintaining a list of action items or video game characters. When you need to do this, you can use NSMutableArray.
NSMutableArray does everything that NSArray does except that it gives you the ability to change the contents of the array. You can add and remove items and do other types of manipulations on objects in mutable arrays.
You can't use the shortcut for array creation here, though; NSMutableArray requires you to use a constructor, like this:
NSMutableArray *mArray = [NSMutableArray arrayWithArray:@[@-2, @-1, @0]];
The constructor used above is arrayWithArray:; you pass an NSArray object to this constructor to get started.
To add an object to a mutable array, you use the addObject: message.
[mArray addObject:@1];
To remove an object, you use the removeObject: message and pass a reference to the object that you want to remove.
[mArray removeObject:@1];
If you want to exchange one object with another, you can use the method exchangeObjectAtIndex:withObjectAtIndex:.
[mArray exchangeObjectAtIndex:0 withObjectAtIndex:1];
This will take whatever is in position 0 and switch it with whatever is in position 1.
There are many other variations of these functions available to you. You can remove all items, add arrays of items into the mutable array, or insert items or arrays of items at a specific starting point in the array.
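As a quick sketch of those variations (all standard NSMutableArray methods):

```objc
NSMutableArray *mArray = [NSMutableArray arrayWithArray:@[@-2, @-1, @0]];
[mArray insertObject:@-3 atIndex:0];    // insert at a specific position
[mArray addObjectsFromArray:@[@1, @2]]; // append the contents of another array
[mArray removeLastObject];              // remove the final item
[mArray removeAllObjects];              // empty the array completely
```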
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_9
© Matthew Campbell 2013
# 9. Dictionaries
Matthew Campbell1
(1)
CA, US
Abstract
NSDictionary is a class used to organize objects in lists using keys and values. NSDictionary can maintain an index of objects and let you retrieve an object if you have the right key. Usually, the key will be an NSString object while the value will be whatever type of object you are indexing.
## NSDictionary
NSDictionary is a class used to organize objects in lists using keys and values. NSDictionary can maintain an index of objects and let you retrieve an object if you have the right key. Usually, the key will be an NSString object while the value will be whatever type of object you are indexing.
To create a dictionary, you include a comma-separated list of key-value pairs enclosed in curly brackets and prefixed with the @ symbol.
NSDictionary *d1 = @{@"one": @1, @"two": @2, @"three": @3};
This creates a dictionary of NSNumber objects that you can reference with their string keys. So, the key string @"one" can be used to retrieve the NSNumber object 1.
### Referencing Objects
You put objects in dictionaries so that you have an efficient way of getting references to these objects based on keys. The general way of getting these references is to send an objectForKey: message to the dictionary. Here is how to get the number referenced by the key @"one":
NSNumber *n1 = [d1 objectForKey:@"one"];
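If your compiler supports the same literal syntax used to create the dictionary above, you can also read a value with subscript notation, which is equivalent to objectForKey:. Either form returns nil when the key is not present.

```objc
NSNumber *n1 = d1[@"one"]; // same as [d1 objectForKey:@"one"]
```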
### Enumeration
Enumeration is the process of moving through a list one item at a time. Usually, you will be performing some type of action on each item, like writing the object's contents to the log or modifying a property on the object.
You can enumerate through a dictionary in almost the same way as you do with an array. But you will get a reference to each key in the dictionary as well as each object.
[d1 enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
NSLog(@"key = %@, value = %@", key, obj);
}];
Just like with the array enumeration procedure discussed in Chapter 8, the block code declaration starts with the ^ character and the block code is enclosed in the curly brackets, { }.
Here is the output that would be generated with this message:
key = one, value = 1
key = two, value = 2
key = three, value = 3
## NSMutableDictionary
NSDictionary is an immutable object, so once you create an NSDictionary object you can't add or remove items from the dictionary. If you need to add or remove items from a dictionary, you must use the NSMutableDictionary class.
You can't use the shortcut for dictionary creation here, though; NSMutableDictionary requires you to use a constructor, like this:
NSMutableDictionary *md1 = [[NSMutableDictionary alloc] init];
The easiest thing to do is to follow the alloc and init pattern to create an empty dictionary to which you can add objects. When you are ready to add an object to the dictionary, you will need a key and the value that you want to add. These two parameters will be supplied to the setObject:forKey: method.
[md1 setObject:@4 forKey:@"four"];
To remove an object, send the removeObjectForKey: message and supply the key.
[md1 removeObjectForKey:@"four"];
You can remove every object from a mutable dictionary by sending the removeAllObjects message.
[md1 removeAllObjects];
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_10
# 10. For Loops
Abstract
Loops are used when you want to repeat a similar type of task many times. For loops are used when you know beforehand how many times you want to repeat a similar line of code. Here is a for loop that will write to the console window 10 times:
## For Loops Defined
Loops are used when you want to repeat a similar type of task many times. For loops are used when you know beforehand how many times you want to repeat a similar line of code. Here is a for loop that will write to the console window 10 times:
for (int i=0; i<10; i++) {
NSLog(@"i = %i", i);
}
This for loop will produce this output:
i = 0
i = 1
i = 2
i = 3
i = 4
i = 5
i = 6
i = 7
i = 8
i = 9
Let's take a closer look at the parts of this loop. The first thing is the for keyword. This lets the compiler know that you are coding a for loop.
Next, you have a series of expressions enclosed in parentheses: (int i=0; i<10; i++). These specify a starting condition (int i=0), an ending condition (i<10), and an increment instruction (i++). This means that the loop will start at 0 and repeat 10 times, with the variable i increasing by 1 each time the loop executes, as long as i is less than 10.
Finally, you have a code block defined by curly brackets, { }. The code contained in these curly brackets will execute each time you go through the loop. In the example above, the code block had only one line of code, NSLog(@"i = %i", i);. Notice that the variable i, sometimes called the counter variable, is in scope and you can use i in the code block.
### For Loops and Arrays
More often than not, you will use a for loop to go through a list and do something with each object in that list. Let's assume that you have an array named list that holds five NSNumber objects ranging from -2.0 to 2.0.
NSArray *list = @[@-2.0, @-1.0, @0.0, @1.0, @2.0];
Let's say that you want to construct a string that includes all the values in list but spelled out with words. For example, you want a string like "minus two, minus one," and so on. You would need to use NSNumberFormatter and an NSMutableString, both of which were covered in the chapters on numbers and strings.
Just to set this up, let's get the number formatter, mutable string, and array before you go into the for loop.
NSArray *list = @[@-2.0, @-1.0, @0.0, @1.0, @2.0];
NSMutableString *report = [[NSMutableString alloc] init];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterSpellOutStyle;
Note that you are specifying NSNumberFormatterSpellOutStyle here so that the number formatter can give you the value of each number object spelled out.
Since you want the for loop to move through each number in the array list, you will need to find out how many objects are contained in list. NSArray objects have a count property that you can use to get the number of objects contained in the array, and you can use this value directly in the for loop.
for (int i=0; i<list.count; i++) {
NSNumber *num = [list objectAtIndex:i];
NSString *spelledOutNum = [formatter stringFromNumber:num];
[report appendString:spelledOutNum];
[report appendString:@", "];
}
What you are doing above is going through each object in list and getting a reference to the number in the list that corresponds to the index that is associated with the current value of i.
Next, you use the number formatter to get the spelled-out string version of the number. Finally, you append this spelled-out string value to the end of the mutable string. Here is what the output would look like:
report = minus two, minus one, zero, one, two,
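Note that appending @", " on every pass leaves a trailing separator after the last item. One way to avoid that, sketched here, is to collect the spelled-out strings in a mutable array and join them with the NSArray method componentsJoinedByString::

```objc
NSMutableArray *words = [[NSMutableArray alloc] init];
for (int i = 0; i < list.count; i++) {
    [words addObject:[formatter stringFromNumber:[list objectAtIndex:i]]];
}
// The separator is placed only between items, so there is no trailing ", ".
NSString *joinedReport = [words componentsJoinedByString:@", "];
```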
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_11
# 11. While Loops
Abstract
Like for loops, while loops are used when you want to repeat a similar type of task many times. While loops are used when you want to execute a line of code many times until a condition is met. Here is a while loop that will write to the console window 10 times:
## While Loops Defined
Like for loops, while loops are used when you want to repeat a similar type of task many times. While loops are used when you want to execute a line of code many times until a condition is met. Here is a while loop that will write to the console window 10 times:
int i = 0;
while (i < 10) {
NSLog(@"i = %i", i);
i++;
}
This while loop will produce this output:
i = 0
i = 1
i = 2
i = 3
i = 4
i = 5
i = 6
i = 7
i = 8
i = 9
The loop above does the same thing as the for loop from Chapter 10, but note that the specifications for the loop are in different spots. The first thing that you will notice is that you need to have a counter variable on hand to use in a while loop. You can't declare the counter variable right in the loop statement like you did before, so you need a separate line of code before the while loop to declare and assign the counter variable.
int i = 0;
Then you have the while loop itself that is started with the while keyword. In the parentheses after the while keyword is the ending condition, (i < 10). This means that the loop will go on as long as the value of i is less than 10.
Finally, you have a code block defined by curly brackets. The code contained in these curly brackets will execute each time you go through the loop. You have one line of code, NSLog(@"i = %i", i);, to write to the log. You also increment the counter variable in this code block, i++;.
Note
It's important to remember to increment the counter variable here. If you don't, then i will never reach 10. The loop will never end, which will effectively cause your program to hang until the user terminates it.
### While Loops and Arrays
Now let's go ahead and repeat the example from Chapter 10 where you formatted a list of numbers in an array with a loop. This is the array and other objects that you worked with in Chapter 10:
NSArray *list = @[@-2.0, @-1.0, @0.0, @1.0, @2.0];
NSMutableString *report = [[NSMutableString alloc] init];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterSpellOutStyle;
You are going to do the same thing you did in Chapter 10, but you will use the while loop instead.
int i = 0;
while(i < list.count) {
NSNumber *num = [list objectAtIndex:i];
NSString *spelledOutNum = [formatter stringFromNumber:num];
[report appendString:spelledOutNum];
[report appendString:@", "];
i++;
}
What you are doing above is going through each object in the list and getting a reference to the number in the list that corresponds to the index that is associated with the current value of i.
Next, you use the number formatter to get the spelled-out string version of the number. Finally, you append this spelled-out string value to the end of the mutable string. Here is what the value of report would look like:
report = minus two, minus one, zero, one, two,
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_12
# 12. Do While Loops
Abstract
Do while loops are used for the same reasons as for loops and while loops. The syntax is different, and do while loops are notable because the code in the block will execute at least once. This is because the ending condition is not evaluated until the end of the loop. Here is how you would code a do while loop to count to 10:
## Do While Loops Defined
Do while loops are used for the same reasons as for loops and while loops. The syntax is different, and do while loops are notable because the code in the block will execute at least once. This is because the ending condition is not evaluated until the end of the loop. Here is how you would code a do while loop to count to 10:
int i = 0;
do{
NSLog(@"i = %i", i);
i++;
} while (i < 10);
This do while loop will produce this output:
i = 0
i = 1
i = 2
i = 3
i = 4
i = 5
i = 6
i = 7
i = 8
i = 9
The loop above does the same thing as the for loop from Chapter 10 and the while loop in Chapter 11. The specifications for the do while loop are similar to the while loop but they are located in different lines of code.
Like the while loop, you need to have a counter variable on hand.
int i = 0;
Then you have the do while loop itself that is started with the do keyword. Immediately after the do keyword you have a code block defined by curly brackets. The code contained in these curly brackets will execute each time you go through the loop. You have one line of code, NSLog(@"i = %i", i);, to write to the log. You also increment the counter variable in this code block, i++;.
The condition (i < 10) is after the while keyword right after the code block. This means that the loop will go on as long as the value of i is less than 10.
Note
It's important to remember to increment the counter variable here. If you don't, then i will never reach 10, and the loop will never end, which will effectively cause your program to hang until the user terminates it.
### Do While Loops and Arrays
Now let's go ahead and repeat the example from Chapter 10 and Chapter 11 where you formatted a list of numbers in an array with a loop. This is the array and other objects that you worked with in Chapter 10 and Chapter 11:
NSArray *list = @[@-2.0, @-1.0, @0.0, @1.0, @2.0];
NSMutableString *report = [[NSMutableString alloc] init];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterSpellOutStyle;
You are going to use the do while loop instead.
int i = 0;
do {
NSNumber *num = [list objectAtIndex:i];
NSString *spelledOutNum = [formatter stringFromNumber:num];
[report appendString:spelledOutNum];
[report appendString:@", "];
i++;
} while (i < list.count);
NSLog(@"report = %@", report);
You will end up with a mutable string that you can write out to the console log. The output will look like this:
report = minus two, minus one, zero, one, two,
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_13
# 13. For-Each Loops
Abstract
For-each loops are a more specific type of loop that can only be used with collection objects like NSArray and NSDictionary. You can use a for-each loop when you want to move through a list of objects to perform an action on each object in the list.
## For-Each Loops Defined
For-each loops are a more specific type of loop that can only be used with collection objects like NSArray and NSDictionary. You can use a for-each loop when you want to move through a list of objects to perform an action on each object in the list.
For example, let's take the array example from Chapter 8.
NSArray *numbers = @[@-2, @-1, @0, @1, @2];
In Chapter 8, you used the enumeration method with a block to go through the list and square each number. You can use a for-each loop as an alternative to the enumeration method.
for (NSNumber *num in numbers){
NSLog(@"num ^ 2= %f", [num floatValue] * [num floatValue]);
}
This loop starts with the for keyword before a specification in parentheses. The specification starts with an object type (NSNumber *) and a variable (num) that gives you a reference to the current object in the list.
Next, you can see the keyword in followed by the array (numbers). Taken all together, you can read this as "for each number num in the array numbers, do something." The something here is defined in the code block that comes right after the first part of the loop. It will go through the entire array and square each number, producing the following output:
num ^ 2= 4.000000
num ^ 2= 1.000000
num ^ 2= 0.000000
num ^ 2= 1.000000
num ^ 2= 4.000000
### For-Each Loops with NSDictionary
Dictionaries are a little more complicated than arrays because dictionaries maintain a list of keys and objects. You might expect that a for-each loop used on a dictionary would yield a list of objects; however, it turns out that you will get a list of the dictionary keys.
So, if you code a for-each loop with an NSDictionary, as in
NSDictionary *d1 = @{@"one": @1, @"two": @2, @"three": @3};
for (id object in d1){
NSLog(@"object = %@", object);
}
you will get output that lists all the keys like this:
object = one
object = two
object = three
If you want to output the values in the dictionary, you will need to send the objectForKey: message to the dictionary.
for (id object in d1){
NSNumber *num = [d1 objectForKey:object];
NSLog(@"num = %@", num);
}
This for-each loop will print out the values of the objects.
num = 1
num = 2
num = 3
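If you only need the values, NSDictionary also provides allKeys and allValues methods that return arrays you can loop over directly (the order of the items is not guaranteed):

```objc
for (NSNumber *num in [d1 allValues]) {
    NSLog(@"num = %@", num);
}
```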
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_14
# 14. If Statements
Abstract
If statements are used when you want to execute code based on the truth of a condition. To make this work, you evaluate an expression that uses relational operators to yield a YES or NO result. If the expression evaluates to true, the code executes; otherwise it is skipped.
## If Statements Defined
If statements are used when you want to execute code based on the truth of a condition. To make this work, you evaluate an expression that uses relational operators to yield a YES or NO result. If the expression evaluates to true, the code executes; otherwise it is skipped.
You need the if keyword and an expression, along with a code block, to use the if statement.
if(1 < 2){
NSLog(@"That is true");
}
The statement is saying that if 1 is less than 2, then execute the code that will print out the string "That is true" to the console log.
### Else Keyword
You can also define an alternate action with the else keyword. This gives you a way of executing either one of two actions based on the results of the expression that you are evaluating.
if(1 < 2){
NSLog(@"That is true");
}
else{
NSLog(@"Not true");
}
#### Nested If Else
Each if statement can contain nested if statements. This gives you a way of testing multiple conditions. Generally speaking, it's best to limit yourself to three nested if statements at most. Here is what a nested if statement looks like:
if(1 > 2){
NSLog(@"True");
}
else{
if(3 > 4){
NSLog(@"True");
}
else{
NSLog(@"Not True");
}
}
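When the nesting exists only to test another condition, the same logic is usually written more flatly with the else if form:

```objc
if (1 > 2) {
    NSLog(@"True");
}
else if (3 > 4) {
    NSLog(@"True");
}
else {
    NSLog(@"Not True");
}
```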
### If Statements and Variables
Generally you will see if statements used along with variables that are used to keep track of the state of a program. You can use variables inside the parentheses as part of the expression in the if statement or you can test the variables directly.
BOOL isTrue = 1 == 2;
if(isTrue){
NSLog(@"isTrue = %@", isTrue ? @"YES" : @"NO");
NSLog(@"That was a true statement.");
}
else{
NSLog(@"isTrue = %@", isTrue ? @"YES" : @"NO");
NSLog(@"That was not a true statement.");
}
In the code above, you are assigning the result of an expression to the Boolean variable isTrue and then testing this later on with an if statement.
Here is what you will see in the console log if you test this code for yourself:
isTrue = NO
That was not a true statement.
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_15
# 15. Switch Statements
Abstract
Switch statements are used to execute code based on the value of an integer. To make a switch statement work, you need to define a level variable and then you need to write a code block for each possible value of the level variable that you expect.
## Switch Statements Defined
Switch statements are used to execute code based on the value of an integer. To make a switch statement work, you need to define a level variable and then you need to write a code block for each possible value of the level variable that you expect.
For this chapter, let's assume you are writing code to help you do some geometry work. You have different shapes that you need to work with and you want to calculate the area of each shape. You can keep track of what type of shape you are working with by using an NSInteger variable like shape.
NSInteger shape = 0;
Each value of the NSInteger shape will correspond to a type of shape. Zero could be a square, one could be a parallelogram, and two could be a circle. Variables like shape are called level variables because they represent possible levels.
For the purposes of this example, you also need a variable to store the results of any calculation you make, which is why you have a float variable named area.
float area;
### Switch Keyword
Now, let's get to the switch statement itself. To start a switch statement, you need the switch keyword followed by the level variable in parentheses. Also, you should use curly brackets to create a code block for the switch statement.
switch (shape) {
}
### Case Keyword
Next, you can define code blocks that will be associated with each value that the level variable can take on. You use the case keyword to associate each possible value with a code block.
switch (shape) {
case 0:{
float length = 3;
area = length * length;
NSLog(@"Square area is %f", area);
break;
}
}
What you see above is the case keyword followed by the value that you are testing for, which is 0. Then you have a colon followed by curly brackets that define the code block that will execute whenever the value of shape is 0.
### break Keyword
At the end of the code block above you can see the break keyword. This keyword returns control back to the program outside of the switch statement. If this statement didn't appear, execution would fall through and every line of code in the cases that follow would also execute (switch statements stop evaluating the level variable once they find a matching value).
### Complete Switch Statement
Here is what the statement looks like with multiple case statements:
switch (shape) {
case 0:{
float length = 3;
area = length * length;
NSLog(@"Square area is %f", area);
break;
}
case 1:{
float base = 16;
float height = 24;
area = base * height;
NSLog(@"Parallelogram area is %f", area);
break;
}
default:{
area = -999;
NSLog(@"No Shape Specified");
break;
}
}
### Default Case
If you look closely at the code above, you can see that there is a default keyword. This keyword is used to define a default case, which is a way to define a code block that will execute if none of the other conditions are met. So, if the value of shape happened to be 6 and had no case defined, you would be sure that at least the code included in the default case would execute.
Here is what you will find in the console log if you run the code from this chapter:
Square area is 9.000000
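In practice, raw integer codes like 0 and 1 are usually given names with an enumeration so the switch reads naturally. Here is a sketch using the Foundation NS_ENUM macro (the ShapeType name and its values are illustrative, not part of the chapter's code):

```objc
typedef NS_ENUM(NSInteger, ShapeType) {
    ShapeTypeSquare,        // 0
    ShapeTypeParallelogram, // 1
    ShapeTypeCircle         // 2
};

ShapeType shape = ShapeTypeSquare;
switch (shape) {
    case ShapeTypeSquare: {
        float length = 3;
        NSLog(@"Square area is %f", length * length);
        break;
    }
    default:
        break;
}
```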
Matthew Campbell, Objective-C Quick Syntax Reference, DOI 10.1007/978-1-4302-6488-0_16
# 16. Defining Classes
Abstract
I covered objects when I demonstrated the Objective-C objects used to work with strings, numbers, arrays, and dictionaries. Objects are an essential object-oriented programming pattern. While you will often simply use Foundation objects that are already set up for you, you will usually need to define your own types of objects customized for your app.
## Classes
I covered objects when I demonstrated the Objective-C objects used to work with strings, numbers, arrays, and dictionaries. Objects are an essential object-oriented programming pattern. While you will often simply use Foundation objects that are already set up for you, you will usually need to define your own types of objects customized for your app.
You can use classes to define your own object types. Classes are code definitions that are used to create objects. The primary purpose of coding class definitions is to express an entity that has attributes and behaviors.
Attributes are called properties when coding classes and behaviors are called methods. Properties are used to describe an object while methods are used to get objects to perform an action.
You need to do two important tasks when defining a class: code an interface and code an implementation.
## Class Interfaces
You use a class interface to specify the name of the class and the properties and methods that make up the class. Here is how you would set up a class called Project:
#import <Foundation/Foundation.h>
@interface Project : NSObject
@end
The line of the code that begins with the #import imports the Foundation framework. This framework is needed whenever you want to work with Objective-C classes like NSObject or NSString (which you almost always do).
In particular, you need to reference NSObject since that will be your base class. A base class is what your class will be derived from and provides a starting point for you. NSObject provides the object creation methods you need to make your objects work like objects (such as alloc and init).
In the line that starts with @interface keyword, you can use the name of the class, Project, and the base class NSObject, which comes after the colon.
The @interface keyword must be matched with the @end keyword.
### Property Forward Declarations
Properties require a forward declaration that is coded in the class interface. These belong in the space between the @interface line and @end line.
#import <Foundation/Foundation.h>
@interface Project : NSObject
@property(strong) NSString *name;
@end
Property forward declarations start with the @property keyword followed by a property descriptor in parentheses. See Table 16-1 for a list of the property descriptors that you can use.
Table 16-1.
Property Descriptors
Attribute | Description
---|---
readwrite | The property needs both a getter and a setter (default).
readonly | The property only needs a getter (other objects cannot set this property).
strong | The property will hold a strong reference to its value.
weak | The property will be set to nil when the destination object is deallocated.
assign | The property will simply use assignment (used with primitive types).
copy | The setter stores a copy of the value; the value's class must implement the NSCopying protocol.
retain | A retain message will be sent to the value in the setter method.
nonatomic | Specifies the property is not atomic (not locked while being accessed).
The next two parts of the property forward declaration are the datatype and the name of the property. You can read this forward declaration as defining a property name of type NSString that Project will have a strong relationship with.
Note
Terms like strong, retain, and weak have to do with how memory management is handled for the property value. Both strong and retain mean that your class objects will always retain a reference to the property value, which guarantees that the object will stay in scope for as long as you need it. Property descriptors weak and assign don't provide this guarantee.
### Method Forward Declarations
Methods also need forward declarations. While properties describe an object, methods represent actions that an object can take. Here is how you would add a forward declaration to the Project class:
#import <Foundation/Foundation.h>
@interface Project : NSObject
@property(strong) NSString *name;
-(void)generateReport;
@end
Method forward declarations start with a minus sign followed by the return type in parentheses. This method has a void return type, but you can use any datatype or class as a return type for a method.
After that is the method's signature, which I will talk more about once you see a method that includes parameters. Here is an example of a method that includes parameters:
#import <Foundation/Foundation.h>
@interface Project : NSObject
@property(strong) NSString *name;
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
@end
In this example, you can see that the method signature is broken up into two parts. Each part has a parameter descriptor and a parameter separated by a colon. In Objective-C, method signatures are a collection of parameter descriptors and parameters. When a method has no parameters (like the first method), you just have a single descriptor to describe the method.
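At the call site, each part of the signature is filled in with an argument. A sketch of calling this two-parameter method (assuming a Project instance named p and an illustrative string):

```objc
Project *p = [[Project alloc] init];
[p generateReportAndAddThisString:@"Status: on track"
               andThenAddThisDate:[NSDate date]];
```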
## Implementing Classes
Defining the class interface is the first part of the process of defining a class. The next part is called the implementation because this is where you provide the code implementation that makes the class objects work.
To start implementing a class, you use the @implementation keyword along with the class name.
#import "Project.h"
@implementation Project
@end
The first line of code is importing the Project forward declarations into this file so the class is aware of what needs to be implemented.
Note
While it is not required, usually class interfaces and implementations will be coded in separate files. Interface files end with the .h file extension and are sometimes called header files. Implementation files end with the .m file extension and are sometimes called code files.
The implementation begins with the @implementation keyword followed by the name of the class Project. Implementations end with the @end keyword. Now, you need to implement the properties and methods.
Properties are implemented for you automatically and you don't need to take any action to make properties work.
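Because the property is synthesized for you, its getter and setter are available immediately, either through dot syntax or accessor messages. A quick sketch (the name value is illustrative):

```objc
Project *p = [[Project alloc] init];
p.name = @"Website Redesign";  // calls the generated setter
NSLog(@"name = %@", [p name]); // calls the generated getter
```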
### Implementing Methods
When you implement a method, you repeat the forward declaration of the method from the interface. But you add a code block and the code that you need to get the method to do something.
#import "Project.h"
@implementation Project
-(void)generateReport{
NSLog(@"This is a report!");
}
@end
When you implement a method that includes parameters, you can reference those parameter values in your code.
#import "Project.h"
@implementation Project
-(void)generateReport{
NSLog(@"This is a report!");
}
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date{
[self generateReport];
NSLog(@"%@", string);
NSLog(@"Date: %@", date);
}
@end
### Private Properties and Methods
The procedures above define properties and methods publicly. This means that other objects can reference these properties and use these methods. Classes that are derived from this class can use and override these properties and methods. If you want to prevent that and make properties and methods private, you can use a class extension.
#### Class Extensions
Class extensions give you a way to extend a class interface in the implementation file. Since other classes import the interface file that ends with the .h extension (the header file), they will not be able to access anything whose forward declarations are defined in a class extension.
You can put a class extension in the implementation file like this:
#import "Project.h"
@interface Project()
@property(strong) NSArray *listOfTasks;
-(void)generateAnotherReport;
@end
@implementation Project
-(void)generateReport{
NSLog(@"This is a report!");
}
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date{
[self generateReport];
NSLog(@"%@", string);
NSLog(@"Date: %@", date);
}
-(void)generateAnotherReport{
NSLog(@"Another report!");
}
@end
The class extension looks like the interface but has empty parentheses after the class name. The forward declarations in the class extension above follow the same rules as the other forward declarations for properties and methods. Class extensions must end with the @end keyword (each @interface requires a matching @end).
#### Local Instance Variables
Sometimes you need storage variables that don't merit a property declaration. For instance, sometimes you want to have a "counter" or "progress" variable or a variable that maintains a log. These are needed but don't really describe the object so they don't merit the same treatment as a property.
Instead, you can use an instance variable or ivar in these situations. Instance variables can be included right in the class extension, like this:
#import "Project.h"
@interface Project() {
int counter;
NSString *log;
}
@property(strong) NSArray *listOfTasks;
-(void)generateAnotherReport;
@end
@implementation Project
-(void)generateReport{
NSLog(@"This is a report!");
}
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date{
[self generateReport];
NSLog(@"%@", string);
NSLog(@"Date: %@", date);
}
-(void)generateAnotherReport{
NSLog(@"Another report!");
}
@end
Matthew Campbell, Objective-C Quick Syntax Reference, 10.1007/978-1-4302-6488-0_17
© Matthew Campbell 2013
# 17. Class Methods
Matthew Campbell (CA, US)
## Class Methods Defined
In Chapter 16, we described how to define classes with properties and methods. The type of method we focused on was instance methods. Instance methods are methods that can only be used with objects. When you want to use an instance method, you send a message to an object.
For instance, if you want to send the generateReport message to a Project object, you first need to create the object and then send the message right to the object.
Project *p = [[Project alloc] init];
[p generateReport];
Class methods are like instance methods except that they can only be used with classes. When you want to use a class method, you must send the message to the class. If you look closely at the constructor above, you can see that you are already using a class method called alloc.
### Coding Class Methods
If you want to create your own class method, you need to start in the class interface. Let's code a forward declaration for a class method that prints out a time stamp called printTimeStamp. You can add this to the class that you started in the last chapter.
#import <Foundation/Foundation.h>
@interface Project : NSObject
@property(strong) NSString *name;
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
+(void)printTimeStamp;
@end
This method looks like the instance methods that you already coded, except that it has a plus sign (+) in front of the return type.
The next step is to implement this method, which you must do in the implementation for Project. Note that some of the code from Chapter 16 is omitted here to avoid making this example too long and distracting.
#import "Project.h"
@implementation Project
+(void)printTimeStamp{
NSLog(@"Timestamp: %@", [NSDate date]);
}
@end
You need to use the plus sign here again to define this as a class method, but the coding follows the same pattern as the instance methods.
When you want to use the printTimeStamp method, you send the message directly to the class. You don't need to create an object first here.
[Project printTimeStamp];
This message will print the following out to the console log:
Timestamp: 2014-10-30 18:13:01 +0000
# 18. Inheritance
When you want to code a new class that shares most of the properties and methods of another class, you can use inheritance. A class that is inherited from another class takes on all the properties and methods of the superclass.
You use inheritance when you want to leverage the work that has already been completed and then add more properties and methods to customize the new class. This pattern gives us code reuse.
You saw examples of inheritance in Chapter 16: when you created the Project class, you inherited from NSObject. This gave Project all the methods and properties of NSObject.
A more interesting application happens when you use this technique with your object graph. For instance, let's assume that now you want to create a new class that's like Project but has a few key differences.
## Creating Subclasses
To create a new subclass, you can follow the same pattern that was laid out in Chapter 16. You define an interface and implementation. The difference here is that you will choose Project instead of NSObject as the superclass.
#import "Project.h"
@interface SpecialProject : Project
@end
The two things to note in the interface above are that you are importing Project.h (and not Foundation as before) and you now have Project after the colon and not NSObject, indicating that Project is the superclass and that SpecialProject is your subclass.
The implementation for SpecialProject is straightforward and resembles what you did for the original Project class.
#import "SpecialProject.h"
@implementation SpecialProject
@end
If you were to use SpecialProject right now, it would behave just like Project. Now you can add additional properties and methods to customize SpecialProject. This is called extending a class.
## Extending Classes
To extend a class, you can add properties and methods to the subclass. Your subclass is SpecialProject. Let's add a method named generateSpecialReport to your SpecialProject class.
#import "Project.h"
@interface SpecialProject : Project
-(void)generateSpecialReport;
@end
Now, of course, you need to implement this method.
#import "SpecialProject.h"
@implementation SpecialProject
-(void)generateSpecialReport{
NSLog(@"This is a special report!");
}
@end
The procedure to extend classes is identical to adding properties and methods as you normally do. What makes this a good tool is that you can share the code for the methods that are common among a type of class.
## Overriding Methods
Another thing you can do is override a method from your superclass. Overriding a method means writing your own version of a method with the exact same signature (the collection of parameter descriptors). The code in your method will be different, so objects of the subclass will behave differently even though they receive the same message as the superclass.
Let's say you want to make sure that SpecialProject objects always print out the special report even if the generateReport message from the Project superclass is sent.
What you need to do first is declare the generateReport method in the SpecialProject interface.
#import "Project.h"
@interface SpecialProject : Project
-(void)generateSpecialReport;
-(void)generateReport;
@end
Then you need to code a new implementation of generateReport and have that method send a message to generateSpecialReport. Often in this situation, you will also send a message to the super's implementation of the method, which you can do by sending a message to super.
#import "SpecialProject.h"
@implementation SpecialProject
-(void)generateSpecialReport{
NSLog(@"This is a special report!");
}
-(void)generateReport{
[super generateReport];
[self generateSpecialReport];
}
@end
### Instance Variable Visibility
When I discussed instance variables, or ivars, in Chapter 16, the use case for these was straightforward. If your class needed data storage that didn't really describe an attribute of the object (and therefore shouldn't have a property), then you would just use an ivar.
Since you added the ivars to the class extension, only the class implemented in that file could use them. These ivars are considered private because they are only visible to the class they are coded in.
#### Visibility Levels
When you are planning on using inheritance with your object graph, you may want instance variables to have different levels of visibility. The term "visibility" refers to which other entities can access the variable. Instance variables can be private, protected, or public. When you want different visibility levels, you must code your instance variables in the interface, which should be located in the header file since this is the file that other classes will import.
##### Private Instance Variables
Private instance variables can only be used in the class they are coded in. To make an instance variable private, you would declare the instance variable in the class interface. Let's say you wanted to add some NSString objects to act as logs for all of your Project classes. To add a private log to be used in Project objects, you would do something like this in the Project interface:
#import <Foundation/Foundation.h>
@interface Project : NSObject {
@private NSString *log1;
}
@property(strong) NSString *name;
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
+(void)printTimeStamp;
@end
You must use the @private keyword to designate that log1 will have private visibility. This means you can use log1 in your Project methods, but not in your SpecialProject methods.
##### Protected Instance Variables
Instance variables with protected visibility can be accessed by methods in the class they are coded in as well as any derived classes. So, if you wanted to have an NSString log variable that can be used by Project and SpecialProject, you would need to make it protected by using the @protected keyword.
#import <Foundation/Foundation.h>
@interface Project : NSObject{
@private NSString *log1;
@protected NSString *log2;
}
@property(strong) NSString *name;
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
+(void)printTimeStamp;
@end
##### Public Instance Variables
Public instance variables are available to the class they are coded in and all derived classes. In addition, other objects can reference public instance variables directly.
Note
I am discussing public instance variables here for the sake of completeness, but it is generally not accepted practice to use instance variables in this way. Instead, you should define properties to return any values that you want to make available to other objects.
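As a sketch of the recommendation in the note above, the public log3 ivar could instead be kept private and exposed through a read-only property (the property name currentLog is just illustrative, not part of the original Project class):
```objective-c
#import <Foundation/Foundation.h>

@interface Project : NSObject {
    @private NSString *log3; // storage stays private
}
// Read-only accessor instead of a public instance variable.
@property (strong, readonly) NSString *currentLog;
@end

@implementation Project
// Return the private storage through the property accessor.
- (NSString *)currentLog {
    return log3;
}
@end
```
Other objects can then read the value with dot notation (someProject.currentLog) without reaching into the object's storage directly.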
To make an instance variable public, you must use the @public keyword.
#import <Foundation/Foundation.h>
@interface Project : NSObject{
@private NSString *log1;
@protected NSString *log2;
@public NSString *log3;
}
@property(strong) NSString *name;
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
+(void)printTimeStamp;
@end
To use this object, you would need to use the member-of operator, ->. The member-of operator is a traditional C operator that you can use to reference a member of a structure through a pointer. Here is an example of how you would do this:
#import <Foundation/Foundation.h>
#import "SpecialProject.h"
int main(int argc, const char * argv[]){
@autoreleasepool {
SpecialProject *sp = [[SpecialProject alloc]init];
NSString *tempLog = sp->log3;
NSLog(@"temp = %@", tempLog);
return 0;
}
}
# 19. Categories
## Categories Defined
Categories are used to extend classes without using inheritance. When you use a category, you can add properties and methods to a class without declaring a super class.
To define a category, you need to add an interface and implementation. You can do this by adding new header and code files or you can add the categories right in the code file where you are working.
### Category Example
As an example, let's say that you want to take the Project class that you defined in Chapter 16 and add a constructor method that would initialize a new Project object and assign a name at the same time. You could add code like this right in the main.m file where you are coding.
The first thing you would do is add the interface.
#import <Foundation/Foundation.h>
#import "Project.h"
@interface Project(ProjectExtension)
@end
int main(int argc, const char * argv[]){
@autoreleasepool {
return 0;
}
}
Category interfaces look similar to class interfaces, but they follow a slightly different format. These interfaces start with the @interface keyword and are followed by the original class name. Following the class name is the name of the category in parentheses.
You put forward declarations in the category interface. Since you want an initializer that sets the name for you, you would add something like this to your category interface:
#import <Foundation/Foundation.h>
#import "Project.h"
@interface Project(ProjectExtension)
-(id)initWithName:(NSString *)aName;
@end
int main(int argc, const char * argv[]){
@autoreleasepool {
return 0;
}
}
The next step is to implement the new method, and to do that you must code the category implementation. Category implementations follow the same pattern as the category interface.
#import <Foundation/Foundation.h>
#import "Project.h"
@interface Project(ProjectExtension)
-(id)initWithName:(NSString *)aName;
@end
@implementation Project (ProjectExtension)
@end
int main(int argc, const char * argv[]){
@autoreleasepool {
return 0;
}
}
Finally, you need to implement the new method just like you would for a class.
#import <Foundation/Foundation.h>
#import "Project.h"
@interface Project(ProjectExtension)
-(id)initWithName:(NSString *)aName;
@end
@implementation Project (ProjectExtension)
-(id)initWithName:(NSString *)aName{
self = [super init];
if (self) {
self.name = aName;
}
return self;
}
@end
int main(int argc, const char * argv[]){
@autoreleasepool {
return 0;
}
}
Now that you have the category set up, you can use this initializer to help you create and initialize new Project objects right inside the main function.
#import <Foundation/Foundation.h>
#import "Project.h"
@interface Project(ProjectExtension)
-(id)initWithName:(NSString *)aName;
@end
@implementation Project (ProjectExtension)
-(id)initWithName:(NSString *)aName{
self = [super init];
if (self) {
self.name = aName;
}
return self;
}
@end
int main(int argc, const char * argv[]){
@autoreleasepool {
Project *p = [[Project alloc] initWithName:@"ThisNewProject"];
NSLog(@"p.name = %@", p.name);
return 0;
}
}
If you were to build and run this project now, this would print out to the log:
p.name = ThisNewProject
# 20. Blocks
## Blocks Defined
Blocks are a way to define a block of code that you will use at a later time. Blocks are a lot like methods or functions in that they can take parameters and return a value. Sometimes people refer to blocks as anonymous functions because they are functions that aren't attached to an entity.
One thing that sets blocks apart is that they are coded in the same scope as the rest of your program so you can add a block without a class definition. Blocks have some other properties that make them very useful. While blocks don't need to be attached to objects, blocks can be treated as objects. This means that you can code a set of blocks and then store them in a data collection.
Blocks also copy all the variable values that are in scope where the block is declared. This feature gives the block the ability to capture state and then use that state in the future, even if the original variables are out of scope when the block is executed.
### Defining Blocks
As an example, let's code a block that will take a float number as a parameter and then return a squared float result. Call the block squareThis. The first thing you need to do is to declare the block. This follows a similar pattern to declaring a datatype or an object, but with some differences that allow you to use the block's function-like behavior.
float (^squareThis)(float);
The block declaration starts with the float return type. This means that when you use this block, you will get a float number returned to you.
The next part of the block declaration is the name of the block, squareThis, preceded by the caret (^) symbol. The caret and the name are enclosed in parentheses.
Finally, you have a list of parameter types enclosed in parentheses. If there is more than one parameter type, the list must be comma-separated. A semicolon ends the line of code.
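For example, a declaration for a block taking two parameters would use a comma-separated type list (the name addThese here is just for illustration):
```objective-c
// Returns a float; takes two float parameters.
float (^addThese)(float, float);
```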
### Assigning Blocks
You can use the assignment operator (=) to assign a block of code to the block you just declared. When you assign the block code to the block variable, you will need to use the caret and declare parameter names. You also need to include the code scoped with curly brackets.
squareThis = ^(float x){
return x * x;
};
This block will take the number supplied, multiply the number by itself, and then return the result.
### Using Blocks
You can call the block like a function when you are ready to execute the code. Here is how you would use squareThis:
float result = squareThis(4);
NSLog(@"result = %f", result);
This will print out the following output to the console log:
result = 16.000000
### Copying Scoped Variables
Blocks copy the variable values of every variable that is currently in scope where the block is declared. This means that blocks can save the state of the variables around them to use at a later time when the block is executed, whether or not the variable is still in scope.
Here is an example of a block called multiplyThese that takes two numbers, multiplies them, and returns a float result. This block requires two parameters, and you will define and assign the block at the same time. Notice that you also have a string defined near the multiplyThese block.
NSString *title = @"Multiply Block Execution";
float (^multiplyThese)(float, float) = ^(float x, float y){
NSLog(@"%@", title);
return x * y;
};
Before returning the result, the multiplyThese block will print out the string value it captured from the context. To use this block, you would do this:
NSLog(@"multiplyThese(3,4) = %f", multiplyThese(3,4));
This will produce the following output:
Multiply Block Execution
multiplyThese(3,4) = 12.000000
### Blocks as Properties
Even though blocks don't require an entity like a class, you can use blocks to make objects more flexible. By adding a block to an object as a property, you can provide an interface to give clients a way to inject custom behavior into your objects.
#### Block Forward Declaration
You can use blocks as properties since you treat blocks just like objects. To do this, you need to start in the class interface (either the public interface or the private class extension). Here is how you would add a customReport block to the Project class that you originally defined in Chapter 16:
#import <Foundation/Foundation.h>
@interface Project : NSObject{
@private NSString *log1;
@protected NSString *log2;
@public NSString *log3;
}
@property(strong) NSString *name;
@property (copy) void (^makeCustomReport)(NSString *title);
-(void)generateReport;
-(void)generateReportAndAddThisString:(NSString *)string
andThenAddThisDate:(NSDate *)date;
+(void)printTimeStamp;
@end
You need to use the copy property descriptor because you want the block and all its scoped variables to be copied and retained appropriately. The next thing you need to do as the class author is figure out when you want this block to execute, keeping in mind that you never know beforehand what code will be present in the block.
#### Use Blocks in a Class
For this example, the generateReport method seems like a good place to do this. You need to go to the Project class implementation and find the generateReport method to add this block call.
#import "Project.h"
@implementation Project
-(void)generateReport{
NSLog(@"This is a report!");
// Calling a nil block crashes, so check that it has been assigned first.
if (self.makeCustomReport) {
self.makeCustomReport(@"Custom Project Report Title");
}
}
@end
If you look at the call to makeCustomReport above, you will see that using a block property is just like using other blocks, except that you use the self keyword to reference the block.
#### Assigning Blocks
What is really powerful about this pattern is that you can set up a way to execute a behavior even if you don't know exactly what that behavior will be or if the behavior will change over time.
To get this to work, a client will need to assign the block and actually define that behavior for you. You can do this in the main function for your example.
Project *p =[[Project alloc]init];
p.makeCustomReport = ^(NSString* title){
NSLog(@"%@", title);
NSLog(@"This is a custom report requested by the author");
NSLog(@"Say This");
NSLog(@"Say That");
NSLog(@"Say The Other Thing");
};
[p generateReport];
You need to send the generateReport message for your example; when you do, you will get this output:
This is a report!
Custom Project Report Title
This is a custom report requested by the author
Say This
Say That
Say The Other Thing
# 21. Key-Value Coding
## Key-Value Coding Defined
Normally when you want to get or set property values in an object you use dot notation to get a reference to the property to change the value. However, with key-value coding you can store and retrieve property values indirectly using string keys. Applications that require archiving need this type of functionality so that apps can retrieve object data from archive files.
### Setting Property Values
You can use key-value coding to set property values. To set property values, you must use the setValue:forKey: message and provide the new property value and the NSString name of the property.
[p setValue:@"New Project" forKey:@"name"];
### Retrieving Property Values
To retrieve a property value using key-value coding, you can simply send the valueForKey: message to the object. This message requires the property name in NSString format as a parameter. Here is how you would retrieve the name property value from a Project object:
NSString *retrievedName = [p valueForKey:@"name"];
This works for any type of object including data collection objects and custom objects. You could now print this value out to the log using the NSLog function.
NSLog(@"retrievedName = %@", retrievedName);
If you do this, you will get the following output, assuming the Project name was New Project:
retrievedName = New Project
# 22. Key-Value Observation
## Key-Value Observation Defined
One of the applications of key-value coding is implementing the observer pattern. The observer pattern is used when you want an object to get a notification when the state of another object changes. This pattern is implemented with key-value observation in Objective-C.
To see a clear example of key-value observation, you need at least two objects. One object will be observed while the other object will be observing. For this example, let's assume that you have two types of objects: a Project object and a Task object. Project objects maintain a list of Task objects. The project object needs to be notified when the state of a Task object changes (when the task is marked as complete, for example).
## Project and Task Object Graph
Let's go over this object graph before implementing key-value observation here. Project has been simplified for this example and I've added a Task class definition. The object graph will get set up in the main.m file.
Here is the interface for the Project class:
#import <Foundation/Foundation.h>
#import "Task.h"
@interface Project : NSObject
@property(strong) NSString *name;
@property(strong) NSMutableArray *listOfTasks;
-(void)generateReport;
@end
Here is the implementation for the Project class:
#import "Project.h"
@implementation Project
-(void)generateReport{
NSLog(@"Report for %@ Project", self.name);
[self.listOfTasks enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
[obj generateReport];
}];
}
@end
Here is the interface for the Task class:
#import <Foundation/Foundation.h>
@interface Task : NSObject
@property(strong) NSString *name;
@property(assign) BOOL done;
-(void)generateReport;
@end
Here is the implementation for the Task class:
#import "Task.h"
@implementation Task
-(void)generateReport{
NSLog(@"Task %@ is %@", self.name, self.done ? @"DONE" : @"IN PROGRESS");
}
@end
Finally, you set up the object graph in main.m like this:
#import <Foundation/Foundation.h>
#import "Project.h"
int main(int argc, const char * argv[]){
@autoreleasepool {
Project *p = [[Project alloc]init];
p.listOfTasks = [[NSMutableArray alloc]init];
p.name = @"Cook Dinner";
Task *t1 = [[Task alloc]init];
t1.name = @"Choose Menu";
[p.listOfTasks addObject:t1];
Task *t2 = [[Task alloc]init];
t2.name = @"Buy Groceries";
[p.listOfTasks addObject:t2];
Task *t3 = [[Task alloc]init];
t3.name = @"Prepare Ingredients";
[p.listOfTasks addObject:t3];
Task *t4 = [[Task alloc]init];
t4.name = @"Cook Food";
[p.listOfTasks addObject:t4];
return 0;
}
}
This is going to give you a Project object named Cook Dinner with four tasks. Now you are ready to implement key-value observation.
## Implementing Key-Value Observation
You want Project objects to be notified when the state of their Task objects changes. So, if a Task object gets marked as complete, then the Project object will be notified. There are three steps to using key-value observation:
* Send the addObserver:forKeyPath:options:context: message to each object that is being observed.
* Override the method observeValueForKeyPath:ofObject:change:context: in the class definition of the object that is doing the observing.
* Override the dealloc method and remove the observer reference in the class definition of the object that is observing.
## Add the Observer
The easiest way to send this message to each Task that Project is responsible for is to use the listOfTasks enumeration method:
[p.listOfTasks enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
[obj addObserver:p
forKeyPath:@"done"
options:NSKeyValueObservingOptionNew
context:nil];
}];
This code can be located in the main.m file after all the Task objects have been added to the Project object. The first parameter of addObserver:forKeyPath:options:context: is the object that will be observing. The next parameter is the key path, which is where you put the key for the property that you want to observe. Next you have some options that you can set; you can choose NSKeyValueObservingOptionNew to keep track of new changes to the property value.
Note
You could have chosen NSKeyValueObservingOptionOld to get the previous values instead of the new values like you did above.
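As a sketch, you could even request both values by combining the options with a bitwise OR, and then read the @"old" entry from the change dictionary alongside the @"new" entry:
```objective-c
[obj addObserver:p
      forKeyPath:@"done"
         options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld
         context:nil];

// Later, inside observeValueForKeyPath:ofObject:change:context:
// NSNumber *previousStatus = [change objectForKey:@"old"];
```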
### Observing Value Changes
To receive a notification when a property value has changed, the object that is observing needs to override a method in the object's class definition. This is where you locate the code you need to respond to the change. Here is an example of how you would do this in the Project.m implementation file:
#import "Project.h"
@implementation Project
-(void)observeValueForKeyPath:(NSString *)keyPath
ofObject:(id)object
change:(NSDictionary *)change
context:(void *)context{
if([keyPath isEqualToString:@"done"]){
NSNumber *updatedStatus = [change objectForKey:@"new"];
BOOL done = [updatedStatus boolValue];
NSLog(@"Task '%@' is now %@", [object name], done ? @"DONE" : @"IN PROGRESS");
}
}
@end
Note
The code above is added to the code that you added in the beginning of this chapter.
Whenever a Task object that you are watching changes its status, this method will execute. When this happens, you first test to make sure that the property you are expecting (done) is the property that changed. You need to do this because this method can be shared for all sorts of notifications.
In the next line of code, you pull out the NSNumber version of your property from the supplied NSDictionary changes before converting this to the BOOL type that you are expecting. Finally, you write out a report to the console log, reporting the updated status of the task.
Before you test this code, you need to clean up after yourself.
### De-Registering Observers
You need to make sure that the observer object goes through and stops observing each object since the observer object will soon be deleted. The place to perform this task is in the dealloc method. Every object has a dealloc method that executes before the object is removed from a program, so this is a good place to do this type of cleanup work.
Here is how you would code the dealloc method in the Project.m implementation file:
-(void)dealloc{
[self.listOfTasks enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
[obj removeObserver:self
forKeyPath:@"done"];
}];
}
This will enumerate through the listOfTasks array and remove the observer from each object.
### Testing the Observer
To test this, just change the done property in some of the Task objects. Each time you do this, you will see the report written out to the log. For example, let's say you did this in main.m:
t4.done = YES;
t4.done = NO;
t2.done = YES;
t1.done = NO;
Your Project object would be notified each time this state is changed, which would produce this output to the log:
Task 'Cook Food' is now DONE
Task 'Cook Food' is now IN PROGRESS
Task 'Buy Groceries' is now DONE
Task 'Choose Menu' is now IN PROGRESS
# 23. Protocols
## Protocols Overview
Protocols are used to define a set of methods and properties independently of a class. Any class can adopt a protocol, which means that the class implements the properties and methods defined by the protocol. In effect, protocols define a contract that classes can agree to fulfill. When a class adopts a protocol, you can be confident that the class will have implemented the properties and methods in the protocol.
### Defining Protocols
To use protocols, you must start by defining the protocol. Use the @protocol keyword to start defining the protocol. You can simply include this in the same file as the class that the protocol is associated with or you can include the protocol in a separate header file. If you want to define a protocol for the Task class that you coded in the previous chapter, you could do this:
#import <Foundation/Foundation.h>
@class Task;
@protocol TaskDelegate <NSObject>
@optional
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done;
@end
@interface Task : NSObject
@end
The protocol name follows the @protocol keyword, and the name in the angle brackets is a protocol that your protocol inherits from. NSObject here refers to the NSObject protocol (distinct from the class of the same name), which declares the basic behavior every object is expected to support. Inheriting a protocol is like inheriting a class, but the implication is that classes adopting your protocol become responsible for the methods you define in addition to the methods declared in the inherited protocol.
The protocol definition ends with the @end keyword. The methods and properties between @protocol and @end make up the contract that a class takes on when it adopts the protocol.
Note
You needed the @class keyword above because the protocol references the Task class before the class is defined below. @class is a forward declaration: it lets you reference a class without importing its full interface.
#### Optional and Required Methods and Properties
Protocol methods are required by default. However, you can specify methods to be optional. Optional methods are used when the functionality is present but not crucial. To mark methods as optional, use the @optional keyword. Every property and method that appears after the @optional keyword will be considered optional.
Use the @required keyword to mark methods and properties as required. Every method and property that follows it will be considered required.
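For example, a protocol can mix both sections. This is a sketch of a variant of the TaskDelegate protocol; the rename method is hypothetical, added only to illustrate the two keywords:

```objc
@protocol TaskDelegate <NSObject>
@required
// adopters must implement this method
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done;
@optional
// adopters may implement this method (hypothetical, for illustration)
-(void)thisTaskWasRenamed:(Task *)task;
@end
```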
### Adopting Protocols
You indicate that a class will adopt a protocol by including the protocol name in angle brackets after the superclass in the class interface. If a class adopts more than one protocol, then you must provide a comma-separated list of protocol names in the angle brackets.
If you want Project to adopt the TaskDelegate protocol, you would go to the Project interface and adopt the protocol, like this:
#import <Foundation/Foundation.h>
#import "Task.h"
@interface Project : NSObject <TaskDelegate>
@property(strong) NSString *name;
@property(strong) NSMutableArray *listOfTasks;
-(void)generateReport;
@end
Once you adopt the TaskDelegate protocol, you are agreeing to implement its methods and properties. If the protocol declared any required methods that you had not yet implemented, building the project at this point would produce a compiler warning. The next step is to implement the method defined in TaskDelegate.
### Implementing Protocol Methods
You implement protocol methods just as you would any other methods. You implement the protocol methods in the implementation of the class that adopted the protocol.
For this example, you would go to the Project class implementation and add this method:
#import "Project.h"
@implementation Project
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done{
NSLog(@"Task '%@' is now %@", task.name, done ? @"DONE" : @"IN PROGRESS");
}
@end
You can't test this code yet because you still need to add code to give Task objects the ability to use this protocol. This is part of the Delegation design pattern that is covered in the next chapter.
# 24. Delegation
## Delegation Defined
Delegation is a design pattern where one object asks another object for help. Protocols are an important part of Delegation, because protocols define how an object will be helped.
Delegation works by defining a protocol that will list out all the methods and properties an object will need help with. Another object, known as the delegate, will provide the help needed by adopting and implementing the protocol methods. Objects ask for help by sending messages to their delegates.
### Defining Delegate Protocols
Let's say you want to implement Delegation for your object graph that includes the Project object and Task objects. In your object graph, your Task objects may need help from the Project object. For instance, when a Task status is marked as Done, the task may not know what to do next. The task could ask the Project for help if the Project was capable of acting as the Task's delegate.
To make that happen, you would need to first define a protocol for Task that defined the ways that Task would need help. Luckily for you, you already did that in the previous chapter.
#import <Foundation/Foundation.h>
@class Task;
@protocol TaskDelegate <NSObject>
@optional
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done;
@end
@interface Task : NSObject
@property(strong) NSString *name;
@property(assign) BOOL done;
-(void)generateReport;
@end
The protocol is called TaskDelegate (because you are using it to define how delegates can help you). thisTask:statusHasChangedToThis: is the method that delegates can implement to help you.
### Delegate References
Objects that need help (Task objects in your example) need to maintain a reference to their delegate. You can reference the delegate by adding a property, usually called delegate, of type id. The id type is followed by the protocol name in angle brackets, indicating that the property can hold any object as long as that object adopts the protocol.
#import <Foundation/Foundation.h>
@class Task;
@protocol TaskDelegate <NSObject>
@optional
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done;
@end
@interface Task : NSObject
@property(strong) NSString *name;
@property(assign) BOOL done;
@property(assign) id<TaskDelegate> delegate;
-(void)generateReport;
@end
You use the assign property descriptor because you don't want Task objects to have a strong reference to the delegate object. Since the Project already holds its tasks strongly, a strong reference back from each Task to the Project would create a retain cycle, and neither object could ever be deallocated.
### Sending Messages to the Delegate
When Task objects need help, they can send messages to the delegate. Since you want this to happen when the Task status is changed, you can do this by writing a custom property accessor for the Task done property.
Note
In Chapter 16, you declared properties and allowed them to be supported by automatically generated getters and setters. There are some situations where you want to code your own getters and setters.
You can send the message right in the Task done setter method.
#import "Task.h"
@implementation Task {
BOOL _done; // explicit backing ivar: once you implement both accessors, the compiler no longer synthesizes one
}
-(void)generateReport{
NSLog(@"Task %@ is %@", self.name, self.done ? @"DONE" : @"IN PROGRESS");
}
-(void)setDone:(BOOL)done{
_done = done;
[self.delegate thisTask:self statusHasChangedToThis:done];
}
-(BOOL)done{
return _done;
}
@end
In the setter, you can see that you are sending the message to the delegate. Your delegate can implement this method to respond to the event of the done property changing.
The other thing you may notice is that you have coded the getter as well. Even though you don't need to add any new code to the getter, you must manually implement both the getter and the setter when you decide to implement one or the other.
### Assigning the Delegate
The next step is to assign the delegate. In this example, this is something that can be done in the main.m file. You can do this by simply assigning the project to each task's delegate property.
Project *p = [[Project alloc]init];
p.listOfTasks = [[NSMutableArray alloc]init];
p.name = @"Cook Dinner";
Task *t1 = [[Task alloc]init];
t1.name = @"Choose Menu";
t1.delegate = p;
[p.listOfTasks addObject:t1];
Task *t2 = [[Task alloc]init];
t2.name = @"Buy Groceries";
[p.listOfTasks addObject:t2];
t2.delegate = p;
Task *t3 = [[Task alloc]init];
t3.name = @"Prepare Ingredients";
[p.listOfTasks addObject:t3];
t3.delegate = p;
Task *t4 = [[Task alloc]init];
t4.name = @"Cook Food";
[p.listOfTasks addObject:t4];
t4.delegate = p;
Now when you assign a different value to a task's done property, the Project delegate will be notified. In the last chapter, you already adopted the TaskDelegate protocol and implemented the protocol method -(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done;.
When you set a done property on a Task object like this
t4.done = YES;
the protocol method you implemented in the Project implementation, thisTask:statusHasChangedToThis:, will execute. Remember, this is what you coded in the last chapter:
-(void)thisTask:(Task *)task statusHasChangedToThis:(BOOL)done{
NSLog(@"Task '%@' is now %@", task.name, done ? @"DONE" : @"IN PROGRESS");
}
This method will generate this output in the console log:
Task 'Cook Food' is now DONE
# 25. Singleton
## Singleton Defined
Singleton is a design pattern where you can have only one instance of a class. Usually, when you define a class, you expect to use many instances of the class. But in some designs this doesn't make sense.
For instance, an application may only need one reference to the file system (since there is only one file system). Or the app has a data model that should stay in sync and so you want to make sure you have only one instance of a class available.
To implement a Singleton pattern, you will need to create a special type of constructor and then only use this constructor to get a reference to the Singleton object.
### Singleton Interface
The first step is to code the interface. Let's assume that you are creating a class AppSingleton that will be your singleton. Here is how you would code the interface:
#import <Foundation/Foundation.h>
@interface AppSingleton : NSObject
+ (AppSingleton *)sharedInstance;
@end
What you have above is a class method that returns an instance of AppSingleton.
### Singleton Implementation
To implement this singleton, you need a static variable and the implementation of the class method sharedInstance that you defined in the interface.
#import "AppSingleton.h"
@implementation AppSingleton
static AppSingleton *singletonInstance = nil;
+ (AppSingleton *)sharedInstance{
@synchronized(self){
if (singletonInstance == nil)
singletonInstance = [[self alloc] init];
return(singletonInstance);
}
}
@end
The static variable is an AppSingleton pointer named singletonInstance, initially set to nil. In the sharedInstance method, you test singletonInstance; if it is nil, you create the one and only instance. Either way, you return this instance to the caller.
The code in this method is surrounded by the @synchronized(self) block. This is used to lock the code so that only one thread can use these lines of code at a time. This ensures that you only have one instance of this singleton even when you have more than one thread.
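An alternative worth knowing: Grand Central Dispatch's dispatch_once function gives you the same one-time, thread-safe initialization without taking a lock on every call. This is a sketch of the same sharedInstance method written that way:

```objc
+ (AppSingleton *)sharedInstance{
    static AppSingleton *singletonInstance = nil;
    static dispatch_once_t onceToken;
    // dispatch_once runs the block exactly once for the lifetime of the app,
    // even if several threads call sharedInstance at the same time
    dispatch_once(&onceToken, ^{
        singletonInstance = [[self alloc] init];
    });
    return singletonInstance;
}
```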
### Referencing Singletons
When you need a singleton object, you must use the method that returns the instance. This will be the method that you coded.
If you wanted to use the AppSingleton class, you would do this:
AppSingleton *ap = [AppSingleton sharedInstance];
This is used in place of the alloc and init pattern normally used to create objects. You can do this from any class or file in your app, and each caller will get the same instance of the class. The only caveat is that you must always go through the Singleton constructor; alloc and init still work, but using them directly breaks the pattern because they can create additional instances, while the constructor you wrote only ever creates one.
# 26. Error Handling
## Error Handling Defined
When programs encounter unexpected errors, they behave unpredictably or stop working altogether. Ideally, programmers would find and fix all bugs before a program is used, but there are situations where the programmer doesn't control the entire environment. For instance, errors can happen when a program requires resources, like files or web sites, that are no longer present.
The best practice for dealing with situations like this is to add error handling to the program. This means that when an error occurs, the program can recover or shut down gracefully. NSError is the Foundation class that programmers use to deal with errors.
### NSError
One place where NSError is used frequently is when you are working with operations involving files. Many Foundation classes use NSError objects to help with error handling. The pattern is that you declare the NSError object and set it to nil before passing it by reference. Here is how this works with the NSString method that creates a string from the contents of a file:
NSError *error = nil;
NSString *file = @"/Users/Shared/array.txt";
NSString *content = [NSString stringWithContentsOfFile:file
encoding:NSUTF8StringEncoding
error:&error];
The & symbol that you see in front of the error parameter is the address-of operator. You are passing the address of your error pointer rather than a copy, so when the method needs to report a failure it can assign a new NSError object to your variable and you will see the result.
To check the error object, you would include code like this right after the message:
if(!error)
NSLog(@"content = %@", content);
else
NSLog(@"error = %@", error);
If there is no error, then do something with the content; otherwise, deal with the error. If this code is successful, then you would get this printed out to the log:
content = <?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
<string>A</string>
<string>B</string>
<string>C</string>
<string>D</string>
</array>
</plist>
This file is something I had on my Mac, but the actual content doesn't matter. If you changed the filename to one that doesn't exist, you would get the error message instead, and it would look like this:
error = Error Domain=NSCocoaErrorDomain Code=260 "The file "arrayf.txt" couldn't be opened because there is no such file." UserInfo=0x10010a8a0 {NSFilePath=/Users/Shared/arrayf.txt, NSUnderlyingError=0x10010a650 "The operation couldn't be completed. No such file or directory"}
This is when you would use the app user interface to prompt the user for assistance.
### Try/Catch Statements
Try/catch statements are another way to handle errors. The idea is that you identify areas of code that are error-prone and wrap them in a block, called the try block. You also provide a block of code, called the catch block, that executes if the code in the try block raises an exception.
You can also set up a block of code called the finally block that will execute regardless of whether the try block fails or not. Let's try this by reading in an array file that only has four elements and then attempt to read a fifth element that would be out of bounds.
NSArray *array = [NSArray arrayWithContentsOfFile:file];
@try {
NSString *fifthItem = [array objectAtIndex:4];
NSLog(@"fifthItem = %@", fifthItem);
}
@catch (NSException *exception) {
NSLog(@"exception = %@", exception);
}
@finally {
NSLog(@"Moving on...");
}
If you execute this code, the out-of-bounds objectAtIndex: call raises an exception, control goes to the catch block, and you get this printed out to the log:
exception = *** -[__NSArrayM objectAtIndex:]: index 4 beyond bounds [0 .. 3]
Moving on...
The text "Moving on..." appears because that is part of the finally block and will execute no matter what happens.
# 27. Background Processing
## Background Processing Defined
When your program needs to do more than one thing at a time, you can use background processing. Background processing in Objective-C is done with a Foundation class called NSOperationQueue.
NSOperationQueue manages lists of operations and decides how to schedule the resources needed to run an operation. Operations are blocks of code. Operations can execute simultaneously or one at a time.
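Whether operations execute simultaneously or one at a time is controlled by the queue's maxConcurrentOperationCount property. A minimal sketch:

```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1; // operations now run one at a time (a serial queue)
// with the default value, the queue itself decides how many operations run concurrently
```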
Let's say that you want to count to 10,000 while printing this out to the log. To do this, you would code something like the following:
for (int y=0; y<=10000; y++) {
NSLog(@"y = %i", y);
}
This works fine. If you run this in an app, you will see a long list of y values printed out to the console log.
y = 0
. . .
y = 9998
y = 9999
y = 10000
Now, let's say you also want to count backwards from 20,000 to 0. If you just code another loop, then you would have to wait for the 10,000 count to complete before moving on to the 20,000 count. But if you use a background queue, you can do both tasks at once in potentially the same amount of time.
Here is how you could set this up using a background queue:
NSOperationQueue *background = [[NSOperationQueue alloc] init];
[background addOperationWithBlock:^{
for (int i1=20000; i1>0; i1--) {
NSLog(@"i1 = %i", i1);
}
}];
for (int y=0; y<=10000; y++) {
NSLog(@"y = %i", y);
}
You create the queue with alloc and init and then use the addOperationWithBlock: message to add a code block to the queue. If you run this code, the two counts interleave, and you will get something like this printing out to the log:
i1 = 20000
y = 0
i1 = 19999
y = 1
i1 = 19998
y = 2
y = 3
i1 = 19997
i1 = 19996
. . .
y = 9997
i1 = 10001
y = 9998
y = 9999
i1 = 10000
i1 = 9999
y = 10000
# 28. Object Archiving
## Object Archiving Defined
Saving a copy of your app's object graph to be used later on as a backup is called object archiving. Objective-C has classes that can help you archive your object graph. Each class that supports archiving must adopt the NSCoding protocol and implement two methods needed by the NSKeyedArchiver and NSKeyedUnarchiver classes.
### NSCoding
Each class that will support archiving must adopt the NSCoding protocol and implement the required methods. These methods will help the archivers to store the property values stored in the objects that need to be archived.
To adopt the NSCoding protocol, add the NSCoding protocol name in angle brackets to the class interface. Here is how you would do this for the Task class that you already coded:
#import <Foundation/Foundation.h>
@interface Task : NSObject <NSCoding>
@property(strong) NSString *name;
@property(assign) BOOL done;
-(void)generateReport;
@end
The angle brackets after the superclass show the adopted protocol, NSCoding. Now you need to implement two protocol methods. The first method is called the encoder because it will be used to encode the property values into an archive file.
#import "Task.h"
@implementation Task
-(void)generateReport{
NSLog(@"Task %@ is %@", self.name, self.done ? @"DONE" : @"IN PROGRESS");
}
-(void)encodeWithCoder:(NSCoder *)encoder {
[encoder encodeObject:self.name forKey:@"namekey"];
[encoder encodeBool:self.done forKey:@"donekey"];
}
@end
Each property that you want to save in the archive needs to have an encode message sent along with a key. Different data types require different messages: objects take the encodeObject:forKey: message, while Booleans take the encodeBool:forKey: message. See Apple's NSCoder documentation for a complete listing of the available methods.
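A few more of the type-specific NSCoder messages, applied here to hypothetical priority and hoursSpent properties that are not part of this book's Task class:

```objc
// hypothetical properties, shown only to illustrate the type-specific encoders
[encoder encodeInteger:self.priority forKey:@"prioritykey"]; // NSInteger
[encoder encodeDouble:self.hoursSpent forKey:@"hourskey"];   // double
```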
Next, you need to implement the decoder method. This method is a type of initializer that restores the saved property values to the object.
#import "Task.h"
@implementation Task
-(void)generateReport{
NSLog(@"Task %@ is %@", self.name, self.done ? @"DONE" : @"IN PROGRESS");
}
-(void)encodeWithCoder:(NSCoder *)encoder {
[encoder encodeObject:self.name forKey:@"namekey"];
[encoder encodeBool:self.done forKey:@"donekey"];
}
-(id)initWithCoder:(NSCoder *)decoder {
self = [super init];
if (self) {
self.name = [decoder decodeObjectForKey:@"namekey"];
self.done = [decoder decodeBoolForKey:@"donekey"];
}
return self;
}
@end
Each key and property in this method must match the ones in the encodeWithCoder: method. At this point, Task now supports NSCoding. To see an example, you also need to add NSCoding support to Project. Here is what you would do to the Project interface:
#import <Foundation/Foundation.h>
#import "Task.h"
@interface Project : NSObject <NSCoding>
@property(strong) NSString *name;
@property(strong) NSMutableArray *listOfTasks;
-(void)generateReport;
@end
Here is what you would do to the Project implementation:
#import "Project.h"
@implementation Project
-(void)generateReport{
NSLog(@"Report for %@ Project", self.name);
[self.listOfTasks enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
[obj generateReport];
}];
}
-(void)encodeWithCoder:(NSCoder *)encoder {
[encoder encodeObject:self.name forKey:@"namekey"];
[encoder encodeObject:self.listOfTasks forKey:@"listOfTaskskey"];
}
-(id)initWithCoder:(NSCoder *)decoder {
self = [super init];
if (self) {
self.name = [decoder decodeObjectForKey:@"namekey"];
self.listOfTasks = [decoder decodeObjectForKey:@"listOfTaskskey"];
}
return self;
}
@end
### Using the Archiver
Assuming that you have an object graph already set up, it's easy to use the archiver. The class you use is called NSKeyedArchiver, and you just need to send the archiveRootObject:toFile: message. The two parameters are the root object of the object graph and the filename you want to use. Your root object is the Project object p, because the single p contains all the Task objects.
Let's assume that you still have the object graph created in Chapter 24 that listed out all the tasks for p. If you want to archive this, you could do this in the main.m file.
NSString *file = @"/Users/Shared/project.dat";
[NSKeyedArchiver archiveRootObject:p toFile:file];
This will create a data file on your Mac. In the future, you can read this file back in and use it to restore the object graph in your app.
NSString *file = @"/Users/Shared/project.dat";
Project *p = [NSKeyedUnarchiver unarchiveObjectWithFile:file];
[p generateReport];
Since you sent the generateReport message here, you would get a printout of the object graph in your console log.
Report for Cook Dinner Project
Task Choose Menu is IN PROGRESS
Task Buy Groceries is IN PROGRESS
Task Prepare Ingredients is IN PROGRESS
Task Cook Food is IN PROGRESS
# 29. Web Services
## Web Services Defined
Many companies like Facebook and Twitter make their services available to users via web sites. Often these services are also available to developers to use in their apps; these are called web services. Web services are functions and content that reside on a web server that you can use via a well-defined set of rules called an API (Application Programming Interface).
The general pattern for working with web services is to formulate a request, send the request, receive the response, and then interpret the response. Objective-C comes with support for web services. To send requests and receive responses, you can use the NSURLSession class with the NSData class. To interpret, or parse, the response, you can use the NSJSONSerialization class.
Note
JSON stands for JavaScript Object Notation and is used for data storage and transportation. Web services that are implemented as REST (Representational State Transfer) web services will provide JSON response data. The NSJSONSerialization class makes working with JSON easier in Objective-C.
### Bitly Example
Bitly is a good example of a web service that I like to use as a demonstration because it is pretty simple and provides a very clear function. Bitly takes a long URL (the string that you type into a web browser) and turns it into a short URL that is easier to type. I am going to use the bitly web service to showcase the NSURLSession class.
Note
To follow along with this recipe, you will need to create a free account with bitly and get your own API key and API login. Go to https://bit.ly to create your account if you wish to follow along with this example. In the examples, when I include [YOUR API LOGIN] or [YOUR API KEY], you will need to substitute the login and key that you obtained from bitly.
### Formulate Request String
When you work with a web service, use the documentation provided by the company that published the web service as a reference. This documentation will give you a string and the parameters that you can use. You are going to use the string from the API documentation (http://api.bit.ly/shorten?version=2.0.1&longUrl=&login=&apiKey=&format=json) as a starting point, along with the bitly login, bitly key, and a long URL as parameters, to formulate your request string.
NSString *APILogin = @"[YOUR API LOGIN]";
NSString *APIKey = @"[YOUR API KEY]";
NSString *longURL = @"https://mobileappmastery.com";
NSString *requestString = [NSString stringWithFormat:@"http://api.bit.ly/shorten?version=2.0.1&longUrl=%@&login=%@&apiKey=%@&format=json", longURL, APILogin, APIKey];
### Create the Session and URL
You are going to need two objects: an NSURL object to represent the request URL that you are sending to the server and an NSURLSession object to do the web work for you.
NSURL *requestURL = [NSURL URLWithString:requestString];
NSURLSession *session = [NSURLSession sharedSession];
### Send and Receive the Response
You are going to use a block with the NSURLSession object to ask the web service to shorten the URL. Put all the code that you need to work with the response in the block that you pass as a parameter. This one method both sends the request and hands the response to your block.
[[session dataTaskWithURL:requestURL
completionHandler:^(NSData *data,
NSURLResponse *response,
NSError *error) {
}] resume];
sleep(60);
You still need to fill in the block where you handle the response, but this will start the action. Notice the sleep function at the end of this code. sleep stops new code from executing on the main thread for 60 seconds. You need this because the data task executes on a background thread (the best practice when using web services); if the command-line app were allowed to finish immediately, the web service wouldn't have enough time to fetch the results for you.
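If you would rather not guess at a sleep duration, a dispatch semaphore can block the main thread exactly until the completion handler has run. This is a sketch of the same call using that approach:

```objc
dispatch_semaphore_t finished = dispatch_semaphore_create(0);
[[session dataTaskWithURL:requestURL
        completionHandler:^(NSData *data,
                            NSURLResponse *response,
                            NSError *error) {
    // ... handle the response here ...
    dispatch_semaphore_signal(finished); // wake the waiting main thread
}] resume];
// block until the completion handler signals, instead of sleeping a fixed 60 seconds
dispatch_semaphore_wait(finished, DISPATCH_TIME_FOREVER);
```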
#### Parsing JSON
Inside the completion block, you can add the code used to interpret the response from the web server. Since you know that you are going to be working with JSON, you will use the NSJSONSerialization class. You need an NSError object here as well as the NSData object supplied by the block (this contains the data from the web service response).
[[session dataTaskWithURL:requestURL
completionHandler:^(NSData *data,
NSURLResponse *response,
NSError *error) {
NSError *e = nil;
NSDictionary *bitlyJSON = [NSJSONSerialization JSONObjectWithData:data
options:0
error:&e];
}] resume];
This gives you all the JSON data organized in an NSDictionary collection. This dictionary can have other dictionaries, arrays, numbers, and strings located inside it. The next step is a process of going through all these returned objects to locate what you need. You also need to test for errors here.
[[session dataTaskWithURL:requestURL
completionHandler:^(NSData *data,
NSURLResponse *response,
NSError *error) {
NSError *e = nil;
NSDictionary *bitlyJSON = [NSJSONSerialization JSONObjectWithData:data
options:0
error:&e];
if(!error && !e){
NSDictionary *results = [bitlyJSON objectForKey:@"results"];
NSDictionary *resultsForLongURL = [results objectForKey:longURL];
NSString *shortURL = [resultsForLongURL objectForKey:@"shortUrl"];
NSLog(@"shortURL = %@", shortURL);
}
else
NSLog(@"There was an error fetching or parsing the JSON");
}] resume];
Once this is all set up, if you run your app, you will have retrieved the shortURL from the response and printed the following out to your console log:
shortURL = http://bit.ly/1fHrAsT
Note
When you are parsing a web service response like this, you will need to investigate where the important data is by looking at the API documentation or viewing the string that is returned.
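For instance, the parsing code above assumes a version 2.0.1 shorten response shaped roughly like this (the values here are illustrative, not an actual bitly response):

```json
{
  "errorCode": 0,
  "statusCode": "OK",
  "results": {
    "https://mobileappmastery.com": {
      "shortUrl": "http://bit.ly/..."
    }
  }
}
```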
Matthew Campbell, Objective-C Quick Syntax Reference, 10.1007/978-1-4302-6488-0, © Apress 2013
Index
A
Alloc and init method
AppendString
Application Programming Interface (API)
Arithmetic operators
floating point numbers
operator precedence
types
Arrays
count property
mutable string
NSArray class
NSArray enumerateObjectsUsingBlock method
numbers array
NSMutableArray
NSNumberFormatter
NSNumber objects
program
spelled-out string
Assignment operator (=)
B
Background process
alloc and init functions
for statement
NSOperationQueue
Bitly
Blocks
assignment operator (=)
copy property
declaration
definition
generateReport method
main function
multiplyThese block
Project class
self keyword
squareThis function
Boolean types
Boolean variable
Build and run
building definition
bundle
buttons
compiling code
console log's
product build options
Bundle
C
Categories
category interfaces
(see Class interface)
definition
@interface keyword
main function
main.m file
method implementation
Project class
CGFloat data type
Classes
Class interface
base class
definition
foundation framework
implementation
class extension
@end and @implementation keyword
instance variable/ivar
method implementation
@interface line and @end line
Project class
property descriptor
void return type
Class methods
alloc function
console log
instance methods
printTimeStamp method
project implementation
project object
sample program
Command Line Tool
Compilers
See also Build and run
Console log
Constructors
alloc function
init function
new keyword
Curly brackets {}
D
Decrement operator (--)
Delegation
definition
project assignment
protocol
references
TaskDelegate
Task done property, sending messages
thisTask:statusHasChangedToThis method
De-registering observers
Dictionaries
NSDictionary
enumeration
NSDictionary class
objectForKey
@ symbol
NSMutableDictionary
add/remove items
alloc patterns
init patterns
Do keyword
Do While loops
and Array
counter variable
do keyword
for loop
sample program
E
Else Keyword
Error handling
NSError
try/catch statements
F, G
For-each loops
array (numbers)
collection objects
NSDictionary
output
sample program
For loop
and arrays
count property
NSNumberFormatter
NSNumber objects
spelled-out string
curly brackets
increment (i++)
usage
Format specifier
H
Hello World
auto release pool
build and run
code comments
Command Line Tool projects
#import statement
NSLog function
NSString object
project creation
Code editor and project navigator
Xcode welcome screen
Project Navigator
Xcode downloading steps
I
If statements
Boolean variable
definition
else keyword
nested if else statement
Implementation
class extension
@end and @implementation keyword
instance variable/ivar
method implementation
Increment operator (++)
Inheritance
class extension
instance variables
member of operator
private instance variables
@private keyword
protected visibility
public instance variable
visibility levels
method overriding
subclass creation
Instance methods
Instance variables
member of operator
private instance variables
protected visibility
public instance variable
visibility levels
Integer types
NSIntegers
NSUIntegers
Integrated development environment (IDE)
Interface
J
JSON
errors
NSData object
NSDictionary collection
K
Key-value coding
definition
retrieve property value
set property values
Key-value observation
definition
implementation
observer
adding
de-registering observers
testing
value changes
project and task object graph
L
LLVM compiler
Logical operators
M
Method overriding
N
Nested If Else statement
NSArray class
NSArray enumerateObjectsUsingBlock method
numbers array
NSCoding protocol
decoder method
encoder method
project implementation
Task class
NSDictionary
enumeration
key, objects list
NSDictionary class
objectForKey
output
sample program
@ symbol
NSError
NSInteger variable
NSLiteralSearch
NSLog
NSMutableArray
NSMutableDictionary
add/remove items
alloc patterns
init patterns
NSMutableString
atIndex parameter
deleteCharactersInRange
find function
insertion
NSMutableString
replace function
NSNumber
converting strings into numbers
NSNumberFormatter
primitive data types
NSNumberFormatter
NSObject class
NSString constructor
O
Object archiving
console log
definition
NSCoding protocol
decoder method
encoder method
project implementation
Task class
root object
ObjectForKey
Object-oriented programming
Objects
constructors
alloc function
init function
new keyword
declaration
definition
format specifier
messages
NSFileManager object
removeItemAtPath:error
NSObject class
Operators
arithmetic operators
floating point numbers
operator precedence
types
assignment operators (=)
decrement operator (--)
definition
increment operator (++)
logical operators
relational operators
P, Q
primitive data types
PrintTimeStamp method
Protocols
adoption
definition
optional methods
methods implementation
R
Relational operators
RemoveItemAtPath:error
S
Singleton
alloc and init function
AppSingleton class
definition
implementation
interface
@synchronized (self) block
SquareThis function
Strings
NSMutableString
atIndex parameter
deleteCharactersInRange
find function
insertion
NSMutableString
replace function
NSString class
NSString constructor
NSString objects
Switch statements
break keyword
case keyword
default case
multiple case statements
NSInteger variable
sample program
switch keyword
usage
T
Try/catch statements
U
URL (NSURL object)
V
Variables
assignment operator (=)
boolean types
CGFloat data type
curly brackets {}
data types
declaration
definition
integer types
NSIntegers
NSUIntegers
W
Web services
API
Bitly
definition
JSON
errors
NSData object
NSDictionary collection
request string
sleep function
URL creation
While loops
Arrays
counter variable
curly brackets
ending condition
program
X, Y, Z
Xcode
downloading steps
Hello World project
(see Hello World)
integrated development environment (IDE)
Seeking Competent Tow Truck Companies Near College Station Texas?
Nothing is more frustrating than losing time and money when your car is stuck on the streets around College Station Texas and needs a tow. Whenever that happens, having quick, skilled, and affordable Tow Truck Companies just a call away makes a huge difference to your everyday work and profitability.
• Multi-Truck Competency - Imagine contacting a towing organization in College Station Texas only to learn they can't move your model of car or truck. BDS Towing can tow any type of truck!
• Expertise In Big 18-Wheelers - When it comes to finding Tow Truck Companies that can tow 18-wheelers or any category of large truck, our personnel and know-how are unparalleled near College Station Texas!
• Around The Clock Availability - It is a great comfort to know you can get Tow Truck Companies anytime around College Station Texas and receive fast and efficient 18-wheeler rescue!
Choosing the best Tow Truck Companies in College Station Texas helps remove the pressure anytime one of your trucks becomes stuck or breaks down. Quick and efficient towing is what you receive with BDS Towing!
Need Quick and Reliable Tow Truck Companies Around College Station Texas?
require 'test/unit'
# Plain `require` with a relative path fails on Ruby 1.9+; use require_relative.
require_relative '../Graph/Vertex'
require_relative '../Graph/graph'
require_relative '../Graph/edge'
class MyTest < Test::Unit::TestCase
  def test_edge_add
    @vertex = Vertex.new('NW')
    @edge = Edge.new('NYC', 'MEX', 5345)
    assert(Graph.edge_add(@vertex, @edge, 'CLR', 'SRV', 2345))
  end
end
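# The test above leans on a Graph API defined outside this file. For readers
# without the ../Graph sources, here is a minimal, HYPOTHETICAL sketch of
# classes that would satisfy exactly the calls the test makes --
# Vertex.new(name), Edge.new(from, to, weight), and
# Graph.edge_add(vertex, edge, from, to, weight) returning truthy.
# The real implementations may well differ.

```ruby
# Hypothetical stand-ins for the real ../Graph classes -- a sketch only,
# shaped by nothing more than the calls the test above makes.

class Vertex
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

class Edge
  attr_reader :from, :to, :weight

  def initialize(from, to, weight)
    @from = from
    @to = to
    @weight = weight
  end
end

class Graph
  # Records the given edge plus one built from the extra arguments;
  # returns true so the test's assert passes.
  def self.edge_add(vertex, edge, from, to, weight)
    @edges ||= []
    @edges << edge << Edge.new(from, to, weight)
    true
  end

  def self.edges
    @edges || []
  end
end
```

# With these stand-ins on the load path the test runs green; swap them out
# for the real ../Graph sources when available.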
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Exploiting the interwebz — Intermediate geoprocessing with Python, AR GIS Users Forum 2013 1.0 documentation</title>
<link rel="stylesheet" href="_static/cloud.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Noticia+Text|Open+Sans|Droid+Sans+Mono" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '1.0',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/jquery.cookie.js"></script>
<script type="text/javascript" src="_static/cloud.js"></script>
<link rel="top" title="Intermediate geoprocessing with Python, AR GIS Users Forum 2013 1.0 documentation" href="index.html" />
<link rel="next" title="Working with files" href="files.html" />
<link rel="prev" title="Writing code and running scripts" href="writing-running.html" />
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div class="relbar-top">
<div class="related">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="right" >
<a href="files.html" title="Working with files"
accesskey="N">next</a> </li>
<li class="right" >
<a href="writing-running.html" title="Writing code and running scripts"
accesskey="P">previous</a> </li>
<li><a href="index.html">Intermediate geoprocessing with Python, AR GIS Users Forum 2013 1.0 documentation</a> »</li>
</ul>
</div>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body">
<div class="section" id="exploiting-the-interwebz">
<h1>Exploiting the interwebz<a class="headerlink" href="#exploiting-the-interwebz" title="Permalink to this headline">¶</a></h1>
<div class="section" id="the-python-docs">
<h2>The Python docs<a class="headerlink" href="#the-python-docs" title="Permalink to this headline">¶</a></h2>
<p>Before we go any further, let’s take a look at the Python docs for the version
of Python ArcGIS 10.1 uses, 2.7.2:</p>
<p><a class="reference external" href="http://docs.python.org/release/2.7.2/">http://docs.python.org/release/2.7.2/</a></p>
<p>Let’s look at the <tt class="docutils literal"><span class="pre">os</span></tt> and <tt class="docutils literal"><span class="pre">urllib</span></tt> modules, which we will be using in this
section.</p>
</div>
<div class="section" id="doing-a-simple-file-fetch-from-the-web">
<h2>Doing a simple file fetch from the web<a class="headerlink" href="#doing-a-simple-file-fetch-from-the-web" title="Permalink to this headline">¶</a></h2>
<div class="highlight-python"><div class="highlight"><pre><span class="gp">>>> </span><span class="kn">import</span> <span class="nn">urllib</span>
<span class="gp">>>> </span><span class="n">url</span> <span class="o">=</span> <span class="s">"http://www.fhwa.dot.gov/bridge/nbi/2012/delimited/DC12.txt"</span>
<span class="gp">>>> </span><span class="n">f</span> <span class="o">=</span> <span class="s">r"C:\Users\class5user\ar-gis-python\outputs\dc.csv"</span>
<span class="gp">>>> </span><span class="n">urllib</span><span class="o">.</span><span class="n">urlretrieve</span><span class="p">(</span><span class="n">url</span><span class="p">,</span> <span class="n">f</span><span class="p">)</span>
</pre></div>
</div>
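<p>As an optional sanity check (not part of the original example), the
<tt class="docutils literal"><span class="pre">os.path</span></tt> functions we will meet later can confirm
the fetched file actually landed on disk:</p>
<div class="highlight-python"><pre>>>> import os.path
>>> os.path.exists(f)
True
>>> os.path.getsize(f)   # size in bytes; nonzero if the fetch worked
</pre></div>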
</div>
<div class="section" id="fetching-a-bunch-of-files-from-a-known-url-structure">
<h2>Fetching a bunch of files from a known URL structure<a class="headerlink" href="#fetching-a-bunch-of-files-from-a-known-url-structure" title="Permalink to this headline">¶</a></h2>
<p>If files are sitting out on a server somewhere in a fairly rigid folder
structure, it’s relatively easy to fetch those files. The <a class="reference external" href="http://libremap.org/">Libre Map Project</a>
hosts the 24K USGS DRGs for all 50 states for FREE on archive.org. We are
interested in the <a class="reference external" href="http://libremap.org/data/state/arkansas/drg/">Arkansas DRGs</a>.</p>
<p>On the Arkansas data page, hover over some of the links for the TIFF, TFW, and
FGD files. See a pattern there?</p>
<p><tt class="docutils literal"><span class="pre">http://www.archive.org/download/usgs_drg_</span></tt> <tt class="docutils literal"><span class="pre">ar</span></tt> _ <tt class="docutils literal"><span class="pre">35</span></tt> <tt class="docutils literal"><span class="pre">094</span></tt> _ <tt class="docutils literal"><span class="pre">a2</span></tt> /
<tt class="docutils literal"><span class="pre">o</span></tt> <tt class="docutils literal"><span class="pre">35094</span></tt> <tt class="docutils literal"><span class="pre">a2</span></tt> <tt class="docutils literal"><span class="pre">.tif</span></tt></p>
<p>This breaks down to:</p>
<p><tt class="docutils literal"><span class="pre">base_url</span></tt> <tt class="docutils literal"><span class="pre">state</span></tt> _ <tt class="docutils literal"><span class="pre">deg</span> <span class="pre">lat</span></tt> <tt class="docutils literal"><span class="pre">deg</span> <span class="pre">lon</span></tt> _ <tt class="docutils literal"><span class="pre">map</span>
<span class="pre">index</span> <span class="pre">no</span></tt> / <tt class="docutils literal"><span class="pre">category</span></tt> <tt class="docutils literal"><span class="pre">deg</span> <span class="pre">lat</span></tt> <tt class="docutils literal"><span class="pre">deg</span> <span class="pre">lon</span></tt> <tt class="docutils literal"><span class="pre">map</span> <span class="pre">index</span> <span class="pre">no</span></tt> <tt class="docutils literal"><span class="pre">.file</span> <span class="pre">ext</span></tt></p>
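<p>As a concrete example (applying the same pattern to the Fayetteville quad we
will use below), <tt class="docutils literal"><span class="pre">o36094a2</span></tt> splits into category
<tt class="docutils literal"><span class="pre">o</span></tt>, degrees latitude <tt class="docutils literal"><span class="pre">36</span></tt>,
degrees longitude <tt class="docutils literal"><span class="pre">094</span></tt>, and map index
<tt class="docutils literal"><span class="pre">a2</span></tt>, so the Arkansas TIFF lives at:</p>
<div class="highlight-python"><pre>http://www.archive.org/download/usgs_drg_ar_36094_a2/o36094a2.tif
</pre></div>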
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">This USGS standard explains the above nicely:
<a class="reference external" href="http://topomaps.usgs.gov/drg/drg_name.html">http://topomaps.usgs.gov/drg/drg_name.html</a></p>
</div>
<p>This quad information is available in the attributes of the 24K USGS quadrangle
footprint that is readily available on the web. I got mine from the <a class="reference external" href="http://datagateway.nrcs.usda.gov/GDGOrder.aspx">USDA NRCS
Data Gateway</a>. Order by State | Arkansas | Map Indexes | Quadrangle Index
1:24,000. You have this in the <tt class="docutils literal"><span class="pre">shapefiles</span></tt> directory as <tt class="docutils literal"><span class="pre">quads24k_a_ar.shp</span></tt>.
Load it into ArcMap and look at the attribute table.</p>
<a class="reference internal image-reference" href="_images/24k-att-table.png"><img alt="_images/24k-att-table.png" src="_images/24k-att-table.png" style="width: 500px;" /></a>
<p>Now open up 24K_Quads.xlsx from the <tt class="docutils literal"><span class="pre">inputs</span></tt> directory, in Excel. Let’s build
a formula that will generate our list of DRGs to fetch.</p>
<div class="html-toggle section" id="solution">
<h3>Solution<a class="headerlink" href="#solution" title="Permalink to this headline">¶</a></h3>
<p>Look in Cell B16</p>
</div>
<div class="section" id="write-the-script-to-fetch-drgs">
<h3>Write the script to fetch DRGs<a class="headerlink" href="#write-the-script-to-fetch-drgs" title="Permalink to this headline">¶</a></h3>
<p>Import modules we will use:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="kn">import</span> <span class="nn">urllib</span>
<span class="kn">import</span> <span class="nn">os</span>
</pre></div>
</div>
<p>Paste in your DRG list:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="n">drg_list</span> <span class="o">=</span> <span class="p">[[</span><span class="s">'o36094a2'</span><span class="p">,</span> <span class="s">'Fayetteville'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094b1'</span><span class="p">,</span> <span class="s">'Sonora'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094a1'</span><span class="p">,</span> <span class="s">'Elkins'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094b2'</span><span class="p">,</span> <span class="s">'Springdale'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">]]</span>
</pre></div>
</div>
<p>Create a list of extensions, these are the 3 filetypes we will fetch for each
quad:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="n">exts</span> <span class="o">=</span> <span class="p">[</span><span class="s">'tif'</span><span class="p">,</span> <span class="s">'tfw'</span><span class="p">,</span> <span class="s">'fgd'</span><span class="p">]</span>
</pre></div>
</div>
<p>Setup the <tt class="docutils literal"><span class="pre">base_url</span></tt>:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="n">base_url</span><span class="o">=</span> <span class="s">"http://www.archive.org/download/usgs_drg_"</span>
</pre></div>
</div>
<p>Start a <tt class="docutils literal"><span class="pre">for</span></tt> loop to iterate through each nested list (each DRG) in
<tt class="docutils literal"><span class="pre">drg_list</span></tt>:</p>
<div class="highlight-python"><pre>for drg in drg_list:
</pre>
</div>
<p>Start <em>another</em> <tt class="docutils literal"><span class="pre">for</span></tt> loop <em>nested in the previous loop</em> to get all 3
filetypes for each DRG (<strong>note it is indented 4 spaces!</strong>):</p>
<div class="highlight-python"><pre> for ext in exts:
</pre>
</div>
<p>Let’s play around with <tt class="docutils literal"><span class="pre">drg_list</span></tt> before we move on. Copy <tt class="docutils literal"><span class="pre">drg_list</span></tt>
above and paste it into the ArcGIS Python window.</p>
<p>Type the following commands to get a feel for how lists and slicing work:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="gp">>>> </span><span class="k">for</span> <span class="n">drg</span> <span class="ow">in</span> <span class="n">drg_list</span><span class="p">:</span>
<span class="go"> print drg</span>
<span class="gp">>>> </span><span class="k">for</span> <span class="n">drg</span> <span class="ow">in</span> <span class="n">drg_list</span><span class="p">:</span>
<span class="go"> print type(drg)</span>
<span class="gp">>>> </span><span class="k">for</span> <span class="n">drg</span> <span class="ow">in</span> <span class="n">drg_list</span><span class="p">:</span>
<span class="go"> for each in drg:</span>
<span class="go"> print each</span>
<span class="gp">>>> </span><span class="n">drg_list</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="gp">>>> </span><span class="n">drg_list</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
<span class="gp">>>> </span><span class="n">drg_list</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">1</span><span class="p">:</span><span class="mi">6</span><span class="p">]</span>
<span class="gp">>>> </span><span class="n">drg_list</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">6</span><span class="p">:]</span>
</pre></div>
</div>
<p>Now let’s build on list indexing and slicing and construct our url for
fetching the data we want (<strong>note that this and all following lines are
indented 8 spaces!</strong>). This takes parts of the quad info and builds the url
for each DRG on the fly:</p>
<div class="highlight-python"><div class="highlight"><pre> <span class="n">full_url</span> <span class="o">=</span> <span class="p">(</span><span class="n">base_url</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="o">+</span> <span class="s">'_'</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">1</span><span class="p">:</span><span class="mi">6</span><span class="p">]</span> <span class="o">+</span>
<span class="s">'_'</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">6</span><span class="p">:]</span> <span class="o">+</span> <span class="s">"/"</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="s">'.'</span> <span class="o">+</span> <span class="n">ext</span><span class="p">)</span>
</pre></div>
</div>
<p>We need to have a local file and file path to store the fetched file in.</p>
<div class="highlight-python"><div class="highlight"><pre> <span class="n">local_file</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">]),</span>
<span class="s">"outputs"</span><span class="p">,</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="s">'.'</span> <span class="o">+</span> <span class="n">ext</span><span class="p">)</span>
<span class="k">print</span> <span class="n">local_file</span>
</pre></div>
</div>
<p>Call <tt class="docutils literal"><span class="pre">urllib.urlretrieve</span></tt> method and fetch the file:</p>
<div class="highlight-python"><div class="highlight"><pre> <span class="n">urllib</span><span class="o">.</span><span class="n">urlretrieve</span><span class="p">(</span><span class="n">full_url</span><span class="p">,</span> <span class="n">local_file</span><span class="p">)</span>
</pre></div>
</div>
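<p>Note that the script as written assumes every fetch succeeds. If you would
like it to keep going past a missing quad or a network hiccup, one optional
pattern (not part of the original exercise) is to wrap the call in a
<tt class="docutils literal"><span class="pre">try</span></tt>/<tt class="docutils literal"><span class="pre">except</span></tt> block:</p>
<div class="highlight-python"><pre>        try:
            urllib.urlretrieve(full_url, local_file)
        except IOError:
            # urllib raises IOError when the server or network fails
            print "could not fetch", full_url
</pre></div>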
<div class="section" id="os-path-goodness">
<h4><tt class="docutils literal"><span class="pre">os.path</span></tt> goodness<a class="headerlink" href="#os-path-goodness" title="Permalink to this headline">¶</a></h4>
<p>Let’s figure out what all the <tt class="docutils literal"><span class="pre">os.whatever</span></tt> stuff was in the script we just
wrote by deconstructing it.</p>
<p>In Windows Explorer, navigate to the <tt class="docutils literal"><span class="pre">source\code</span></tt> directory. Create a new file
there called <tt class="docutils literal"><span class="pre">os-stuff.py</span></tt>. Right-click on the file and select “Edit in IDLE”.</p>
<p>Enter the following in <tt class="docutils literal"><span class="pre">os-stuff.py</span></tt>:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="kn">import</span> <span class="nn">os</span>
<span class="k">print</span> <span class="s">"os.path.abspath:"</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">)</span>
</pre></div>
</div>
<p>Save the file. Leave it open in IDLE. Open up a Windows command prompt if you
don’t have one open already. Navigate to the <tt class="docutils literal"><span class="pre">source\code</span></tt> directory and enter:</p>
<div class="highlight-bash"><div class="highlight"><pre>C:<span class="se">\U</span>sers<span class="se">\c</span>lass5user<span class="se">\a</span>rgis-python-2013<span class="se">\s</span>ource<span class="se">\c</span>ode>python os-stuff.py
</pre></div>
</div>
<p>On line 3, enter:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="k">print</span> <span class="s">"os.path.split:"</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))</span>
</pre></div>
</div>
<p>Save and run again.</p>
<p>On line 4, enter:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="k">print</span> <span class="s">"os.path.split[0]:"</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">]</span>
</pre></div>
</div>
<p>Save and run again.</p>
<p>On line 5, enter:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="k">print</span> <span class="s">"os.path.dirname:"</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">])</span>
</pre></div>
</div>
<p>Save and run again.</p>
<p>On lines 6-7, enter:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="k">print</span> <span class="s">"os.path.join:"</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">]),</span>
<span class="s">"outputs"</span><span class="p">)</span>
</pre></div>
</div>
<p>Save and run for the last time.</p>
</div>
</div>
<div class="html-toggle section" id="id1">
<h3>Solution<a class="headerlink" href="#id1" title="Permalink to this headline">¶</a></h3>
<div class="highlight-python"><div class="highlight"><pre><span class="kn">import</span> <span class="nn">urllib</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="n">drg_list</span> <span class="o">=</span> <span class="p">[[</span><span class="s">'o36094a2'</span><span class="p">,</span> <span class="s">'Fayetteville'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094b1'</span><span class="p">,</span> <span class="s">'Sonora'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094a1'</span><span class="p">,</span> <span class="s">'Elkins'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">],</span>
<span class="p">[</span><span class="s">'o36094b2'</span><span class="p">,</span> <span class="s">'Springdale'</span><span class="p">,</span> <span class="s">'AR'</span><span class="p">]]</span>
<span class="n">exts</span> <span class="o">=</span> <span class="p">[</span><span class="s">'tif'</span><span class="p">,</span> <span class="s">'tfw'</span><span class="p">,</span> <span class="s">'fgd'</span><span class="p">]</span>
<span class="n">base_url</span><span class="o">=</span> <span class="s">"http://www.archive.org/download/usgs_drg_"</span>
<span class="k">for</span> <span class="n">drg</span> <span class="ow">in</span> <span class="n">drg_list</span><span class="p">:</span>
<span class="k">for</span> <span class="n">ext</span> <span class="ow">in</span> <span class="n">exts</span><span class="p">:</span>
<span class="c"># http://www.archive.org/download/usgs_drg_ar_35094_d2/o35094d2.tif</span>
<span class="n">full_url</span> <span class="o">=</span> <span class="p">(</span><span class="n">base_url</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="o">+</span> <span class="s">'_'</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">1</span><span class="p">:</span><span class="mi">6</span><span class="p">]</span> <span class="o">+</span>
<span class="s">'_'</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">6</span><span class="p">:]</span> <span class="o">+</span> <span class="s">"/"</span> <span class="o">+</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="s">'.'</span> <span class="o">+</span> <span class="n">ext</span><span class="p">)</span>
<span class="n">local_file</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">]),</span>
<span class="s">"outputs"</span><span class="p">,</span> <span class="n">drg</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="s">'.'</span> <span class="o">+</span> <span class="n">ext</span><span class="p">)</span>
<span class="k">print</span> <span class="n">local_file</span>
<span class="n">urllib</span><span class="o">.</span><span class="n">urlretrieve</span><span class="p">(</span><span class="n">full_url</span><span class="p">,</span> <span class="n">local_file</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="bonus">
<h2>Bonus<a class="headerlink" href="#bonus" title="Permalink to this headline">¶</a></h2>
<p>Apply the principles of lists, iteration, and building of a url pattern we just
learned to the first example of the NBI bridge data. Is there a way to fetch
multiple states by using a list somehow?</p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">Use small states like RI, DE, and the District of Columbia, that way the
files you fetch are smaller.</p>
</div>
<div class="html-toggle section" id="id2">
<h3>Solution<a class="headerlink" href="#id2" title="Permalink to this headline">¶</a></h3>
<div class="highlight-python"><div class="highlight"><pre><span class="kn">import</span> <span class="nn">urllib</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="n">base_url</span> <span class="o">=</span> <span class="s">"http://www.fhwa.dot.gov/bridge/nbi/2012/delimited/"</span>
<span class="n">states</span> <span class="o">=</span> <span class="p">[</span><span class="s">"RI"</span><span class="p">,</span> <span class="s">"DE"</span><span class="p">,</span> <span class="s">"DC"</span><span class="p">]</span>
<span class="k">for</span> <span class="n">state</span> <span class="ow">in</span> <span class="n">states</span><span class="p">:</span>
<span class="n">url</span> <span class="o">=</span> <span class="n">base_url</span> <span class="o">+</span> <span class="n">state</span> <span class="o">+</span> <span class="s">"12.txt"</span>
<span class="n">local_file</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">abspath</span><span class="p">(</span><span class="n">__file__</span><span class="p">))[</span><span class="mi">0</span><span class="p">]),</span>
<span class="s">"outputs"</span><span class="p">,</span> <span class="n">state</span> <span class="o">+</span> <span class="s">"12.txt"</span><span class="p">)</span>
<span class="n">urllib</span><span class="o">.</span><span class="n">urlretrieve</span><span class="p">(</span><span class="n">url</span><span class="p">,</span> <span class="n">local_file</span><span class="p">)</span>
</pre></div>
</div>
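For readers on Python 3, where `urllib.urlretrieve` no longer exists at the top level, a minimal sketch of the same loop might look like the following (untested against the live FHWA server; the URL pattern `STATE + "12.txt"` is taken from the solution above):

```python
# Hedged Python 3 rewrite of the Python 2 solution above:
# urllib.urlretrieve moved to urllib.request.urlretrieve, and the
# "outputs" folder is created if it does not already exist.
import os
import urllib.request

BASE_URL = "http://www.fhwa.dot.gov/bridge/nbi/2012/delimited/"
STATES = ["RI", "DE", "DC"]

def target_url(state):
    # e.g. "RI" -> ".../RI12.txt"
    return BASE_URL + state + "12.txt"

def local_path(state, out_dir="outputs"):
    # Save under an "outputs" folder next to this script.
    here = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(here, out_dir, state + "12.txt")

def fetch_all():
    for state in STATES:
        path = local_path(state)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        urllib.request.urlretrieve(target_url(state), path)

# Call fetch_all() to download the three files.
```

As in the original, keeping the download loop driven by a list of state codes makes it trivial to extend to more states later.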
</div>
</div>
</div>
</div>
</div>
</div>
<div class="sphinxsidebar">
<div class="sphinxsidebarwrapper">
<h3><a href="index.html">Table Of Contents</a></h3>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="what-we-are-doing.html">What we will accomplish</a></li>
<li class="toctree-l1"><a class="reference internal" href="writing-running.html">Writing code and running scripts</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="">Exploiting the interwebz</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#the-python-docs">The Python docs</a></li>
<li class="toctree-l2"><a class="reference internal" href="#doing-a-simple-file-fetch-from-the-web">Doing a simple file fetch from the web</a></li>
<li class="toctree-l2"><a class="reference internal" href="#fetching-a-bunch-of-files-from-a-known-url-structure">Fetching a bunch of files from a known URL structure</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#solution">Solution</a></li>
<li class="toctree-l3"><a class="reference internal" href="#write-the-script-to-fetch-drgs">Write the script to fetch DRGs</a></li>
<li class="toctree-l3"><a class="reference internal" href="#id1">Solution</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#bonus">Bonus</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#id2">Solution</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="files.html">Working with files</a></li>
<li class="toctree-l1"><a class="reference internal" href="objects.html">Working with objects</a></li>
<li class="toctree-l1"><a class="reference internal" href="geometries.html">Working with geometries</a></li>
<li class="toctree-l1"><a class="reference internal" href="spatial-refs.html">Working with spatial references</a></li>
<li class="toctree-l1"><a class="reference internal" href="tools.html">Tools and toolboxes</a></li>
</ul>
<h3>This Page</h3>
<ul class="this-page-menu">
<li><a href="_sources/web.txt"
rel="nofollow">Show Source</a></li>
</ul>
<h4>Previous topic</h4>
<p class="topless"><a href="writing-running.html"
title="previous chapter">Writing code and running scripts</a></p>
<h4>Next topic</h4>
<p class="topless"><a href="files.html"
title="next chapter">Working with files</a></p>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="relbar-bottom">
<div class="related">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
>index</a></li>
<li class="right" >
<a href="files.html" title="Working with files"
>next</a> </li>
<li class="right" >
<a href="writing-running.html" title="Writing code and running scripts"
>previous</a> </li>
<li><a href="index.html">Intermediate geoprocessing with Python, AR GIS Users Forum 2013 1.0 documentation</a> »</li>
</ul>
</div>
</div>
<div class="footer">
© Copyright 2013, Center for Advanced Spatial Technologies.
Last updated on Sep 06, 2013.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.2b1.
</div>
<!-- cloud_sptheme 1.4 -->
</body>
</html>
<div class="container" ng-controller="temperatureCtrl as temperature">
<div class="row">
<h3 class="col-lg-10">Chicago Temperature 2015 {{temperature.getGraphTitle()}}</h3>
</div>
<div class="row">
<h5 class="col-lg-10 description">Below is an interactive visualization of the daily minimum and maximum temperatures for the first 11 months of 2015. You may:
</h5>
</div>
<div class="row">
<ul>
<li>Highlight a section of the lower graph to zoom in on a range of dates in the upper graph. Drag that selection to change the range of dates.</li>
<li>Click on the legend key icon to hide or display the associated line.</li>
<li>Use the dropdown below to change the interval over which the temperatures are averaged (day, week, month).</li>
<li>Hover over the larger graph to view temperature values.</li>
</ul>
</div>
<div class="row">
<div class="col-lg-10">
<label>Frequency -</label>
<select name="frequencySelect" ng-model="temperature.frequency" ng-change="temperature.getDataAndBuildGraph()">
<option value="day">Day</option>
<option value="week">Week</option>
<option value="month">Month</option>
</select>
</div>
</div>
<div class="row top-buffer" id="TemperatureGraph"></div>
</div>
July Club of the Month!
St Agnes AFC have been at the heart of their community for more than a century and the club's inclusive disability football programme has seen their impact hit new heights.
For club chairman Mandy Kimmins and Hailey Collins, lead coach of the disability squad, the club ethos – 'everyone is welcome, everyone can be included, everyone can play football' – is consistently at the heart of everything they do.
Nowhere is this more prevalent than in the club's disability team and the St Agnes family pulled out all the stops during lockdown when isolation hit the players hard.
Physical challenges raised valuable funds to keep membership costs down, coaching staff hosted regular Zoom meetings while a 15-year-old from the club's youth set-up compiled hand-written notes featuring magazine cut-outs and bespoke messages to each player to maintain a vital social connection.
"We know that with our players, if they don't have their normal routines, they probably won't do anything, so we wanted them to do a little bit of fitness and keep them doing something," Collins said.
"We set up a challenge and had players walking, running, cycling – whatever they could do, to try and raise some money.
"Each day the players sent in their competitive stats and gave us evidence around what they had been doing with pictures, so we had loads of engagement through that.
"It raised money for the club but also kept the guys doing fitness and staying in touch even during lockdown.
"I think it is really crucial for them to stay active. They don't always realise the wider benefits of fitness, but if they don't keep to their routines, it stops their mobility and their ability to return to normal after lockdown.
"We have one player who really struggles with being mobile but even he was able to do some kind of walking, and it was great to see all the places he went walking. It gave us some kind of normality."
Such a heavy involvement with the club has also given Collins a new lease of life since her return to a game she thought she had left behind.
"I have come back to football, which isn't my first sport, but personally I have managed to gain qualifications, volunteer and get a really nice buzz from it," Collins said.
"To have an impact on people you wouldn't meet in an everyday normal job is something that has been really beneficial to me.
"They have enriched my life. I always look forward to seeing them and hearing what they have been up to.
"St Agnes harnesses so much more than just football, it also includes everybody that wants to come along.
"It is not just 'come and play football', it is about making friends and being part of something."
Having a coaching team capable of working with a whole range of disabilities and ages is vital to the success of the programme.
The club pays for and supports players of all ages in completing their FA Playmaker, Level 1 volunteering and coaching programmes, which instil the highest values of respect for difference and skills in teaching and managing others.
And the heartening efforts of all at St Agnes have been rewarded through being named the Parasport Club of the Month for July, an honour Kimmins is delighted to hold.
"This club means an awful lot, not just to me, but to all the players, volunteers, coaches and managers, everyone who has been part of this journey," she said.
"When I joined 18 years ago, we had two adult sides and one youth team. Over the last year we have had 34 teams, including our amazing disability set-up.
"To win this award is a massive recognition of all the work our team have put in, to create an inclusive club that welcomes everybody and gives everybody the opportunity to be part of something really special."
For more information on St Agnes AFC and how to get involved, click HERE
Concealment or destruction of accounting records is a criminal offence under the Italian legal system, provided for by Article 10 of Legislative Decree No. 74 of 10 March 2000.
Elements of the offence
Subjective element of the offence
For the conduct to be punishable, specific intent is required: the offender must act with the specific aim of evading income tax or value added tax, or of enabling third parties to evade those same taxes.
Objective element of the offence
The offence punishes the destruction or concealment of accounting records whose keeping is required by law, carried out in order to prevent the competent authorities from reconstructing the accounts; for this reason the offence can be committed only by those who are obliged to keep accounting records.
No punishability threshold
Unlike other tax offences, no punishability threshold needs to be exceeded for the offence to be committed.
Notes
Italian tax offences
package org.apache.geode.internal.cache.wan.parallel;
import static org.apache.geode.internal.cache.tier.sockets.Message.MAX_MESSAGE_SIZE_PROPERTY;
import static org.apache.geode.test.dunit.Assert.*;
import org.junit.Rule;
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.apache.geode.GemFireIOException;
import org.apache.geode.cache.wan.GatewaySender;
import org.apache.geode.distributed.internal.DistributionConfig;
import org.apache.geode.internal.cache.tier.sockets.Message;
import org.apache.geode.internal.cache.tier.sockets.MessageTooLargeException;
import org.apache.geode.internal.cache.wan.AbstractGatewaySender;
import org.apache.geode.internal.cache.wan.GatewaySenderException;
import org.apache.geode.internal.cache.wan.WANTestBase;
import org.apache.geode.test.dunit.AsyncInvocation;
import org.apache.geode.test.dunit.IgnoredException;
import org.apache.geode.test.dunit.LogWriterUtils;
import org.apache.geode.test.dunit.RMIException;
import org.apache.geode.test.dunit.Wait;
import org.apache.geode.test.dunit.rules.DistributedRestoreSystemProperties;
import org.apache.geode.test.junit.categories.DistributedTest;
/**
* DUnit test for operations on ParallelGatewaySender
*/
@Category(DistributedTest.class)
public class ParallelGatewaySenderOperationsDUnitTest extends WANTestBase {
@Rule
public DistributedRestoreSystemProperties restoreSystemProperties =
new DistributedRestoreSystemProperties();
@Override
protected final void postSetUpWANTestBase() throws Exception {
IgnoredException.addIgnoredException("Broken pipe||Unexpected IOException");
}
@Test(timeout = 300000)
public void testStopOneConcurrentGatewaySenderWithSSL() throws Throwable {
Integer lnPort = (Integer) vm0.invoke(() -> WANTestBase.createFirstLocatorWithDSId(1));
Integer nyPort = (Integer) vm1.invoke(() -> WANTestBase.createFirstRemoteLocator(2, lnPort));
createCacheInVMs(nyPort, vm2, vm3);
vm2.invoke(() -> WANTestBase.createReceiverWithSSL(nyPort));
vm3.invoke(() -> WANTestBase.createReceiverWithSSL(nyPort));
vm4.invoke(() -> WANTestBase.createCacheWithSSL(lnPort));
vm5.invoke(() -> WANTestBase.createCacheWithSSL(lnPort));
vm4.invoke(() -> createConcurrentSender("ln", 2, true, 100, 10, false, true, null, true, 5,
GatewaySender.OrderPolicy.KEY));
vm5.invoke(() -> createConcurrentSender("ln", 2, true, 100, 10, false, true, null, true, 5,
GatewaySender.OrderPolicy.KEY));
String regionName = getTestMethodName() + "_PR";
vm4.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm5.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm2.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm3.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
startSenderInVMs("ln", vm4, vm5);
vm4.invoke(() -> WANTestBase.doPuts(regionName, 10));
vm4.invoke(() -> WANTestBase.stopSender("ln"));
vm4.invoke(() -> WANTestBase.startSender("ln"));
vm4.invoke(() -> WANTestBase.doPuts(regionName, 10));
vm5.invoke(() -> WANTestBase.stopSender("ln"));
vm5.invoke(() -> WANTestBase.startSender("ln"));
}
@Test
public void testParallelGatewaySenderWithoutStarting() {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, false);
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
vm4.invoke(() -> WANTestBase.verifySenderStoppedState("ln"));
vm5.invoke(() -> WANTestBase.verifySenderStoppedState("ln"));
vm6.invoke(() -> WANTestBase.verifySenderStoppedState("ln"));
vm7.invoke(() -> WANTestBase.verifySenderStoppedState("ln"));
validateRegionSizes(getTestMethodName() + "_PR", 0, vm2, vm3);
}
/**
* Defect 44323 (ParallelGatewaySender should not be started on Accessor node)
*/
@Test
public void testParallelGatewaySenderStartOnAccessorNode() {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, true, true);
Wait.pause(2000);
vm6.invoke(() -> WANTestBase.waitForSenderRunningState("ln"));
vm7.invoke(() -> WANTestBase.waitForSenderRunningState("ln"));
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 10));
vm4.invoke(() -> WANTestBase.validateParallelSenderQueueAllBucketsDrained("ln"));
vm5.invoke(() -> WANTestBase.validateParallelSenderQueueAllBucketsDrained("ln"));
validateRegionSizes(getTestMethodName() + "_PR", 10, vm2, vm3);
}
/**
* Normal scenario in which the sender is paused in between.
*/
@Test
public void testParallelPropagationSenderPause() throws Exception {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);
// make sure all the senders are running before doing any puts
waitForSendersRunning();
// FIRST RUN: now, the senders are started. So, start the puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 100));
// now, pause all of the senders
vm4.invoke(() -> WANTestBase.pauseSender("ln"));
vm5.invoke(() -> WANTestBase.pauseSender("ln"));
vm6.invoke(() -> WANTestBase.pauseSender("ln"));
vm7.invoke(() -> WANTestBase.pauseSender("ln"));
// SECOND RUN: keep one thread doing puts to the region
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
// verify region size remains on remote vm and is restricted below a specified limit (i.e.
// number of puts in the first run)
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(getTestMethodName() + "_PR", 100));
}
/**
* Normal scenario in which a paused sender is resumed.
*/
@Test
public void testParallelPropagationSenderResume() throws Exception {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);
// make sure all the senders are running before doing any puts
waitForSendersRunning();
// now, the senders are started. So, start the puts
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
// now, pause all of the senders
vm4.invoke(() -> WANTestBase.pauseSender("ln"));
vm5.invoke(() -> WANTestBase.pauseSender("ln"));
vm6.invoke(() -> WANTestBase.pauseSender("ln"));
vm7.invoke(() -> WANTestBase.pauseSender("ln"));
// sleep for a second or two
Wait.pause(2000);
// resume the senders
vm4.invoke(() -> WANTestBase.resumeSender("ln"));
vm5.invoke(() -> WANTestBase.resumeSender("ln"));
vm6.invoke(() -> WANTestBase.resumeSender("ln"));
vm7.invoke(() -> WANTestBase.resumeSender("ln"));
Wait.pause(2000);
validateParallelSenderQueueAllBucketsDrained();
// find the region size on remote vm
vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 1000));
}
/**
* Negative scenario in which a sender that is stopped (and not paused) is resumed. Expected:
* resume is only valid for pause. If a sender which is stopped is resumed, it will not be started
* again.
*/
@Test
public void testParallelPropagationSenderResumeNegativeScenario() throws Exception {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createCacheInVMs(nyPort, vm2, vm3);
createReceiverInVMs(vm2, vm3);
createCacheInVMs(lnPort, vm4, vm5);
vm4.invoke(() -> WANTestBase.createSender("ln", 2, true, 100, 10, false, false, null, true));
vm5.invoke(() -> WANTestBase.createSender("ln", 2, true, 100, 10, false, false, null, true));
vm4.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_PR", "ln", 1, 100,
isOffHeap()));
vm5.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_PR", "ln", 1, 100,
isOffHeap()));
vm2.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_PR", null, 1, 100,
isOffHeap()));
vm3.invoke(() -> WANTestBase.createPartitionedRegion(getTestMethodName() + "_PR", null, 1, 100,
isOffHeap()));
startSenderInVMs("ln", vm4, vm5);
// wait till the senders are running
vm4.invoke(() -> WANTestBase.waitForSenderRunningState("ln"));
vm5.invoke(() -> WANTestBase.waitForSenderRunningState("ln"));
// start the puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 100));
// let the queue drain completely
vm4.invoke(() -> WANTestBase.validateQueueContents("ln", 0));
// stop the senders
vm4.invoke(() -> WANTestBase.stopSender("ln"));
vm5.invoke(() -> WANTestBase.stopSender("ln"));
// now, try to resume a stopped sender
vm4.invoke(() -> WANTestBase.resumeSender("ln"));
vm5.invoke(() -> WANTestBase.resumeSender("ln"));
// do more puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
// validate region size on remote vm to contain only the events put in local site
// before the senders are stopped.
vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 100));
}
/**
* Normal scenario in which a sender is stopped.
*/
@Test
public void testParallelPropagationSenderStop() throws Exception {
IgnoredException.addIgnoredException("Broken pipe");
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);
// make sure all the senders are running before doing any puts
waitForSendersRunning();
// FIRST RUN: now, the senders are started. So, do some of the puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 100));
// now, stop all of the senders
stopSenders();
// SECOND RUN: keep one thread doing puts
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
// verify region size remains on remote vm and is restricted below a specified limit (number of
// puts in the first run)
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(getTestMethodName() + "_PR", 100));
}
/**
* Normal scenario in which a sender is stopped and then started again.
*/
@Test
public void testParallelPropagationSenderStartAfterStop() throws Exception {
IgnoredException.addIgnoredException("Broken pipe");
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
String regionName = getTestMethodName() + "_PR";
createCacheInVMs(nyPort, vm2, vm3);
createCacheInVMs(lnPort, vm4, vm5, vm6, vm7);
vm2.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm3.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
createReceiverInVMs(vm2, vm3);
vm4.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm5.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm6.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm7.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
vm4.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
vm5.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
vm6.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
vm7.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
startSenderInVMs("ln", vm4, vm5, vm6, vm7);
// make sure all the senders are running before doing any puts
vm4.invoke(() -> waitForSenderRunningState("ln"));
vm5.invoke(() -> waitForSenderRunningState("ln"));
vm6.invoke(() -> waitForSenderRunningState("ln"));
vm7.invoke(() -> waitForSenderRunningState("ln"));
// FIRST RUN: now, the senders are started. So, do some of the puts
vm4.invoke(() -> WANTestBase.doPuts(regionName, 200));
// now, stop all of the senders
vm4.invoke(() -> stopSender("ln"));
vm5.invoke(() -> stopSender("ln"));
vm6.invoke(() -> stopSender("ln"));
vm7.invoke(() -> stopSender("ln"));
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(regionName, 200));
// SECOND RUN: do some of the puts after the senders are stopped
vm4.invoke(() -> WANTestBase.doPuts(regionName, 1000));
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(regionName, 200));
// start the senders again
startSenderInVMs("ln", vm4, vm5, vm6, vm7);
vm4.invoke(() -> waitForSenderRunningState("ln"));
vm5.invoke(() -> waitForSenderRunningState("ln"));
vm6.invoke(() -> waitForSenderRunningState("ln"));
vm7.invoke(() -> waitForSenderRunningState("ln"));
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(regionName, 200));
// SECOND RUN: do some more puts
AsyncInvocation async = vm4.invokeAsync(() -> WANTestBase.doPuts(regionName, 1000));
async.join();
// verify all the buckets on all the sender nodes are drained
validateParallelSenderQueueAllBucketsDrained();
// verify the events propagate to remote site
vm2.invoke(() -> WANTestBase.validateRegionSize(regionName, 1000));
vm4.invoke(() -> WANTestBase.validateQueueSizeStat("ln", 0));
vm5.invoke(() -> WANTestBase.validateQueueSizeStat("ln", 0));
vm6.invoke(() -> WANTestBase.validateQueueSizeStat("ln", 0));
vm7.invoke(() -> WANTestBase.validateQueueSizeStat("ln", 0));
}
/**
* Normal scenario in which a sender is stopped and then started again. Differs from above test
* case in the way that when the sender is starting from stopped state, puts are simultaneously
* happening on the region by another thread.
*/
@Test
public void testParallelPropagationSenderStartAfterStop_Scenario2() throws Exception {
IgnoredException.addIgnoredException("Broken pipe");
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);
// make sure all the senders are running before doing any puts
waitForSendersRunning();
LogWriterUtils.getLogWriter().info("All the senders are now started");
// FIRST RUN: now, the senders are started. So, do some of the puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 200));
LogWriterUtils.getLogWriter().info("Done few puts");
// Make sure the puts make it to the remote side
vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 200, 120000));
vm3.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 200, 120000));
// now, stop all of the senders
stopSenders();
LogWriterUtils.getLogWriter().info("All the senders are stopped");
vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 200, 120000));
vm3.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 200, 120000));
// SECOND RUN: do some of the puts after the senders are stopped
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
LogWriterUtils.getLogWriter().info("Done some more puts in second run");
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(getTestMethodName() + "_PR", 200));
// SECOND RUN: start async puts on region
AsyncInvocation async =
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 5000));
LogWriterUtils.getLogWriter().info("Started high number of puts by async thread");
LogWriterUtils.getLogWriter().info("Starting the senders at the same time");
// when puts are happening by another thread, start the senders
startSenderInVMsAsync("ln", vm4, vm5, vm6, vm7);
LogWriterUtils.getLogWriter().info("All the senders are started");
async.join();
// verify all the buckets on all the sender nodes are drained
validateParallelSenderQueueAllBucketsDrained();
// verify that the queue size ultimately becomes zero. That means all the events propagate to
// remote site.
vm4.invoke(() -> WANTestBase.validateQueueContents("ln", 0));
}
/**
* Normal scenario in which a sender is stopped and then started again on accessor node.
*/
@Test
public void testParallelPropagationSenderStartAfterStopOnAccessorNode() throws Exception {
IgnoredException.addIgnoredException("Broken pipe");
IgnoredException.addIgnoredException("Connection reset");
IgnoredException.addIgnoredException("Unexpected IOException");
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, true, true);
// make sure all the senders are not running on accessor nodes and running on non-accessor nodes
waitForSendersRunning();
// FIRST RUN: now, the senders are started. So, do some of the puts
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 200));
// now, stop all of the senders
stopSenders();
Wait.pause(2000);
// SECOND RUN: do some of the puts after the senders are stopped
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(getTestMethodName() + "_PR", 200));
// start the senders again
startSenderInVMs("ln", vm4, vm5, vm6, vm7);
// Region size on remote site should remain same and below the number of puts done in the FIRST
// RUN
vm2.invoke(() -> WANTestBase.validateRegionSizeRemainsSame(getTestMethodName() + "_PR", 200));
// SECOND RUN: do some more puts
AsyncInvocation async =
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
async.join();
Wait.pause(5000);
// verify all buckets drained only on non-accessor nodes.
vm4.invoke(() -> WANTestBase.validateParallelSenderQueueAllBucketsDrained("ln"));
vm5.invoke(() -> WANTestBase.validateParallelSenderQueueAllBucketsDrained("ln"));
// verify the events propagate to remote site
vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 1000));
}
/**
* Normal scenario in which a combinations of start, pause, resume operations is tested
*/
@Test
public void testStartPauseResumeParallelGatewaySender() throws Exception {
Integer[] locatorPorts = createLNAndNYLocators();
Integer lnPort = locatorPorts[0];
Integer nyPort = locatorPorts[1];
createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
LogWriterUtils.getLogWriter().info("Done 1000 puts on local site");
// Since puts are already done on userPR, it will have the buckets created.
// During sender start, it will wait until those buckets are created for shadowPR as well.
// Start the senders in async threads, so colocation of shadowPR will be complete and
// missing buckets will be created in PRHARedundancyProvider.createMissingBuckets().
startSenderInVMsAsync("ln", vm4, vm5, vm6, vm7);
waitForSendersRunning();
LogWriterUtils.getLogWriter().info("Started senders on local site");
vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 5000));
LogWriterUtils.getLogWriter().info("Done 5000 puts on local site");
vm4.invoke(() -> WANTestBase.pauseSender("ln"));
vm5.invoke(() -> WANTestBase.pauseSender("ln"));
vm6.invoke(() -> WANTestBase.pauseSender("ln"));
vm7.invoke(() -> WANTestBase.pauseSender("ln"));
LogWriterUtils.getLogWriter().info("Paused senders on local site");
vm4.invoke(() -> WANTestBase.verifySenderPausedState("ln"));
vm5.invoke(() -> WANTestBase.verifySenderPausedState("ln"));
vm6.invoke(() -> WANTestBase.verifySenderPausedState("ln"));
vm7.invoke(() -> WANTestBase.verifySenderPausedState("ln"));
AsyncInvocation inv1 =
vm4.invokeAsync(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));
LogWriterUtils.getLogWriter().info("Started 1000 async puts on local site");
vm4.invoke(() -> WANTestBase.resumeSender("ln"));
vm5.invoke(() -> WANTestBase.resumeSender("ln"));
vm6.invoke(() -> WANTestBase.resumeSender("ln"));
vm7.invoke(() -> WANTestBase.resumeSender("ln"));
LogWriterUtils.getLogWriter().info("Resumed senders on local site");
vm4.invoke(() -> WANTestBase.verifySenderResumedState("ln"));
vm5.invoke(() -> WANTestBase.verifySenderResumedState("ln"));
vm6.invoke(() -> WANTestBase.verifySenderResumedState("ln"));
    vm7.invoke(() -> WANTestBase.verifySenderResumedState("ln"));

    try {
      inv1.join();
    } catch (InterruptedException e) {
      fail("Interrupted the async invocation.", e);
    }

    // verify all buckets drained on all sender nodes.
    validateParallelSenderQueueAllBucketsDrained();

    validateRegionSizes(getTestMethodName() + "_PR", 5000, vm2, vm3);
  }

  /**
   * Since the sender is attached to a region and in use, it can not be destroyed. Hence, exception
   * is thrown by the sender API.
   */
  @Test
  public void testDestroyParallelGatewaySenderExceptionScenario() {
    Integer[] locatorPorts = createLNAndNYLocators();
    Integer lnPort = locatorPorts[0];
    Integer nyPort = locatorPorts[1];

    createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);

    // make sure all the senders are running before doing any puts
    waitForSendersRunning();

    vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));

    // try destroying on a couple of nodes
    try {
      vm4.invoke(() -> WANTestBase.destroySender("ln"));
    } catch (RMIException e) {
      assertTrue("Cause of the exception should be GatewaySenderException",
          e.getCause() instanceof GatewaySenderException);
    }
    try {
      vm5.invoke(() -> WANTestBase.destroySender("ln"));
    } catch (RMIException e) {
      assertTrue("Cause of the exception should be GatewaySenderException",
          e.getCause() instanceof GatewaySenderException);
    }

    vm2.invoke(() -> WANTestBase.validateRegionSize(getTestMethodName() + "_PR", 1000));
  }

  @Test
  public void testDestroyParallelGatewaySender() {
    Integer[] locatorPorts = createLNAndNYLocators();
    Integer lnPort = locatorPorts[0];
    Integer nyPort = locatorPorts[1];

    createSendersReceiversAndPartitionedRegion(lnPort, nyPort, false, true);

    // make sure all the senders are running
    waitForSendersRunning();

    vm4.invoke(() -> WANTestBase.doPuts(getTestMethodName() + "_PR", 1000));

    Wait.pause(2000);

    // stop the sender and remove from region before calling destroy on it
    stopSenders();

    vm4.invoke(() -> WANTestBase.removeSenderFromTheRegion("ln", getTestMethodName() + "_PR"));
    vm5.invoke(() -> WANTestBase.removeSenderFromTheRegion("ln", getTestMethodName() + "_PR"));
    vm6.invoke(() -> WANTestBase.removeSenderFromTheRegion("ln", getTestMethodName() + "_PR"));
    vm7.invoke(() -> WANTestBase.removeSenderFromTheRegion("ln", getTestMethodName() + "_PR"));

    vm4.invoke(() -> WANTestBase.destroySender("ln"));
    vm5.invoke(() -> WANTestBase.destroySender("ln"));
    vm6.invoke(() -> WANTestBase.destroySender("ln"));
    vm7.invoke(() -> WANTestBase.destroySender("ln"));

    vm4.invoke(() -> WANTestBase.verifySenderDestroyed("ln", true));
    vm5.invoke(() -> WANTestBase.verifySenderDestroyed("ln", true));
    vm6.invoke(() -> WANTestBase.verifySenderDestroyed("ln", true));
    vm7.invoke(() -> WANTestBase.verifySenderDestroyed("ln", true));
  }

  @Test
  public void testParallelGatewaySenderMessageTooLargeException() {
    vm4.invoke(() -> System.setProperty(MAX_MESSAGE_SIZE_PROPERTY, String.valueOf(1024 * 1024)));

    Integer[] locatorPorts = createLNAndNYLocators();
    Integer lnPort = locatorPorts[0];
    Integer nyPort = locatorPorts[1];

    // Create and start sender with reduced maximum message size and 1 dispatcher thread
    String regionName = getTestMethodName() + "_PR";
    vm4.invoke(() -> createCache(lnPort));
    vm4.invoke(() -> setNumDispatcherThreadsForTheRun(1));
    vm4.invoke(() -> createSender("ln", 2, true, 100, 100, false, false, null, false));
    vm4.invoke(() -> createPartitionedRegion(regionName, "ln", 0, 100, isOffHeap()));

    // Do puts
    int numPuts = 200;
    vm4.invoke(() -> doPuts(regionName, numPuts, new byte[11000]));
    validateRegionSizes(regionName, numPuts, vm4);

    // Start receiver
    IgnoredException ignoredMTLE =
        IgnoredException.addIgnoredException(MessageTooLargeException.class.getName(), vm4);
    IgnoredException ignoredGIOE =
        IgnoredException.addIgnoredException(GemFireIOException.class.getName(), vm4);
    vm2.invoke(() -> createCache(nyPort));
    vm2.invoke(() -> createPartitionedRegion(regionName, null, 0, 100, isOffHeap()));
    vm2.invoke(() -> createReceiver());
    validateRegionSizes(regionName, numPuts, vm2);

    vm4.invoke(() -> {
      final AbstractGatewaySender sender = (AbstractGatewaySender) cache.getGatewaySender("ln");
      assertTrue(sender.getStatistics().getBatchesResized() > 0);
    });

    ignoredMTLE.remove();
    ignoredGIOE.remove();
  }

  private void createSendersReceiversAndPartitionedRegion(Integer lnPort, Integer nyPort,
      boolean createAccessors, boolean startSenders) {
    // Note: This is a test-specific method used by several tests to create
    // receivers, senders and partitioned regions.
    createSendersAndReceivers(lnPort, nyPort);

    createPartitionedRegions(createAccessors);

    if (startSenders) {
      startSenderInVMs("ln", vm4, vm5, vm6, vm7);
    }
  }

  private void createSendersAndReceivers(Integer lnPort, Integer nyPort) {
    // Note: This is a test-specific method used by several tests to create
    // receivers and senders.
    createCacheInVMs(nyPort, vm2, vm3);
    createReceiverInVMs(vm2, vm3);

    createCacheInVMs(lnPort, vm4, vm5, vm6, vm7);
    vm4.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
    vm5.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
    vm6.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
    vm7.invoke(() -> createSender("ln", 2, true, 100, 10, false, false, null, true));
  }

  private void createPartitionedRegions(boolean createAccessors) {
    // Note: This is a test-specific method used by several tests to create
    // partitioned regions.
    String regionName = getTestMethodName() + "_PR";
    vm4.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
    vm5.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));

    if (createAccessors) {
      vm6.invoke(() -> createPartitionedRegionAsAccessor(regionName, "ln", 1, 100));
      vm7.invoke(() -> createPartitionedRegionAsAccessor(regionName, "ln", 1, 100));
    } else {
      vm6.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
      vm7.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
    }

    vm2.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
    vm3.invoke(() -> createPartitionedRegion(regionName, "ln", 1, 100, isOffHeap()));
  }

  private void stopSenders() {
    vm4.invoke(() -> stopSender("ln"));
    vm5.invoke(() -> stopSender("ln"));
    vm6.invoke(() -> stopSender("ln"));
    vm7.invoke(() -> stopSender("ln"));
  }

  private void waitForSendersRunning() {
    vm4.invoke(() -> waitForSenderRunningState("ln"));
    vm5.invoke(() -> waitForSenderRunningState("ln"));
    vm6.invoke(() -> waitForSenderRunningState("ln"));
    vm7.invoke(() -> waitForSenderRunningState("ln"));
  }

  private void validateParallelSenderQueueAllBucketsDrained() {
    vm4.invoke(() -> validateParallelSenderQueueAllBucketsDrained("ln"));
    vm5.invoke(() -> validateParallelSenderQueueAllBucketsDrained("ln"));
    vm6.invoke(() -> validateParallelSenderQueueAllBucketsDrained("ln"));
    vm7.invoke(() -> validateParallelSenderQueueAllBucketsDrained("ln"));
  }
}
The SikSilk Zonal Overhead Hoodie is a new addition to the SS/19 collection. This track top can be paired with almost any SikSilk pants to achieve a fashionable look.
# Magnetic fields (Physics Forums thread)

1. Dec 9, 2007 — aurora14421

1. The problem statement, all variables and given/known data

A proton enters a uniform magnetic field of strength B. Sketch its subsequent motion and derive an expression for the time, t, spent in the magnetic field.

2. Relevant equations

$$F = qvB = \frac{mv^{2}}{r}$$

3. The attempt at a solution

I calculated the period of the particle in the field as:

$$T = \frac{2 \pi m}{qB}$$

I don't know if this is what they want or if there's a way to calculate the total time.

Last edited: Dec 9, 2007

2. Dec 14, 2007 — Shooting Star

Last edited by a moderator: Apr 23, 2017

3. Dec 14, 2007 — Astronuc (Staff Emeritus)

A charged particle will spend 1/2 period in a magnetic field (assuming it's constant) when it enters at the boundary from a region without the magnetic field.
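Astronuc's answer (half a cyclotron period for a particle that enters the field region from outside) can be checked numerically. A minimal sketch, assuming a proton and an arbitrary illustrative field strength of B = 1 T, which is not given in the thread:

```python
import math

# Proton constants (CODATA values)
M_PROTON = 1.672621924e-27  # mass, kg
Q_PROTON = 1.602176634e-19  # charge, C

def time_in_field(m, q, B):
    """Time spent in a uniform field B by a charged particle entering
    from a field-free region: half the cyclotron period,
    t = (1/2) * 2*pi*m / (q*B) = pi*m / (q*B)."""
    return math.pi * m / (q * B)

t = time_in_field(M_PROTON, Q_PROTON, 1.0)  # B = 1 T (assumed)
print(f"time in field: {t:.3e} s")
```

Note that the result is independent of the proton's speed, which is why the problem can ask for t in terms of m, q, and B alone.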
<?php
/**
 * Quote address attribute backend resource model
 *
 * @category   Mage
 * @package    Mage_Sales
 * @author     Magento Core Team <core@magentocommerce.com>
 */
class Mage_Sales_Model_Resource_Quote_Address_Attribute_Backend extends Mage_Eav_Model_Entity_Attribute_Backend_Abstract
{
    /**
     * Collect totals
     *
     * @param Mage_Sales_Model_Quote_Address $address
     * @return Mage_Sales_Model_Resource_Quote_Address_Attribute_Backend
     */
    public function collectTotals(Mage_Sales_Model_Quote_Address $address)
    {
        return $this;
    }
}
How to work out how many gallons your pond holds!
When measuring, try to take an average. For example, if a pond is 5 ft wide for half its length and 7 ft wide for the other half, take the average width as 6 ft. The same applies to the water depth.
If you are working out how many gallons your pond holds for a filtration system, it is OK to overestimate the water volume. But if you need the pond's gallonage to treat sick fish, try to be within 10% so you do not overdose your stock.
We recommend using a pond flow meter when first filling a pond; it counts exactly how many litres your pond holds when it is full.
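The averaging rule above can be turned into a quick calculation. A minimal sketch, assuming imperial gallons at roughly 6.23 gallons per cubic foot (about 7.48 for US gallons); the conversion factor is my assumption, not stated in the original text:

```python
GALLONS_PER_CUBIC_FOOT = 6.23  # imperial gallons; use ~7.48 for US gallons

def pond_gallons(avg_length_ft, avg_width_ft, avg_depth_ft):
    """Approximate pond capacity from average dimensions in feet."""
    volume_cubic_ft = avg_length_ft * avg_width_ft * avg_depth_ft
    return volume_cubic_ft * GALLONS_PER_CUBIC_FOOT

# Pond 10 ft long, 6 ft average width (5 ft for half its length,
# 7 ft for the other half), 3 ft average depth:
print(round(pond_gallons(10, 6, 3)))  # about 1121 gallons
```

As the text advises, round the result up when sizing a filter, but keep it within about 10% of the true volume when dosing medication.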
Journal of Heat Transfer, 2008, Volume 130, Issue 4

### Guest Editorial

J. Heat Transfer. 2008;130(4):040301-040301-1. doi:10.1115/1.2818789.

It is with great pleasure that we present this special issue of the Journal of Heat Transfer, dedicated to Energy Nanotechnology. This focus area is the natural convergence of two subjects of tremendous and lasting importance. The ever-growing global demand for energy in both developing and industrialized nations is widely recognized as one of modern society's greatest challenges. To have a positive worldwide impact, new energy technologies must not only have the potential to be large scale and cost effective but must also address concerns about national security and environmental issues such as global climate change. As we strive to improve all aspects of the energy cycle, from primary production and extraction to storage, transmission, utilization, and mitigation, our attention naturally turns to nanotechnology because its additional degrees of freedom offer great potential for innovative breakthroughs.

### Research Papers: Micro/Nanoscale Heat Transfer

J. Heat Transfer. 2008;130(4):042401-042401-6. doi:10.1115/1.2787020.

The heat transfer characteristics of silica (SiO2) nanofluids at 0.5 vol% concentration and particle sizes of 10 nm and 20 nm in pool boiling with a suspended heating Nichrome wire have been analyzed. The influence of acidity on heat transfer has been studied. The pH value of the nanosuspensions is important from the point of view that it determines the stability of the particles and their mutual interactions toward the suspended heated wire. When there is no particle deposition on the wire, the nanofluid increases critical heat flux (CHF) by about 50% within the uncertainty limits regardless of pH of the base fluid or particle size. The extent of oxidation on the wire impacts CHF, and is influenced by the chemical composition of nanofluids in buffer solutions. The boiling regime is further extended to higher heat flux when there is agglomeration on the wire. This agglomeration allows high heat transfer through interagglomerate pores, resulting in a nearly threefold increase in burnout heat flux. This deposition occurs for the charged 10 nm silica particle. The chemical composition, oxidation, and packing of the particles within the deposition on the wire are shown to be the reasons for the extension of the boiling regime and the net enhancement of the burnout heat flux.

J. Heat Transfer. 2008;130(4):042402-042402-11. doi:10.1115/1.2818751.

The combustion synthesis of carbon nanotubes is reviewed, examining their formation and control in diffusion flames. Much of the initial work in this area employed coflow diffusion flames and provided insight into carbon nanotube (CNT) formation. However, the inherent multidimensional nature of such coflow flames made the critical spatial location difficult to maintain. Among this early work, our UIC group demonstrated the superiority of the opposed flow diffusion flame configuration due to its uniform radial distribution that reduces such flow to a one-dimensional process. While a summary of the early coflow flame work is presented, the use of the opposed flow diffusion flame will be the focus of this review. The production of carbon nanostructures in the absence of a catalyst is discussed together with the range of morphology of nanostructures generated when a catalyst is employed. The important aspect of control of the growth and orientation of CNTs and generation of CNT arrays through the use of electric fields is examined, as is the use of anodized aluminum oxide templates. Fruitful areas for further research such as the functional coating of CNTs with polymers and the application of these opposed flow flames to synthesis of other materials are discussed.

J. Heat Transfer. 2008;130(4):042403-042403-11. doi:10.1115/1.2818760.

Microscale truss architectures provide high mechanical strength, light weight, and open porosity in polymer sheets. Liquid evaporation and transport of the resulting vapor through truss voids cool nearby surfaces. Thus, microtruss materials can simultaneously prevent mechanical and thermal damage. Assessment of promise requires quantitative understanding of vapor transport through microtruss pores for realistic heat loads and latent heat carriers. Pore size may complicate exegesis owing to vapor rarefaction or surface interactions. This paper quantifies the nonboiling evaporative cooling of a flat surface by water vapor transport through two different hydrophobic polymer membranes, 112–119 μm (or 113–123 μm) thick, with microtruss-like architectures, i.e., straight-through pores of average diameter of 1.0–1.4 μm (or 12.6–14.2 μm) and average overall porosity of 7.6% (or 9.9%). The surface, heated at 1350 ± 20 W/m² to mimic human thermal load in a desert (daytime solar plus metabolic), was the bottom of a 3.1 cm inside diameter, 24.9 cm³ cylindrical aluminum chamber capped by the membrane. Steady-state rates of water vapor transport through the membrane pores to ambient were measured by continuously weighing the evaporation chamber. The water vapor concentration at the membrane exit was maintained near zero by a cross flow of dry nitrogen (velocity = 2.8 m/s). Each truss material enabled 13–14 °C evaporative cooling of the surface, roughly 40% of the maximum evaporative cooling attainable, i.e., with an uncapped chamber. Intrinsic pore diffusion coefficients for dilute water vapor (<10.4 mole%) in air (total pressure ~112,000 Pa) were deduced from the measured vapor fluxes by mathematically disaggregating the substantial mass transfer resistances of the boundary layers (~50%) and correcting for radial variations in upstream water vapor concentration. The diffusion coefficients for the 1.0–1.4 μm pores (Knudsen number ~0.1) agree with literature for the water vapor-air mutual diffusion coefficient to within ±20%, but for the nominally 12.6–14.2 μm pores (Kn ~0.01), the diffusion coefficient values were smaller, possibly because considerable pore area resides in noncircular, i.e., narrow, wedge-shaped cross sections that impede diffusion owing to enhanced rarefaction. The present data, parameters, and mathematical models support the design and analysis of microtruss materials for thermal or simultaneous thermal-and-mechanical protection of microelectromechanical systems, nanoscale components, humans, and other macrosystems.

J. Heat Transfer. 2008;130(4):042404-042404-8. doi:10.1115/1.2818763.

This paper examines the effects of rarefaction, dissipation, curvature, and accommodation coefficients on flow and heat transfer characteristics in rotating microdevices. The problem is modeled as a cylindrical Couette flow with a rotating shaft and stationary housing. The housing is maintained at uniform temperature while the rotating shaft is insulated. Thus, heat transfer is due to viscous dissipation only. An analytic solution is obtained for the temperature distribution in the gas-filled concentric clearance between the rotating shaft and its stationary housing. The solution is valid in the slip flow and temperature jump domain defined by a Knudsen number range beginning at 0.001. The important effect of the momentum accommodation coefficient on velocity reversal and its impact on heat transfer is determined. The Nusselt number was found to depend on four parameters: the momentum accommodation coefficient of the stationary surface σ_uo, Knudsen number Kn, ratio of housing to shaft radius r_o/r_i, and the dimensionless group [γ/(γ+1)](2σ_to − 1)/(σ_to Pr). Results indicate that curvature, Knudsen number, and the accommodation coefficients have significant effects on temperature distribution, heat transfer, and Nusselt number.

J. Heat Transfer. 2008;130(4):042405-042405-8. doi:10.1115/1.2818764.

To improve the thermal performance of phase change materials (PCMs), graphite nanofibers were embedded into a paraffin PCM. The thermal effects of graphite fiber loading levels (0–5 wt%) and graphite fiber type (herringbone, ribbon, or platelet) during the melting process were examined for a 131 cm³ volume system with power loads between 3 W and 7 W (1160–2710 W/m²). It was found that the maximum system temperature decreased as graphite fiber loading levels increased and that the results were fiber-structure dependent.

J. Heat Transfer. 2008;130(4):042406-042406-13. doi:10.1115/1.2818768.

Nanofluids, i.e., liquids containing nanometer sized metallic or nonmetallic solid particles, show an increase in thermal conductivity compared to that of the pure liquid. In this paper, a simple model for predicting thermal conductivity of nanofluids based on Brownian motion of nanoparticles in the liquid is developed. A general expression for the effective thermal conductivity of a colloidal suspension is derived by using ensemble averaging under the assumption of small departures from equilibrium and the presence of pairwise additive interaction potential between the nanoparticles. The resulting expression for thermal conductivity enhancement is applied to the nanofluids with a polar base fluid, such as water or ethylene glycol, by assuming an effective double layer repulsive potential between pairs of nanoparticles. It is shown that the model predicts a particle size and temperature dependent thermal conductivity enhancement. The results of the calculation are compared with the experimental data for various nanofluids containing metallic and nonmetallic nanoparticles.

J. Heat Transfer. 2008;130(4):042407-042407-7. doi:10.1115/1.2789719.

Nanofluids are being studied for their potential to enhance heat transfer, which could have a significant impact on energy generation and storage systems. However, only limited experimental data on metal and metal-oxide based nanofluids, showing enhancement of the thermal conductivity, are currently available. Moreover, the majority of the data currently available have been obtained using transient methods. Some controversy exists as to the validity of the measured enhancement and the possibility that this enhancement may be an artifact of the experimental methodology. In the current investigation, Al2O3/water nanofluids with nominal diameters of 47 nm at different volume fractions (0.5%, 2%, 4%, and 6%) have been investigated, using two different methodologies: a transient hot-wire method and a steady-state cut-bar method. The comparison of the measured data obtained using these two different experimental systems at room temperature was conducted, and the experimental data at higher temperatures were obtained with the steady-state cut-bar method and compared with previously reported data obtained using a transient hot-wire method. The arguments that the methodology is the cause of the observed enhancement of nanofluids' effective thermal conductivity are evaluated and resolved. It is clear from the results that at room temperature, both the steady-state cut-bar and transient hot-wire methods result in nearly identical values for the effective thermal conductivity of the nanofluids tested, while at higher temperatures, the onset of natural convection results in larger measured effective thermal conductivities for the hot-wire method than those obtained using the steady-state cut-bar method. The experimental data at room temperature were also compared with previously reported data at room temperature and currently available theoretical models, and the deviations of experimental data from the predicted values are presented and discussed.

J. Heat Transfer. 2008;130(4):042408-042408-5. doi:10.1115/1.2789721.

Thermal conductivity equations for the suspension of nanoparticles (nanofluids) have been derived from the kinetic theory of particles under relaxation time approximations. These equations, which take into account the microconvection caused by the particle Brownian motion, can be used to evaluate the contribution of particle Brownian motion to thermal transport in nanofluids. The relaxation time of the particle Brownian motion is found to be significantly affected by the long-time tail in Brownian motion, which indicates a surprising persistence of particle velocity. The long-time tail in Brownian motion could play a significant role in the enhanced thermal conductivity in nanofluids, as suggested by the comparison between the theoretical results and the experimental data for the Al2O3-in-water nanofluids.

J. Heat Transfer. 2008;130(4):042409-042409-6. doi:10.1115/1.2789722.

A photoelectrochemical model for hydrogen production from water electrolysis using a proton exchange membrane is proposed based on Butler-Volmer kinetics for electrodes and transport resistance in the polymer electrolyte. An equivalent electrical circuit analogy is proposed for the sequential kinetic and transport resistances. The model provides a relation between the applied terminal voltage of the electrolysis cell and the current density in terms of Nernst potential, exchange current densities, and conductivity of the polymer electrolyte. Effects of temperature on the voltage, power supply, and hydrogen production are examined with the developed model. Increasing temperature will reduce the required power supply and increase the hydrogen production. An increase of about 11% is achieved by varying the temperature from 30 °C to 80 °C. The required power supply decreases as the illumination intensity becomes greater. The power supply due to the cathode overpotential does not change too much with the illumination intensity. Effects of the illumination intensity can be observed as the current density is relatively small for the examined illumination intensities.

J. Heat Transfer. 2008;130(4):042410-042410-11. doi:10.1115/1.2818765.

This paper presents a Monte Carlo simulation scheme to study the phonon transport and the thermal conductivity of nanocomposites. Special attention has been paid to the implementation of the periodic boundary condition in Monte Carlo simulation. The scheme is applied to study the thermal conductivity of silicon germanium (Si–Ge) nanocomposites, which are of great interest for high-efficiency thermoelectric material development. The Monte Carlo simulation was first validated by successfully reproducing the results of (two-dimensional) nanowire composites using the deterministic solution of the phonon Boltzmann transport equation reported earlier and the experimental thermal conductivity of bulk germanium, and then the validated simulation method was used to study (three-dimensional) nanoparticle composites, where Si nanoparticles are embedded in a Ge host. The size effects of phonon transport in nanoparticle composites were studied, and the results show that the thermal conductivity of nanoparticle composites can be lower than the minimum alloy value, which is of great interest to thermoelectric energy conversion. It was also found that randomly distributed nanoparticles in nanocomposites rendered thermal conductivity values close to those of periodically aligned patterns. We show that interfacial area per unit volume is a useful parameter to correlate the size effect of thermal conductivity in nanocomposites. The key to the thermal conductivity reduction is to have a high interface density, where nanoparticle composites can have a much higher interface density than simple 1D stacks, such as superlattices. Thus, nanocomposites further benefit the enhancement of thermoelectric performance in terms of thermal conductivity reduction. The thermal conductivity values calculated by this work qualitatively agree with a recent experimental measurement of Si–Ge nanocomposites.

J. Heat Transfer. 2008;130(4):042411-042411-9. doi:10.1115/1.2818771.

Heterogeneous bubble nucleation was studied on surfaces having nanometer scale asperities and indentations as well as different surface-fluid interaction energies. Nonequilibrium molecular dynamics simulations at constant normal stress and either temperature or heat flux were carried out for the Lennard-Jones fluid in contact with a Lennard-Jones solid. When surface defects were of the same size or smaller than the estimated critical nucleus (the smallest nucleus whose growth is energetically favored) size of 1000–2000 Å³, there was no difference between the defected surfaces and atomically smooth surfaces. On the other hand, surfaces with significantly larger indentations had nucleation rates that were about two orders of magnitude higher than the systems with small defects. Moreover, nucleation was localized in the large indentations. This localization was greatest under constant heat flux conditions and when the solid-fluid interactions were weak. The results suggest strategies for enhancing heterogeneous bubble nucleation rates as well as for controlling the location of nucleation events.

J. Heat Transfer. 2008;130(4):042412-042412-7. doi:10.1115/1.2818775.

The turbulent convective heat transfer behavior of alumina (Al2O3) and zirconia (ZrO2) nanoparticle dispersions in water is investigated experimentally in a flow loop with a horizontal tube test section at various flow rates (9000–…), temperatures (21–76 °C), heat fluxes (up to ~190 kW/m²), and particle concentrations (0.9–3.6 vol% and 0.2–0.9 vol% for Al2O3 and ZrO2, respectively). The experimental data are compared to predictions made using the traditional single-phase convective heat transfer and viscous pressure loss correlations for fully developed turbulent flow, Dittus-Boelter and Blasius/MacAdams, respectively. It is shown that if the measured temperature- and loading-dependent thermal conductivities and viscosities of the nanofluids are used in calculating the Reynolds, Prandtl, and Nusselt numbers, the existing correlations accurately reproduce the convective heat transfer and viscous pressure loss behavior in tubes. Therefore, no abnormal heat transfer enhancement was observed in this study.

J. Heat Transfer. 2008;130(4):042413-042413-7. doi:10.1115/1.2818783.

High time-resolution flow field measurement in two microchannels with a complex shape is performed by micro-digital-holographic particle-tracking velocimetry (micro-DHPTV). The first microchannel has a Y junction that combines the flow of fluid from two inlets into one outlet. In this case, two laminar velocity profiles from the inlet regions merge into one laminar velocity profile. The second microchannel has a convergence region from where a fluid flows into a divergence region. At this region, two recirculation regions appear. Consequently, approximately 250 velocity vectors in both cases can be obtained instantaneously. For a microchannel with the convergence region, the two recirculation regions that appear at the divergence point are captured from a three-dimensional vector field, with which the axes of recircular vortices have some alignment. The reason why we can observe this phenomenon is that a three-dimensional velocity, including the depth direction, can be obtained by micro-DHPTV.

### Technical Briefs

J. Heat Transfer. 2008;130(4):044501-044501-3. doi:10.1115/1.2818787.

Many studies have shown that addition of nanosized particles to water enhances the critical heat flux (CHF) in pool boiling. The resulting colloidal dispersions are known in the literature as nanofluids. However, for most potential applications of nanofluids the situation of interest is flow boiling. This technical note presents first-of-a-kind data for flow boiling CHF in nanofluids. It is shown that a significant CHF enhancement (up to ~30%) can be achieved with as little as 0.01% by volume concentration of alumina nanoparticles in flow experiments at atmospheric pressure, low subcooling (<20 °C), and relatively high mass flux (⩾1000 kg/m² s).

J. Heat Transfer. 2008;130(4):044502-044502-3. doi:10.1115/1.2787026.

A generic and material-independent dry route based on electrostatic force directed assembly (ESFDA) is used to assemble various nanoparticles onto multiwalled carbon nanotubes (CNTs). Charged and nonagglomerated aerosol nanocrystals are first produced using a mini-arc plasma source and then delivered in an inert carrier gas to electrically biased CNTs. The electric field near the CNT is significantly enhanced, and the aerosol nanoparticles are attracted to the external surface of CNTs. For the first time, CNTs have been sequentially coated with nanoparticles of multiple materials to realize the multicomponent coating. High resolution transmission electron microscopy images show that the nonagglomerated entity of nanoparticles and the crystallinity of both nanoparticles and CNTs are preserved during the assembly. The ESFDA technique enables unique hybrid nanostructures attractive for various energy applications.

J. Heat Transfer. 2008;130(4):044503-044503-4. doi:10.1115/1.2818784.

The potential of converting heat energy into electrical energy using a previously reported waveguide-ballistic device is presented. The interactions between incident electromagnetic waves and free electrons in a metal waveguide are analyzed with respect to their transport through a high-frequency ballistic rectifier using finite element method simulation. It was determined that the resulting conversion efficiency to a dc potential is approximately 6%, yielding a power density on the order of 30 W/m².
Comet Pons, or C/1808 F1, is a comet discovered by the French astronomer Jean-Louis Pons on 25 March 1808 in Marseille, France.
Characteristics
The comet had a parabolic orbit. It made its closest approach to the Sun on 13 May 1808, at a distance of approximately 0.39 AU.
References
External links
Orbit simulation at JPL
Non-periodic comets
Year 1808
Astronomical objects discovered in 1808
trump riot, maga, insurrection, january 6, lori lightfoot, david brown, karol j. chwiesiuk
Chicago Cop Latest Alleged Criminal Caught Bragging About Storming Capitol
Chicago Police Officer Karol J. Chwiesiuk was arrested Friday for his alleged role in storming the US Capitol on January 6. He's charged with five misdemeanor counts, including "violent entry and disorderly conduct on Capitol grounds, and knowingly entering or remaining in a restricted building without lawful authority."
The FBI reportedly obtained text messages Chwiesiuk sent to an acquaintance detailing his coup-related activities. On January 3, he texted, "Going to DC." The acquaintance responded, "When and for what?" and you can almost hear the weariness in those words.
"To save the nation. Leaving tomorrow or the fifth," Chwiesiuk wrote.
The acquaintance urged Chwiesiuk not to go, writing, "Fat man lost. Give it up," referring to [Donald] Trump's electoral defeat.
The acquaintance tried explaining to Chwiesiuk, as if he were a small, dumb child, that Donald Trump had lost the election, fairly and irrevocably, but Chwiesiuk was unmoved by logic or the law.
"Didn't read," he replied. "Busy planning how to fuck up commies."
Chwiesiuk picked up the text conversation on the evening of January 6, when he bragged that he "knocked out a commie last night." (Yet authorities don't appear to have charged him with assault.) He also texted a selfie of himself at the rally wearing a Chicago PD sweatshirt and standing next to a Black man in a MAGA hat. Chwiesiuk expressed astonishment that he'd encountered actual Black Trump supporters in the wild.
"There's so many blacks here I'm actually in disbelief," Chwiesiuk wrote.
He also shared a photo of himself chilling inside Oregon Democratic Senator Jeff Merkley's office. It's an interesting contrast to last summer, when Minneapolis cops were terrified protesters would breach their precinct and beat them to death. Cops across the nation felt for their "brothers" in blue, but apparently that empathy didn't extend to members of Congress.
Later that night, Chwiesiuk sent his lucky acquaintance another text: "N---a don't snitch." It's a gangster-style warning but also possible evidence that the acquaintance wasn't a fellow officer. FBI agents confirmed Chwiesiuk's location on January 6 with geolocation data that revealed a device connected to one of his Google accounts was present inside the Capitol. Chwiesiuk's a cop, so he probably should've known not to bring his iPhone to a crime scene, but he also seemed unaware that "N---a don't snitch" isn't legally binding.
Chwiesiuk was relieved of his police powers on June 2 and is currently on desk duty. He was released on an unsecured $15,000 bond. Mayor Lori Lightfoot and Chicago Police Superintendent David Brown both condemned Chwiesiuk after his appearance in federal court.
"This isn't about one police officer charged with a heinous assault on our democracy," [Lightfoot] said. It's about sending a "clear and unequivocal message" that "we will have no tolerance for hate. Period."
Brown echoed Lightfoot's sentiments. He sounds as if he's only encountered police officers on TV shows.
"We have a zero tolerance for hate or extremism of any kind," Brown said. " ... If you harbor such ignorance, you should take off your star now and find another line of work. Or I will do it for you."
He promised not to "leave any rock unturned" in his quest to find officers with "like-minded beliefs" and "root them out of this department." It's adorable that he thinks he'd have to work that hard to find insurrection-friendly, MAGA-supporting cops. They're not exactly hiding under rocks. According to the Washington Post, Chwiesiuk is the 18th law enforcement officer charged with taking part in Trump's insurrection.
The CPD hired Chwiesiuk in 2018, as part of a "hiring push" by former Mayor Rahm Emanuel. He was previously a deputy with the Cook County sheriff's office. Brown said there were no misconduct allegations against Chwiesiuk. However, court records show that Chwiesiuk was sued in January by two Uber passengers who claim they were injured when he struck their rideshare car while driving a police vehicle in October.
Although Brown claims "It's better to go slower and vet" police recruits, the department named Chwiesiuk an "Officer of the Month" in 2019, so they must've thought he was doing a good job. He also received a "crime reduction award" the same year, in addition to other "honorable mentions." He's on the list for another commendation, as well, according to his attorney, Tim Grace, a lawyer for the Fraternal Order of Police Lodge 7 that represents rank-and-file CPD officers. Presumably, they'll withdraw that commendation. It's just awkward right now.
It's also weird that a lawyer for the FOP is defending an accused insurrectionist, who should've been fired already. Don't make us protest over this one. We've got a full dance card.
[Chicago Sun-Times / Huffington Post]
Keep Wonkette going forever, please, if you are able!
Q: Generate report from columns stored in table

I would like to generate the report below based on the data and category tables.
The category table holds the category and a pointer to the corresponding field in the data table:
CAT FIELD
PRINTER P1
CHAIR P3
TABLE P2
The data table holds the data in physical fields:
ITEM_ID P1 P2 P3 P4
1 A B C D
2 X Y Z A
3 N M O P
this is how the report should look like:
ITEM_ID CAT
1 PRINTER_A
1 CHAIR_C
1 TABLE_B
2 PRINTER_X
2 CHAIR_Z
2 TABLE_Y
3 PRINTER_N
3 CHAIR_O
3 TABLE_M
As a solution, I could fetch all the items in the DATA table, then loop over each CATEGORY and do an insert.
But there are millions of rows in the DATA table and 20+ rows in the category table, so that approach will perform badly.
Any idea how to generate this efficiently?
Source:
CREATE TABLE [dbo].[CAT_REPORT](
[ITEM_ID] [nchar](100) NULL,
[CAT] [nchar](100) NULL
)
GO
CREATE TABLE [dbo].[DATA](
[ITEM_ID] [nchar](10) NULL,
[P1] [nchar](50) NULL,
[P2] [nchar](50) NULL,
[P3] [nchar](50) NULL,
[P4] [nchar](50) NULL
)
CREATE TABLE [dbo].[CATEGORY](
[CAT] [nchar](10) NULL,
[FIELD] [nchar](10) NULL
)
INSERT [dbo].[CATEGORY] ([CAT], [FIELD]) VALUES ('PRINTER', 'P1')
GO
INSERT [dbo].[CATEGORY] ([CAT], [FIELD]) VALUES ('CHAIR', 'P3')
GO
INSERT [dbo].[CATEGORY] ([CAT], [FIELD]) VALUES ('TABLE', 'P2')
GO
INSERT [dbo].[DATA] ([ITEM_ID], [P1], [P2], [P3], [P4]) VALUES ('1', 'A', 'B', 'C', 'D')
GO
INSERT [dbo].[DATA] ([ITEM_ID], [P1], [P2], [P3], [P4]) VALUES ('2', 'X', 'Y', 'Z', 'A')
GO
INSERT [dbo].[DATA] ([ITEM_ID], [P1], [P2], [P3], [P4]) VALUES ('3', 'N', 'M', 'O', 'P')
GO
Here is the stored procedure I have so far:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
DROP procedure fill_category_report_table
go
CREATE PROCEDURE fill_category_report_table
AS
BEGIN
SET NOCOUNT ON;
DECLARE @CatName nvarchar(20),@CatField nvarchar(20),@ItemId nvarchar(20),@CatNameOut nvarchar(20),@out_var nvarchar(20)
DECLARE @sql nvarchar(100)
DECLARE DATA_CUR CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT ITEM_ID from [DATA]
OPEN DATA_CUR
FETCH NEXT FROM DATA_CUR INTO @ItemId
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @ItemId
DECLARE CAT_CUR CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT CAT,FIELD from CATEGORY
OPEN CAT_CUR
FETCH NEXT FROM CAT_CUR INTO @CatName,@CatField
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @CatName
SET @sql = N'SELECT @CatNameOut=@CatName + ''_'' + ' + @CatField + ' FROM [DATA] where ITEM_ID=' +@ItemId
EXECUTE sp_executesql @sql,N'@CatName varchar(100), @CatNameOut varchar(100) OUTPUT',@CatNameOut = @CatNameOut output,@CatName=@CatName;
INSERT INTO CAT_REPORT ([ITEM_ID],[CAT]) VALUES (@ItemId ,@CatNameOut)
FETCH NEXT FROM CAT_CUR INTO @CatName,@CatField
END
CLOSE CAT_CUR
DEALLOCATE CAT_CUR
FETCH NEXT FROM DATA_CUR INTO @ItemId
END
CLOSE DATA_CUR
DEALLOCATE DATA_CUR
END
GO
A: Based on your sample data, this performs the query without resorting to a cursor - which will ABSOLUTELY RUIN your performance.
SELECT
D.ITEM_ID,
RTRIM(Cast(C.CAT as nVarChar)) + '_' +
CASE C.FIELD
WHEN 'P1'
THEN d.P1
WHEN 'P2'
THEN d.P2
WHEN 'P3'
THEN d.P3
WHEN 'P4'
THEN d.P4
ELSE NULL
END as Cat
FROM Data D
CROSS JOIN Category C
ORDER BY ITEM_ID, FIELD
By the way, you really shouldn't store data in a char/nchar field without a really good reason. These fields use the full amount of data space, even if they are only storing one character. Varchar/nVarchar is a much more compact way of storing data unless you absolutely need each of the values stored in a field to be the same length.
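If you want to sanity-check the CROSS JOIN + CASE approach without a SQL Server instance, here's a quick sketch using SQLite through Python's sqlite3 module (an assumption purely for illustration — the dialect differs slightly, e.g. `||` instead of `+` for string concatenation, but the unpivot logic is the same):

```python
# Sketch: verify the CROSS JOIN + CASE unpivot on the question's sample data.
# SQLite stands in for SQL Server here; the T-SQL answer above is the real one.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DATA(ITEM_ID TEXT, P1 TEXT, P2 TEXT, P3 TEXT, P4 TEXT);
CREATE TABLE CATEGORY(CAT TEXT, FIELD TEXT);
INSERT INTO DATA VALUES ('1','A','B','C','D'),
                        ('2','X','Y','Z','A'),
                        ('3','N','M','O','P');
INSERT INTO CATEGORY VALUES ('PRINTER','P1'),('CHAIR','P3'),('TABLE','P2');
""")

rows = con.execute("""
SELECT D.ITEM_ID,
       C.CAT || '_' ||
       CASE C.FIELD
            WHEN 'P1' THEN D.P1
            WHEN 'P2' THEN D.P2
            WHEN 'P3' THEN D.P3
            WHEN 'P4' THEN D.P4
       END AS CAT
FROM DATA D
CROSS JOIN CATEGORY C
ORDER BY D.ITEM_ID, C.FIELD
""").fetchall()

for item_id, cat in rows:
    print(item_id, cat)
```

Every DATA row is paired with every CATEGORY row (3 × 3 = 9 result rows), and the CASE picks the column named by CATEGORY.FIELD — exactly the report shape the question asks for.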
A: You definitely want to avoid a RBAR (Row By Agonising Row) cursor.
Your code is performing an unpivot of data, in SQL Server the UNPIVOT function is not very good, a better way to unpivot data is to use CROSS APPLY with VALUES
Based on Comments it sounds like you don't need a Dynamic Query. In SQL, Dynamic queries tend to be used to handle queries that might need to dynamically access different Tables or Columns. All that seems to change are the Records in the Category Table but they will always refer to columns P1 - P4 where relevant.
Your Table Structure appears to be Static (only the data changes); if so, then Query 1 will do.
--Query 1
INSERT INTO dbo.[CAT_REPORT]
SELECT
D.ITEM_ID
,RTRIM(C.CAT) + '_' + cl.Val AS [CAT]
FROM
dbo.[DATA] D
CROSS APPLY (VALUES ('P1',D.P1),('P3',D.P3),('P2',D.P2)) AS cl(Field,Val)
INNER JOIN dbo.[CATEGORY] C
ON cl.Field = C.FIELD
If the Structure of the Data Table can change, then Query 2 would be relevant.
--Query 2
--Dynamically build Field list from CATEGORY table.
DECLARE @Fields NVARCHAR(200) = (
SELECT TOP 1
STUFF((SELECT ',(''' + RTRIM(FIELD) + ''',D.' + RTRIM(FIELD) + ')' FROM [dbo].[CATEGORY] FOR XML PATH ('')),1,1,'') FIELD
FROM [dbo].[CATEGORY]
WHERE
--Ensure the column name exists in DATA
Field IN (SELECT name from sys.columns WHERE object_id = object_id('Data'))
)
--Build query to look at Fields - use OPTION(RECOMPILE) to prevent Parameter Sniffing
DECLARE @QRY NVARCHAR(2000) = '
INSERT INTO dbo.[CAT_REPORT]
SELECT
D.ITEM_ID
,RTRIM(C.CAT) + ''_'' + cl.Val AS [CAT]
FROM
[dbo].[DATA] D
CROSS APPLY (VALUES ' + @Fields + ') AS cl(Field,Val)
INNER JOIN [dbo].[CATEGORY] C
ON cl.Field = C.FIELD
OPTION (RECOMPILE)'
--Run Query with Dynamically identified Fields
EXEC (@QRY)
SELECT * FROM dbo.[CAT_REPORT]
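The only tricky part of Query 2 is the STUFF/FOR XML PATH trick that assembles the `(Field, Value)` list for the CROSS APPLY. Here's the same string-building idea sketched in plain Python (hypothetical data, just to make the assembly step explicit):

```python
# Sketch: build the "(Field, D.Field)" list that Query 2's dynamic SQL needs,
# keeping only fields that actually exist as columns in DATA (mirroring the
# sys.columns check in the T-SQL above).

category_rows = [("PRINTER", "P1"), ("CHAIR", "P3"), ("TABLE", "P2")]
data_columns = {"ITEM_ID", "P1", "P2", "P3", "P4"}  # stand-in for sys.columns

value_pairs = [
    f"('{field}',D.{field})"
    for _, field in category_rows
    if field in data_columns
]
fields_clause = ",".join(value_pairs)
print(fields_clause)
```

The resulting string is spliced into `CROSS APPLY (VALUES <fields_clause>) AS cl(Field,Val)`, so adding a new category row (pointing at an existing column) requires no query changes at all.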